Principles of Exercising Leadership

About this course

At a time when numerous adaptive challenges arise in family life, communities, societies, and organizations, leadership is critical for mobilizing people to cope with these newly emerging problems and improve lives and living standards.

What you’ll learn

In this course, you will discover the following foundational principles and strategies:

  • Identifying challenges
  • Understanding the roles of formal and informal authority
  • Recognizing stakeholders’ key perspectives
  • Building and renewing trust relationships
  • Managing conflict
  • Surviving in times of change
Course Content

    1. Introduction: The Need for Leadership
    2. Lead With, Beyond, and Without Authority
    3. Take Action: Think Politically
    4. Take Action: Build Trust
    5. Take Action: Orchestrate Conflict
    6. Anchor Yourself
    7. Conclusion: Staying Alive

STEM Education – a Controversial Topic

STEM education has always been a controversial topic; however, it is slowly gaining popularity among educators. Since the early twentieth century, primary education in the US has focused on the “three Rs” of reading, writing, and arithmetic. Educators and business leaders have increasingly pushed for the inclusion of more in-depth training in the sciences, engineering, and technical disciplines early in a student’s school years.

In the mid-2000s, this movement gained national prominence as questions over adequate STEM (science, technology, engineering, and math) education were raised in school districts around the country.

Proponents’ stress on STEM education at both the K–12 and college levels has centered on what they perceive as a “skills gap”—the idea that many students are not adequately prepared for jobs that increasingly emphasize technological skills, such as computer coding. Other examples of people working in STEM fields are chemists, people who design robots, and civil engineers.

A study conducted by Georgetown University’s Center on Education and the Workforce, for example, found that there would be more than 2.4 million STEM job openings in the US by 2018, including 1.1 million new positions.

Origin of STEM and Focus

Many sources trace the root of the acronym “STEM” to Judith Ramaley, who was assistant director for education and human resources at the National Science Foundation from 2001 to 2004. Since then, the term—initially known as “SMET”—has become a rallying cry for advocates, including executives in Silicon Valley and President Barack Obama, to include more training in elementary and high school classrooms in skills such as computer coding.

After-school programs focused on STEM and partnerships with businesses to provide training for students in areas such as game design are also common.

Some scientists contend that the lack of adequate STEM education can impact how much the public knows about common scientific issues. As genetically modified crops and driverless cars gain footholds in Americans’ daily lives, they argue that understanding these issues allows people to make informed judgments.

In a 2015 Pew Research Center study, many Americans and members of the American Association for the Advancement of Science rated the US education system in STEM fields as average or below average compared to those of other industrialized countries.

Other researchers have noted that the idea of a STEM crisis may be overstated. A 2013 review of studies in the Institute of Electrical and Electronics Engineers’ (IEEE) Spectrum found that earning a bachelor’s degree in a STEM field does not necessarily lead to a job in that field. While about 15 million US residents have at least a bachelor’s degree in a STEM subject, approximately 11.4 million of these degree holders work outside the field, researchers found.

Concerns Over STEM Subjects

A further concern surrounding STEM is that women and people of color are underrepresented in STEM occupations, such as scientists, engineers, or those working in highly skilled jobs at technology companies like Apple and Google.

The National Science Foundation notes that women made up 29 percent of the workforce in science and engineering jobs in 2013, while underrepresented groups, including African Americans and Latinos, made up about 11 percent of the workforce.

For female students interested in STEM topics at an early age, one concern is that they may be discouraged from pursuing that interest in high school and into college. For example, in 2006, female students earned more credits and had higher grade point averages than male students in high school math and science.

Still, only about 15 percent chose to major in STEM fields as college students, compared to 29 percent of male students. This drop-off is often described as a leak in the “STEM pipeline.”

Many researchers say this trend begins with students’ earliest experiences in school and the role models they encounter. Biases among teachers in STEM fields may influence the way the subjects are approached.

For example, some teachers may try to steer students toward particular topics—or away from others—based on concerns that they will not do as well or perceptions that they show less aptitude. This has been a long-running concern for educators, with a 1993 study in the journal Science and Children noting that science teachers may subtly promote female students’ “invisibility” by calling on them less frequently or addressing them by name less often than male students.

Cultural perceptions of scientists on TV and in the movies also have an impact. A Western Michigan University study found that 58 percent of the scientists on TV were male characters. Conversations with middle school students found that girls had more “wishful identification” with both male and female scientists than boys. But without encouragement, such as from middle school science and math teachers, girls may not keep pursuing this interest, the researchers stated.

Competition Between STEM and Humanities

For some educators, a key concern with the growing focus on STEM education is that it could undercut funding for the arts and the humanities. This perspective has its roots in statements by officials in many states suggesting that a humanities degree is not a good “value” for students.

The focus on STEM is occurring amid a decline in funding for the arts in public education and for the humanities at many public universities. In the US, the National Center for Education Statistics reports that the percentage of public school districts offering dance and drama classes dropped from 20 percent each in 1999 to 3 percent and 4 percent, respectively, by 2009.

In 2013, IEEE found that the US spends more than $3 billion a year on 209 separate efforts to promote STEM education. In contrast, the 2014 budgets for the National Endowment for the Arts and the National Endowment for the Humanities included increases of about $200,000 each, raising their funding to $145.5 million apiece.

With this decline in funding looming in the background, the intense focus on STEM is particularly alarming to these educators. They argue that the skills acquired in humanities courses, such as learning to communicate with various audiences through writing and presentations, are transferable to many fields, including STEM subjects.

In recent years, one proposed solution has been to incorporate the arts into STEM, creating the combined field of STEAM. STEAM projects might include, for example, designing a logo or a new visual design for a STEM product.

STEM in other Countries

Comparisons between the number of students focusing on STEM subjects in the US and in other countries—particularly in Asia—have long been made. A National Research Council report found that fewer US students pursue degrees in STEM subjects than students in other countries.

For example, 6 percent of US students major in engineering in college, compared to more than 40 percent in China. Another commonly used measure, the Program for International Student Assessment (PISA), last released in 2015, ranked the US thirty-fifth in mathematics literacy and twenty-seventh in science out of 65 countries.

However, among educators, the emphasis on test results to measure the quality of STEM education has sparked a large-scale debate. Officials at the National Education Association (NEA), for example, noted the PISA results do not take into account the fact that many students attend schools where a majority of the students live in poverty.

This can often impact how they do on the standardized tests used to determine the PISA results, the NEA representatives stated. Still, international competitiveness has been a significant issue for many promoters of STEM education. In 2010, President Obama focused on the United States’ ranking as he unveiled $250 million in additional funding for STEM, saying that “the nation that out-educates us today is going to out-compete us tomorrow.”


Does Standardized Testing Really Measure Academic Proficiency?

Standardized tests are exams that are used in a variety of contexts to determine knowledge and skills in certain areas. They are most commonly used in the United States to measure students’ academic proficiencies in core subjects appropriate to kindergarten through twelfth grade (K12) levels.

However, outside of K12 contexts, standardized testing serves various purposes, including use in college or graduate school admissions, hiring, and professional licensure for certain professions. State bar exams, for example, are used to grant someone permission to practice law.

The use of standardized tests is required as part of K12 education policy, which is overseen by state governments and local school systems in the United States. The federal government monitors state progress and provides grants to school districts under the Elementary and Secondary Education Act (ESEA) of 1965.

Congress created the National Assessment of Educational Progress (NAEP) in 1969 to survey educational proficiency across the country. Each year, the NAEP measures the performance of a sampling of students in each state on a variety of academic subjects. Results are published in the Nation’s Report Card.

Reauthorizations and major amendments to ESEA include the No Child Left Behind Act (NCLB) of 2001 and the Every Student Succeeds Act (ESSA) of 2015. NCLB increased the number of mandated tests and introduced high-stakes testing into the K12 education system. Testing is considered high stakes when outcomes are tied to graduation, district funding, or teacher employment and salary.

Under NCLB, many states began using test results when determining penalties and compensation for schools, educators, and administrators. ESSA limited federal requirements for standardized testing in response to misuse and over-testing. Critics argue that standardized testing is expensive and depletes education funds that could be more effectively applied elsewhere.

Critics also posit that many standardized tests are discriminatory and penalize students who come from marginalized communities or impoverished socioeconomic backgrounds.

PROS AND CONS OF USING STANDARDIZED TESTS TO MEASURE ACADEMIC PROFICIENCY

Pros

    • Standardized tests provide a reliable measure of academic ability, demonstrating to policymakers which schools and programs are most effective.
    • Information collected from standardized tests can help determine the most appropriate path for a student. Using this data can help create an educational experience that suits the student’s strengths and addresses their challenges.
    • Standardized tests help hold both students and teachers accountable for their performance. Test results can be analyzed easily, quickly, and objectively.

Cons

    • Research indicates that performance disparities between students from different racial, ethnic, and economic groups may result from standardized tests being biased toward students from specific backgrounds.
    • Money spent on testing could be used on resources for the classroom, student services, and other areas where schools are often underfunded.
    • Many students, parents, and teachers report high levels of stress and anxiety with standardized testing.

INFLUENCE OF STANDARDIZED TESTS ON CURRICULUM

Standardized testing can help educators analyze gaps in learning and improve instruction. When testing plays an outsized role, however, it can have a detrimental effect on student achievement.

Standardized tests are generally given in a controlled environment under specified time constraints using systematized multiple-choice or true/false questions, emphasizing rote memorization over critical thinking and creativity. Intensive testing can force teachers to spend instructional time on low-level skills aimed at improving testing performance.

This is referred to as “teaching to the test,” a process that artificially boosts results, known as score inflation. Score inflation undermines the intention to accurately measure student abilities in favor of outcomes that positively reflect on the teacher or school.

Critics of the education system’s extensive use of testing argue that time spent in the classroom would be used far more effectively by focusing on complex skills in several subjects and engaging students in a diverse range of lessons and activities.

According to the National Education Association (NEA), standardized testing can increase stress among teachers and students. High-stakes testing requirements under NCLB pressured teachers and administrators to demonstrate gains they felt were unrealistic but that were tied to salary increases or the threat of dismissal.

This led to widespread incidents of cheating by educators to boost scores. Critics argue that teaching to the test limits teachers’ expertise, autonomy, and professional creativity. They contend that test preparation burdens educators with increased work and distracts from each classroom’s specific needs.

A survey of Connecticut teachers conducted in 2020 revealed that 51 percent of respondents felt that government-mandated standardized testing had a negative effect on classroom instruction; only 3 percent indicated a positive impact. Standardized testing can also increase student anxiety.

A study published by the National Bureau of Economic Research in 2018 revealed that students felt increased stress levels during the week of standardized testing. Large swings in the stress hormone cortisol had a negative effect on test scores.

IS STANDARDIZED TESTING FAIR?

Standardized testing can be a useful tool for pinpointing persistent learning or achievement gaps, which indicate disparities in academic performance and educational quality between and among different demographics of students. Measuring disparities among student groups’ scores reveals areas that need intervention to improve educational outcomes.

Overall, average standardized test scores in math and reading for grades four and eight declined from 2017 to 2019, according to the NAEP. The largest drop occurred in eighth-grade reading levels. Fourth-grade math scores were the only area that improved during that period. Between 1992 and 2019, white and Asian students consistently maintained high scores.

African American and Hispanic students scored lower but made some gains during that period, though results were mixed between 2017 and 2019. However, disparities between students who received free or reduced-price lunches (FRLs), indicating lower family incomes, and those who did not remained consistently wide over time.

Such achievement gaps reflect opportunity gaps, which refer to inequitable access to quality education systems and resources measured by race and ethnicity, gender, socioeconomic status, English-language proficiency, and school-district wealth. State governments and local school districts set policies, create education budgets, and oversee curriculum, giving them an essential role in addressing opportunity gaps.

A 2019 study from Stanford University linked achievement and opportunity gaps for Black and Hispanic students to the racial segregation that concentrates these students in schools with high levels of poverty. Research published in Race Ethnicity and Education in 2015 indicated that high-stakes testing, when combined with school choice programs that reallocate public funding to private institutions, increased segregation and worsened inequities.

Parents, educators, and activists have expressed concern regarding the overall fairness of standardized testing. These critics contend that cultural bias—intentional or unintentional—pervades test design, preparation, and implementation. Such biases lead to exams that favor native English-speaking students from white, upper- and middle-class households while placing culturally and socioeconomically diverse students at a disadvantage.

The implementation of high-stakes testing, such as mandatory exit exams tied to graduation eligibility, in states and schools with large populations of racial and ethnic minority students has also led to discrimination lawsuits.

Though the stated intention of such exams was to ensure that all students exhibited competencies in certain areas before graduating, the National Center for Fair and Open Testing (FairTest) asserts that exit exams tied to graduation eligibility lead to higher dropout rates, with those who fail disproportionately coming from marginalized groups, lower socioeconomic backgrounds, or English-language-learner populations. In 2019, only eleven states required students to pass exit exams to obtain a high school diploma, down from at least twenty-seven a decade earlier.

STANDARDIZED TESTS IN COLLEGE ADMISSIONS

Many colleges require students to submit scores on either the ACT or SAT standardized tests as part of the application process. Advanced Placement (AP) exams to measure course mastery and obtain college credits are also standardized. High-stakes entrance exams face much of the same criticism as K12 exams in terms of their effectiveness in measuring student preparedness and predicting student success.

Education researchers also charge that the tests have gender and racial biases. Women have higher grade point averages and college graduation rates than men. According to FairTest, however, females’ average scores on the math portion of the SAT are thirty-six points lower than males’. Inside Higher Ed reported in February 2019 that AP scores indicated a clear racial and ethnic gap.

Average test scores for Asian and white test takers were above the level needed to receive college-level credits. In contrast, average scores for Black/African American and Hispanic/Latinx students fell below the threshold.

Critics of standardized tests note that students from households with higher incomes have historically performed better on these exams. A 2019 analysis published by the Brookings Institution linked average scores with household income. Students with the lowest average SAT scores came from households with less than $20,000 in annual income, while students with the highest average scores had annual family incomes of $200,000 or more.

In 2016, students with a family income of $80,000 or more had ACT composite scores 4.1 points higher, on average, than students with a family income below $80,000. Researchers highlight how students from wealthier households are more likely to access expensive preparatory courses and personal tutoring. For students living in lower-income households, the tests themselves may be cost-prohibitive, further widening the opportunity gap.

The requirement for undergraduate applicants to submit SAT and ACT scores for college admissions was challenged in a discrimination lawsuit against the University of California (UC) filed in December 2019. The claimants alleged the tests were biased against students of color, English-language learners, and students with disabilities.

UC administrators suspended the standardized test policy for undergraduate admissions throughout the entire UC system on May 21, 2020. The Board of Regents issued a plan to develop an in-house test by 2025 or eliminate the testing requirement. Many schools have reformed their admissions practices.

Along with the UC system, more than 1,200 US colleges and universities did not require applicants to submit exam scores for admission in Fall 2021, according to FairTest. Other academic institutions have de-emphasized these scores in their admissions evaluations.

WHILE YOU ARE HERE, TAKE A MINUTE TO ANSWER ANY OF THESE QUESTIONS

    1. What are some potential effects of tying public school funding to student performance on standardized tests?
    2. How, if at all, do you think cultural biases in standardized testing place students from certain groups at a disadvantage? Explain your answer.
    3. In your opinion, should standardized tests continue to be part of the college admissions process? Why or why not?

REFORM IN STANDARDIZED TESTING

Opposition to over-testing and the misapplication of test results under NCLB grew among parents, educators, and legislators. Changes introduced under ESSA gave states greater flexibility to develop performance and accountability systems.

Many states dropped or reduced the weight of student scores when judging teacher performance. Some reduced the number or frequency of tests and limited the number of school hours dedicated to testing.

For example, in New Hampshire, educators developed a dynamic student assessment based on a broad set of knowledge and skill indicators, minimizing the impact of standardized test scores.

ESSA maintained the requirement that 95 percent of all students take standardized tests in grades three through eight and once in high school in each state. A movement of parents, educators, and legislators succeeded in pushing state governments to pass laws permitting students to opt out of standardized tests with parental authorization.

All but three states implemented laws allowing parents to permit their children to opt out, according to the NEA in 2020. Most students who opted out were white and affluent. Some civil rights groups have expressed opposition to the anti-testing opt-out movement.

They warn that if too many students sit out the tests, valuable information about achievement and opportunity gaps will no longer be available, causing a reduction in access to programs and resources.

The 2019–2020 academic year was interrupted by the COVID-19 pandemic. School districts across the United States went on lockdown to keep students and communities from spreading the novel coronavirus.

Widespread school closings and the switch to online learning interrupted school districts’ ability to administer the tests in a controlled environment. The US Department of Education (ED) waived federal requirements for standardized testing for all K12 students during the academic year, granting all fifty states an initial approval in March 2020.

Education researchers expressed concern that, without consistent data, administrators would not know which schools need interventions, and policymakers would be placed at a disadvantage. In June 2020, Georgia became the first state to submit a waiver to the ED seeking to extend suspensions for testing requirements into the 2020–2021 academic year.

The Washington Post reported in April 2020 that the effect of the pandemic on student access to the SAT and ACT exams led to a significant number of colleges and universities changing their admissions policies to test-optional for fall 2021 applicants, with some schools considering eliminating the testing requirement permanently.

How to Live Longer And Reduce Effects of Aging

As America’s “baby boomers” move into middle age and more people live well beyond their 60s, researchers are exploring why the body ages and how we might fend off aging’s effects. The general advice of physicians today on how to stave off the effects of aging has not changed much since 1822, when the English physician William Kitchiner prescribed behavior for “The Art of Invigorating and Prolonging Life”:

“Go to bed early, rise early. Take as much exercise as you can in the open air, without fatigue. Eat and drink moderately of plain and nourishing food. And especially, keep the mind diverted, and in as easy and cheerful a state as possible.”

But people today live much longer than they did in 1822, when most people could expect to live only until about their mid-40s. According to the National Center for Health Statistics, in the United States alone, overall life expectancy (the average number of years a newborn is expected to live) increased from 47 years in 1900 to 75 years in 1992. And the most significant increases in the average longevity of human beings have come about only since the early 1900s, when sanitation, nutrition, and medical care improved significantly, at least among developed nations.

This longer life expectancy has resulted in a growing elderly population. The U.S. Bureau of the Census reported in 1992 that 32 million people—or about 1 in 8 Americans—were over the age of 65. By the year 2035, the Census Bureau estimates that 1 in 4 Americans will be over age 65. Most scientists think that there is a biological limit to life—perhaps between 115 and 120 years—and that the ultimate average life expectancy is about 85 years. But others see no end in sight.

What Are The Effects Of Aging

Although the external signs of aging—gray hair, baldness, wrinkles, “age spots”—may distress people, it is the internal changes that can rob a person of vigor. Generally, as people age, muscle mass diminishes, the body’s disease-fighting immune system becomes less efficient, and the heart and lungs become vulnerable to disease. The risk of cancer and other illnesses also increases. The physiological changes caused by aging eventually catch up with everyone who lives long enough. But the extent and timing of these changes vary considerably from one person to the next.

Some of aging’s effects also differ in men and women. At about age 50, women experience menopause, the time when fertility and menstruation cease. Levels of the female hormone estrogen fall, while the levels of some other hormones rise. The drop in estrogen levels can make women become anxious and depressed and experience hot flashes (a sudden feeling of heat accompanied by dilation of the blood vessels and sweating). It can also contribute to osteoporosis (a decrease in bone density resulting in fractures, especially of the spinal vertebrae, hip, pelvis, and forearm).

In men, aging’s effect on the male hormone testosterone is less dramatic than the hormonal changes that occur in women. Although testosterone levels decrease with age, researchers have yet to determine the cause, effects, and extent of this decline.

Because most people now live a long life and must face the many changes that aging brings, researchers are devoting a great deal of effort to exploring antiaging methods. Some of their findings have resulted in a reaffirmation of the rewards of a healthy lifestyle, while others have led to the introduction of experimental treatments.

Does Nutrition Fight Aging

Reports published during 1991 and 1992 variously announced that some substance in broccoli seems to ward off cancer; vitamin C might protect senior citizens against heart disease and cataracts; vitamin E may boost the immune system of older people and reduce the build-up of fatty deposits in arteries; and vitamin B12 may help prevent senility (loss of intellectual faculties beginning in old age).

Although such findings are encouraging, many experts believe that consuming significantly higher amounts of these vitamins is premature since long-term consequences of higher doses are unknown.

Nevertheless, doctors believe that the types of foods a person eats play an essential role in protecting against disease and other physical effects of aging. For example, there is good evidence that adequate calcium intake can reduce later bone loss among older women. And numerous nutrition studies have indicated that a diet low in fat and high in fiber can decrease the risk of the two leading causes of death among senior citizens in the United States: cancer and cardiovascular (heart and blood vessel) disease.

In 1990, the National Institutes of Health (NIH) in Bethesda, Md., advised adults to consume low-fat diets, with a maximum of 30 percent of calories from fat. In 1991, the NIH made the same dietary recommendation for children as young as age 3. This advice was based largely on studies showing that plaque (fatty deposits that build up in arteries), which can lead to heart disease, can begin to form during childhood. In 1992, the U.S. Department of Agriculture issued new nutritional guidelines, suggesting that people should consume less fat by eating more fruits, grains, and vegetables—and less meat, dairy products, and oils.
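
To make the 30-percent guideline concrete, the share of calories from fat can be computed from the rule of thumb that fat supplies about 9 calories per gram. The short Python sketch below is a minimal illustration; the sample intake figures (70 grams of fat in a 2,000-calorie day) are hypothetical, not taken from the studies above.

    # Share of daily calories supplied by fat.
    # Fat provides roughly 9 calories per gram (protein and
    # carbohydrate provide about 4 each).
    FAT_CALORIES_PER_GRAM = 9

    def percent_calories_from_fat(fat_grams: float, total_calories: float) -> float:
        """Return the percentage of total calories contributed by fat."""
        return 100 * fat_grams * FAT_CALORIES_PER_GRAM / total_calories

    # Hypothetical example: 70 g of fat in a 2,000-calorie day.
    share = percent_calories_from_fat(70, 2000)
    print(f"{share:.1f}% of calories from fat")  # 31.5% -- just over the 30% ceiling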

Underscoring the new emphasis on nutrition’s role in maintaining health, the American Cancer Society in 1992 announced that it would commit more resources to research on nutrients that could prevent cancer. Many biogerontologists (researchers who study the biological processes associated with aging) and nutritionists believe that certain nutrients act as antioxidants, chemicals that may intercept and react with potentially damaging molecules called free radicals, and render them harmless. Among such antioxidants are beta-carotene (a substance the body turns into vitamin A) and vitamins C and E, all found in many fruits and vegetables.

Exercise For Antiaging, Not For Good Looks

Many studies show that aerobic or even moderate exercise can slow or reverse some effects of aging. For example, after an eight-year study of the impact of exercise on 13,344 men and women, researchers at the Institute for Aerobics Research in Dallas concluded in 1989 that walking briskly for only 30 minutes per day can prolong life and reduce the risk of death from heart disease and cancer.

In 1990, British scientists reported that middle-aged men who engaged in regular vigorous physical activity substantially reduced their heart attack risk. And researchers from the University of California at San Diego, who studied the effects of physical activity on more than 600 women aged 50 to 89 years, announced in February 1991 that moderate exercise significantly lowered blood pressure. High blood pressure is a significant risk factor for heart disease and stroke.

Furthermore, the benefits of physical activity can be reaped at any age. A 1990 study by gerontologist Maria Fiatarone at Harvard Medical School in Boston showed that exercise could benefit nursing-home residents over age 90. In just eight weeks of working out with weights, these older individuals increased their strength by 174 percent, boosted their walking speed by 48 percent, and pumped up their muscle mass an extra 9 percent.

There is also evidence that exercise may help prevent Type II diabetes—the most common form of diabetes mellitus. In this disorder, the body cannot respond efficiently to insulin, a hormone that is important in using and storing sugar. The risk of developing Type II diabetes increases with age, and complications can irreversibly damage the heart, blood vessels, kidneys, eyes, and nerves.

Researchers at the University of California at Berkeley in 1991 reported that among 6,000 healthy men, those who exercised the most vigorously were less than half as likely as those who were the least active to develop Type II diabetes. A study of 87,000 middle-aged women reported in June 1991 by researchers from Harvard Medical School came to similar conclusions.

Physical activity also helps fight depression by providing a psychological boost, and it may also help keep the mind functioning well. For example, research done at the Veterans Administration Medical Center in Salt Lake City, Utah, in 1984 found that a vigorous exercise regimen helped people between the ages of 55 and 70 improve their memory and response time on tests measuring mental abilities.

How To Maintain Intellectual Functioning

Findings that lifestyle factors may improve “mental fitness” are puncturing some long-standing myths about aging’s effects on the brain. Neurologists acknowledge that the brain loses some of its cells and shrinks somewhat with age, possibly causing memory impairment.

But substantial mental decline is no longer considered an inevitable consequence of growing older. In fact, studies have shown that most older people who show marked signs of mental deterioration actually have some underlying disorder, such as depression or Alzheimer’s disease (a condition that causes profound mental deterioration).

In addition, scientists have revised their belief that the brain’s nerve cells cannot rejuvenate themselves. Researchers at the University of California at Irvine reported in 1991 that the brains of old rats were just as adept at repairing damaged nerve cells and growing new connections between them as were the brains of young rats.

Researchers have also found that when rats are kept isolated and sedentary, with nothing to interest them, the number of connections between the brain’s nerve cells declines. But when the animals are required to perform a complex task, such as finding their way through a maze, new connections sprout between their brain nerve cells. Researchers wonder whether mental stimulation might have a similar effect on the brains of human beings.

Can We Stop Aging

While some researchers seek ways to cope with aging, others are trying to understand the aging process itself, and they have proposed several hypotheses to explain this phenomenon. Genetic makeup (traits passed in genes from parents to their offspring) lies at the heart of a number of theories on aging.

Genes in the human body orchestrate growth and maturation and the maintenance and repair of cells. Studies have shown that these processes proceed simultaneously until about age 30. At that point, growth has stopped, and the body does not work as efficiently to repair or replace cells.

A cell’s genes tell the cell how and when to divide, and one theory of aging proposes that each of our many types of cells has a built-in limit to the number of times the cell can divide. Experiments to support this theory were first done in the 1960s by geneticists Leonard Hayflick and Paul Moorhead, then at the Wistar Institute in Philadelphia.

They reported that normal human embryo cells placed in laboratory cultures divided about 50 times before dying. Normal adult cells, however, divided only about 20 times. Hayflick’s experiment suggests that messages contained in genes control how long cells live, which may, in turn, dictate how long human bodies survive. Ironically, the only human cells that appear to be immortal—at least in laboratory cultures—are abnormal ones, such as cancer cells.

Because unusual longevity tends to run in families, some scientists believe that aging must have a genetic component. One theory holds that there are “aging genes” that determine the life span. But no one has pinpointed a gene or genes responsible for long life. However, researchers believe that the ongoing Human Genome Project, a massive international effort to identify all 50,000 to 100,000 human genes, is likely to turn up genes relevant to aging research. Stopping aging altogether, then, is beyond our control, but we can manage and slow it to live a healthier life.

Aging Due To Gene Damage

Aging might reflect the accumulation of errors in genes, according to some biogerontologists. Researchers suggest that external factors, such as pollutants or radiation—including the cosmic rays (subatomic particles from space) that continually bombard the Earth—and even the oxygen we breathe, can cause these errors by damaging the components of genes.

Genes are made of DNA (deoxyribonucleic acid), and most genes contain the recipe to create proteins. If external factors damage a cell’s DNA, the recipe can become garbled, and the proteins the cell makes may be abnormal. This can have serious consequences because proteins regulate thousands of cell activities.

A cell with abnormal proteins may not be able to perform its particular task adequately, whether it is to fight infection, refurbish body parts, digest food, or communicate with other cells. Proteins also form much of the body’s structural material, such as muscle and cartilage (the tissue that supports bone and other organs). No one has been able to show that protein errors increase with time. However, such errors might occur in just a few key proteins that have yet to be examined.

Role Of Proteins in Aging

Other researchers suggest that we age because the body’s ability to handle proteins deteriorates. Studies indicate that as the body gets older, it produces certain proteins more slowly than it did in the past. Furthermore, the aging body becomes less efficient at clearing out old protein molecules. And the longer a protein hangs around, the more likely it is to react with a blood sugar called glucose or with other proteins.

This theory, first proposed in 1981 by scientists at Rockefeller University in New York City, suggests that this phenomenon, called cross-linking, gums up the cell and interferes with its normal functions. But scientists do not yet understand why the body becomes less efficient at making and removing proteins in the first place.

What is Free-Radical Theory

Another theory of cell aging holds that cells “rust” as a result of interactions with oxygen. Just as oxygen takes its toll on iron, interacting with and chewing up the surface to create rust, the oxygen that is continuously fed through the body may take its toll on tissues. Highly reactive free-radical molecules routinely form when the cell uses oxygen to produce energy.

Free radicals can also form in other ways, such as through radiation exposure. However they form, free radicals interest biogerontologists because they are loose chemical cannons that readily react with other cell molecules, and these reactions can damage proteins, DNA, fats, and other cell components. This theory was first proposed in the 1950s by aging specialist Denham Harman of the University of Nebraska, and it has become a popular explanation for disorders ranging from cancer to dandruff.

Some scientists think that free-radical damage to the cells of the immune system may explain the elderly’s increased susceptibility to disease. Although the body generates antioxidants that neutralize free radicals and prevent most of their damage, free radicals take their toll over time, according to the theory.

Can We Live Longer by Eating Less

Researchers have long sought ways to extend the life span of humans. So far, reducing food consumption is the only known method of extending the life span of animals. Experiments using dietary restriction date back to studies done in 1935 by nutrition researcher Clive McCay of Cornell University in Ithaca, N.Y.

McCay reported that when rats were fed only 25 percent to 40 percent of their usual calorie intake, their maximum life span increased by as much as 60 percent. A 1991 study at Tufts University in Boston found that mice lived 29 percent longer when their food intake was cut by 40 percent.

Researchers have also found that rodents on restricted diets have good immune systems, are leaner, and show a much lower cancer rate. Such diets seem to induce in them a period of daily torpor during which metabolism drops and body temperature falls. In addition, the restricted diets boosted levels of some natural antioxidants, which may reduce free-radical damage.

Roy Walford of the University of California at Los Angeles is among the gerontologists identified with the study of dietary restriction. On Sept. 26, 1991, Walford and seven other researchers entered Biosphere II, an experimental and controversial self-contained colony in Oracle, Ariz. Walford intends to keep the people in Biosphere II on a low-calorie, high-nutrient diet for the duration of their two-year stay.

Although Walford argues that humans could gradually adjust to severe caloric restriction, most people would probably find the chance at a longer life not worth the price. However, if researchers can determine just how dietary restriction works to extend life span, they may be able to devise a way for people to get the benefits of the diet without the sacrifice of sticking to it.

Youthful Qualities In Hormones

Researchers have also long sought substances that can slow or even reverse the aging process. For example, physicians have prescribed estrogen for postmenopausal women since the 1950s to offset physical and emotional changes caused by menopause. In 1991, a large-scale study by researchers at Harvard Medical School confirmed earlier findings that estrogen supplements could help older women live longer by protecting them from coronary artery disease, the chief cause of death for women over 50.

Furthermore, a study reported in 1991 by researchers at the University of Southern California in Los Angeles found that the overall death rate for women who had used estrogen for more than 15 years was 40 percent lower than the death rate for women who had not used the hormone.

However, estrogen supplements may increase the risk of endometrial cancer, a rare but often fatal type of cancer that affects the lining of the uterus. To lessen this hazard, physicians recommend either frequent screening for this cancer or taking a second hormone, progesterone, which counteracts estrogen’s effects on uterine tissue. Estrogen supplements have also been linked with an increased risk of breast cancer, but progesterone may lower this risk, as well.

What Are Agents Of Growth

Inspired by estrogen’s successful use, scientists are testing other types of hormones to determine whether they will help counteract the effects of aging. Human growth hormone has entered the antiaging research arena and has shown some dramatic results in human tests. Normally, the body produces growth hormone in varying amounts throughout its lifetime.

During childhood and young adulthood, this hormone promotes growth. In adults, it enhances the body’s use of nutrients and supports the action of other hormones. About one-third of elderly people produce diminished amounts of growth hormone and, therefore, may have diminished muscle strength, according to researchers.

Gerontologist Daniel Rudman of the Medical College of Wisconsin in Milwaukee and his colleagues in 1990 reported results of a study in which 12 older men were given a genetically engineered version of human growth hormone three times a week for six months. During that time, they experienced a 9 percent increase in muscle mass, a 14 percent decrease in fat, and a 7 percent increase in skin thickness.

Unfortunately, the treatment was not without side effects. About one-third of the men experienced water-weight gain, breast enlargement, or carpal tunnel syndrome (a painful condition of the hand and wrist). And the benefits did not last. After the treatment was discontinued, the new muscle turned back to flab.

Other hormones under investigation include DHEA (dehydroepiandrosterone), an androgen, or male hormone, that is naturally produced by the testes in males and by the adrenal glands in both sexes. This hormone diminishes with the years. In tests on aging mice, DHEA caused a decline in the number of mammary cancers, a delay in the immune system’s deterioration, and increased life span.

Doctors in Europe use the hormone to treat psychological stress. Limited trials of DHEA in humans were underway in 1992 at various research centers in the United States to determine whether it can decrease cancer and heart disease risk, reduce stress, build muscles, and increase infection-fighting cells. Scientists have just begun to study this hormone in depth, so it will be many years before its long-term effects—both positive and negative—are known.

Scientists also have begun testing neurotrophic growth factors, natural substances that nourish nerve cells. Researchers are hoping that these agents will rejuvenate shrunken or damaged nerve cells or promote the growth of new ones. They have found that different types of neurotrophic growth factors target different nerve cells, making it conceivable that genetically engineered versions of these substances could be used to treat specific diseases associated with the brain and nervous system. Such conditions include Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis.

Impact of Fighting Aging on Society

Although the benefits of fighting aging are evident for individuals, the effects of a large aging population on society are more questionable. For example, an ever-aging American society may further strain the health care system and economy.

Nevertheless, the human desire to live longer and healthier lives is driving research that may help extend life and cure the diseases that plague us in our later years. By the time today’s children approach old age, 1 million Americans will probably be over the age of 100. If they take advantage of what is known now about turning back the clock on aging, those centenarians of tomorrow may live active lives right to the end, genuinely realizing the satisfactions the golden years can hold.

Pollution – Its Causes, Effects, And Control

Pollution as a result of human population growth threatens many of Earth’s ecosystems.

Human Population as a Source of Pollution

As the human population grows, pollution from human activity also increases. Many activities—such as driving automobiles, farming, manufacturing, and power generation—release pollutants into the air, water, or soil. Typical results of such pollution are changes in the chemistry of the environment.

These chemical changes affect the nearby environment and the people who live there—and areas hundreds or even thousands of kilometers from the place of release. For example, substances released into the air may be carried by the wind and be deposited far away by rain. Currents in rivers, lakes, and oceans spread pollutants that are dumped into the water. Pollution in the soil can seep into groundwater and appear later in wells. Scientists have found evidence of pollution everywhere on Earth, from the largest cities to the remote and isolated South Pole.

Danger to Animal and Plant Life

Animal and plant life is sensitive to changes in the environment. For example, scientists have discovered extreme sensitivity in animals and plants that communicate by releasing biochemical compounds called pheromones. Some species can detect and respond to pheromone concentrations of as little as one part in a trillion—the equivalent of one teaspoonful in a lake that is 1 square kilometer (0.4 square miles) in area and 1 to 2 meters (3 to 7 feet) deep.
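
A rough calculation confirms the comparison. Assuming a teaspoon of about 5 milliliters and taking the midpoint of the stated depth range (1.5 meters), the dilution works out to a few parts per trillion, as this minimal Python check shows:

    # One teaspoon dissolved in a lake 1 km^2 in area and ~1.5 m deep.
    TEASPOON_M3 = 5e-6            # ~5 mL, expressed in cubic meters (assumed)
    lake_area_m2 = 1_000_000      # 1 square kilometer
    lake_depth_m = 1.5            # midpoint of the 1-2 m range

    lake_volume_m3 = lake_area_m2 * lake_depth_m
    ratio = TEASPOON_M3 / lake_volume_m3
    print(f"{ratio:.1e}")         # ~3.3e-12, i.e., a few parts per trillion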

Such chemical sensitivity suggests to scientists that organisms may be easily affected by small but sudden changes in the chemistry of air, water, and land. Plants and animals may adapt to changes as they evolve over thousands or millions of years. But in periods measured in a few decades or even centuries, such changes may prove highly disruptive to many forms of life, including human beings.

Major Pollutants and Their Effects On Ecosystem

There are many toxic (poisonous) pollutants, but the most well studied are radioactive elements and certain chemical compounds used to kill insects. Radioactive elements give off radiation that is harmful to plants and animals. They have been well studied because scientists can easily measure and track them with instruments that detect the radiation they give off. Such radioactive elements as strontium 90 were distributed worldwide in nuclear-bomb testing in the 1950s and early 1960s.

Strontium 90 chemically resembles the mineral calcium. Plants and animals absorb and store strontium 90 in tissues where calcium normally accumulates. In animals, strontium 90 accumulates in bone and marrow, the blood-cell-forming tissue, and can cause leukemia, a cancer of the blood. Small amounts of strontium 90 in the environment are a direct hazard to people.

Certain chemical pesticides used to control insects have also been well studied. Scientists can trace the chemicals’ effects because some compounds remain in the environment for a long time. In addition, they have been used in large amounts in many parts of the world.

Studies of radioactive and chemical contaminants have taught scientists a great deal about toxins’ hazards and their threat to people and nature. One of scientists’ most important discoveries was that toxins released into the environment not only circulate widely in air and water, but also may appear in living creatures in concentrations that are tens, hundreds, thousands, or hundreds of thousands of times higher than those measured in the air, water, or soil.

The concentration of a toxin may build up as it passes along a food chain. For example, a single plant may retain only a small amount of a toxin on its leaves. A rabbit eating many such plants absorbs the toxin from all the plants. And when a wolf eats many rabbits throughout its lifetime, it absorbs the toxin from all the rabbits. In this way, the concentration of a pollutant that is stored in animal tissues and not excreted may be dramatically higher in the tissues of animals at the top of a food chain. This process is called biomagnification.
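
To see how quickly biomagnification compounds, the toy Python model below multiplies a starting concentration by a retention factor at each trophic level. The factors used here (100x for the rabbit, 50x for the wolf) are invented for illustration, not measured values.

    # Toy model of biomagnification: a toxin stored in tissue and not
    # excreted grows more concentrated at each step up the food chain.
    def biomagnify(initial_concentration: float, factors: list[float]) -> float:
        """Return the concentration after each trophic level concentrates it."""
        concentration = initial_concentration
        for factor in factors:
            concentration *= factor
            print(f"after level with factor {factor}: {concentration:g} units")
        return concentration

    # Hypothetical chain: plant (1 unit) -> rabbit -> wolf.
    biomagnify(1.0, [100, 50])  # ends at 5000 units, 5000x the plant level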

What Is DDT and Its Effect on the Environment

The chlorinated hydrocarbons, a group of insecticides, illustrate this effect. These insecticides include DDT (dichloro-diphenyl-trichloroethane), a compound banned in the United States in 1972. DDT sprayed into the air kills mosquitoes and other insect pests. However, scientists have found that some of the compounds remained in the air and were carried far away and sometimes fell with precipitation into bodies of water.

DDT dissolves in fat but not in water, so when the toxin contaminated a lake, river, or stream, it bound to the fatty tissue of fish. In predators that ate the fish, the chemical accumulated. Farther up the food chain, in human beings or other animals that eat meat-eaters, the compound could reach concentrations hundreds, thousands, or more times greater than the concentration in the general environment.

In this way, DDT—before it was banned—reached levels high enough to seriously disrupt the reproduction of many birds in North America, especially hawks, owls, and eagles. The chemical affected the development of the birds’ eggs. DDT caused the birds’ eggshells to be extremely thin and fragile, so that the young died before they could hatch. DDT’s use caused the peregrine falcon to disappear from much of its normal range in the Eastern United States.

Many countries, especially in the tropics, still permit the use of DDT. The compound continues to cause worldwide problems, because winds deposit it far from the place of use. As recently as 1991, researchers reported finding DDT and other U.S.-banned pesticides in U.S. lakes.

“Ecologists consider DDT and radioactive substances such as strontium 90 to be representative of contaminants as a whole.”

Radioactivity such as that of strontium 90 causes health problems in people even at low levels.

“Plants and animals will be protected if the amount of radioactivity released into an area is low enough to be safe for human beings.”

Small amounts of toxins such as DDT and other poisons, on the other hand, may not be a direct threat to people. Yet scientists have learned that this type of pollutant can accumulate through biomagnification, contributing to the loss of plant and animal life and ultimately threatening human beings’ health.

Air Pollution Causes Acid Rain

Scientists are collecting data concerning the effects of many other types of toxic pollutants. For example, air pollution can cause breathing difficulties and other health problems in people, aggravating diseases such as asthma and pneumonia, and contributing to the development of cancer and emphysema. Air pollution also harms plants and animals.

The oxides of sulfur and nitrogen are considered two of the most serious air pollutants. A significant source of these compounds is the burning of fossil fuels (coal, oil, and natural gas) in industry and in transportation. The pollutants usually occur with high levels of other toxins such as lead, zinc, and ground-level ozone, a smog component formed by chemical reactions between car exhaust and sunlight.

Sulfur dioxide and nitrogen oxides also cause acidic precipitation, commonly called acid rain.

“Acid rain results when the airborne pollutants combine with moisture in the air to form sulfuric and nitric acids that fall back to Earth, usually in rain or snow.”

Since the late 1960s, numerous scientific studies have demonstrated acid rain’s effects on the environment. These studies have reported that acid rain hinders plant photosynthesis (the process by which plants make food from water, sunlight, and carbon dioxide). Acid rain also contributes to the death of trees, destroys life in lakes and rivers, and damages statues and other structures.

Heavy Metals Are Also a Source of Pollution

Other pollutants under study include the metallic elements called heavy metals. These contaminants can pollute the air, water, and soil. They include lead, mercury, silver, zinc, iron, copper, nickel, chromium, and cadmium. Some coal is rich in heavy metals, and burning fossil fuels in electric power stations, incinerators, steel mills, and motor vehicles may produce air pollution containing the metals. The elements enter the atmosphere as microscopic particles called particulates. These particulates then fall to Earth and contaminate soil and water.

Scientists are collecting evidence of the effects of heavy metals in the environment. Studies show that exposure to lead in soil or water can cause nervous-system damage in children, for example. If human beings eat mercury-poisoned fish, the effects can be deadly. In March 1991, Joel Schwartz, a scientist with the U.S. Environmental Protection Agency, reported that as many as 60,000 people in the United States might die prematurely each year due to particulate pollution. In August 1991, measurements of mercury levels in fish caught in several U.S. lakes prompted officials in 20 states to warn consumers against eating fish from those waters. Heavy metals also threaten the growth of forests by disrupting the supply of nutrients in the soil.

What are Water Pollutants

Besides acid rain and heavy metals, other pollutants, such as industrial waste, also contaminate rivers, lakes, streams, seas, and oceans. Industrial waste is a particularly important cause of water pollution. Factories may dump waste containing toxic chemicals directly into bodies of water or into sewerage systems. Sewage itself is another major contaminant of water that can cause ecological problems and such human diseases as cholera and dysentery.

Marine life is also harmed by agricultural waste, chiefly runoff containing chemical fertilizers and pesticides. Finally, oil and other petroleum products spilled into bodies of water foul beaches and kill seabirds and marine mammals, such as dolphins and whales.

Solid Waste In Water

Chemical pollutants released into water or spread through the air are often invisible to the human eye. But the growing masses of solid waste that people produce are an all-too-visible pollutant in the form of trash.

The EPA estimates that by 2000, the United States will generate about 175 million metric tons (193 million short tons) of solid waste per year. According to the U.S. National Solid Wastes Management Association, an organization of businesses that collect, dispose of, and recycle trash, about 83 percent of U.S. solid waste goes into landfill dumps.

In most landfills, operators spread earth over the most recent garbage to keep rats, flies, and other vermin away. But landfills still pose a widespread pollution hazard.

Apart from the land that landfills pollute, they can also poison underground reservoirs of water with metals and dangerous chemicals from packaging materials and other debris. This happens when rain seeps through garbage, dissolves the metals and chemicals, and carries them into the soil. Once in the ground, the compounds slowly filter down to enter water supplies—which are often used for drinking water. As solid wastes fill more and more landfills, this form of water pollution is an increasing concern.

How To Clean the Water, Land, and Air

To protect people and the environment, most developed nations have placed limits on the amount and types of pollution released into the environment. But laws and political boundaries cannot stop the spread of pollution through the air or the water. Therefore, nations and states with high pollution levels can adversely affect those with the strictest pollution laws.

Question of Safe Level of Pollutants

Another difficult question involves what level of pollution is safe. Many laws require that pollution levels not exceed those found to be harmful to people. But, as scientists have learned through the study of DDT and other pesticides, it may be necessary to protect plant and animal life to protect people.

Doing so would require much more restrictive standards than those based merely on protecting people from direct contamination. Yet experts say that enacting such tough standards is the only way to assure people’s protection from the poisoning of the environment that is now underway.

How To Take Charge Of Your Health?

You must take charge of your health by adopting healthful behaviors and practicing prevention.

In today’s climate of medical cost-cutting and managed care, the watchword in health care is prevention. Health planners, analysts, and health-care professionals know that the financial and human costs associated with preventing illness are far less than the costs of treating illness. For this reason, many managed-care programs strongly emphasize preventive care.

Take charge of your health, with your doctor as a partner, by practicing preventive health maintenance in your daily life. You can adopt behaviors that maintain or improve good health while rejecting those that may harm it. If you belong to a managed-care health plan, such as an HMO, the task is made a little easier, because some programs provide incentives and opportunities for prevention and health maintenance.

Statistically, people who eat right, stay fit, and avoid smoking live longer, healthier lives than people who do not do these things. Although good health habits do not guarantee a long, healthy life, and bad habits do not ensure that you die young, you can stack the odds on your side by practicing healthy behaviors.

Hazards of Smoking to Health

The one action that will benefit health most is to refrain from smoking. A large body of scientific research documents the hazards of smoking: smokers are about ten times more likely than nonsmokers to develop lung cancer, and at least ten times more likely to develop emphysema (a disabling chronic lung disease). Smokers are also more likely to die from cancers of the lung, throat, and mouth, and they have significantly higher rates of heart disease than nonsmokers. Moreover, researchers have documented the dangers of second-hand smoke (smoke breathed in by nonsmokers). More and more evidence shows that second-hand smoke increases the risk of heart and lung disease and cancer for people exposed to it.

Quitting smoking will benefit a person no matter how long they have been a smoker, and it will contribute to the health of those around them. Smokers often find, however, that they need help in quitting the habit. Fortunately, many managed-care health plans offer programs that help people quit smoking. Employers may also offer such programs or some reimbursement for them.

Benefits of a Balanced Diet

Another important way to reduce the chance of illness is to pay attention to diet. Healthy eating dramatically reduces the risk of heart disease, the number-one killer in the United States. The most important contributing factor to most heart disease is the buildup of cholesterol in the bloodstream. Deposits of cholesterol in arteries can reduce blood flow to the heart and cause a heart attack. Because diet affects the cholesterol level in the bloodstream, the best way to lower cholesterol is to avoid or reduce saturated fats in your diet. Saturated fats are the fats in meat, dairy products, and tropical oils, such as coconut or palm oil.

It is not a good idea to eliminate dietary fat completely, however. Fat produces energy and is an essential part of cells. In addition, some blood cholesterol is carried by high-density lipoprotein, or HDL, a substance that helps rid the blood of harmful cholesterol.

A healthy diet consists of a variety of fruits, vegetables, and grains, along with small quantities of meat and dairy products. The U.S. Department of Agriculture publishes a food pyramid as a guide to healthy eating. The items at the pyramid's narrow top—fats and sugars—are to be used sparingly, while those at the broad base—whole grains—should be eaten in larger quantities. The food pyramid also contains information about the number of daily servings that nutritionists recommend eating from each group. Your doctor, a nurse, or a nutritionist can help you understand and use the food pyramid to guide a healthy diet.

For good health, eating plenty of fiber is also important. Fiber is the portion of some vegetables, fruits, and grains that passes through the body undigested. Some studies have shown that fiber reduces cholesterol and helps prevent high blood pressure. A high-fiber diet also helps protect against colon cancer and possibly other kinds of cancer. Nutritionists recommend eating 20 to 35 grams of fiber daily. High-fiber foods include fresh fruits, vegetables, and whole grains such as whole-wheat bread and brown rice.

Nutritionists also recommend a diet rich in antioxidants (nutrients that may protect against cancer). An antioxidant diet should include foods rich in vitamins C and E and beta-carotene. Some examples of such foods are broccoli, carrots, and tomatoes.

Maintaining a healthy weight is another way to reduce the risk of illness. Being overweight raises the risk of heart disease and can contribute to many other health problems, including a higher risk for diabetes and some types of cancer.

According to the U.S. Centers for Disease Control and Prevention in Atlanta, Georgia, about one-fourth of all Americans are overweight. Overweight is loosely defined as being heavier than the recommended weight for your height. Other factors affect an evaluation of appropriate weight, however. A muscular, big-boned individual, for example, can weigh more than the recommended weight for his or her height and still not be considered overweight. A doctor can help you calculate your ideal weight and plan a diet that will allow you to lose weight effectively and safely.

Exercises for Healthy Lifestyle

Regular exercise is another component of a healthy lifestyle. Many studies have shown that regular aerobic exercise (which increases the use of oxygen by the body) can prevent high blood pressure and heart disease or reduce the severity of these life-threatening problems. Cycling, jogging, skating, swimming, and fast walking are forms of aerobic exercise.

Aerobic exercise need not be strenuous or complicated. Simple exercise, such as walking for 20 minutes three times a week, can benefit health. In fact, studies have shown that a little exercise is almost as beneficial to the heart as strenuous exercise. Your physician can help you choose a regular aerobic activity routine that is appropriate for your age, weight, and fitness.

The Sun Is a Danger

The sun is another threat to maintaining good health. The sun’s ultraviolet rays are a significant cause of skin cancer. According to the American Cancer Society, more than 800,000 Americans are diagnosed with some form of skin cancer each year. Although most skin cancers are not life-threatening, a few types are fatal. Melanoma (cancer of dark-pigmented cells) is one of the most dangerous types of cancer.

The best protection against all forms of skin cancer is to avoid direct sun exposure or to cover exposed skin with sunscreen. This protection is especially crucial to fair-skinned or red-headed people, who have a higher risk of skin cancer.

Regular Checkups and Screenings to Stay Healthy

No matter how much you cultivate healthy habits, you still need regular checkups by a physician. Moreover, as people age, they should have standard screening tests for common killers such as cancer, high blood pressure, and heart disease. Early detection of disease is often the best defense against serious illness.

In fact, according to the American Cancer Society, early detection of cancers of the breast, tongue, mouth, colon, rectum, cervix, prostate, and testis, as well as melanoma, would save about 115,000 lives annually. About 67 percent of people with these forms of cancer survive at least five years after diagnosis. For people diagnosed and treated during the earliest stages of illness, however, the survival rate jumps to 95 percent.

Most managed-care plans promote regular checkups and screenings by offering these services at little or no out-of-pocket cost. This emphasis on keeping costs low through prevention is one of the great strengths of managed care. If you belong to a managed-care health plan, you can use this strength to stay in charge of your health and well-being.

Work and Family Balance and Coronavirus

Work and family are highly interconnected: people’s work responsibilities greatly impact their family lives, and their family responsibilities greatly impact their work lives. In the social sciences, the family is the basic unit in the study of kinship, or how human societies structure their webs of social relations. While the definition of family varies across time and cultures, and families can take many forms, the term refers to a group of two or more people related by bonds of blood, law, identity, or interdependence. For statistical purposes, a family is defined as such a group living in the same household. A family is a social unit influenced by politics, culture, and class. As such, different families look, live, and work in different ways. Work refers broadly to the production of goods and services valued by a society, or more narrowly to the activities or occupations people pursue to ensure their own or their family’s survival. Like family, work is also heavily influenced by culture, politics, and class.

According to data released by the US Department of Labor’s Bureau of Labor Statistics (BLS) in 2019, about 81 percent of the 82.5 million families in the United States had at least one employed member in 2018. That year, in about 49 percent of families headed by different-sex married couples, both spouses were employed. Over 40 percent of families included children under the age of eighteen years. Among all families with children, 91 percent had at least one employed parent. Among families with children headed by different-sex married couples, 63 percent had both parents employed. Comparable data was not available for families headed by same-sex married couples; same-sex marriages will be reflected by the BLS beginning with data collected in 2020.

MAIN IDEAS

  • While different families look, behave, and work in different ways, in general, a family is a group of two or more people related by blood, law, identity, or interdependence. Work refers to the activities people pursue to ensure their own or their family’s survival.
  • Work and family are highly interconnected. People’s work responsibilities greatly impact their family lives, and their family responsibilities greatly impact their work lives. Work and family are both influenced heavily by culture, politics, and class.
  • Work–family balance refers to how and how well people manage to meet the various demands of their occupational and family lives. Work–family conflict is when the demands of a person’s work and family roles are incompatible and cause problems in one or both spaces.
  • Issues that arise due to work–family conflict are different for single-parent and dual-parent households, and they vary based on race, ethnicity, gender, sexual orientation, income, and education level.
  • Solutions for helping people manage work–family balance may be instituted as workplace policies and benefits or as city, state, and federal legislation. Policies that enable flexible scheduling, guarantee paid parental and family leave, and support or subsidize childcare are particularly important to workers.
  • Both work and family life have been disrupted due to the COVID-19 pandemic that began in the United States in early 2020. With many schools and businesses closed and most states imposing stay-at-home orders, millions of families across the nation face significant additional financial and psychological stress.

 

Family composition and working conditions in the twenty-first-century United States are very different from those of previous eras. According to social scientists, since the 1970s couples have been marrying at later ages and are less likely to have children. Furthermore, households with children headed by single or cohabiting unmarried parents are much more common than in the past. According to a 2019 report from the Center for American Progress, in 1974, 14.6 percent of families with children were headed by a single mother and 1.4 percent were headed by a single father. By 2018 those figures had risen to 24.2 percent headed by single mothers and 7.8 percent headed by single fathers. Over the same time period, households with children headed by married couples decreased from 84 percent in 1974 to 68 percent in 2018.

Work–family balance refers to how and how well people manage to meet the various demands and challenges that arise in their occupational and home lives. Work–family balance differs from work–life balance, which refers to how work impacts individual health and well-being. In contrast, work–family balance considers the interplay between work and family, or how they affect and influence each other. The extent to which a person can or cannot meet both work and family demands while maintaining their own physical and mental health determines how well they are achieving or maintaining work–family balance. When the demands of a person’s work role and family role are incompatible and cause problems in one or both spaces, it is called work–family conflict.

Work–family balance and conflict differ significantly for families of different social and economic classes. Issues that arise due to work–family conflict are different for single-parent and dual-parent households, and they vary based on race, ethnicity, gender, sexual orientation, income, and education level. Middle-class occupations tend to provide more benefits and family supports, while most low-wage jobs lack benefits and flexibility. Many parents’ work schedules do not align with their children’s school schedules, making childcare a central issue when discussing work and family in the United States. Furthermore, as robust employee benefits packages have become a less common perk of full-time employment, the ability of families to cope financially with unexpected hardships such as illness and injury is dependent upon the primary earner’s workplace policies.

According to the Pew Research Center, in 2019 about half of working parents with children under eighteen years old indicated that being a working parent makes it harder for them to be a good parent. The share of working parents who answered they sometimes feel they cannot give 100 percent at work was 47 percent. Thirteen percent of working fathers and nineteen percent of working mothers indicated that they had personally been passed over for a promotion because they have children. Despite the challenges that many working parents face, however, most working parents felt their current employment arrangements were what was best for them, including 84 percent of full-time working mothers. Just 14 percent of full-time working mothers indicated that they would be better off working just part-time, and only 2 percent answered that not working for pay at all would be best.

With family composition and employment arrangements encompassing such a broad range of particular circumstances, US law and policy have a significant impact on citizens’ capacity to maintain healthy work–family balance. Policy solutions for helping people manage work–family balance may be instituted as workplace processes and benefits or as city, state, or federal legislation. Workplace policies that enable scheduling flexibility are particularly important to working parents, as they allow for the workday to be arranged according to childcare hours or for skipping an afternoon meeting to attend a child’s school or extracurricular functions. Further, employers with generous parental and family leave policies are heralded as pro-family workplaces. Workplace policies are particularly important for large corporations that may employ people in more than one US state because state policies can differ drastically depending on location.

Paid family leave legislation does not exist at the federal level despite high levels of support from the working public, and the United States remains the only member of the Organisation for Economic Co-operation and Development (OECD) that does not guarantee any paid maternity or parental leave. The Family and Medical Leave Act of 1993 (FMLA) does not apply to all workers and only entitles qualified workers to twelve weeks of unpaid leave per year for a specific set of reasons. According to a report published by the Congressional Research Service in 2019, 16 percent of people working in the private sector had access to paid family leave in 2018. As of May 2019, four US states had implemented statewide family leave insurance (FLI) programs and two more states and the District of Columbia had passed FLI programs but were awaiting implementation. Furthermore, federal financial assistance for childcare is limited to low-income households even as the costs of childcare have increased dramatically since the 1980s and average hourly wages for low- and middle-class workers have largely stagnated since the 1960s.


CRITICAL THINKING QUESTIONS

  • In what ways do income level and family composition affect different people’s capacities to manage or maintain work–family balance?

  • How do you think work–family balance differs for families without young children as compared to those with young children? Explain your answer.

  • In your opinion, should the US federal government require all public and private employers to grant twelve weeks of paid family or parental leave? Why or why not?

 

In January 2020, political, business, and public health leaders around the world were alerted to the emergence of a novel coronavirus in China. The virus evaded early attempts to contain it, and by late February 2020, there were cases of community spread COVID-19 (coronavirus disease 2019) in California. Some city and state governments, corporations, and other organizations had already begun closing schools, canceling international travel, and postponing large conferences and events by the time the World Health Organization declared a global pandemic on March 11. Responses to the pandemic varied in timing and scope, but by April most US states, the District of Columbia, the Navajo Nation, and Puerto Rico had issued stay-at-home orders for their residents.

Some nonessential businesses were able to continue operating by closing their physical offices and transitioning employees to working remotely from home. However, across the country, many nonessential businesses were forced to pause production, close down, or limit their operations (e.g., delivery or pickup only), leading to massive furloughs and layoffs. Between March 15 and April 4, 16.8 million Americans filed for unemployment insurance. In three weeks, unemployment claims exceeded the 15 million claims accrued over the eighteen months of the Great Recession (2007–2009). While food banks across the country reported unprecedented levels of need for food assistance, some locales issued moratoriums on evictions and foreclosures. At the federal level, the Department of Housing and Urban Development (HUD) halted foreclosures and evictions for people with government-backed mortgages for sixty days. Meanwhile, lawmakers at the city and state levels called for longer-term moratoriums on rent and mortgage collection. In New York State, a bill was introduced that would suspend rent and mortgage payments for ninety days for families impacted by COVID-19.

For almost every American, youth and adults alike, both work and family life were disrupted or altered in drastic ways. Some parents found themselves attempting to maintain full-time work schedules at home while also overseeing their children’s schooling and keeping the household running smoothly. For other families, one or more people became unemployed or saw their incomes fall significantly. According to a study published by the University of Michigan in March 2020, US parents reported increased stress levels related to finances and parenting under stay-at-home orders. In a survey of over three hundred parents of children ages twelve years or younger, 52 percent indicated that financial concerns were affecting their parenting and 50 percent indicated that social isolation was interfering with their ability to parent.

While families navigated the financial, emotional, and day-to-day stressors of being isolated at home together, those with members who work in health care, emergency services, and other essential positions faced unique stressors. Because frontline health-care workers face a significantly increased risk of infection, some elected to isolate from their families to prevent unwittingly exposing them. For single parents and for families that lack sufficient space or financial resources, however, quarantining a family member who holds a high-risk essential position may not be possible.

Workplace Diversity – Its Benefits and Challenges

What is Workplace Diversity?

A diverse workplace is an employment environment made up of employees with a broad range of personal characteristics, experiences, skills, and perspectives. Some of the traits that contribute to diversity in the American workforce include differences in gender, race, ethnicity, age, disability, sexual orientation, religion, education, socioeconomic background, and political beliefs. To increase workplace diversity, employers engage in initiatives to recognize and eliminate discrimination—the unfair treatment of individuals, including denial of opportunities, based on those individuals’ personal characteristics rather than on their job potential or performance. Yet creating a diverse workplace goes beyond protecting employees from discrimination and encompasses other factors such as challenging cultural biases, respecting and celebrating differences, valuing a wide range of contributions to the organization, and creating an atmosphere of inclusion and unity.

One of the main goals of workplace diversity initiatives is to ensure that the American workforce reflects the nation’s changing demographics. The US Bureau of Labor Statistics reports that, as of 2019, white workers made up approximately 78 percent of the labor force. By 2040 the percentage of non-Hispanic white Americans ages eighteen to sixty-four is projected to be 50 percent, while the rest of the population is projected to be 25 percent Latinx, 15 percent Black, 8 percent Asian, and smaller percentages of multiracial and Native American people.

Though the workforce is expected to become more diverse, in 2019 minority workers were more likely than whites to be unemployed or underemployed and less likely to rise through the corporate ranks to executive or senior management positions. Women were also less likely to be in executive or senior positions. The Bureau of Labor Statistics reported that in 2019 almost 90 percent of corporate chief executives were white, and more than 72 percent were men. According to analyses conducted by the Stanford University Graduate School of Business and published in 2020, 85 percent of chief executives at Fortune 100 companies were white and 94 percent were male.

Creating a diverse workplace offers employers valuable benefits, including the abilities to attract and retain talented employees, gain access to innovative ideas and new market insights, and increase competitiveness and profitability. Workplace diversity can also involve challenges, however, such as interpersonal conflicts and communication problems. Many companies employ diversity training programs, also referred to as cross-cultural training programs, to help employees overcome such issues. Such courses should rely on evidence-based training and be accompanied by structural and organizational changes that encourage diversity and open communication. Courses alone may fail to achieve their goals, and poorly managed courses may even strengthen bias.

MAIN IDEAS

  • A diverse workplace is made up of employees with varying life experiences, personal characteristics, skills, and perspectives.
  • Gender, cultural, and ethnic diversity are all associated with corporate profitability. Diverse teams are believed to be better equipped to recognize and react to new business opportunities.
  • Diversity initiatives can help companies attract more applicants, increasing the likelihood of hiring qualified and productive employees.
  • Diversity training programs must use evidence-based techniques and be paired with structural and organizational changes to be effective.
  • Critics of workplace diversity initiatives maintain that these programs encourage companies to hire and promote based on personal traits rather than on qualifications or job performance.
  • Employees with a higher sense of workplace belonging have a 56 percent increase in job performance and are less likely to leave the organization.

BENEFITS OF A DIVERSE WORKPLACE

Developing a diverse workforce begins with employee recruitment. Companies that pursue diversity in hiring gain access to a larger pool of applicants, which increases their chances of attracting the most qualified and productive employees. One 2019 ZipRecruiter survey found that workforce diversity was an important consideration for 86 percent of job seekers when weighing their employment options.

Working for a company that values diversity and inclusion can increase employee engagement and promote retention of talent. Diversity can also improve employees’ sense of belonging in the workplace. According to the Harvard Business Review, employees with a high sense of workplace belonging have a 56 percent increase in job performance and are 50 percent less likely to leave the organization.

By building teams of people with differing backgrounds, skills, knowledge, perspectives, and ways of thinking, companies can benefit from a vigorous exchange of ideas that improves group decision-making and performance. Studies have found that diversity promotes innovation and creative approaches to problem-solving. Homogeneous groups, in contrast, have a tendency to fall victim to groupthink, a phenomenon in which internal pressures to maintain group coherence and harmony discourage critical thinking and result in poor decisions.

Diverse groups are also better equipped to recognize and react to new business opportunities—especially international opportunities—by including members with local market knowledge, foreign language skills, and cultural sensitivity. Perhaps the most compelling reason for companies to prioritize workplace diversity is to increase sales and maximize profits. Diversity within organizations is key to gaining insight into markets and serving a broad range of customers. A number of studies have found parallels between workplace diversity and financial success. According to McKinsey & Company, a management consulting firm, companies in the top quartile for gender diversity on executive teams are 21 percent more likely to be profitable, while companies with more culturally and ethnically diverse executive teams are 33 percent more likely to see better-than-average profits.

Gender diversity has been shown to benefit company performance. A 2020 report by McKinsey & Company found that companies with more than 30 percent women executives were more likely to outperform companies with fewer women in leadership. Some businesses, such as Procter & Gamble Company (P&G), have created gender equality initiatives. In 2017 P&G launched its #WeSeeEqual campaign to advocate for gender equality in workplaces. The company has prioritized achieving fifty-fifty gender representation within its workforce. Other companies, such as IBM, have prioritized making the workplace more accessible for people with disabilities. In 2017 IBM launched the Ignite ASD program, a training and employment placement initiative for people with autism spectrum disorders.

CHALLENGES OF CREATING A DIVERSE WORKPLACE

Workplace diversity also involves challenges and potential pitfalls. Diverse workforces and teams necessarily bring people together who have differing communication styles, interpersonal skills, and concepts of office etiquette. Without diversity training to help increase cultural awareness and understanding, such teams can struggle to communicate effectively and develop functional group dynamics.

The framing of diversity policies makes a difference in their effectiveness. A 2019 Harvard Business Review study found that workplace policies receive less support when they are framed as a means to improve racial diversity. In the study, almost 50 percent of the respondents supported a diversity policy when it was described as an effort to address discrimination; only 38 percent supported the policy when it was framed as an effort to improve diversity.

Many companies seeking to attract and retain a diverse workforce adopt policies or create mission statements expressing their commitment to diversity. Although the goal of these policies is to reduce discrimination and promote a fair work environment for women and minorities, research suggests that they may have the opposite effect. Members of dominant groups often assume that the system is fair because the company has a diversity policy in place, which may lead them to discount instances or patterns of discrimination.

Despite widespread implementation of diversity programs and initiatives, women and minorities still face hurdles when applying for jobs as well as challenges once they are hired. Studies show that minority applicants who mask their race on their resumes have better success at getting job interviews. Researchers at Harvard Business School found that companies were more than twice as likely to call minority applicants for interviews if the applicants had removed references to their race from their resumes.

Even when workplace diversity initiatives do result in hiring minority workers, some companies fail to create inclusive work environments in which these workers feel valued, respected, included, and safe. Many minority workers experience pressure to conform to majority styles of dress, manners of speaking, and standards of demeanor. In 2019 the Harvard Business Review found that African Americans often feel like outsiders at work and people of color overall report that they feel the need to project a workplace identity that differs from their authentic selves. This situation causes minority employees to feel alienated from coworkers and disengaged from the corporate culture, which reduces their loyalty to the company and their job performance.

CRITIQUES OF DIVERSITY POLICIES AND INITIATIVES

Diversity training can be expensive, and some programs are poorly designed and ineffective. Research shows, for example, that mandatory diversity training in which participants feel pressured to agree with specific criticisms of racial prejudice can strengthen employee bias and increase animosity toward underrepresented groups.

Some critics of workplace diversity policies and initiatives claim that such initiatives constitute a modern-day version of quotas for hiring and promoting employees with certain traits. They argue that giving minority workers special treatment is a form of discrimination that takes opportunities away from people who may possess stronger qualifications or demonstrate superior job performance. They contend that this reverse discrimination often contributes to resentment and hostility toward minority workers.

Some critics of diversity initiatives contend that statistics on promotion rates may not tell the whole story, especially when it comes to women in the workplace. Although women tend to advance in their careers at a slower pace than men, and the gender pay gap widens as women are promoted up the career ladder, critics argue that the difference should not be attributed to women facing discrimination and barriers to advancement. Instead, they claim that many women make choices that negatively affect their career trajectories because they prioritize family and household responsibilities or seek a better work-life balance. To support women’s role in the workplace and improve gender diversity, experts recommend parental-leave policies for both women and men as well as systems to help support and educate managers and staff about parental leave.

CRITICAL THINKING QUESTIONS
  • Do you agree with critics of workplace diversity initiatives that such programs are a form of reverse discrimination? Why or why not?
  • What steps, if any, should companies take to create work environments in which all workers feel included and safe?
  • In your opinion, could the conditions created by COVID-19 pandemic measures have a positive impact on disability inclusion in the workplace? Explain your reasoning.

IMPACT OF THE COVID-19 PANDEMIC

In 2020 the novel coronavirus (COVID-19) pandemic pushed a larger portion of the population to work remotely and led to layoffs in many industries. May 2020 research by McKinsey & Company explored whether inclusion and diversity efforts would be deprioritized by organizations during the crisis. McKinsey research found that seven million jobs held by Black Americans were threatened by temporary furloughs, layoffs, or reductions in hours or pay during the crisis. Women were also at a higher risk of suspensions, layoffs, or reduced hours during the pandemic. Women were more likely than men to take time off from work or leave their position during the pandemic because of the need to take care of children and other family members.

However, the pandemic forced many companies to take measures to be more accommodating to the diverse needs of their employees. For example, some companies began offering more flexible work schedules and virtual work options. The expansion of remote work and virtual recruiting improved accessibility for job seekers with disabilities, which could lead to more disability inclusion in the workplace.

The police killing of George Floyd in May 2020 sparked nationwide protests about police brutality and racial inequality. In response, many US companies announced new diversity initiatives and many pledged to make their office cultures more inclusive. For example, Adidas pledged that it would fill at least 30 percent of its open positions with Black or Latinx candidates. Similarly, Estée Lauder pledged to make the percentage of Black employees in its company match the percentage of Black Americans in the US population.

Government Stimulus Packages

What Are Government Stimulus Packages?

In times of economic crisis, the United States government may use a variety of stimulus techniques to improve the economy and support struggling businesses and citizens. Stimuli can come in different forms, including tax credits, lower interest rates, or direct monetary payments.

Government economic stimulus programs are often controversial. They usually represent a significant expenditure on short notice, which requires that the government borrow money and increase the national debt. Some economic theorists and politicians also believe that business is a matter of “survival of the fittest” and that failing banks and businesses should be allowed to go under. However, others argue that the fallout from such failures causes too much financial pain and disruption to the many people who lose their jobs, savings, and homes in the process, and that the government should cushion these blows and reduce the effects of massive economic disruptions.

In March of 2020, the US Congress passed the largest economic stimulus package in American history in response to economic disruption caused by the novel coronavirus disease 2019 (COVID-19) pandemic.

PROS AND CONS OF PROVIDING LOANS AND CASH TO BUSINESSES DURING AN ECONOMIC CRISIS

Pros

  • Government stimulus packages allow essential businesses to continue to provide needed services without interruption. These companies can be protected from unforeseen financial calamity.
  • Providing companies with loans and other financial support during an economic crisis allows them to retain workers. Without this support, these workers could be laid off and become dependent on state services.
  • During a crisis, obstacles can emerge that prevent the efficient flow of money and other capital, creating additional financial strain. Government stimulus packages may help maintain capital flow.

Cons

  • Government bailouts require ordinary taxpayers to subsidize the failings of wealthy corporations. Company executives often use these funds to enrich themselves.
  • Companies that are not managed well enough to survive an economic crisis are weak. By receiving government support and remaining in the market, they prevent better-managed companies from assuming their rightful place.
  • Government support for companies during an economic crisis may lead to inefficient state-run industries, placing them in the hands of government officials who may be motivated by politics.

PRIMING THE PUMP

The US economy experiences regular cycles of expansion and contraction. During a period of negative growth, demand for goods and services falls as consumers and businesses cut back spending. Businesses may stop hiring, lay off employees, or go bankrupt and close. People spend less as they become unemployed, work for reduced wages, or are nervous about job loss. An economic downturn affects the entire supply chain: lower sales lead to production cutbacks by manufacturers, who buy fewer raw materials. If price and wage decreases do not improve spending and hiring, the downturn can become a deflationary spiral.

Fiscal stimulus refers to government spending that aims to halt this decline by giving people and businesses money to spend, “priming the pump” for renewed growth. Fiscal stimulus focuses on increasing demand. This differs from monetary stimulus, which emphasizes the supply side of the supply-and-demand cycle by putting more money into circulation. The Federal Reserve enacts monetary stimulus through actions that include lowering interest rates and initiating quantitative easing, where the Federal Reserve directly purchases government bonds or other financial assets.

Before the 1930s, the US federal government avoided intervening in recessions, in the belief that market cycles would naturally resolve on their own. However, after the 1929 stock market crash, the economy continued to contract for years with no signs of recovery. When Franklin Delano Roosevelt (1882–1945) became President in 1933, he and his advisors embraced the thinking of British economist John Maynard Keynes (1883–1946). Keynes argued that economic health was the product of the amount of spending in the economy, and that governments can and should spend money to restart growth.

Roosevelt’s New Deal was the first large federal stimulus program. Among other economic reforms, the New Deal created hundreds of projects that provided jobs ranging from road and dam construction to creating art and literature.

Between 2007 and 2008, a massive recession was triggered by a banking crisis caused by mortgage defaults. As banks failed, the stock market plummeted, and unemployment rose rapidly. Congress responded with two large stimulus programs in 2008 and 2009. The Troubled Asset Relief Program (TARP) was passed as part of the Emergency Economic Stabilization Act of 2008 and focused most of its $700 billion in aid on stabilizing large corporations. While TARP staved off complete economic breakdown, unemployment kept increasing and economic contraction continued.

To restore economic growth, a second massive stimulus package, the American Recovery and Reinvestment Act of 2009 (ARRA), was passed in February 2009. ARRA allotted approximately $787 billion across a wide range of programs, including infrastructure and renewable energy projects, tax incentives, and aid to individuals through extended unemployment benefits and direct cash payments. ARRA is credited with halting the decline and initiating recovery.

In December 2017, Congress passed the Tax Cuts and Jobs Act of 2017 to further stimulate the economy through a complex set of individual and corporate tax cuts and incentives. A modest burst of economic growth followed, though critics argue that large corporations and the very wealthy reaped most of the benefits. Over the following two years, the act substantially increased the federal debt as tax revenues were sharply reduced. This trend was exacerbated in 2020 by the impact of the COVID-19 pandemic.

CORPORATE BAILOUTS AND SUPPORT FOR BUSINESS

In the twenty-first century, US fiscal stimulus packages have contained significant expenditures intended to help businesses, large and small. In the weeks after the terrorist attacks on September 11, 2001, airlines experienced a crippling loss of business while at the same time facing expensive requirements for procedure changes and equipment upgrades to protect from future attacks. On September 22, 2001, Congress passed the Air Transport Safety and System Stabilization Act, which provided five billion dollars in immediate cash assistance and ten billion dollars in guaranteed loans for US-based air carriers in danger of immediate bankruptcy.

Though their risky loan practices were believed to have caused the 2008 economic crisis, giant banks such as Goldman Sachs and Bank of America threatened to take the entire financial system down with them if they collapsed. Because they were considered “too big to fail,” much of TARP focused on shoring up these banks and American International Group (AIG), the insurance firm that had insured many of the defaulting loans, and on making credit available to the many other businesses needing loans to remain afloat. The bailout also assisted some private companies, including automobile giants General Motors and Chrysler, with approximately $15 billion in funds for restructuring. It also included money for housing lenders Fannie Mae and Freddie Mac. Much of the money was given as grants, loans, or investments in companies. Over the following decade, the majority of companies returned the money to the federal government through payments of principal and interest on loans or dividends on profits. In 2020 ProPublica reported that the government had made a $121 billion profit on its TARP outlays.

The 2020 COVID-19 pandemic created a unique situation in which the lockdowns to prevent the spread of disease forced businesses to suspend or drastically cut back operations virtually overnight. Unlike previous bailouts where poor business decisions were believed to be at least partially responsible for the crises, in this case there was little debate that government assistance was required to keep businesses from going under and turning temporary employee layoffs into long-term unemployment for millions of workers.

The Coronavirus Aid, Relief, and Economic Security (CARES) Act passed by Congress in March 2020 therefore allocated $500 billion in loan guarantees and grants for large corporations, including approximately $32 billion for airlines and the air transport sector. Around $377 billion was directed at businesses with fewer than 500 employees, including approximately $10 billion in direct grants of up to $10,000 each to cover immediate operating costs and around $350 billion in forgivable loans, provided that businesses used the bulk of the money to keep employees on the payroll.

The Paycheck Protection Program (PPP) faced controversy as applications overwhelmed available funding within days. Though the program was meant to help small businesses keep workers employed, many large organizations applied for and received the loans. High-profile recipients such as the Los Angeles Lakers, Harvard Bioscience, Shake Shack, and Ruth’s Hospitality Group, owner of the Ruth’s Chris Steak House restaurant chain, faced backlash when their loans were revealed and responded by returning the funds. However, other large companies kept the money with varying justifications.

CRITICAL THINKING QUESTIONS

  1. What are some different ways that economists believe the government can stimulate the economy?
  2. Under what circumstances, if any, do you think the government should intervene in the affairs of private businesses to stimulate the economy? Explain your response.
  3. In your opinion, are direct cash payments an efficient way to stimulate the economy? Why or why not?

DIRECT CASH PAYMENTS

Lawmakers often emphasize indirect forms of economic stimulus like tax cuts and credits, but these programs can take months or even years to put money in people’s hands, and usually benefit a narrow segment of the population. For example, “green rebates” that offer tax credits for replacing old appliances or furnaces with energy-efficient upgrades are only helpful for people who own their homes and have money to spend on improvements.

On the other hand, direct cash payments in the form of unemployment benefits, welfare programs like the Supplemental Nutrition Assistance Program (SNAP), or stimulus checks provide funds quickly and reach more people. Moreover, researchers have found that people with lower incomes tend to spend direct payments immediately. This multiplies the impact, as a single payment to one person passes on to community businesses, which in turn can use it to pay employees and suppliers, creating economic activity far beyond the initial recipient. In contrast, funds directed at corporations or wealthier individuals often go into savings or long-term investments, providing little to no stimulus to the overall economy.
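To make the multiplier intuition concrete, here is a minimal Python sketch; the marginal propensity to consume (MPC) values are illustrative assumptions, not figures from this article:

```python
# Illustrative sketch of the fiscal "multiplier": each recipient spends a
# fraction (the MPC) of what they receive, and that spending becomes
# someone else's income in the next round.

def total_stimulus(initial_payment: float, mpc: float, rounds: int = 50) -> float:
    """Total economic activity generated across successive spending rounds."""
    total = 0.0
    spending = initial_payment
    for _ in range(rounds):
        total += spending
        spending *= mpc   # the fraction re-spent in the next round
    return total

# A household that spends 90% of new income generates far more activity
# than a high saver who spends only 30%.
print(round(total_stimulus(1200, mpc=0.9)))  # ~11938, approaching 1200/(1-0.9) = 12000
print(round(total_stimulus(1200, mpc=0.3)))  # ~1714,  approaching 1200/(1-0.3)
```

The geometric-series limit, payment / (1 - MPC), is why economists say a dollar given to someone who spends it quickly "multiplies" through the economy.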

In 2001 Congress passed a tax reform package, the Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA), which included a retroactive rebate on income taxes paid in 2000. The rebate was distributed in the form of checks of $300 for individual filers without dependents, $500 for single parents, and $600 for married joint filers. These direct payments, initially intended to counter the deepening recession in the summer of 2001, took on heightened significance when the economy declined steeply following the September 11 terrorist attacks.

The 2009 ARRA stimulus package included several cash payment measures, including expanded unemployment coverage and lump-sum cash payments for Social Security and disability recipients and disabled veterans.

The CARES Act of 2020 contained several direct cash payment provisions. It extended unemployment coverage to thirty-nine weeks, adding thirteen weeks to the coverage period of most states, and increased payments by $600 per week through the end of July 2020. In addition, the law provided Economic Impact Payments of $1,200 for individuals and $2,400 for married couples, with $500 additional for each dependent child, directed mainly at people earning less than $75,000 per year for individuals or $150,000 for families.
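The payment rules above reduce to a short calculation. The sketch below is illustrative only: the base amounts and income thresholds come from this paragraph, while the gradual phase-out (payments shrinking by $5 for every $100 of income over the threshold) reflects how the law tapered payments rather than cutting them off abruptly:

```python
# Illustrative sketch of the CARES Act Economic Impact Payment described
# above. Base amounts and thresholds are from the paragraph; the 5% taper
# reflects the law's phase-out of $5 per $100 of income over the threshold.

def economic_impact_payment(agi: float, married: bool, children: int) -> float:
    base = 2400.0 if married else 1200.0   # payment per individual or couple
    base += 500.0 * children               # $500 per dependent child
    threshold = 150_000 if married else 75_000
    if agi <= threshold:
        return base
    # Above the threshold, the payment shrinks gradually instead of
    # stopping at a hard cutoff.
    return max(0.0, base - 0.05 * (agi - threshold))

print(economic_impact_payment(60_000, married=False, children=0))   # 1200.0
print(economic_impact_payment(160_000, married=True, children=2))   # 2900.0
```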

In the 2010s, interest grew in Universal Basic Income (UBI) programs. Rather than widespread one-time payments during economic crises and an ongoing patchwork of various welfare programs, UBI plans propose providing all citizens with permanent direct monthly payments large enough to cover basic living expenses, with no work requirements and no conditions on how the money is spent. Proponents such as entrepreneur Andrew Yang (1975–) and Facebook founder Mark Zuckerberg (1984–) argue that UBI is necessary to counteract current economic inequality and the potential mass unemployment that could result as automation and artificial intelligence replace human workers in many sectors of the economy.

Coronavirus – What You Should Know

What are Coronaviruses?

Coronaviruses (abbreviated CoVs) are a family of enveloped, positive-sense single-stranded RNA viruses that infect both humans and animals. Coronaviruses got their name because their outer proteins look like a crown (“corona” means crown in Latin). There are four main subgroups of coronaviruses: alpha, beta, gamma, and delta.

Coronaviruses that can infect humans (called human coronaviruses) were first identified in the mid-1960s. There are seven known human coronaviruses: four common types—229E, NL63, OC43, and HKU1—and three severe types—Middle East respiratory syndrome (MERS)-CoV; severe acute respiratory syndrome (SARS)-CoV; and the 2019 novel coronavirus (SARS-CoV-2), which causes coronavirus disease 2019 (COVID-19). Infection with the four common coronaviruses is common worldwide, especially in young children and during cold and flu season.

Many viruses have animal reservoirs, animals that the virus can live in without causing symptoms or disease. Those animal reservoirs can then pass the virus along to other animals and even people, causing illness. According to the World Health Organization (WHO), bats are a common animal reservoir for multiple coronaviruses, including SARS-CoV and MERS-CoV. Bats can infect other animals, which can go on to infect humans and make them sick.

THE LATEST STATISTICAL DATA ON THE COVID-19 PANDEMIC

Because the numbers of cases and deaths change by the hour, the best places to get the latest information are reputable public health websites that collate data in real time, such as those maintained by the WHO and the U.S. Centers for Disease Control and Prevention (CDC).

Types of Coronaviruses

1. SEVERE ACUTE RESPIRATORY SYNDROME CORONAVIRUS (SARS-COV)

SARS-CoV originated in southern China in November 2002. It caused a worldwide outbreak between 2002 and 2003, resulting in 8,098 probable cases and 774 deaths (9.6% mortality). No SARS cases have been reported since 2004.

SARS-CoV was transmitted from horseshoe bats to civet cats, which are small, lean nocturnal mammals. Infected civets were being sold for meat at local markets in southern China, and this is how humans became infected with SARS-CoV.

SARS-CoV is readily spread person-to-person through close contact via respiratory droplets produced when someone who is sick sneezes or coughs. SARS-CoV can also survive in urine and feces for more than 2 days at room temperature, according to the World Health Organization (WHO). Most person-to-person transmission occurred in health care settings when adequate infection control measures were not followed.

2. MIDDLE EAST RESPIRATORY SYNDROME CORONAVIRUS (MERS-COV)

MERS-CoV originated in Saudi Arabia in 2012 and caused an outbreak localized to countries around the Arabian Peninsula. MERS infections are still occurring, with new cases confirmed by the WHO as recently as January 13, 2020. Between 2012 and January 15, 2020, the WHO reported 2,506 cases and 862 deaths (34.4% mortality).
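As a quick check, the mortality percentages quoted for SARS and MERS follow directly from the case and death counts given above; a minimal Python sketch:

```python
# Case-fatality arithmetic behind the percentages quoted above.
def case_fatality_rate(deaths: int, cases: int) -> float:
    """Deaths as a percentage of reported cases."""
    return 100 * deaths / cases

print(f"SARS: {case_fatality_rate(774, 8098):.1f}%")  # SARS: 9.6%
print(f"MERS: {case_fatality_rate(862, 2506):.1f}%")  # MERS: 34.4%
```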

MERS-CoV spread to humans from dromedary camels (also called Arabian camels, which have one hump on their back), which can carry the virus without getting sick. People can get sick with MERS if they come in contact with camels that carry MERS-CoV, eat undercooked infected camel meat, drink unpasteurized infected camel milk, or drink infected camel urine (which is considered a medicine for various illnesses in parts of the Middle East).

There is limited evidence of person-to-person MERS-CoV transmission, with most such transmission happening after close contact with severely ill people in health care or household settings.

3. 2019 NOVEL CORONAVIRUS (SARS-COV-2) AND COVID-19

SARS-CoV-2, the newest human coronavirus, is currently causing a global pandemic of COVID-19. The virus is believed to have originated in Wuhan, China, though this is not a settled matter; COVID-19 was first reported in December 2019.

Most of the people originally infected were sellers at the Huanan Seafood Wholesale Market in Wuhan, China, who sold live or recently killed fish, animals, and birds. The seafood market was quickly closed for cleaning and disinfection. Officials have yet to determine the animal that people got sick from.

SARS-CoV-2 is readily transmitted from person-to-person via respiratory droplets expressed when someone who is sick coughs, sneezes, talks, sings, and so on.

How Is Coronavirus Transmitted?

Like other respiratory illnesses, most coronaviruses are transmitted via respiratory droplets when someone coughs or sneezes.

In late March and early April 2020, mounting research highlighted that a “significant portion” of people with COVID-19 lack symptoms. These people may unknowingly spread the disease due to their lack of symptoms (asymptomatically) or may spread the disease before they develop symptoms (pre-symptomatically). This prompted the CDC to recommend that the general public wear cloth face coverings in public where social distancing is difficult to maintain, such as grocery stores and pharmacies.

The basic reproduction number (written R0 and pronounced “R-naught”) of a virus is the average number of people that one sick person infects. The higher the number, the more infectious the virus. In general, a number greater than 1 means an epidemic will grow. While the basic reproduction number is a constantly revised estimate based on the latest information, the WHO estimates a preliminary R0 of 2.0–2.5 for COVID-19. This means that, on average, each person infected with COVID-19 infects 2 to 2.5 others. For comparison, the R0 of SARS-CoV was estimated at 2–3, and that of common influenza strains is around 1.3.
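To illustrate why an R0 above 1 matters, here is a toy generation-by-generation sketch in Python. The R0 values come from the paragraph above; the single initial case and ten-generation horizon are illustrative assumptions, and the model ignores immunity and interventions, so it shows early-growth intuition rather than a forecast:

```python
# Toy model: with R0 > 1, each "generation" of infections is larger than
# the last, so cumulative cases grow roughly geometrically at first.

def cumulative_cases(r0: float, generations: int, initial_cases: int = 1) -> float:
    """Cumulative infections if every case infects r0 others per generation."""
    total = float(initial_cases)
    current = float(initial_cases)
    for _ in range(generations):
        current *= r0     # new cases produced this generation
        total += current
    return total

print(round(cumulative_cases(2.5, 10)))  # ~15894 cases after 10 generations
print(round(cumulative_cases(1.3, 10)))  # ~56 with a flu-like R0
```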

How Can You Prevent Coronavirus Infection?

To prevent coronavirus infection, you should follow the same preventive measures you do for other respiratory viruses like the common cold and flu, such as:

  • Wash hands regularly and thoroughly with soap and water for 20 seconds
  • Use an alcohol-based hand sanitizer when you cannot wash your hands (or in addition to hand washing)
  • Don’t touch your eyes, nose, and mouth
  • Clean and disinfect objects regularly
  • Avoid close contact with people who are sick
  • Stay home if you have symptoms
  • Get your flu shot

Due to the active community transmission of SARS-CoV-2 in the U.S. (and other countries around the world), the following precautions should also be followed to prevent the spread of SARS-CoV-2 and COVID-19.

If you are healthy and in an area of active transmission:

  • Practice social distancing by minimizing contact with others and staying at least 6 feet away from others
  • Avoid crowds and public places—stay home as much as possible (work and complete schoolwork at home)
  • Avoid discretionary travel, shopping, and social visits
  • Avoid eating in restaurants and bars—use the drive-thru, pickup or delivery instead
  • Avoid visiting nursing homes, retirement facilities, and long-term care facilities
  • Wear a cloth face mask (but be sure to use it properly: wash your hands before putting it on, keep the mask over your mouth and nose while in public, never touch the front of the mask, and wash the mask after every use).

If you are in the following high-risk groups:

  • If you are an older person or a person with underlying health conditions (such as diabetes, heart disease, asthma, a weakened immune system, etc.), stay home and avoid other people.
  • If you or anyone in your household is sick or has tested positive for COVID-19, keep the entire household at home and call your medical provider—do not go to work or school.

There are currently no licensed human coronavirus vaccines. A SARS-CoV vaccine was created after the 2002 outbreak, but never used due to the outbreak being controlled before the vaccine was fully developed. Vaccines are currently being developed against both MERS-CoV and SARS-CoV-2.

What Are the Symptoms of Coronavirus?

Symptoms of a common coronavirus infection resemble those of a cold or upper respiratory infection, such as:

  • Stuffy nose
  • Runny nose
  • Headache
  • Cough
  • Sore throat
  • Fever
  • General malaise (feeling unwell)

The SARS-CoV, MERS-CoV, and SARS-CoV-2 strains can cause more severe symptoms and lower respiratory tract infections, such as pneumonia or bronchitis. Severe infections are more common in older people, infants, people with weakened immune systems, and people with underlying conditions such as heart disease.

SARS symptoms include:

  • Fever and chills
  • Body aches
  • Diarrhea
  • Other pneumonia symptoms (mucus-producing cough, shortness of breath, chest pain while coughing, nausea, vomiting, weakness)

MERS symptoms include:

  • Fever and chills
  • Cough
  • Shortness of breath
  • Other pneumonia symptoms (mucus-producing cough, chest pain while coughing, nausea, vomiting, diarrhea, weakness)

COVID-19 symptoms. According to the CDC, people may have COVID-19 if they experience cough and shortness of breath or difficulty breathing, or if they have at least two of these symptoms:

  • Fever and chills
  • Repeated shaking with chills
  • Muscle pain
  • Headache
  • Sore throat
  • New loss of taste or smell

How to Diagnose Coronavirus?

Laboratory tests on blood or respiratory specimens (sputum or bronchoalveolar lavage) can detect the presence of a coronavirus by looking for its genetic sequence. The CDC began shipping SARS-CoV-2 diagnostic kits to qualified U.S. and international laboratories on February 4, 2020. As of March 30, the FDA had granted more than 20 diagnostic emergency use authorizations (EUAs) for SARS-CoV-2 tests, including kits from LabCorp, Quest Diagnostics, Roche, Cepheid (rapid point-of-care), and Abbott Laboratories (rapid point-of-care).
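In practice, these tests work chemically: RT-PCR amplifies short, virus-specific regions of the genome until they become detectable, rather than running any software search. Conceptually, though, the logic is simple to express in code: a sample is “positive” if the virus’s signature sequences appear in the genetic material recovered from it. The Python sketch below is a toy illustration only; the marker and sample strings are invented, not real SARS-CoV-2 primer sequences.

    # Toy illustration of sequence-based detection: call a sample
    # "positive" if every virus-specific marker sequence occurs in it.
    # Marker and sample strings are invented for illustration; real
    # assays use RT-PCR with validated primer/probe sets.

    MARKERS = ["ATGGGTTAC", "CCGATAAGC"]  # hypothetical viral markers

    def detect_virus(sample_sequence, markers=MARKERS):
        """Return True if all markers occur in the sample's sequence."""
        sample = sample_sequence.upper()
        return all(marker in sample for marker in markers)

    if __name__ == "__main__":
        read = "TTACCGATAAGCTTATGGGTTACGG"  # made-up sequencing read
        print("positive" if detect_virus(read) else "negative")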

What Is the Treatment for Coronavirus? (Updated)

There are currently no approved coronavirus-specific antiviral treatments. Most coronavirus infections are treated the same way as the common cold: rest, fluids, and over-the-counter medications for fever, sore throat, and congestion. More seriously ill patients may require hospitalization for supportive care to relieve symptoms and keep their organs functioning properly, such as a ventilator to assist with breathing.

What Drugs are Under Development? (Updated)

SARS-CoV

There are currently no approved SARS-CoV-specific antivirals. There are also none in development because the outbreak was stopped by adequate infection control measures.

MERS-CoV

Monoclonal antibodies derived from a person who recovered from MERS are being pursued as MERS-CoV-specific antibody drugs.

A Phase 1 trial showed that a combination of two other monoclonal antibodies, called REGN3048 and REGN3051, was safe and well-tolerated in healthy adults.

Another Phase 1 clinical trial, conducted by the National Institute of Allergy and Infectious Diseases (NIAID), showed that an experimental anti-MERS-CoV antibody, called SAB-301, was safe and well-tolerated in healthy adults. According to the NIAID, a Phase 2/3 trial is being planned for areas where MERS-CoV is still endemic, such as Saudi Arabia.

SARS-CoV-2

Currently available broad-spectrum antiviral drugs are being repurposed to test whether they are effective against SARS-CoV-2. Corticosteroids, hydroxychloroquine (and the related chloroquine), Gilead’s antiviral drug candidate remdesivir, and AbbVie’s HIV drug Kaletra (lopinavir/ritonavir) are among the many drugs being studied in people with COVID-19. The first U.S. COVID-19 drug clinical trial, studying Gilead’s remdesivir in COVID-19 patients, began on February 25, 2020. In March 2020, the WHO launched the global “Solidarity” clinical trial to test remdesivir; hydroxychloroquine and chloroquine; lopinavir/ritonavir; and lopinavir/ritonavir plus interferon beta-1a in COVID-19 patients worldwide.

On June 15, the FDA revoked the EUA allowing hydroxychloroquine and chloroquine to be used to treat COVID-19, reporting that “these medicines showed no benefit for decreasing the likelihood of death or speeding recovery.” On July 4, the WHO discontinued its trials of hydroxychloroquine and lopinavir/ritonavir for the same reasons.

Companies such as Sanofi, Regeneron, Lilly (in collaboration with AbCellera), Ascletis, Takeda, and Vir Biotechnology are working to develop COVID-19-specific drugs.

Clinical trials treating COVID-19 patients with convalescent plasma (blood plasma taken from COVID-19 patients who have recovered) and hyperimmune globulin (a standardized, more concentrated preparation derived from convalescent plasma) began in the U.S. in late March 2020.

What Vaccines Are Available and Under Development?

SARS-CoV

A SARS-CoV vaccine was developed after the 2002 SARS outbreak. However, it was never used, because the outbreak was brought under control and the virus’s spread was halted before the vaccine was ready. It took about 20 months from the release of the SARS-CoV genome data until the vaccine was ready for human trials.

MERS-CoV

In July 2019, results from a Phase 1 clinical study showed that Inovio Pharmaceuticals and GeneOne Life Science’s DNA vaccine candidate, called GLS-5300, was safe and well-tolerated in healthy adult volunteers. After two doses of the vaccine candidate, more than 85% of the 75 participants showed a detectable immune response to MERS-CoV comparable to that of MERS survivors.

Another Phase 1 clinical trial started in December 2019 in Saudi Arabia, studying the safety of another vaccine candidate, called chimpanzee adenovirus Oxford 1 (ChAdOx1), in healthy adults. The vaccine is made from a chimpanzee adenovirus that cannot replicate and that expresses a MERS-CoV protein.

SARS-CoV-2

The National Institute of Allergy and Infectious Diseases (NIAID) and Moderna are collaborating to develop an mRNA vaccine. In January 2020, the company said it hoped to have a COVID-19 vaccine ready for human trials within three months, which would be record-breaking speed. Moderna launched a Phase 1 clinical trial of its COVID-19 vaccine in March 2020, ahead of that already breakneck schedule.

The Coalition for Epidemic Preparedness Innovations (CEPI) is partially funding the NIAID–Moderna collaboration as well as two other vaccine development projects, one at the University of Queensland in Australia and one with Inovio Pharmaceuticals. Inovio launched a Phase 1 clinical trial of its COVID-19 vaccine in early April 2020, the third COVID-19 vaccine trial to launch, after Moderna and the China-based company CanSinoBio launched their Phase 1 vaccine trials in mid-March 2020.

NIAID is also collaborating with Oxford University to develop an adenovirus-vectored vaccine against COVID-19. Other companies that have joined the COVID-19 vaccine development race include Johnson & Johnson, Sanofi, GlaxoSmithKline (GSK), Pfizer in collaboration with BioNTech, Vaccitech, Hoth Therapeutics, and Arcturus Therapeutics.