Government roles

I. The Reach of Regulation

In the life cycle of an American business, the first step is the least regulated of all. An entrepreneur seeking to form a new business need only register the company and record it with state tax authorities. Those entering specific occupations may need licenses or certifications, but no permission is required to create a company.

Another set of laws and rules governs the balance between the rights of employees to keep their jobs and the rights of employers to fire workers who aren’t performing acceptably. The rules favor the employer. In most U.S. states, people are considered “at will” employees, meaning they can be discharged whenever the employer chooses, except in specific situations where workers’ rights are protected. People may not be fired because of their race, religion, gender, age, or sexual preference, although terminated employees must show that they were wrongfully discharged if they want to recover their jobs. The federal Equal Employment Opportunity Commission, created in 1965, can sue employers to defend workers against unjust firing.

A federal whistle-blower law protects employees who disclose their employers’ illegal activities. If an employer has cheated the federal government, a whistle-blower may receive between 15 and 30 percent of the money the government recovers because of the company’s wrongful conduct. In one exceptional case, a former sales manager of a leading U.S. drug company received $45 million in 2008 as his share of the payment by the company that settled a federal investigation into alleged improper marketing of drugs widely used in the government’s Medicaid program for low-income patients.
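
Because the whistle-blower’s share is a fixed percentage band, the numbers in such cases can be sanity-checked. The sketch below (in Python; the award figure comes from the case above, and everything else is simple arithmetic on the 15-to-30-percent band described in the text) shows the total recovery a $45 million award implies:

    # Back-of-the-envelope check on the $45 million award described above.
    # Under the 15-30 percent band, the whistle-blower's award brackets the
    # total amount the government must have recovered from the company.
    award = 45_000_000  # whistle-blower's share, in dollars

    implied_recovery_min = award / 0.30  # if the award was at the 30% maximum
    implied_recovery_max = award / 0.15  # if the award was at the 15% minimum
    print(f"implied recovery: ${implied_recovery_min:,.0f} to ${implied_recovery_max:,.0f}")
    # -> implied recovery: $150,000,000 to $300,000,000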

For more than a century, Americans have debated how far the federal government should go to prevent dominant companies from undermining economic competition. Regulation of businesses has usually been of one of two types. Economic regulations have tried to combat abuses by monopolies and, at times, establish “fair” prices for specific commodities. Social regulations aim to protect the public from unsafe food or drugs, for example, or to improve the safety of motorists in their cars.

Federal regulation arrived with the railroad age in the 19th century. The power of railroad owners to set interstate shipping rates to their advantage led to widespread complaints and protests about discriminatory treatment that favored some customers and penalized others. In response, the Interstate Commerce Commission, the United States’ first economic regulatory agency, was created in 1887. Congress gave it the authority to determine “reasonable” maximum rates and require that rates be published to prevent secret rate agreements.

The ICC set a pattern that would be followed by other federal regulatory agencies. Its commissioners were full-time regulators, expected to make independent, fact-based decisions, and it played an influential role for nearly a century before its powers were reduced in the movement toward government deregulation. The agency was abolished in 1995.

Another early regulatory agency was the Federal Trade Commission, established in 1914. It shared antitrust responsibility with the U.S. Justice Department for preventing abuses by powerful companies that could dominate their industries either singly or acting with other companies. By the end of the 19th century, the concerns about economic power had focused on a series of dominant monopolies that controlled commerce in industries as diverse as oil, steel, and tobacco, and whose operations were often cloaked in secrecy because of hidden ownership interests. The monopolies typically took the form of “trusts,” with shareholders giving control of their companies to a board of trustees in return for a share of the profits in the form of dividends.

More than 2,000 mergers took place between 1897 and 1901, when Theodore Roosevelt became president and began his campaign of trust-busting against the “malefactors of great wealth,” as he called the business tycoons he targeted. Under Roosevelt and his successor, President William Howard Taft, the federal government won antitrust lawsuits against most of the major monopolies, breaking up more than 100 trusts, including John D. Rockefeller’s Standard Oil trust; J.P. Morgan’s Northern Securities Company, which dominated the railroad business in the Northwest; and James B. Duke’s American Tobacco trust.

Congress in 1898 gave railroad workers the right to organize labor unions and authorized government mediation of conflicts between labor and management. During the New Deal, Congress enacted the National Labor Relations Act of 1935 (usually called the Wagner Act after one of its sponsors), which legalized the rights of most private-sector workers to form labor unions, to bargain with management over wages and working conditions, and to strike to obtain their demands. A federal agency, the National Labor Relations Board, was established to oversee union elections and hear complaints of unfair labor practices. The Fair Labor Standards Act was passed in 1938, establishing a national minimum wage, forbidding “oppressive” child labor, and providing for overtime pay in designated occupations. It declared the goal of assuring “a minimum standard of living necessary for the health, efficiency, and general well-being of workers.” But federal labor law also allowed employers to replace striking workers.

In the 1930s and the decades that followed, Congress created a host of specialized regulatory agencies. The Federal Power Commission (later renamed the Federal Energy Regulatory Commission) was reorganized in 1930 as an independent regulatory agency overseeing wholesale electricity sales. The Federal Communications Commission was established in 1934 to regulate the telephone and broadcast industries. The Securities and Exchange Commission, also created in 1934, was given responsibility for overseeing securities markets. These were followed by the National Labor Relations Board in 1935, the Civil Aeronautics Board in 1940, and the Consumer Product Safety Commission in 1972. Commissioners of these agencies were appointed by the president. They had to come from both major political parties and served staggered terms that began in different years, limiting the executive branch’s ability to replace all the commissioners at once and hence its influence over the regulators.

II. Antitrust (U.S. example)

The government’s antitrust authority came from two laws, the Sherman Antitrust Act of 1890 and the Clayton Act of 1914. These laws, based on common law sanctions against monopolies dating from Roman times, had different goals. The Sherman Act attacked conspiracies among companies to fix prices and restrain trade, and it empowered the federal government to break up monopolies into smaller companies. The Clayton Act was directed against specific anticompetitive actions, and it gave the government the right to review large mergers of companies that could undermine competition.

Although criminal antitrust prosecutions are rare, anticompetitive schemes have not disappeared, as economist Joseph Stiglitz notes. He cites efforts by the Archer Daniels Midland company in the 1990s, in cooperation with several Asian partners, to monopolize the sale of certain feed products and additives. ADM, one of the largest agribusiness firms in the world, was fined $100 million, and several executives went to prison.

But the use of antitrust laws outside the criminal realm has been anything but simple. How far should government go to protect competition, and what does competition really mean? Thinkers of different ideological temperaments have contested these questions, with courts, particularly the Supreme Court, playing the pivotal role. From the start, the focus was on the conduct of dominant firms, not their size and power alone; Theodore Roosevelt famously observed that there were both “good trusts” and “bad trusts.”

In 1911, the Supreme Court set down its “rule of reason” in antitrust disputes, holding that only unreasonable restraints of trade—those that had no clear economic purpose—were illegal under the Sherman Act. A company that gained a monopoly by producing better products or following a better strategy would not be vulnerable to antitrust action. But the use of antitrust law to deal with dominant companies remained an unsettled issue. Federal judges hearing cases over the decades have tended to respect long-standing legal precedents, a principle known by its Latin name, stare decisis.

Court rulings at times have reflected changes in philosophy or doctrine as new judges were appointed by new presidents to replace retiring or deceased judges. And the judiciary tends also to reflect the temperament of its times. In 1936, during the New Deal era, Congress passed a new antitrust law, the Robinson-Patman Act, “to protect the independent merchant and the manufacturer from whom he buys,” according to Representative Wright Patman, who co-authored the bill. In this view, the goal of antitrust law was to maintain a balance between large national manufacturing and retailing companies on one side, and the small businesses that then formed the economic center of most communities on the other.

This idea—that the law should preserve a competitive balance in the nation’s commerce by restraining dominant firms regardless of their conduct—was reinforced by court decisions into the 1970s. At the peak of this trend, the U.S. government was pursuing antitrust cases against IBM Corporation, the largest computer manufacturer, and AT&T Corporation, the national telephone monopoly.

III. Competition

In the 1980s, the Reagan administration adopted a different philosophy, one advocated by academics at the University of Chicago. The “Chicago school” economists argued that antitrust law should, above all, protect competition by putting consumers’ interests first: A single powerful firm that lowers product prices may hurt competitors, but it benefits consumers and therefore should not run afoul of antitrust law.

Robert H. Bork, an antitrust authority and federal appeals court judge, argued that “it would be hard to demonstrate that the independent druggist or the grocery man is any more solid and virtuous a citizen than the local manager of a chain operation.” The argument that small businesses deserved special protection from chain stores, he wrote, “is an ugly demand for class privileges.”

This shift in policy was reflected in a climactic antitrust case against the Microsoft Corporation. President Bill Clinton’s Justice Department filed an antitrust suit in 1998 against Microsoft, which controlled 90 percent of the market for personal computer operating systems software. Microsoft allegedly had used its market power to dominate a crucial new application for computers—the browser software that links users to the Internet.

A federal judge ruled against Microsoft, but his decision was largely overturned by a federal appeals court. A key factor in the appellate decision was that Microsoft offered its browser software for free. While that hurt its much smaller competitors, consumers benefited, and maximizing consumer interests served the larger interests of the economy, the court reasoned. Innovation, according to this theory, would keep competition healthy. The administration of President George W. Bush later decided not to pursue the Justice Department’s case against Microsoft further.

IV. Environment

Widespread social regulation began with the New Deal employment and labor laws but expanded in the 1960s and 1970s. Both Democratic and Republican presidents joined with Congress to act on a wide range of social concerns.

Perhaps the most striking example of how public opinion affects U.S. government processes was the sudden growth of the environmental movement as a powerful political force in that period. Conservation of natural resources had motivated political activists since the late 19th century, when California preservationist John Muir led campaigns to protect wilderness areas and founded the Sierra Club as a grassroots lobbying organization for his cause.

The movement surged in new directions in the 1960s following publication of a best-selling book, Silent Spring, written by government biologist Rachel Carson. She warned that the growing use of chemical pesticides was causing far-reaching damage to birds, other species, and the natural environment. They could threaten human health as well, she said. The chemical industry attacked Carson as an alarmist and disputed her claims. But her warnings, amplified by media coverage, won powerful support from citizens and the U.S. government. The movement led to a ban on the widely used pesticide DDT and the formation of the U.S. Environmental Protection Agency in 1970 to enforce federal environmental regulation.

Unlike the independent agencies created in the 1930s, the EPA was made a part of the executive branch, subject to the president’s direction. This approach was followed later with other new agencies, such as the Occupational Safety and Health Administration (OSHA), created in 1970 to prevent workplace accidents and illnesses. Because of the increased presidential control, these agencies’ regulatory policies often change with the arrival of a new president.

Federal regulations have had profound impacts in reducing health risks facing industrial and shipyard workers; improving the safety of medicines, children’s toys, and motor vehicles; and improving the cleanliness and quality of lakes, rivers, and the air. OSHA, for example, requires employers to create a workplace that is “free from recognized hazards” that cause or could cause death or serious harm. The OSHA legislation has been used by the government, often following demands by labor unions, to control workers’ exposure to a range of industrial chemicals that cause or may cause cancer.

Debate about such regulation has often centered on whether there is adequate scientific evidence to justify government action and whether compliance costs paid by businesses and their consumers are worth the environmental gain. Academic and business critics of Rachel Carson, for example, argued that eliminating DDT removed the most effective pesticide in the fight against mosquitoes that spread malaria. In her time, Carson—who urged that DDT be controlled, not eliminated—tipped the public debate in favor of precautionary government regulation that could address serious threats, even though some scientific or economic issues were still being debated. The current debate over climate change has reached a similar point.

As historians have observed, U.S. government priorities on economic and social issues have seldom taken a straight, unbroken path, but instead have followed the swings of public opinion between a desire for more regulation and one for unfettered economic growth. In the 1960s, a period when Americans challenged the status quo on a number of fronts, many were willing to discount the industry viewpoint in the debate over pesticide regulation and to support federal intervention to protect the environment. In the 1980s, opinion reversed direction again.

V. Banking

Since the first years of the American republic, federal and state lawmakers and government officials have struggled to determine the right level of regulation and government control over the banking system. When banks can respond to market forces, innovation and competitive services multiply. But competition’s downside has been a succession of banking crises and financial panics. Overly aggressive lending and speculative risk-taking that led to these crises have, in turn, led to political demands for tighter controls over interest rates and banking practices. A new chapter in this debate began in response to the 2008 financial crisis.

The U.S. banking and finance industries have been remade over the past quarter-century by globalization, deregulation, and technology. Consumers can draw cash from automated teller machines, pay bills and switch funds between checking and savings accounts over the Internet, and shop online for home loans. As services have expanded, the number of banks has contracted dramatically. Between 1984 and 2003, the number of independent banks and savings associations shrank by half, according to one study. In 1984, a relative handful of large banks, those with assets of $10 billion or more, held 42 percent of all U.S. banking assets. By 2003, that figure was 73 percent.

New computer systems to manage banking operations gave an advantage to large banks that could afford them. The dramatic expansion of world trade and cross-border financial transactions led the largest banks to seek a global presence. New markets arose in Asia and other regions as banking and investment transactions flowed instantly across oceans. These trends both demanded and were fueled by a steady deregulation of U.S. banking and finance rules.

Historically, the banking industry has been split between smaller, state-chartered banks that claimed close ties to their communities, and larger national banks whose leaders sought to expand by opening multistate branch offices, saying their size made them more secure and efficient. This split echoes in some ways the debates in America’s early days between Alexander Hamilton and Thomas Jefferson over urban and rural interests.

Community banks prevailed early in the 20th century, but were devastated by the 1930s banking crisis; their limited assets left them particularly vulnerable. The country’s urbanization after World War II reduced the political power of rural legislators, undermining their ability to defend smaller banks, and in 1980 banking deregulation got under way.

Until the 1980s, U.S. commercial banks faced limits on the levels of interest rates they could charge borrowers or pay to customers who deposited money. They could not take part in the securities or insurance businesses. And their size was restricted as well. All states protected banks within their borders by forbidding entry by banks headquartered in other states. Many states also protected small community banks with rules restricting the number of branch offices that big banks could open inside the state. Almost all of these regulations were removed after 1980, leaving a banking industry that was more competitive, more concentrated, more freewheeling and more risk-taking—and more vulnerable to catastrophic failures.

As banks expanded geographically, they sought also to enter new financial arenas, including ones forbidden to them by New Deal-era legislation that separated parts of the commercial banking and securities industries. Banks were permitted to reenter the securities business in 1999, and many major banks subsequently created largely unregulated off-balance-sheet entities, called structured investment vehicles, in order to invest in speculative mortgage-backed securities and other housing-related investments.

Congressional advocates of a looser regulatory regime argued that greater bank freedom would produce more modern, efficient, and innovative markets. For a time, it arguably did. The U.S. financial sector led the way during a period of unprecedented international expansion of banking and securities transactions.

A McKinsey Global Institute study reported that from 2000 to 2008, the sum of all financial assets—bank deposits, stocks, and private and government bonds—soared from $92 trillion to $167 trillion, an average annual gain of 9 percent, one that far exceeded the growth in world economic output. Alan Greenspan, chairman of the Federal Reserve Board during most of that period, said that global financial markets had grown too large and complex for regulators to oversee them adequately. It was for Congress, he argued, to pass new laws should it wish closer oversight. But as economist Mark Zandi, author of Financial Shock, a book about the 2008 crash, says, “Legislators and the White House were looking for less oversight, not more.”
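
The “average annual gain” here is a compound growth rate, which can be checked from the two endpoint figures. Below is a minimal sketch (in Python; the annual-compounding convention and the choice of end year are assumptions, since the study’s exact method is not given in the text):

    # Implied compound annual growth rate (CAGR) for the asset figures above.
    # Assumption: growth compounds annually between the two endpoint values.
    start, end = 92e12, 167e12  # global financial assets, in dollars

    def cagr(start, end, years):
        """Compound annual growth rate between two values over `years` years."""
        return (end / start) ** (1 / years) - 1

    print(f"over 7 years: {cagr(start, end, 7):.1%}")  # ~8.9%, close to the cited 9 percent
    print(f"over 8 years: {cagr(start, end, 8):.1%}")  # ~7.7%, if 2008 is the end year

The cited 9 percent figure is consistent with growth measured over seven compounding years (2000 through 2007); treating 2008 as the end year would imply a somewhat lower rate.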

At this writing, the 2008 financial crisis appears to have reversed the philosophical trend toward greater reliance on markets and the assumptions about financial deregulation that had increasingly held sway in the United States since the end of the 1970s. A public backlash against the multimillion-dollar bonuses and lavish lifestyles enjoyed by leaders of failed Wall Street firms fed demands for tighter regulation. Greenspan himself, who retired in 2006, told a congressional committee two years later that “those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself especially, are in a state of shocked disbelief.”
