I was commissioned to write “Doctor Wall Street: How The American Health Care System Got So Sick” as a popular pamphlet for a health-care foundation, which then changed its mind and turned the rights back to me. Thankfully, Z Magazine is publishing it as a centerfold that can be removed and distributed to others. Hopefully, it will help arm Americans in their contemporary struggle for good health care for all. –J. Brecher

When ordinary Americans seek care for their health, they come up against a most peculiar system. The U.S. has some of the most advanced medical science in the world. It spends more of its resources on health care than any other country. Yet Americans’ health is rated near the bottom of developed countries. In some of the poorest countries in the world, people live longer and fewer die in infancy than in the U.S. Americans spend nearly twice as much as the Japanese on health care, but the Japanese live on average four years longer.

One-third of what the American health-care system spends goes to paperwork, waste, and profit over and above the cost of actually providing care. Yet nearly one-third of Americans are without health insurance at some point over the course of a year. In all other developed countries, more than 85 percent of citizens have health coverage under public programs.

The American health-care system is so complex that even experts—let alone ordinary people trying to find care for themselves and their loved ones—are unable to fully understand it. It is highly bureaucratic. This “system” is balkanized into medical fiefdoms, making it difficult to access care and caregivers and to maintain continuity of care. People who have good health benefits in one company or state are afraid to change jobs or locations because they will lose those benefits.

The American health-care system is full of inequalities. People who work for one company may have quality insurance while those who work for a similar company have none. People who would qualify for Medicaid in one state are denied it in another. While on average 70 percent of Americans have private health coverage, 50 percent of African Americans and 60 percent of Hispanics don’t.

The quality of care provided by the system is uneven. While health-care personnel are often regarded as excellent both by patients and by independent evaluators, they are subject to constant pressure and speedup. And people are often refused treatment they need by managed care officials who are not even doctors.

Despite its high cost to individuals, employers, and society, this system leaves many people feeling desperately insecure. They worry: what will happen to me if I get sick?

How Did It Get This Way?

The American health-care system is incomprehensible if we try to understand it as a way to meet Americans’ need for health care. But it becomes easier to understand when we recognize that it was not designed primarily by or for the people who were likely to need health care. Rather, it was constructed by private interests who shaped the system for their own benefit. At various times those interests have included employers, doctors and other medical professionals, insurance companies, unions, and for-profit and nonprofit health service providers.

But if private interests have shaped the health-care system, why does it protect ordinary people at all? In the background of this story is a hidden reality. For a century, the American people have increasingly believed that health care should be guaranteed as a basic human right and have demanded that it be available for all. Doctors, employers, and politicians have had to pursue their interests by tacking against this powerful wind. When the people have spoken up forcefully, the health-care system has been pushed toward better meeting their needs. It has happened before and it can happen again.

For thousands of years, human beings have sought care for their health. Ancient physical and psychological methods such as healing herbs and healing social rituals are still part of today’s health promotion techniques. These techniques were used by families, religious leaders, and specialized healers of many kinds.

But in the 19th century, one group of healers came to dominate American health care—professional physicians. They established a powerful, unified organization, reaching from county medical societies to the American Medical Association. They won passage of laws forbidding anyone but licensed physicians to practice medicine. Under those laws they controlled the licensing process. They controlled medical education, thereby controlling who could become a doctor.

In 1929 over 80 percent of all the money spent on doctors, hospitals, immunizations, and other health care was paid directly by patients. Less than one-fifth was spent by government, charity, and private industry. Not surprisingly, nearly half of those who earned less than $2,000 a year received no care whatsoever. For poor and working people, the major buffers against loss of health and life were likely to be the mutual aid programs of unions and fraternal organizations. Or, if they were lucky, they might find a physician who volunteered pro bono in a county clinic.

Doctors fought fiercely against any kind of health care that they did not control. For example, when, after World War I, the New York state health commissioner suggested a modest network of rural health centers, physicians vehemently opposed the plan because, “Too much power is given to the laity and too little to the medical profession…too much power is given to the County Boards of Supervisors and Mayors of cities…. Too much power is given to the State Department of Health…. Too little recognition and power is given to the medical profession.”

The Great Depression in the 1930s saw the rise of a movement for health-care security and community-based health care. Unions, community organizations, and progressive medical professionals established clinics to meet community needs. When President Franklin Roosevelt’s New Deal considered a Social Security program, national health insurance was originally an integral part of the plan. The Administration’s task force on Social Security legislation noted that private insurance was “totally inadequate to meet the needs of the population.” It called national health insurance “the most immediately practicable and financially possible form of economic security.”

But while pensions and unemployment insurance sailed through Congress, the plans for health insurance were quickly dropped. Why? According to the president’s staffers, “extreme care is necessary to avoid the organized opposition of the medical profession.”

During the 1930s and increasingly after World War II, pre-paid group health organizations began to burgeon. But doctors successfully lobbied for state laws that required any group wishing to form a nonprofit health plan to receive approval from the state medical society or even have a majority of doctors on the board of directors. When President Harry Truman proposed a national health insurance plan in 1948, the AMA launched a $1.5 million public relations campaign against it, the most expensive up to that time in American history.

For more than a century, the power of the medical profession largely prevented communities, government, unions, and corporations from developing alternatives to health care controlled by individual physicians.

Fringe Benefits

As far back as the 1920s, a few big employers had offered health insurance plans to some of their workers. But by 1935, only about 2 million people were covered by private health insurance, and on the eve of World War II there were only 48 job-based health plans in the entire country.

The rise of unions in the 1930s and 1940s led to the first great expansion of health coverage. Ironically, it did not produce a national plan providing health care to all, like those in virtually all other developed countries. Instead, the special conditions of World War II produced the system of job-based health benefits we know today.

In 1942 the U.S. set up a National War Labor Board. It had the power to set a cap on all wage increases. But it let employers circumvent the cap by offering “fringe benefits”—notably health insurance. The fringe benefits received a huge tax subsidy: they were treated as tax-deductible expenses for corporations, but not as taxable income for workers.

The result was revolutionary. Companies and unions quickly negotiated new health insurance plans. Some were run by Blue Cross, Blue Shield, and private insurance companies. Others were Taft-Hartley funds run jointly by management and unions. By 1950, half of all companies with fewer than 250 workers and two-thirds of all companies with more than 250 workers offered health insurance of one kind or another. By 1965, nearly three-quarters of the population were covered by some kind of private health insurance.

This private, job-based insurance covered millions of workers who had never had health insurance before. But this victory also set patterns that are responsible for many of the problems the health care system faces today. Because this private system was tied to employment, it did not provide health insurance for all. Millions of people outside the workforce were without coverage. Those most likely to be covered were salaried or unionized white men in northern industrial states. Two-thirds of those with incomes under $2,000 a year went uncovered, as did nearly half of nonwhites and of those over 65.

Employer-based plans tied workers to their jobs—something that benefited employers, but not workers or the economy as a whole. The quality of the coverage was spotty—some plans were excellent, others completely inadequate. Doctors accepted this revolution because it didn’t challenge their power, but as a result the system provided no public control over medical costs.

This revolution had a subtle political effect as well. By giving much of the workforce health benefits, it reduced the incentive for them to pursue a system of universal care. And it gave unions a stake in the private, employer-based health-care system. As one opponent of publicly financed health care put it, “the greatest bulwark” against “the socialization of medicine” was “furthering the progress already made by voluntary health insurance plans.”

The Three-Layer Cake

In 1958 a little-known Rhode Island congressperson named Aime Forand introduced a proposal to subsidize hospital costs for the elderly on Social Security. Within a year it unexpectedly evoked a groundswell of support. When a Senate subcommittee on aging held hearings around the country, one staffer recalled that when the elderly came to testify, “They talked about medical care.” Soon congresspeople were receiving more mail on medical care for the elderly than on any other legislation. A historian of American medicine wrote, “In the entire history of the campaign for national health insurance, this was the first time that a groundswell of grassroots support forced the issue onto the national agenda.”

In 1965, Congress passed Medicare. This combined a variety of proposals into a “three-layer cake” based on Social Security. Medicare Part A provided hospitalization insurance for the elderly. Medicare Part B provided voluntary supplemental coverage for doctors’ charges. Medicaid, corresponding to the welfare system, provided medical care for the poor.

The American Medical Association opposed Medicare to the bitter end. But the private insurers were glad to avoid “the aged, those employed in groups too small to be insured, the self employed, the rural population, the physically substandard” and others unlikely to be profitably insured. So they were willing to have the government take over responsibility for those who were too poor or too old to be able to pay for insurance.

The passage of Medicare was one of the biggest steps not only for health care, but for economic justice in U.S. history. But the new system had several problems. First, Medicare picked up the costs of those most likely to be sick, while leaving the younger, steadily employed workers to be “cherry picked” by the private insurance companies. As a result, insurance companies made profits while the public sector bore the costs.

Second, Medicare left control with the private insurers, doctors, and health care companies. As one observer put it, government “surrendered direct control of the program and its costs.” Doctors could charge their “customary, reasonable, and prevailing” fees and order as many tests, drugs, and procedures as they wished.

President Lyndon Johnson’s economic advisors put the problem this way: the Medicare law was “politically astute,” but it created a program that was “twice as difficult to administer as it needed to be” and “almost guaranteed [to be] highly inflationary.” They were proved right.

Rising Costs (and Profits)

With the success of Medicare, the public was ready to move on to a system of universal health care coverage. In 1971 two-thirds of the public supported national health insurance.

But the drive for national health insurance ran head-on into the rising cost of health care. Between 1966 and 1990, health spending per person, corrected for inflation, grew from $700 a year to $2,500 a year. The share of the national economy going to health care increased from 6 percent to nearly 13 percent.

Workers had to pay for a growing share of their insurance. Just between 1979 and 1984, the share of large firms that required deductibles grew from 14 percent to 52 percent.

Meanwhile, the share of Americans without insurance nearly doubled, from 10 percent in 1965 to nearly 20 percent in 1990. Fully 85 percent of the uninsured were workers and their families, concentrated in the fast-growing service, small-company, and part-time segments of the economy. The proportion of the poor, the young, women, blacks, and Latinos without health insurance soared.

Starting in the 1980s, U.S. business came up with a new medical system to reduce its soaring costs. Instead of individual doctors providing health care, medical services would be provided through new organizations variously called Health Maintenance Organizations (HMOs), networks, and the like. These organizations provide “managed care.” Their managers negotiate with doctors, hospitals, and labs for lower rates. They limit individuals’ choice of providers. They decide what services members can get.

While the managers of HMOs are often doctors, ordinary practicing doctors are under the thumb of HMO bureaucrats who tell them what services they can provide and how much they can charge. Doctors are often given bonuses and other incentives if they provide fewer services—and are threatened with being dropped if they provide “too many.” The average office visit is now about ten minutes.

While the first HMOs were non-profits and a few remain so, today most are huge chains operated for profit. (Ten percent of all HMO members are in Aetna’s U.S. Healthcare HMO alone.) They have been joined by chains of for-profit hospitals like HCA Healthgroup, which owned 300 hospitals by 2001. The stock of these corporations is avidly traded on Wall Street. Their success is measured not by the health of their members, but by the profits they can provide to their investors.

The speed of the “managed care revolution” was remarkable. In 1985, less than 10 percent of Americans were in managed care plans; 15 years later, 90 percent of those with health insurance through their jobs were in a managed care plan.

Managed care has been highly profitable for corporations, but it has serious drawbacks for people. Just as in the rest of the world, health care in the U.S. is rationed. But instead of being rationed by public policy on the basis of fairness and need, it is now rationed by employers and HMO managers on the basis of what is most profitable for their investors. Meanwhile, rising co-pays and deductibles are forcing people to go without health care even when they are supposedly insured.

Under managed care, many people feel at the mercy of their HMOs. Their outrage has led to the passage of a wide range of state patient rights laws. The HMOs are now trying to overturn these laws in court.

Today’s Health-Care Crisis

The “managed care revolution” controlled health care costs, but—as predicted—only for a short while. Health care as a proportion of GDP grew from 9 percent in 1980 to more than 15 percent in 2003. Workers’ monthly contributions for family health benefits nearly quadrupled from 20 years ago, even with adjustment for inflation. Just between 2000 and 2003, employee contributions for health premiums increased by 50 percent. Between 2000 and 2006, the cost of health insurance increased by 73 percent. Today the U.S. spends more than $7,000 per person on health care—more than twice as much as in 1987 and more than twice as much as other industrialized nations spend.

Between 1980 and 2003, the proportion of private sector workers with job-based insurance decreased from more than two-thirds to less than half. Between 2000 and 2003, the proportion of private-sector workers who have health insurance from their employers fell from 52 percent to 45 percent. Meanwhile, the role of private interests in shaping health care has, if anything, intensified. Sections of the 2003 Medicare drug bill, for example, were actually drafted by the big drug companies.

Two opposite responses are developing to today’s health care crisis. One is to reduce the responsibility of both employers and government by declaring that health care is an individual responsibility. Some new health care legislation includes an “individual mandate,” which penalizes those who don’t secure a minimum package of health benefits. Under the 2006 Massachusetts health care law, for example, individuals who do not have health insurance coverage by July 2007 will have to pay a penalty on their income taxes.

At the federal level the same idea of individual responsibility appears as a plan for “Health Savings Accounts” (HSAs). Individuals will essentially pay into their own accounts and then buy their own insurance. In a bizarre abandonment of the fundamental idea of insurance as a way of spreading risk, advocates of HSAs proclaim as a virtue that the healthy “need not subsidize the sick.” Such a system would make sense only if we knew in advance which of us will be sick.

The other alternative is to join the rest of the world by establishing a system in which health care is a public benefit available equally to everyone. At the federal level, such an approach is often expressed as “Medicare for all.” At a state level it is embedded in plans being debated all over the country for “single payer” universal health care. Many of those who have supported private health care in the past, including many unions, doctors, and small businesses, are now supporting this alternative to “Doctor Wall Street.”

Whose health did the U.S. health care system develop to protect? The present U.S. system—so different from those in the rest of the world—did not develop through some inevitable process. Rather, it came about through the interplay of powerful historical actors who were often motivated more by greed and self-interest than by a desire to meet the health needs of all Americans.

At one time the medical system empowered doctors and disempowered everyone else, including ordinary health care consumers. But now the system disempowers not only patients, but even doctors. Instead, it empowers—and enriches—profit-seekers.

Great improvements in U.S. health care have been possible when the voices of those who need it have been heard. Indeed, we have health insurance, Medicare, and patients’ rights laws because millions of people fought for them.


Jeremy Brecher is an historian whose books include Strike!, Globalization from Below, and, co-edited with Brendan Smith and Jill Cutler, In the Name of Democracy: American War Crimes in Iraq and Beyond. He has received five regional Emmy Awards for his documentary film work.