Introduction

This article borrows its title from the novel A Fine Balance (1995), by Indo-Canadian author Rohinton Mistry (b. 1952). Mistry traces the story of four Bombay residents who are each profoundly affected by “the Emergency”, a highly controversial period in India’s history that occurred between 1975 and 1977. During the Emergency, declared by Prime Minister Indira Gandhi (1917–1984) in the face of escalating political opposition, the civil liberties of Indians were ruthlessly compromised, ostensibly to “get rid of poverty” (Garibi Hatao!), Gandhi’s populist slogan during the 1971 election.

Mistry’s four protagonists come from three different tiers of India’s complex class system, yet for a time, they coexist harmoniously. While two of them are tailors from the untouchable Chamaar caste, another is the son of middle-class shop owners, and the final protagonist is a wealthy Parsi. But the Emergency casts a dark shadow over them, as it continues to do for many Indians, as Tarlo’s (2003) anthropological work reveals. While attempting to eliminate poverty in the world’s largest democracy was undoubtedly a noble end, many of the means by which Gandhi’s government attempted to do so were brutal. Politicians, protesters and journalists were jailed, elections were suspended, slum clearance projects were callously advanced and, most distressingly, a policy of mass, often coercive, sterilization was enforced. The impact of these last two measures on Mistry’s characters becomes the fulcrum on which the novel turns from nascent hopes of overcoming class divisions to despair and disaster. For them, the costs incurred by the government’s attempt to “get rid of poverty” greatly outweigh any possible benefits accrued by such policies.

When it comes to health, we often think of balance in terms of individuals, whether it be balancing humours, a balanced diet, work-life balance or being mentally balanced. But the concept of balance can also be a useful way of understanding how different societies have approached public health. Public health can be seen as a sort of social contract, a balancing act whereby certain individual freedoms are curtailed to achieve the broader goal of improved health for the entire population. While there are countless examples of states striking this “fine balance”, thus achieving much better health with little or tolerable cost to personal freedom (for example, sanitation, sewage and drinking water projects, pasteurization, vaccination campaigns, clean air acts, anti-smoking legislation, seat-belt laws and accessibility legislation), there are many other cases where not only freedoms but also lives have been compromised for very little gain, or gains that were not worth the cost. Although the widespread eugenic excesses of the twentieth century (ranging from the sterilization campaigns of India’s Emergency to the extermination of many categories of “unfit” people in Nazi Germany) are only the most horrific examples of failing to achieve a balanced approach to public health, many subtler instances of unbalanced approaches can also be identified. These include, on the one hand, restricting the medical or palliative use of illicit drugs, such as cannabis, and, on the other, tacitly or actively encouraging the prescription of other drugs, such as methylphenidate for hyperactivity.

Debate also continues to rage in many countries about whether the restrictions to personal freedoms imposed by no-smoking zones, sugar and fat taxes, minimum pricing for alcohol and peanut-free schools, to name but a few, are worth the promised benefits of less lung cancer, less childhood obesity, less alcohol-related disease and fewer allergic reactions. Even when the link between such measures and improved health is clear and established—which it rarely is—individuals whose habits must change as a result of such restrictions often resent such intrusions into their personal lives, in spite of the assurance of better health or longer life expectancy, as the existence of smoker advocacy groups, such as Forest, indicates. Changing other entrenched habits that incur enormous health costs, such as driving automobiles, is rarely considered at all (there were over 35,000 motor vehicle deaths in the United States in 2012, down from over 55,000 deaths in 1972—largely due to seat belt legislation and safer vehicles—but 2.5 million Americans were nevertheless treated in hospital for traffic accident injuries, resulting in US$80 billion in healthcare and productivity costs).

Although most countries have contested how far personal freedoms must be restricted to produce better public health, it has arguably been in the United States where striking a balance between individualism and public health initiatives has been the most contentious. On the one hand, individualism has long been thought to be a defining virtue of the American people, as Alexis de Tocqueville (1805–1859) argued in Democracy in America (1835–1840) and as sociologists, such as David Riesman (1909–2002), Seymour Martin Lipset (1922–2006) and Robert N Bellah (1927–2013), have re-articulated since (Riesman, 1950; Lipset, 1960). In the influential bestseller Habits of the Heart, Bellah et al. described how de Tocqueville viewed American individualism “with a mixture of admiration and anxiety” (Bellah et al., 1985, vii). Writing 140 years later, Bellah et al. feared “that this individualism may have grown cancerous … threatening the survival of freedom itself” (Bellah et al., 1985, vii). With regard to health, perhaps the best example of entrenched American individualism has been the history of failed attempts to develop a universal public healthcare system. As many observers, including sociologist Starr (1982, 2011) and historians Blumenthal and Morone (2009), have described, one of the major impediments to public healthcare in the United States has been the fear that such a system would undermine the freedom of physicians to practice medicine as individual actors. Such arguments about medical autonomy were also made before the passage of the National Insurance Act (1911) and the NHS Act (1946) in the United Kingdom. The price of maintaining individualism in American medicine has fallen on the poorest Americans. Despite the passage of Medicare and Medicaid in 1965 (which provided healthcare insurance for the very poor and the elderly), the number of Americans without any health insurance exceeded 40 million by the 2000s. Although the Patient Protection and Affordable Care Act (2010), or Obamacare, has reduced this figure by over 10 million, its future is unclear.

On the other hand, the United States also has a history of curtailing certain individual freedoms with the intention of bettering or protecting American society as a whole. While the McCarthyism of the early 1950s might be the most infamous example of such restrictions, two other prominent examples relate directly to health: Prohibition and the War on Drugs. A victory for the Progressive Era temperance movement, Prohibition was believed to be central not only to improving public morals, but also to protecting public health and safety. Similarly, the War on Drugs, launched by President Richard Nixon (1913–1994) in 1971 at a press conference where he called drug abuse “Public Enemy Number One”, can be interpreted in part as an attempt to improve public health by restricting freedoms related to intoxicant use. New York City’s Sugary Drinks Portion Cap Rule, introduced under Mayor Michael Bloomberg in 2012 to reduce sugar intake and thus rates of obesity, is another example of such a restriction.

Instances such as these show how Americans have attempted, with varying degrees of success, to strike a fine balance between preserving individualism and improving public health by restricting specific individual freedoms. In what follows, I examine yet another example where the virtues of American individualism were questioned in the name of public health, specifically public mental health. During the postwar period—and in a concerted attempt to prevent mental illness—American psychiatrists, social scientists and politicians began emphasizing the link between socioeconomic factors and mental illness under the banner of a new approach to psychiatry coined “social psychiatry”. Although many of the prophylactic initiatives centred on urban renewal, eliminating poverty and improving education, the notion that American individualism was not beneficial to mental health, especially for the most disadvantaged Americans, was often implicit, and sometimes explicit, in both the theory and practice of social psychiatry. In order for the United States to overcome the perceived wave of mental disorder threatening to engulf American society, the balance between individualism and public mental health had to shift. Such a shift also had implications for psychiatrists, their relationship with other mental health workers and their monopoly on psychiatric knowledge. Rather than working as independent actors with absolute authority over diagnosis and treatment, psychiatrists working in the Community Mental Health Centres (CMHCs) that would become the physical locus of social psychiatric theory now had to share this responsibility with other mental health workers. While broadening the basis of psychiatric expertise in this way had many potential benefits in theory, it posed many unanticipated problems in practice. The fact that psychiatrists were themselves often unable to balance their own individualism with the communitarianism inherent in social psychiatry indicates the considerable problems in encouraging ordinary Americans to do the same. Indeed, with the “me” decade of the 1970s, the neo-liberalism of the Reagan presidency and the expansion of biological psychiatry and psychopharmacology made manifest in the third edition of the Diagnostic and Statistical Manual of Mental Disorders (1980), individualism reasserted its hold over psychiatry once more. I conclude by arguing that, while the ideas behind social psychiatry might have been dismissed by the 1980s, they have taken on renewed urgency in recent years, as concerns about the social determinants of mental health and escalating rates of mental illness mount once more.

The war on mental illness

India was not the only country boldly to attempt to “get rid of poverty” during the second half of the twentieth century. During the 1960s, President Lyndon B Johnson (1908–1973) declared an “unconditional war on poverty” in his State of the Union address (Johnson, 1964). Inspired by factors such as Michael Harrington’s The Other America (1962), photographs of poverty in places such as Appalachia and, later, the conditions in urban ghettos, first John F Kennedy (1917–1963) and then Johnson sought, as Johnson declared in his 1964 State of the Union Address, “not only to relieve the symptom of poverty, but to cure it and, above all, to prevent it”. Part of Johnson’s Great Society social reforms, the War on Poverty was waged largely in terms of new federal programmes, ranging from the Head Start educational initiative and the Social Security Act to the passage of Medicare and Medicaid in 1965.

The range of the social welfare reforms introduced by Johnson demonstrates how the War on Poverty was fought on many fronts, including healthcare, education, welfare, promoting employment and career development, and ensuring basic nutrition. But the War on Poverty was not the only health and welfare policy battle taken on by the Kennedy and Johnson administrations during the 1960s. Closely linked to the War on Poverty and, arguably, yet another impetus for it, was a related fight: the battle against mental illness. As Kennedy himself described in an influential Message to Congress on Mental Illness and Mental Retardation:

we must seek out the causes of mental illness and of mental retardation and eradicate them. Here, more than in any other area, ‘an ounce of prevention is worth more than a pound of cure.’ For prevention is far more desirable for all concerned. … Prevention will require both selected specific programs directed especially at known causes, and the general strengthening of our fundamental community, social welfare, and educational programs which can do much to eliminate or correct the harsh environmental conditions which often are associated with mental retardation and mental illness. (Kennedy, 1963)

It is apt that Kennedy used military metaphors to describe the fight against mental illness during the postwar period. As historian Grob (1991) has contended, the Second World War was a pivotal event in the history of American psychiatry. Before the war, American psychiatrists were still largely employed in mental hospitals and had little influence on public health policy or public opinion (Scull, 2011b); afterwards, psychiatrists were intimately involved in debates not only about mental health, but also about the direction of American society itself. While there were only 35 psychiatrists employed by the Army Medical Corps at the beginning of the war, by the end, 2,400 physicians had been assigned to psychiatry. This was because it had become apparent that mental illness was much more prominent in the American military—and American society—than previously thought. There were two primary reasons for this realization: the first was that 12 per cent of all men who volunteered for military duty were rejected on psychiatric grounds, amounting to more than a million people, six times the rejection figure for the First World War (Pols and Oak, 2007). Although historian Naoko Wake has highlighted how a considerable proportion of these rejections were of homosexual men (homosexuality then being classified as a mental illness), the figure nonetheless suggested that far more Americans were mentally ill than previously thought (Wake, 2007). The second reason, as Jackson (2013) and others have described, was increased recognition of combat stress, highlighted in Grinker and Spiegel’s study Men Under Stress (1945). The American military saw over 1 million hospital admissions for neuropsychiatric symptoms. To tackle this enormous drain on military manpower, psychiatrists were enlisted to study the problem and offer solutions.

By war’s end, American psychiatrists and politicians had been convinced, first, that mental illness was much more rampant in American society than previously thought; second, that the cause of mental illness was often to be found in the environment; third, that the only way to address the tide of mental disorder was through preventive action; and fourth, that American psychiatry was equipped to deal with the situation. Adding urgency to such concerns was the fact that the asylum population in the United States was on the increase, approaching 500,000 by 1946. Indicative of how seriously the problem was taken was the passage of the National Mental Health Act in 1946, which led to the foundation of the National Institute of Mental Health (NIMH) in 1949 and, by 1955, the Mental Health Study Act, which led to the Joint Commission on Mental Illness and Health (JCMIH) and later the Joint Commission on the Mental Health of Children (JCMHC). The reports of both these Commissions—Action for Mental Health (1961) and Crisis in Child Mental Health: Challenge for the 1970s (1969)—similarly emphasized the scope of the problem and the need for “a radical reconstruction of the present system” (Lourie, 1966: 1280).

Despite all the enthusiasm for psychiatry in the postwar period, theoretically and clinically, the discipline was divided. As Pressman (1998) has described, psychosurgery reached its zenith in this period, as did other heroic biomedical treatments, such as electroconvulsive therapy and insulin shock therapy (Shorter and Healy, 2007). Psychoanalysis was also entering a period of dominance, with nearly all American psychiatry departments requiring trainee psychiatrists to be trained as psychoanalysts (Hale, 1995). The most influential branch of psychiatry in political terms, however, was one that has been largely forgotten today in the United States: social psychiatry.

Social psychiatry is a term that has been defined and used in many ways, including as a catch-all for many psychodynamic concepts and approaches that privilege the importance of the social environment as it impacts the aetiology, diagnosis and treatment of mental illness. These include the community psychiatry of Gerald Caplan (1917–2008), the therapeutic community of Maxwell Jones (1907–1990) and the transcultural or cross-cultural psychiatry of people such as anthropologist Marvin Opler (1914–1981), psychiatrist Arthur Kleinman (b. 1941) and others (Jones, 1952; Opler, 1959; Caplan, 1961, 1964; Kleinman, 1977; Bains, 2005). From a methodological and theoretical perspective, social psychiatry was also highly and genuinely interdisciplinary, with psychiatrists being heavily influenced by and working on projects with social scientists, such as anthropologists, sociologists and psychologists (Scull, 2011a, b, c). Finally, the definition of social psychiatry differed somewhat on either side of the Atlantic, with American psychiatrists tending to focus on community mental health and British psychiatrists emphasizing therapeutic communities.

Because of this protean quality, when psychiatrists speak of social psychiatry today, they often speak about quite different things. But there is perhaps an element of hindsight to this. During the 1950s, social psychiatry’s core nature was perhaps best and most pithily encapsulated by the Scottish psychiatrist Sir David Henderson (1884–1965), quoted in British social psychiatrist Joshua Bierer’s (1901–1984) editorial for the second volume of the International Journal of Social Psychiatry (Henderson, 1956): “social psychiatry is first and foremost a preventive psychiatry”. In this way, social psychiatry was intimately involved with psychiatric epidemiology, or determining the causes of mental disorder in populations, a theme that the historian Rhodri Hayward has analysed extensively (Hayward, 2009, 2014). When Kennedy spoke of seeking out and eradicating the causes of mental illness, he (or his speech writer) had social psychiatry very much in mind.

So, what were these causes? As Kennedy’s speech indicated, “harsh environmental conditions” were often blamed (Kennedy, 1963). Poverty, class inequality, racial inequality, overcrowding, social exclusion, violence and poor education were all associated with the emergence of mental illness, as the leading social psychiatric texts of the period claimed. The first of these was Mental Disorder in Urban Areas, written by Chicago School sociologists H Warren Dunham (1906–1985) and Robert Faris (1907–1998), which analysed where people admitted to Chicago asylums had lived before admittance. It found that schizophrenia, in particular, was connected with the disorganized, chaotic and unstable life in the slums surrounding the central business district. Although critics suggested instead that schizophrenics “drifted” to such slum districts, Faris and Dunham (1939) anticipated and dismissed this possibility (Meyerson, 1940; Parkin, 1964). The pair reported similar findings in the smaller city of Providence, RI, as did other researchers in a series of unpublished comparison studies focussing on other mid-Western cities, including Kansas City, Omaha, Milwaukee, Peoria and St. Louis (Schroeder, 1942).

Another influential study that emerged following the Second World War also analysed poverty, but through the lens of class. The research for Social Class and Mental Illness, published in 1958, was funded by one of the first major NIMH grants in 1950, and was a genuinely interdisciplinary project, written by sociologist August Hollingshead (1907–1980) and psychiatrist Frederick Redlich (1910–2004). Focussing on the small city of New Haven, CT, Hollingshead and Redlich began their book by stating that: “Americans prefer to avoid the two facts of life studied in this book: social class and mental illness” (Hollingshead and Redlich, 1958: 3). The pair’s research combined a “macroscopic” survey of all those who sought psychiatric treatment with a “microscopic” study of 50 individuals. They also delved into New Haven’s history to trace the development of the five tiers of society that they identified in their survey. The lowest tiers of society, consisting chiefly of recent immigrants and more longstanding “Swamp Yankees”, were disproportionately saddled not only with the highest rates of mental illness (especially psychosis), but also with the least access to care.

Mental Health in the Metropolis, the first volume of the Midtown Manhattan Study, also echoed such conclusions (Srole et al., 1962). Although the study shocked Manhattanites and others with the finding that only 18 per cent of the survey population (1,660 white, non-Puerto-Rican adults between the ages of 18 and 59 living in Midtown Manhattan) exhibited no symptoms of mental disorder, the authors contended that urban environments were not inherently pathological to mental health (Srole et al., 1962: 138). Instead, a host of environmental factors, ranging from immigration status to the socioeconomic status of one’s parents, were cited as particularly influential. Just as in New Haven, mental health and illness were linked closely to one’s socioeconomic status. Whereas 30 per cent of those in the highest stratum of society could consider themselves mentally “well”, only 4.6 per cent of the lowest stratum were so fortunate. In contrast, while only 12.5 per cent of those in the highest stratum were “impaired” by mental illness, with 0 per cent being completely “incapacitated”, 47.3 per cent of those in the lowest stratum were, with 9.3 per cent “incapacitated”. In other words, “the mental health contrast between the top and bottom strata could hardly be more sharply drawn” (Srole et al., 1962: 230–231).

Recognizing such associations was one thing; doing something about them was another. Faith in psychiatry was encouraging, but ultimately, as Hollingshead and Redlich noted, society had a role to play as well:

Psychiatry is becoming a major trouble shooter in modern society; promises and hopes are great, at times too great; fulfilment of them will come only if we are guided by the spirit of science and by a strong social conscience. … Solution of the mental health problem is one of the greatest challenges of our time. Is our society ready to meet this challenge? (Hollingshead and Redlich, 1958: 380)

Similarly, in the introduction to Mental Disorder in Urban Areas, Canadian Chicago School urban sociologist Ernest Burgess (1886–1966) reasoned that: “If social conditions are actually precipitating factors in causation, control of conditions making for stress and strain in industry and society will become a chief objective of a constructive program of mental hygiene” (Burgess, 1939: xvii).

But what did these relatively vague notions of “strong social conscience” and “control of conditions” actually mean? The authors of Mental Health in the Metropolis were somewhat blunter:

Ultimately indicated here may be interventions into the downward spiral of compounded tragedy, wherein those handicapped in personality or social assets from childhood on are trapped as adults at or near the poverty level, there to find themselves enmeshed in a web of burdens that tend to precipitate (or intensify) mental and somatic morbidity; in turn, such precipitations propel the descent deeper into chronic, personality-crushing indigency. (Srole et al., 1962: 236)

In other words, what was required was a sort of psychiatric “war on poverty”. It would be inaccurate, however, to claim that social psychiatry was perceived by social psychiatrists and supportive politicians at the time as espousing a form of “socialist” psychiatry. Although there were certainly psychiatrists who sought the solution to the American mental health crisis in socialist ideology, such as Matthew Dumont, Assistant Chief of the NIMH Center for Studies of Metropolitan Mental Health Problems, and others who identified with the radical psychiatry movement (Dumont, 1968; Richert, 2014), such views were just that: radical. They did not reflect the views of the majority of self-described social psychiatrists, who might have seen themselves on the left side of the political spectrum but—perhaps hypocritically—did not desire wholesale political and economic change in the United States.

Society as patient

Although frank discussions of socialism might have been rare in social psychiatric circles, that does not mean that other profound changes to American society were not mooted in the hope of preventing mental illness. Significant among these was the idea that the United States had to re-balance the relationship between individuals and society as a whole. In the introduction to a collection of his essays entitled Society as Patient (1950), for example, Rockefeller Foundation administrator and vice-president of the Josiah Macy Foundation Lawrence K Frank (1890–1968) claimed not only that “our culture is sick, mentally disordered, and in need of treatment” (Frank, 1950: 1), but also that:

The individual, instead of seeking his own personal salvation and security, must recognize his almost complete dependence upon the group life and see his only hope in and through cultural reorganization …. Today, we are moving toward a reinstatement of the ancient doctrine of group responsibility and a recognized status for the individual, with increasing individual subordination and allegiance to the group …. We are, indeed, asked to give up these time-honored beliefs in human volition and responsibility, but only to replace them with a larger and humanly more valuable belief in cultural self-determination, social volition, and group responsibility. (Frank, 1950: 7–8)

Frank’s call for a reconsideration of individualism in American society amounted to a critique of one of the pillars of American democracy. In his pamphlet American Individualism (1922), and then during his successful presidential campaign in 1928, President Herbert Hoover (1874–1964) claimed that after the First World War, when “the Federal Government had become a centralized despot which undertook unprecedented responsibilities, assumed autocratic powers, and took over the business of citizens”, transforming the United States “temporarily into a socialist state”, Americans were faced with a “choice between the American system of rugged individualism and a European philosophy of diametrically opposed doctrines of paternalism and state socialism” (Hoover, 1922, 1928).

While the New Deal policies of Franklin Delano Roosevelt (1882–1945) and the need for renewed federal control of the economy during the Second World War undermined Hoover’s rugged individualism to an extent, faith in individualism was strengthened yet again during the early years of the Cold War, as the key values and “exceptionalism” (Lipset, 1996: 1) of the United States were juxtaposed against those of the USSR. It is worth noting that while the Great Society programmes of Kennedy and Johnson were socially progressive and required an enormous increase in federal revenues, they were funded through boosting the economy with tax cuts, rather than taxing corporations and the wealthy. Kennedy proposed and Johnson delivered the lowering of taxes by 20 per cent, including lowering the highest rate from 91 to 70 per cent and lowering corporate tax rates from 52 to 48 per cent between 1963 and 1965, with the result that federal revenues rose from $94 billion in 1961 to $150 billion in 1967 (Andrew III, 1998: 14–15). In this way, Kennedy and Johnson’s decision to reduce taxes can be seen both as an attempt to placate business interests and Republicans in Washington, and as a reaffirmation of the belief that the best way to allow people to get ahead was to unshackle them economically.

It was also felt by some theorists, including the anthropologist Oscar Lewis (1914–1970) and the politician and sociologist Daniel Moynihan (1927–2003), that the poor had to be freed from the so-called “culture of poverty” or “ghetto culture” that hindered generation after generation from attaining the drive, confidence and initiative to succeed (Lewis, 1959, 1961; Moynihan, 1965). Unlike Ragged Dick and the other rags-to-riches nineteenth-century heroes of Horatio Alger (1832–1899), the poor of the postwar period were believed to be held back by something that prevented them from dragging themselves up by their own bootstraps (Alger, 1868). The War on Poverty, therefore, was not so much about redistributing income as it was about fostering “middle-class ideals” of individualism and enterprise in the poor (Andrew III, 1998: 58). The funding of and the theories behind the Great Society notwithstanding, the initiative as a whole, the most substantial series of American welfare programmes since the 1930s, nevertheless symbolized the notion that American society had to be rebalanced somehow, both in terms of rich and poor and in terms of individual and society. Unlike its predecessor, Kennedy’s New Frontier—which, although it combined anti-poverty welfare programmes and foreign policy initiatives, evoked the image of the individualistic American pioneer or frontiersman—the Great Society, by virtue of its very name, acknowledged the importance of society, rather than just the individual. Society was more than a loose collection of disinterested individuals; it was a cohesive, organic, holistic unit, almost an organism whose health was in dire need of improvement.

Community as healer

Social psychiatrists also believed that unfettered individualism had to be checked if preventive psychiatry was to be achieved. Such thinking was best represented in the rise of the community mental health movement, which became the practical application of social psychiatric theory in the postwar period. Both social psychiatry and the community mental health movement signified a pronounced shift from focussing on the mental health of individuals to the mental health of populations. According to psychologist Herbert Dorken (1926–2012), who directed a pioneering community mental health service in Minnesota during the 1950s: “Comprehensive community mental health programs follow the pattern of public health philosophy which places the need for community service paramount to individual considerations” (Dorken, 1962: 335). Underlying this shift were profound changes in terms of the aetiology and the treatment of mental illness, and what this implied for psychiatrists and other mental health professionals. Before the rise of social psychiatry, the cause of mental illness was associated firmly with the individual and his/her immediate family environment, regardless of whether the specific explanation was found to be in hereditary factors, organic brain damage (for example, perinatal brain damage or post-encephalitic disorder) or the intra-familial conflicts identified by psychoanalysts. Even those causes that had an environmental component, such as alcohol-induced psychosis or general paresis of the insane (from syphilis), were thought to be due fundamentally to the moral shortcomings of such individuals, rather than the social environment itself. Although neurasthenia was thought to be a consequence of urbanization, technological advances and the hustle and bustle of modern life, the specific cause was to be found in the neurasthenic’s inability to cope with such changes. While sensitive, middle or upper class and Protestant businessmen, professionals and society women were thought to be vulnerable, the working classes, African Americans and Catholics were not (Schuster, 2011: 22). The treatment for neurasthenia and other mental illnesses also remained focussed on treating specific patients (for example, rest cures, exercise and time in the outdoors for neurasthenia) or relying on institutionalization. The community was seen as neither part of the problem nor part of the solution.

It could be argued that a similar shift occurred during the mental hygiene and child guidance movements of the early twentieth century, when there was similar interest in the social environment and the mental health of populations. But the focus of child guidance and mental hygiene experts tended to remain centred on the individual or their immediate family, rather than broader social factors, as the title of The Individual Delinquent, child psychiatrist William Healy’s (1869–1963) influential book, indicates (Healy, 1915; Jones, 1999: 61). Similarly, the psychiatric social workers (PSWs) employed in child guidance and mental hygiene clinics also concentrated primarily on understanding the potentially pathological role of the family, rather than the broader community. Furthermore, institutionalization remained the predominant solution for countless cases, as the numerous advertisements for asylums in medical journals, such as the Journal of the American Medical Association, and the burgeoning population of institutions testify. Rather than the community being seen as essential to treatment, as would be the case in community mental health, patients continued to be removed from the community and placed in often remote hospitals. Psychiatrists remained in their privileged positions, either superintending such hospitals, providing individual psychotherapy or other forms of treatment or serving as the unquestioned heads of the interdisciplinary teams (consisting of psychiatrists, social workers and psychologists) that manned child guidance and mental hygiene clinics.

Those behind social psychiatry and community mental health envisaged most of these aspects of psychiatric theory and practice changing, not least shifting the focus from individuals and their immediate families to communities and the broader social environment. It would be wrong, however, to overstate these intellectual and ideological transitions, and, in turn, underestimate the economic rationale behind community mental health and preventive psychiatry. The combination of teeming mental asylums and the growing perception that mental disorder was more prevalent in American society than previously thought meant that it was also simply too expensive to keep the mentally ill in psychiatric institutions for extended periods of time. In his 1963 speech to Congress, Kennedy estimated these costs at “$2.4 billion a year in direct public outlays for services—about $1.8 billion for mental illness and $600 million for mental retardation” (Kennedy, 1963). Moreover, the state of mental asylums was increasingly coming into question. In 1948, Albert Deutsch (1905–1961), a historian and journalist who had written one of the first histories of American psychiatry (Deutsch, 1937), published The Shame of the States after touring 40 asylums in Pennsylvania, Ohio, New York and California (Deutsch, 1948). As his title suggested, conditions in many state hospitals were filthy, overcrowded and unhealthy, with scant expectations for patients to experience any form of cure or recovery. Deutsch’s findings attracted a great deal of media and medical attention and were followed by both academic and cultural attacks on the asylum, ranging from Erving Goffman’s (1922–1982) Asylums (1961) to Ken Kesey’s (1935–2001) One Flew Over the Cuckoo’s Nest (1962).

The alternative to institutionalized care was thought to be found in the community and, specifically, in CMHCs, the building of which was funded by the Community Mental Health Act, passed on 31 October 1963, just weeks before Kennedy’s assassination. An amendment, nicknamed the Oswald Bill (after Kennedy’s assassin, Lee Harvey Oswald, whose actions, the legislators thought, could have been prevented had he benefited from the presence of such centres), was subsequently passed under President Johnson to staff such centres with psychiatrists, nurses, social workers and “paraprofessional” community mental health workers, who would work directly in the community. Not only was the community beginning to be seen as a more appropriate and effective setting to treat patients but, given the rise of environmental explanations for mental illness, CMHCs were also seen as a site for prevention. This included primary prevention, or “efforts to reduce the incidence of psychiatric disorder in a community”, secondary prevention, or “reducing the duration and severity of the disorders which do occur” and tertiary prevention, or the “maximum possible reduction of impairment caused by fully developed disorders” (Karno and Schwartz, 1974: 7). The newly emergent antipsychotic and antidepressant drugs (such as chlorpromazine and imipramine), which had been used not only to stabilize patients in institutions, but also as an aid to the primary goal of effective psychotherapy, also played a bridging role in the deinstitutionalization process, allowing patients to transition more effectively to care in the community (Healy, 1998). Although many social psychiatrists believed that drugs were important tools, they were secondary to the ultimate goal of prevention, just as they had often been perceived to be a useful means to the end of psychotherapy, rather than an end in themselves.

The shift from asylums to CMHCs had major implications for psychiatric practice. While most psychiatrists before the Second World War worked in asylums, an increasing number found themselves in private practice, providing individual psychotherapy to clients wealthy enough to pay for their services. One of the reasons for this was that, during the 1930s, hundreds of psychoanalysts fled Nazi Germany for the United States and the United Kingdom. This emigration not only “transferred the epicentre of psychoanalysis from Europe to the United States” but also allowed psychoanalysis to dominate American psychiatric thought and practice by 1945 (Kirsner, 2007: 83). Although most social psychiatrists had a background in psychoanalysis and supported its tenets in principle, they recognized three main problems with the psychoanalytic dominance of American psychiatry when it came to preventive psychiatry. First, since the poor lacked the time, money and, some also argued, the cultural and educational refinement for psychotherapy (Cole et al., 1962; Moore et al., 1963), they tended not to be the target audience for most psychiatrists in private practice (Hersch, 1968). Second, psychoanalysis was not particularly preventive, especially in terms of the mental health of populations. Although psychoanalytic theory might have infiltrated parenting advice manuals during the postwar period, its focus was predominantly on treatment. Given the sheer number of mentally ill Americans and the link between mental disorder and deprivation, however, many social psychiatrists doubted that this was the most effective or efficient approach. As child psychiatrist Leon Eisenberg described, there were “more people struggling in the stream of life than we can rescue with our present tactics” (Eisenberg, 1966: 23). Third, the research of psychoanalysts focussed primarily on describing individual case studies, including the underlying causes of their mental illness and the course of therapy. While these studies were valuable in terms of eliciting how specific psychoanalytical factors could impact upon mental health (and make for fascinating reading for historians), they rarely made extrapolations that would apply to populations and therefore made only piecemeal contributions to psychiatric epidemiology.

To be more effective and efficient in times of both challenge and opportunity, psychiatrists were called upon by leaders, such as American Psychiatric Association (APA) president CH Hardin Branch (1908–1990), to begin concentrating on community mental health rather than being concerned solely with the mental health of individuals (Branch, 1963). But, as Harry R Brickman, Programme Chief of LA County Mental Health Services, described, this was not a simple transition:

a delicate balance must be set and maintained between the modest, but ultrasafe position that mental health is nothing more than clinical services, and the perhaps over-ambitious, but more daring position that mental health services can and should eventuate in a more humane and emotionally healthy community. (in Karno and Schwartz, 1974: viii)

Part of this shift in focus involved psychiatrists understanding more about and becoming more involved in the communities in which they practiced. As Robert H Felix (1904–1990), the first director of NIMH, described: “To be fully effective, a good mental health program must include some provision for social action so that the total community environment is a mentally healthy one” (Felix quoted in Torrey, 2014: 47). While it was never particularly clear what was meant by such “social action”, it was clear that it involved psychiatrists, as well as other mental health professionals. In an address to the APA entitled “The Image of the Psychiatrist: Past, Present, and Future”, Felix stressed that psychiatrists would have to become more civically active, becoming immersed in their patients’ communities if the prevention of mental illness was to be achieved (Felix, 1964), a call that was repeatedly made in the pages of the American Journal of Psychiatry. Although a host of leaders within American psychiatry, including not only Felix, but also most presidents of the APA during the 1950s and 1960s, supported such calls for psychiatric social action, they did encounter resistance and scepticism. As Elizabeth Ann Danto has demonstrated in her fine analysis of Sigmund Freud (1856–1939) and the free clinics provided in Vienna, Berlin and elsewhere during the 1920s and 1930s, such social action was not incompatible with psychoanalysis, despite its emphasis on the individual patient and the individual therapist (Danto, 2005). But as the community mental health movement gained momentum during the late 1950s and early 1960s, many psychoanalysts began to resent the expectation that they turn their attention to the community.

One example of this can be found in a series of letters to the editor of the American Journal of Psychiatry that followed a 1963 letter from leading forensic psychiatrist Henry A Davidson (1905–1973). In his correspondence, Davidson recommended that psychiatrists in private practice, who he estimated charged $50 per hour, volunteer their time working in understaffed public hospitals and community clinics, adding that it was hypocritical for psychiatrists to complain about clergy, psychologists and social workers impinging on their territory by providing therapy to the underprivileged when they were unwilling to work with such patients at a reduced rate (Davidson, 1963a). Davidson’s suggestion echoed one made the year before by Leo H Bartemeier (1895–1982), Chairman of the American Medical Association’s Council on Mental Health, in an article about what the findings of the JCMIH implied for psychiatry. Aiming his comments at “individual psychiatrists working in private practice”, Bartemeier stated that the “long-stated criticism against psychiatrists in private practice” was that they “were isolated from the rest of the community”, and that they should “devote more of their working hours to community clinic services” (Bartemeier, 1962: 973). In a commentary that appeared in the same volume as his letter, Davidson explained that his suggestion was greeted with pronounced disapproval from his fellow psychiatrists, who argued that it undermined the right of psychiatrists to earn a decent living (Davidson, 1963b). As one respondent complained, Davidson’s suggestion was indicative of a “Robin Hood complex” that den[ied] elementary economic and political facts of life (Davidson, 1963b: 192). Reiterating the need to focus on individual patients and their specific issues (in addition to respecting the rights of individual psychiatrists to build up successful private practices), the writer added: “Individual psychotherapy is the only treatment that roots out the trouble. You can’t apply this on a mass basis” (Davidson, 1963b: 192). Although Davidson retorted that he hoped that most psychiatrists were not so “selfish” and APA president Jack R Ewalt (1910–1998) would subsequently add in his “President’s Page” that he supported Davidson’s suggestion, it was clear that many resisted the call of community mental health and saw it as a threat to their earning potential and their ability to provide effective psychotherapy (Davidson, 1963b; Ewalt, 1963).

The emergence of community mental healthcare meant that psychiatrists were also expected to become more community-minded in a way that relinquished some of their independence and authority, a different sort of balancing act. In other words, they were expected to share their psychiatric authority with other mental health workers—some professional, some not—and, in the process, relinquish some of their control over what was considered psychiatric knowledge. To a degree, this had already happened voluntarily with respect to the social scientists (mainly anthropologists and sociologists) who had participated in many of the pioneering social psychiatry studies. As already mentioned, Mental Disorder in Urban Areas was written by two sociologists, and Social Class and Mental Illness was written by a sociologist–psychiatrist team and funded by an NIMH grant. Notably, after the sudden death of project founder Thomas AC Rennie in 1955, the Midtown Manhattan Project (also funded by NIMH) was spearheaded by sociologist Leo Srole (1908–1993), with support from anthropologist Marvin Opler (1914–1981) and numerous social science researchers. Srole, in particular, dealt with the media storm that followed the publication of Mental Health in the Metropolis in 1962. The work of other social scientists, most notably the sociologist Erving Goffman (1922–1982), also influenced the burgeoning community mental health movement enormously.

It was one thing for psychiatrists to appreciate theoretical insights from social scientists, some of whom were funded by NIMH and other funding bodies. It was quite another to cede clinical control and knowledge to other mental health workers. CMHCs were explicitly multidisciplinary clinics, with psychiatrists working in teams not only with other professionals, such as psychologists, PSWs and psychiatric nurses—as had been the case in child guidance and mental hygiene clinics—but also with so-called paraprofessionals or non-professionals. Such “indigenous” paraprofessionals were often employed “in impoverished and ethnic minority communities” because “it became apparent that white, Anglo-American, English-speaking-only professionals were often very limited in their sensitivity to, understanding of, comfort in and communicative skills with such communities” (Karno and Schwartz, 1974: 170). Others were former service users. Given the cultural, political and economic disconnect between psychiatrists in CMHCs and the communities they served, paraprofessionals were introduced to bridge these gaps and encourage community members not to be suspicious of the centres. Federal, state and local funding was made available to train these community mental health workers or “psychiatric technicians” in “basic interviewing, counseling and reality-assisting skills” (Karno and Schwartz, 1974: 170, 175). As with most aspects of community mental healthcare, there was an economic, as well as a practical and ideological, rationale for such paraprofessionals, as they received low wages or, in many cases, worked on a voluntary basis. In this way, paraprofessionals provided an alternative to more highly paid and often unobtainable PSWs, which could produce bitterness and dampen morale in CMHCs.

Paraprofessionals were brought in partly because they served an ambassadorial role for CMHCs, but also because they were thought to have specialized knowledge about their community and the social problems that beset it. As such, their local, cultural and practical expertise helped to balance the theoretical knowledge provided by psychiatrists and other professionals. But who was really the expert, the university-trained psychiatrist or the indigenous paraprofessional or non-professional? A series of articles detailing the role of non-professionals at Lincoln Hospital Mental Health Services (LHMHS) in the Southeast Bronx illustrates how mental health professionals often struggled to share expertise and authority. The first article stressed how the employment of “indigenous non-professionals” was part of the centre’s “innovative” approach to preventing mental disorder in a “highly disadvantaged” part of New York City (Reissman and Hallowitz, 1967: 1408). The “naturalness” of the “non-professionals” allowed for an “informal atmosphere” that underlined the “open-door policy” of the centre and enabled “freer contact and communication on the part of ‘clients’ from the area” (Reissman and Hallowitz, 1967: 1409). Paraprofessionals worked in the community to identify people who potentially needed support, functioned as liaison officers between the community and LHMHS, and educated the community about mental health, ideally reducing the stigma of mental illness. The “non-professional workers” also provided local knowledge to help clients avoid “bottlenecks and red tape” in accessing services and offered “encouragement and support”, which allowed clients “to maintain motivation, dignity, and self-esteem” (Reissman and Hallowitz, 1967: 1409). In fact, one of the stated aims of the centre was “to demonstrate that indigenous non-professionals under professional supervision can be trained to provide meaningful service for a disadvantaged population”; another was “to transform clients into helpers and active citizens” (Reissman and Hallowitz, 1967: 1409).

The optimistic tone present in this initial article eroded somewhat in subsequent papers. The authors stressed that the paraprofessionals, now called “mental health aides”, were not junior professional mental health workers and that most lacked “formal education” (Hallowitz and Reissman, 1967: 769). They were “not sophisticated about mental health problems”, but rather were “savvy” because of their “struggle to survive”; as such, they were best placed to play the role of “good friend”, “good neighbour”, “model”, “potential counsellor” and “sustainer of hope” (Hallowitz and Reissman, 1967: 769). Moreover, many “aides, coming as they do from a disadvantaged population, bring to the job many of the same strong feelings toward the power structure as is evident in the target population” (Hallowitz and Reissman, 1967: 775). In turn, professionals were “reluctant to give responsibility to the non-professional and to allow him much independence of action or judgement” (Hallowitz and Reissman, 1967: 775). As a result, professionals and non-professionals contested who was truly the expert in CMHCs. While non-professionals tended to adopt an “anti-intellectual attitude” and emphasize that it was only they that knew “what was really going on”, professionals were often unable to shed the mantle of authority and adapt to more informal and less structured approaches to management (Hallowitz and Reissman, 1967: 775). Despite these problems, the authors nevertheless reported that their work with non-professionals was “most encouraging” (Hallowitz and Reissman, 1967: 777).

A final unpublished paper about LHMHS presented to the National Association of Social Workers conference in 1968 by one of the authors of the previous papers, social worker Emmanuel Hallowitz (1920–2001), suggested otherwise. Hallowitz noted that although “a spate of books and papers extolling the virtues of the ‘indigenous non-professional’ not only as a new source of manpower but also as an agent of change both within the community and within the institution” had been published recently, the use of community mental health workers was also problematic (Hallowitz, 1968). Contrary to “myth … the poor do not necessarily have special knowledge, insight, or intuitions not available to the more affluent” (Hallowitz, 1968). It “should not be a stunning discovery”, Hallowitz added, that “a good sociologist or anthropologist who has gained community acceptance can understand the dynamics of the community much better than the nonprofessionals” (Hallowitz, 1968). Ultimately, Hallowitz’s paper questioned whether professionals were in fact willing to share authority, expertise and responsibility with non-professionals. Community mental health workers could make superficial decisions about centre décor, furniture and opening hours, but it had to “be anticipated and accepted that in their growing sense of power they will make unreasonable, if not irrational, demands and that they will abuse their power” (Hallowitz, 1968). In any matters of import, workers should not be “under the misapprehension that they will decide” (emphasis in original; Hallowitz, 1968).

The case of LHMHS indicates that, just as psychiatrists were often unwilling to compromise their private practices to support the community mental health movement, so too were they, and other mental health professionals, hesitant about sharing their power and expertise. The knowledge, attitudes and theories of individual professionals were not to be trumped by insights from the community. Another indication of the discomfort many psychiatrists felt regarding these new relationships was an escalation in the rates of burnout for psychiatrists working in CMHCs. A survey of 214 psychiatrists in 1987, for instance, revealed that while the most common factors attracting psychiatrists to CMHC work were community service, serving the indigent and doing worthwhile work, the most common reasons for psychiatrists to leave CMHCs were issues related to their role and value within the centres (Vaccaro and Clark, 1987). Altruism and community-mindedness might have drawn many psychiatrists to CMHCs, but uneasiness about ceding their independence contributed to many ultimately leaving.

Ordinary Americans were also expected to re-balance their relationship with the communities in which they lived for the sake of mental health. Mental health was not just the responsibility of individual citizens or individual mental health professionals, but also the responsibility of communities (Ewalt, 1955). As with much postwar thinking about mental health, such notions had their roots in wartime experience. In Psychiatry in a Troubled World: Yesterday’s War and Today’s Challenge, for example, William C Menninger (1899–1966), who had been Chief Consultant in Neuropsychiatry to the Surgeon General of the Army (1943–1946), insisted that:

In the sacrifice of some of his individuality, [soldiers] found the compensation of comradeship that rarely develops in civilian life. The resulting security and satisfaction were an important component of his mental health. In the experience he found a new kind of unselfishness. He discovered a rare unity in human relationships that erased differences in creed and color and in social, economic, and educational backgrounds. (Menninger, 1948: 353)

Unfortunately, many civilians lacked such group connections or even the benefit of close friends, leaving them feeling unloved, insecure and unwanted. Referring to the German social psychologist Erich Fromm’s (1900–1980) Escape from Freedom (1941), sociologist Claude C Bowman (1909–1988) added that:

Modern man has won a succession of battles for freedom but, looking back from the vantage point of the twentieth century, it appears that there were liabilities inherent in these victories …. Men are lonely today because these emancipating triumphs severed the ‘primary ties’ that united them with others in the pre-individualistic period. We now have more individuality in democratic societies but this advantage has been purchased at a large psychological price. (Bowman, 1955)

Loneliness was one such price of too much individuality; the other price came in the form of social problems which, in turn, would result in more mental disorder. In this way, individualism had the potential to damage psychologically those who benefited from it in a material sense as well as those who suffered from it.

Conclusion

During the mid-twentieth century, American social psychiatrists argued that society needed to be re-balanced to prevent mental illness from overwhelming American society. Although such a balancing act was sometimes described in terms of the distribution of resources, it was more often framed, more subtly, in terms of the balance between individuality and communitarianism. But this balancing act never occurred. By the 1970s, a host of factors had begun to undermine the social psychiatry movement. The escalation of the Vietnam War and President Johnson’s decision not to seek re-election sapped both economic and political resources. Nixon, though he was interested in healthcare, was not keen on psychiatry, nor were psychiatrists keen on him. CMHCs were also hampered by lack of resources, lack of coordination with existing mental hospitals, and escalating racial and cultural tensions. Many of the deinstitutionalized returned to their communities only to end up in prisons or become homeless. Perhaps most damning was the blunt fact that rates of mental illness continued to rise. While these increases were partly due to the emergence of new disorders during the postwar period, such as Attention Deficit Hyperactivity Disorder, and the softening of diagnostic criteria for other disorders, such as mild depression, they did not indicate that social psychiatry had succeeded in preventing much mental illness (Smith, 2012). As many of the people who would have been previously institutionalized were now in plain sight, often lacking adequate treatment, the public was also more aware of those with serious mental health problems. Whereas the preventive aspects of social psychiatry had dominated discussions of mental health policy throughout the 1950s and 1960s, funding for community mental health was reduced throughout the 1970s. Although Jimmy Carter’s Mental Health Systems Act was passed to reverse this trend in 1980, most of this piece of legislation was repealed by President Ronald Reagan in 1981, effectively ending federal funding of community mental healthcare.

The ideological ground was also shifting. On the one hand, psychiatrists such as RD Laing (1927–1989) and Thomas Szasz (1920–2012) were—in very different ways and for different reasons—questioning the very notion of mental illness itself (Laing, 1960; Szasz, 1961). On the other, the emergence of psychopharmacological best-sellers, such as Miltown, Ritalin and Valium, encouraged many psychiatrists to re-embrace biological psychiatry, neurology and genetics. The biological psychiatry that emerged in the 1970s adjusted the psychiatric gaze from fixating on the mental health of populations to focussing once again on individuals. This was a timely shift as the United States entered the narcissistic “me” decade and as Americans eagerly adopted the role of mental health consumers (Lunbeck, 2014).

Historians, too, have either ignored or rejected the ambitions of social psychiatry and related environmental approaches to mental health, none more so than psychiatrist-cum-historian E Fuller Torrey in his recent scathing indictment of such approaches (Torrey, 2014). I would argue, however, that such assessments suffer from present-centredness and an over-reliance on hindsight. A more careful reassessment of community mental healthcare indicates that, although CMHCs might have failed in practice, this was not because the theory behind them was invalid; the approach might well have proven workable had CMHCs received sufficient support. Similarly, social psychiatry’s focus on social and economic factors remains relevant to mental health, just as it is to other chronic diseases, ranging from cancer and heart disease to diabetes and obesity. Cementing these links further is not necessary; what is needed is clearer thinking about what to do about the association. Rather than rejecting social psychiatry and community mental health out of hand because of past failures, one role of the historian can be to determine which of their aspects bear further scrutiny and might remain relevant in the current context.

It could be that, while most social psychiatrists were not advocating socialist psychiatry, the changes they proposed to the balance between individuals and society were nevertheless radical. If individual psychiatrists were not particularly willing to sacrifice their ability to practice as individual actors, was it realistic to expect such sacrifices of ordinary Americans? In a country so unwilling to give up the right to bear arms, perhaps such a transition was bound to fail. Moreover, the experience of other countries, including India’s attempt to “get rid of poverty”, also suggests that the costs of such initiatives are not worth the perceived benefits. Perhaps what was needed was a subtler solution.

Today, in the wake of the global economic slowdown, rising rates of mental illness and disaffection with psychopharmacology, the idea that there are social determinants of mental health is taking root once more. But, while there has been some flirtation with left-leaning politicians, such as Democratic candidate Bernie Sanders (b. 1941), a radical political shift in the United States is unlikely. That does not mean, however, that there are no potential solutions that might benefit mental health while simultaneously providing a better balance between individualism and society. One possibility is a guaranteed basic income (GBI), which provides every person with an automatic, unconditional income from the government, whether they work or not. With it, individuals can work for a higher wage at a job of their choosing, start their own business, transform their skills and interests (music, art, writing) into a career, or devote themselves to parenting, caring for relatives, volunteering or other unpaid, but vital, occupations. With respect to mental health, GBI has the potential not only to raise people out of poverty and reduce the stress associated with fluctuating income, but also to give people more control over their lives and more capacity to give back to their communities and engage with others, thus reducing the social exclusion identified by many social psychiatrists. In this way, it offers opportunities for people to retain their individuality, and to express it even further, through a socially progressive policy.

Today, advocates of GBI include the socialist sociologist Erik Olin Wright (b. 1947), but the idea dates back to the era of the American Revolution, having been suggested by Thomas Paine (1737–1809) in Agrarian Justice (1797). It was also espoused by social psychiatrists 50 years ago, specifically the members of JCMHC. In a 1969 report sent to Congress, state governors, NIMH and the Secretary of Health, Education and Welfare, entitled Crisis in Child Mental Health: Challenge for the 1970s, JCMHC included GBI in its list of recommendations for preventing child mental illness. Although the recommendation, along with the Commission’s other ideas, quickly evaporated, it is gaining traction again. Switzerland will vote on a guaranteed income of over £20,000 in 2016, and discussions are ongoing in France, Finland and the Netherlands about piloting the idea. Given the complex nature of mental illness, it would be rash to claim that GBI could prevent mental illness by itself. But, if the history of social psychiatry has any lesson in terms of turning theory into practice, it is that pragmatic, practical and nuanced solutions, ones that balance the human drives to be both community-oriented and individualistic, will be required.

Data availability

Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.

Additional information

How to cite this article: Smith M (2016) A fine balance: individualism, society and the prevention of mental illness in the United States, 1945–1968. Palgrave Communications. 2:16024 doi: 10.1057/palcomms.2016.24.