The COVID-19 Pandemic and Ethical Challenges posed by Neoliberal Healthcare

Christopher Ahlbach, Teresa King, Elizabeth Dzeng

Published in the Journal of General Internal Medicine on October 27, 2020

As COVID-19 continues to ravage the world, the pandemic has revealed the stark inadequacies of the United States’ beleaguered social safety net and health care system, which have predisposed us to a poor emergency response. In addition to the 27.9 million Americans who were uninsured before the pandemic, the economic fallout has added 36 million (and counting) newly unemployed Americans.1 COVID-19 has revealed and exacerbated inequalities in US health outcomes and healthcare access that are characteristic of a neoliberal system. Neoliberalism holds that laissez-faire economics results in an optimally organized social system. However, we argue that neoliberalism is fundamentally unethical, as healthcare is a human right and not a commodity. Though much thought and research have examined the ways in which the failures of our healthcare system are unethical, the role of neoliberalism has rarely been acknowledged. This gap in scrutiny is worth addressing, as neoliberal economic forces influence, if not dictate, many dangerous inadequacies of US healthcare, especially in this time of COVID-19.

The term “neoliberalism” refers to a political and economic philosophy of unbridled capitalism popularized in the 1970s that favors corporate deregulation, privatization, and austerity (i.e., reducing government spending for social services), and is the dominant economic system in the US.2 In the context of US healthcare, neoliberal policies promote corporate profit through the privatization of systems of healthcare delivery and payment models which transformed a social service into a business. The interests of the private sector (e.g., profit) are prioritized, while the interests of the public (e.g., healthcare and health), whose labor enables that profit, remain secondary.

As evidenced by the rapid and effective COVID-19 response seen in countries with socialized healthcare delivery systems such as New Zealand, we can better meet the needs of our patients and improve public health by decoupling our care models from corporate interests. The magnitude of social disruption due to the pandemic offers a rare opportunity to reflect on the ubiquity of neoliberalism, and proactively design a more ethical system. Despite the labor of health workers trying to meet their patients’ needs during these unprecedented times, our system is flailing. The neoliberal trade-off of capital for corporations and austerity for the working poor is unethical and ill-suited to the achievement of human rights, including just access to healthcare.

The Universal Declaration of Human Rights states that access to adequate medical care is a human right, and in this the US is profoundly lacking. Those without insurance have no means of obtaining healthcare other than going into debt. This violation of human rights leads to morbidity and death. A recent analysis estimated that if every American were to have health insurance, 68,000 deaths would be avoided each year.3 The neoliberal healthcare system in the US was intentionally designed to allow corporate wealth accumulation at the expense of public health, as evidenced by recent reports of enormous profits among health insurance companies.

Some may argue that being cognizant of the structural violence that impacts our patients’ lives, such as macroeconomic issues and healthcare delivery payment structures, is outside the duty and expertise of clinicians. We would posit that healthcare professionals have an ethical responsibility to address all determinants of health, particularly those embedded in the fabric of their professions. This neoliberal system has also failed to protect healthcare workers. Neoliberal profit-driven production models hindered the manufacturing of PPE in preparation for a pandemic, directly endangering clinicians. One of the most incongruous demonstrations of a healthcare system built on profit generation is the reports of healthcare workers losing their jobs because of the large profit losses that followed the cancellation of elective surgeries and procedures during this pandemic.

The COVID-19 pandemic lays bare the inadequacies and instabilities inherent to healthcare delivery models that operate according to a neoliberal ethic. Trillions of dollars have been spent to keep stock markets afloat, yet a fraction of that has been spent to guarantee employment, income, or healthcare access, exacerbating already precarious socioeconomic situations for many. Pandemic preparedness was derailed when ventilator stockpiles were hindered by a corporation cancelling manufacturing contracts with the government due to their lack of profitability.4 The rhetoric of the federal government and others has unequivocally advocated for profit at the expense of human lives.

As we are witnessing, crises make deficiencies more glaring, and the COVID-19 pandemic is exacerbating pre-existing disparities.5 The neoliberal presumption that social services can be improved by creating markets for competition, then reducing governmental influence in those markets, has defined the course of healthcare provision in the US for the past century. This inherently constructs a hierarchy whereby health services and health are concentrated among those with prior social privilege, whether economic, racial, or related to immigration status. This sharp reinforcement of inequality flows from a fundamental tenet of neoliberalism: relying on free markets to allow people to procure the goods and services they need to survive. This is unethical because many will be unable to engage in those markets due to pre-existing inequalities and injustices. For example, racism exists in all domains of American society, and creating a free market for healthcare will necessarily prioritize the health of those most able to participate in the market – those of privilege. Indeed, profound racial disparities exist in COVID-19 infection rates and mortality, likely due to multiple direct causes, the root of which is structural racism.

Tying healthcare access to employment status becomes particularly problematic when public health necessitates the complete shutdown of whole industries, affecting millions of workers in the retail, food services, transportation, and entertainment sectors. Since the announcement of shelter-in-place and anti-congregation orders, the US has seen the most filings for unemployment in a single week in recorded history. Without healthcare for all, mass unemployment becomes an even more threatening prospect as those who are infected without health insurance could be more likely to delay or avoid care, furthering spread and mortality.

Social distancing requires that all individuals make personal sacrifices for the benefit of the many, a communitarian proposition that is the antithesis of the personal freedom neoliberalism fetishizes. The way the COVID-19 pandemic has touched nearly every part of human life calls for a collectivist pursuit of public health and safety, one that must continue after this crisis abates. The COVID-19 pandemic is an emergency exacerbated by the pre-existing, long-simmering healthcare crises in the US that arose from neoliberal ideologies and policies. Immediate universal health insurance could blunt the worst of this pandemic, but we must not return to the failures of our current system when this storm passes. Large-scale healthcare reforms that ensure healthcare for all people, decouple capital generation from healthcare, and prioritize human rights will ensure a more ethical system and a healthier society for all of us.


1.        Tolbert J, Orgera K, Singer N, Damico A. Key Facts about the Uninsured Population; 2019.

2.        Harvey D. A Brief History of Neoliberalism. Oxford University Press; 2005.

3.        Galvani AP, Parpia AS, Foster EM, Singer BH, Fitzpatrick MC. Improving the prognosis of health care in the USA. Lancet. 2020. doi:10.1016/S0140-6736(19)33019-3

4.        Kulish N, Kliff S, Silver-Greenberg J. The U.S. Tried to Build a New Fleet of Ventilators. The Mission Failed. The New York Times. 2020.

5.        Williams DR, Cooper LA. COVID-19 and Health Equity-A New Kind of “Herd Immunity”. JAMA. 2020. doi:10.1001/jama.2020.8051


Health care workers’ plea: You can save more lives than we can

A version of this opinion piece was published in the San Francisco Chronicle on December 2, 2020.

By Lingsheng Li and Elizabeth Dzeng

COVID-19 cases and deaths are breaking records around the country, and already traumatized healthcare workers are bracing for this onslaught with dread. Frontline clinicians have already seen too much death and suffering. A nurse who worked in New York during the first surge told us, “Nobody lived, that I know of. I don’t feel like I actually helped anybody.” Another said that in just one night, a “code blue,” signifying a patient in cardiac arrest, could be called every 15 to 20 minutes. Another doctor clarified, “Every time there was a code, it was basically a patient dying.”

As two physicians and palliative care researchers working at the University of California, San Francisco, we have gathered stories from doctors and nurses interviewed as part of a research study to understand the experiences of volunteer clinicians who were deployed to New York City during the initial COVID-19 surge in the spring of 2020. These were not reports of medical miracles or accounts of professional fulfillment, but rather narratives of profound grief and suffering at hospitals described as “war zones” again and again.

A critical care doctor recounted her worst day, as tears streamed down her face during the interview, “We just watched her die and we were the only people there when she died. I was kneeling there with her hand in my hand, sweating and crying, goggles fogged, and N95’s soaking.” These dedicated doctors take on guilt and self-blame for their futile efforts in the face of this virus that has already killed millions, “I blamed myself so intensely for the decisions I’d made in trying to take care of her. I owned her death.”

As the U.S. experiences its third wave, we fear that the idea exalted in the initial phases of the pandemic of “heroes in scrubs” provides a false narrative that frontline clinicians are the last line of defense against whatever bungled public health response the current administration throws our way. 

Healthcare workers are not heroes. Not even close.  

They are exhausted, burned out, and barely holding it together. The emotional strain was obvious throughout the majority of the stories we gathered. One nurse said, “I cried every day I got home. I was mentally exhausted.” Another said, “You get really overwhelmed and just start to cry. I didn’t feel I was helping. I’d wake up the next day and say to myself, ‘All right, let’s put on some hip-hop and get into the hospital and be a badass,’ and don’t let anybody know that you felt just completely defeated.” These clinicians put on a brave face every morning, but inside they’re breaking.

We like to think of heroes as glowing, Superman-like figures swooping in at a moment’s notice to save the day and protect the public. As COVID-19 cases continue to rise, frontline clinicians, including those who have retired, are once again called upon to step into unfamiliar roles and inadequately equipped hospitals. However, unlike superheroes, we cannot avert the inevitable by stopping a fast-moving train from crashing and burning. We are just trying our best to pull as many passengers off as we can before that train reaches its devastating halt.

You can count on us to show up to work, hold your loved one’s hands, grit our teeth, and put our heart and soul through yet another surge, but there may not be much more left in us. ICU capacities are maxed out again in cities and towns across the country. Our colleagues who have not yet beaten back the first wave of the pandemic have not recovered enough to endure another battle.

Because we understand the consequences, we cancelled holiday plans and took rain checks with friends and family, saying, “I’ll see you when this is all over.” After several of his staff members tested positive for COVID-19, my 70-year-old father-in-law, a practicing physician in Georgia, self-quarantined four separate times, each for several days, sleeping in a leaky RV parked outside his house, afraid of bringing the virus home. “Healthcare workers are ignored. I don’t want to be a hero. I just don’t want to die,” he says. Our colleagues do this on a daily basis, only to have, as one nurse practitioner from Texas noted on Twitter, a patient ask after testing positive for COVID-19, “What should I do? I went to a wedding this weekend with 200 guests.” What an utter mockery of our efforts.

We may not be heroes but we will do everything we can to treat and to comfort, even when we cannot cure or fix. Trust that we will do our best but we desperately need your help. The holidays will be different this year. Do not let this be the last Thanksgiving or Christmas you get to spend with someone you love. As the holidays approach, protect yourself and others around you by wearing a mask correctly. Follow CDC tips and think about your risk of getting infected or infecting others with the virus before planning any celebrations. Be wary of exposures over time in enclosed areas without air circulation. Find ways to safely connect with people, especially older adults who may be particularly isolated at this time. Be mindful of others in your families and communities who may be extra vulnerable. We also implore local and national leaders to institute mask mandates and endorse social distancing.

Healthcare workers are not heroes who can singlehandedly save the nation from this virus. You can do far more to stop COVID-19 than we can. 

Ethics in Conflict: Moral Distress as a Root Cause of Burnout

This commentary was published in the Journal of General Internal Medicine on October 30, 2019

In this commentary, we argue that moral distress and professional ethical dissonance are root causes of burnout; the way forward requires attention to the social and ethical dimensions of professional practice. The Physician’s Charter on Medical Professionalism, a charter for the new millennium written in 2002 through a partnership among the American Board of Internal Medicine, the American College of Physicians, and the European Federation of Internal Medicine, identified several threats to physician professionalism, including technology, market forces, healthcare system strain, and broader sociological shifts in the role of physicians in society.1 The Charter described how these factors challenged physicians’ ability to adhere to the values of professionalism.


Adding to these challenges, the medical profession is now experiencing a crisis of burnout. Burnout, characterized by emotional exhaustion, depersonalization (i.e., cynicism), and reduced personal accomplishment, has been linked to poorer quality of care and decreased patient safety.2 Doctors who experience burnout (and currently approximately 50% do) have a two-fold increase in suicidal ideation.3 Tragically, around 400 doctors die by suicide every year.


Prevailing explanations of the causes of burnout have focused on long work hours, the rapid adoption of electronic health records (EHRs), and grinding administrative tasks. We believe that the wellsprings of discontent run deeper. We see evidence of an insidious moral distress resulting from physicians’ inability to act in accord with their individual and professional ethical values due to institutional and societal constraints. These constraints have been exacerbated by changes in healthcare and society, changes that go beyond those identified in the Charter. This discontent amplifies a growing rift between the profession’s ethical ideals and reality.


The most pernicious expression of moral distress is moral apathy—a moral cynicism derived from feelings of powerlessness that provides a rich medium for the growth of burnout. To address physician burnout, we must look beyond mechanistic explanations and examine why many physicians feel unable to exercise the ethical agency that is so central to their professional identity as providers of ethical care. While others have described this phenomenon as “moral injury,”4 we believe that rooting this discussion within the context of moral distress and ethics provides a valuable framework and tool to understand burnout and develop potential solutions against it.


Everyday insults to professional values map onto the three pillars of the Physician Charter. They illustrate how healthcare systems create ethical tensions, both large and small, that lead to moral distress and burnout. Below are some examples.



The consumerization of medicine has challenged the physician’s imperative to act in a patient’s best interest (beneficence). Physicians are buffeted by conflicting pressures: to reduce costs in some settings and, in others, to raise institutional or individual income through prescribing or referral practices. Something as simple as RVU-driven throughput incentives may impede meaningful conversations with patients or disincentivize exploration of health concerns beyond the patient’s reason for visiting.



Physicians strive to offer treatment plans that respect patients’ values and preferences. However, everyday systemic factors challenge physicians’ ability to reconcile respecting autonomy with maximizing patient welfare. For example, when physicians feel compelled to provide potentially harmful or futile treatments because there is no advance directive or because family members disagree, subjecting a patient to perceived futile care near the end of life results in moral distress. Conversely, a patient’s refusal of life-sustaining treatment or desire to seek assistance in dying in states where it is legal may also create moral distress. Both situations conflict with the precept to “do no harm” and create tension with the value of respect for persons. Physicians may deploy coping mechanisms of dehumanization and detachment to deal with their perceived powerlessness as moral agents. They then absolve themselves of responsibility for the negative consequences of care by reasoning that systemic constraints offered them no leeway, further contributing to moral apathy.



The emphasis on healthcare as a business rather than a human right has created a cultural milieu that works against patient welfare. Examples abound. From the skyrocketing costs of critically important medications to the unaffordable costs of insurance and high out-of-pocket payments, physicians work within a system that regularly violates the Physician’s Charter. Physicians aware of the social determinants of health regularly battle the effects of a gutted social safety net, which produces egregious disparities in the quality of care. They feel powerless to address systemic threats to their patients’ health, such as inadequate medication coverage resulting in a preventable stroke, inadequate screening resulting in a delayed cancer diagnosis, and a lack of housing and food resulting in markedly worse outcomes for people who are homeless.



The clinician-ethicist Richard Gunderman wrote in the Atlantic (2014), “professional burnout is the sum total of hundreds and thousands of tiny betrayals of purpose, each one so minute that it hardly attracts notice.” These betrayals of purpose with their attendant pressure to engage in practices that contradict ethical ideals contribute powerfully to today’s crisis of burnout and disillusionment.

Moral distress can have a lasting impact on the culture of medicine as a whole. Individual moments of moral distress lead to alienation, detachment, and loss of empathy, which in turn produce a culture of cynical care devoid of meaning. Indeed, the phenomenon of individual moral distress can be mapped onto a larger and systemic ethical malaise within the medical profession. A deeper understanding of the underlying sources and mechanisms of discontent provides new avenues for interventions to improve the well-being of healthcare professionals and the culture of medicine as a whole.

As with many complex social phenomena, many of the forces that have led to growing rates of physician burnout are well-intended. The digitization of medicine and residency duty hour reductions, for example, are appropriate responses to real problems. However, interventions to prevent or mitigate burnout must also acknowledge the deeper underlying pathologies of values in conflict and moral distress. They require the courage to question the broader market-based trends and systemic changes that so often pit physicians’ professional ideals against today’s reality.

Solutions for burnout should focus on enabling clinicians to act according to their professional values. Healthcare organizations will need to prioritize ethics at the highest level of hospital leadership and embed ethical values into the core of their organization’s mission. Creating an organizational ethics aligned with medicine’s professional values requires responsive ethics and other leadership committees that empower physicians to exercise their moral agency and act ethically, rather than deferring to what would be safest for risk management or best for a short-term bottom line. We should also remain cognizant of the unintended consequences of interventions against burnout that create values conflicts, such as restrictions on resident work hours leading to perceptions among trainees of unprofessionalism and compromised patient safety. Most importantly, we should strive towards reforming our inherently unethical system, which privileges market-driven mandates to prioritize shareholder profit over providing care in the patient’s best interest.

Encouraging physicians to advocate for issues that directly impact their patients, such as the recent #stayinyourlane efforts regarding gun control or advocacy for universal healthcare coverage, empowers physicians to stand up for patient welfare and social justice and adds their voices to the many others working towards social and political change. This change, however, is a long game; strategies to combat moral distress and burnout must be established in the meantime. Perhaps most fundamentally, ethical awareness and ethics education must be strengthened for both medical learners and seasoned professionals, to provide the foundational ethical knowledge, skills, and moral courage to practice medicine in ways consistent with their professional values. While ethics education and a focus on ethics cannot solve these very real systemic strains, they can help arm physicians with the language and framework of ethics to more successfully navigate these challenges at an individual and institutional level. For example, in addition to focusing on reducing residency duty hours to alleviate burnout, residency leadership could also focus on interventions that improve trainees’ abilities to provide ethical care, such as increasing resources and support around the discharge of homeless patients or providing training in end-of-life care rooted in an ethical lens. One of us (E.D.) found that institutions whose cultures and policies prioritized autonomy over beneficence encouraged trainee behaviors reflecting a simplistic understanding of autonomy as offering limitless choice while eschewing the important role of physician recommendation in decision-making.5 We hypothesize that institutional prioritization of beneficence could encourage a more nuanced understanding of autonomy and help transcend ethical dilemmas where autonomy and beneficence appear to conflict.

Burnout is a complex and multidimensional problem. The degree to which it is a symptom of a larger ethical malaise has been underemphasized. Creating more efficient support systems and teaching resilience skills will go only so far. Taking the Professionalism Charter seriously as a guide to sources of moral distress can help re-ground the medical profession in an authentic moral framework.


  1. ABIM Foundation, ACP-ASIM Foundation, European Federation of Internal Medicine. Medical Professionalism in the New Millennium: A Physician Charter. Ann Intern Med 2002;136(3):243–246.
  2. Panagioti M, Geraghty K, Johnson J, et al. Association between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction: A Systematic Review and Meta-analysis. JAMA Intern Med 2018;178(10):1317–1330.
  3. Shanafelt TD, West CP, Sinsky C, et al. Changes in Burnout and Satisfaction With Work-Life Integration in Physicians and the General US Working Population Between 2011 and 2017. Mayo Clin Proc 2019;1–14.
  4. Talbot S, Dean W. Physicians aren’t ‘burning out.’ They’re suffering from moral injury. STAT. Accessed July 4, 2019.
  5. Dzeng E, Colaianni A, Roland M, et al. Influence of Institutional Culture and Policies on Do-Not-Resuscitate Decision Making at the End of Life. JAMA Intern Med 2015;175(5):812–819.

What are the Social Causes of Rational Suicide in Older Adults?

Editorial commentary on “Balasubramaniam M. Rational Suicide in the Elderly: A Clinician’s Perspective. J Am Geriatr Soc. 2018.”

In this issue of the Journal of the American Geriatrics Society, Dr. Meera Balasubramaniam discusses rational suicide in older adults, a desire for suicide in the absence of diagnosed psychiatric illness. She describes rational suicide in older adults as a topic of growing interest, and yet one that is rarely discussed in geriatrics.1 Given the lack of psychiatric pathology associated with rational suicide, this is an issue that geriatricians are increasingly likely to encounter.

Dr. Balasubramaniam provides a compelling psychiatric analysis of the various reasons one might consider rational suicide and the interplay between the self, others, and society on this decision. In this commentary, we highlight the influence of social, economic, and political trends on rational suicide in older adults. We believe that the trend towards rational suicidal thoughts stems in part from a confluence of three factors – neoliberalism, technology, and changing attitudes related to the legalization of physician-assisted death (PAD) in some states. By spotlighting the impact of sociological trends on rational suicide, we emphasize that in order to effectively address this challenge, clinicians must not only attend to their individual patients but also engage with societally based interventions.

Emile Durkheim first described the sociological roots of suicide in his seminal text, Suicide (1897).2 He found that social isolation, defined as a lack of relationships and loneliness, was an important factor leading to suicide. He used the term economic anomie to describe a subset of the social causes of suicide. Economic anomie occurs when traditional institutions are no longer able to regulate key social and economic needs, resulting in a lack of individual belonging and a sense of disconnection from society due to weakened social cohesion. This disruption of social equilibrium occurs during periods of serious social, economic, or political upheaval, producing declines in economic well-being and subsequent increases in suicides.

The 1970s and 1980s saw the beginning of the neoliberal era – the political, economic, and social upheaval of our time. Neoliberal policies prioritize economic deregulation, free trade, and privatization of public goods and services.3 These policies have created a culture that redefines citizens as consumers, whereby competition and market-based metrics become dominant ideological forces that permeate all aspects of human life.4 Neoliberalism changed human relationships within society from a civil sphere that enshrined a commitment to social solidarity and collaboration amongst fellow citizens to that of a universal market where human beings are fair game in calculations of profits and losses.5 Rather than emancipation and freedom, the markets created atomization and loneliness.4

Neoliberal ideology in America manifests in several ways that may contribute to the increasing trend of rational suicide in older adults. One could hypothesize that thoughts of rational suicide might occur when people feel that their value to society and what they receive from society no longer feel worthwhile. In a neoliberal society that values one’s utility in the market, the process of retirement and age-related declines in mental and physical capacities may have profound effects on self-worth. In more traditional societies, elders had a valued and respected place in society, with treasured wisdom to pass on to younger generations. Today, too many older adults see themselves as an appendage or, even worse, a burden. In a recent New York Times article, an 82-year-old woman described “vanishing” – once she became wheelchair-bound, she was invisible and erased.6 Feelings of being unwanted by and invisible to society might easily slip into thoughts of erasing one’s actual presence in the world.

Several authors have attributed the crisis of loneliness to neoliberalism’s ideology of competitive self-interest and extreme individualism.7,8 The gig economy and the widespread use of contractors in workplaces further isolate individuals who suffer from loneliness and a lack of connection, as they no longer have regular colleagues or fixed workplaces.9 This isolation is further exacerbated by the civic breakdown of “families and communities, the decline of institutions such as churches and trade unions, the switch from public transportation to private, inequality, an alienating ethic of consumerism, [and] the loss of common purpose”.8,10 The social isolation and loneliness that ensue have contributed to an increase in mental illness and suicides at all ages.11 Loneliness has profound health consequences, including an increased risk of cognitive decline, dementia, depression, heart disease, and stroke.12,13 Studies have demonstrated that social isolation, loneliness, and living alone increase mortality by 26–32%.12,14

Technology has often been cited as a contributing factor in loneliness, but its potential role in rational suicide is multifactorial.10 The technology industry has accelerated the broader temporal trends towards ageism and fear of decay that Dr. Balasubramaniam describes. Technology companies, eager to “disrupt” everything from the way we drive to the way we dry clean, have thrown down the gauntlet to “cure aging” and “solve death”, as Google’s Calico once declared.15 More so than ever before, aging is perceived as something to be vanquished rather than a natural human experience. Declaring aging a disease pathologizes elders as something to be shunned and avoided, both in oneself and in others.16 Nick Bostrom, a philosopher and thought leader in the “radical longevity” movement, states, “Aging breaks down your health and vitality, and eventually you get so weak that no amount of health care and medicine can prop you up…Just as you have begun to acquire a modicum of wisdom and experience, old age sets in to sap your energy and degrade your intellect. And then death swoops in to deliver the final insult. Now, there is real hope of ending this; that the last chapter of every human story need not play out this way”.17 Given the tech elite’s influence on social media, technology, and culture, these attitudes spread beyond Silicon Valley to pervade attitudes at large.

At the same time, life-sustaining medical technologies have changed the culture of how we live with serious illness and die. It is now possible to artificially sustain multiple organs in the absence of a realistic possibility of meaningful survival of the whole person.18 Concerns over overly aggressive care at the end of life and over unrelenting suffering have in part fueled advocacy for PAD, as many people see aggressive medical interventions and unrelieved suffering at the end of life as avoidable only through premature self-inflicted death.19

The growing acceptance of PAD and its legalization in six American states and the District of Columbia play an important role in changing attitudes towards rational suicide. An ethical concern of those opposed to PAD is the “slippery slope”: that the legalization of PAD, and the greater acceptance that follows legalization, initiates a shift in social perceptions towards accepting rational suicide, something previously considered ethically unacceptable. We believe that the legalization and more widespread acceptance of PAD was a necessary societal precursor to the rationalization of suicide in older adults. Just as PAD is increasingly accepted as a rational response to suffering in the setting of terminal illness, it establishes a foundation for accepting suicide as an ethically and personally permissible response to the natural degradation of the human condition with age.

In order to mitigate the desire for rational suicide in older adults, it is important that geriatricians and primary care physicians develop awareness of this issue and understand the individual psychological and broader societal underpinnings of such a request. By understanding the individual and social etiologies of rational suicide, clinicians and the broader community will be better able to advocate for medical and social support systems that might alleviate such desires.

Geriatric assessment could include loneliness and integration within one’s community as key components. Strengthening social support systems for older adults to decrease loneliness and to ease physical and caregiving challenges is an important step in combating the impact of neoliberalism and the ensuing crisis of loneliness. Interventions could include helping patients find programs that provide community and human contact. The “Campaign to End Loneliness”20 in the United Kingdom is an example of an organization that combats loneliness in older adults through research, education, and outreach. Though technology has been implicated in increasing loneliness10, it also has the power to connect. Helping less technologically savvy older adults use technologies such as video chat with family members and online support groups can help alleviate loneliness.

Clinicians should also feel empowered to speak up against ageism and to recognize it in themselves. Indeed, acceptance of the idea of rational suicide in older adults is itself ageist: it implicitly endorses the view that the losses associated with aging result in a life not worth living. The debates surrounding PAD must also recognize that the slippery slope towards rational suicide in older adults is already happening. The ethical and clinical challenges inherent in the discussion of rational suicide in older adults are fraught and will require further intellectual and ethical engagement from all people who care for and about elders.


  1. Balasubramaniam M. Rational Suicide in the Elderly: A Clinician’s Perspective. J Am Geriatr Soc. 2018.
  2. Durkheim E. Suicide: A Study in Sociology. London: Routledge Classics.
  3. Chomsky N. Profits Over People: Neoliberalism and Global Order. 1st ed. New York, NY: Seven Stories Press; 1999.
  4. Monbiot G. How Did We Get Into This Mess?: Politics, Equality, Nature. 1st ed. London: Verso; 2016.
  5. Metcalf S. Neoliberalism: The Idea that Swallowed the World. The Guardian. Published August 18, 2017. Accessed December 22, 2017.
  6. Bruni F. Are you Old? Infirm? Then Kindly Disappear. New York Times. Published December 16, 2017. Accessed December 22, 2017.
  7. Monbiot G. Neoliberalism is creating loneliness. That’s what’s wrenching society apart. The Guardian. Published October 12, 2016. Accessed December 22, 2017.
  8. Putnam R. Bowling Alone. New York: Simon and Schuster; 2000.
  9. Fisher A. There’s a Loneliness Epidemic Among Freelancers. Fortune. September 2016.
  10. Monbiot G. The Age of Loneliness. New Statesman. October 2016. Accessed December 22, 2017.
  11. Curtin SC, Warner M, Hedegaard H. Increase in Suicide in the United States, 1999–2014 (NCHS Data Brief No. 241). National Center for Health Statistics, Centers for Disease Control and Prevention. Published 2016. Accessed December 22, 2017.
  12. Holt-Lunstad J, Smith TB, Baker M, Harris T, Stephenson D. Loneliness and Social Isolation as Risk Factors for Mortality: A Meta-Analytic Review. Perspect Psychol Sci. 2015;10(2):227-237. doi:10.1177/1745691614568352.
  13. Luo Y, Hawkley LC, Waite LJ, Cacioppo JT. Loneliness, health, and mortality in old age: A national longitudinal study. Soc Sci Med. 2012;74(6):907-914. doi:10.1016/j.socscimed.2011.11.028.
  14. Steptoe A, Shankar A, Demakakos P, Wardle J. Social isolation, loneliness, and all-cause mortality in older men and women. PNAS. 2013;110(15):5797-5801. doi:10.1073/pnas.1219686110.
  15. McCracken H, Grossman L. Google vs. Death. Time. September 2013.
  16. Khazan O. Should We Die? Atl. February 2017. Accessed December 22, 2017.
  17. Bostrom N. The Case Against Aging. Personal website.
  18. Gawande A. Being Mortal: Medicine and What Matters in the End. 1st ed. New York, NY: Metropolitan Books; 2014.
  19. Dzeng E. Aid in Dying: a Triumph of Choice Over Care? Geripal. Published 2016. Accessed June 7, 2016.
  20. Campaign to End Loneliness: Connections in Older Age. Accessed December 28, 2017.


Should Homelessness be a Death Sentence?

I was on call in the hospital last winter when I received notification of a new admission, “30-year-old homeless woman admitted to ICU for DKA.”

“Leslie,” as I’ll call her, was admitted to the intensive care unit for diabetic ketoacidosis, a life-threatening complication of diabetes that occurs when a lack of insulin sends blood sugar dangerously high. She was critically ill, with acid and sugar levels in her blood so high that she was in a coma.

A Muni driver had found her in the back of the bus and called 911. Her records showed that she had been admitted to emergency rooms around the city multiple times for this same problem.

It was relatively easy to stabilize her medically. Tempering the social ills that prompted her admission was a greater challenge. She had fallen upon hard times and lost her home. As a Type 1 diabetic, she needed to take regular insulin injections to stay out of the hospital. However, all her possessions kept getting stolen.

Leslie, a slight, thin woman, felt unsafe among the predominantly male homeless population, who would frequently heckle and threaten her. I asked her why she didn’t go to a shelter, and she said that it was much worse there. The moment she closed her eyes to sleep, she would get robbed, again and again. Her insulin, her lifeblood, was lost each time. Through tears of hopelessness, she expressed fears that she would die the next time this happened. I could not disagree.

Social workers had tried to help, but the best they could offer was to return to a shelter. In my eyes, discharging her to the street felt complicit in her impending death. It felt morally reprehensible, and yet our social workers told me I might not have a choice.

Vulnerable groups, such as women, the disabled, the chronically ill, and the elderly, face additional dangers that make shelters more treacherous than sleeping on the streets. An elderly patient told me it was a relief to sleep under the stars because she had been assaulted in a shelter. On another occasion, she escaped a rape attempt in the bathroom only to be forced to sleep through the night in the same room as her perpetrator. The spaces intended to help instead became spaces of further victimization.

A study from San Francisco General Hospital showed that 27 percent of the homeless were women, and that 60 percent of homeless women had experienced some form of physical, emotional, or sexual abuse. The majority had endured emotional violence, while approximately a third had experienced physical violence and a similar number sexual violence.

Homelessness remains one of the greatest threats to health. A study in the United Kingdom showed that the average life expectancy of a homeless person was 30 years lower than that of the general population, and another study found that the risk of death was four to 31 times higher among homeless women than among women in general.

Efforts to improve homeless services in San Francisco should pay special attention to these vulnerable populations. This includes expanding women-only shelters and medical respite programs, which provide shelter and care for patients like Leslie. Medical respite programs in the city are in short supply. These are only temporary fixes to more long-term problems that require solutions such as housing policy reform, long-term drug rehab centers and supportive housing.

Funding is persistently inadequate, hampering efforts to provide homeless services. The average cost of a day in the hospital is $3,000, and significantly more for a stay in the intensive care unit, a cost transferred onto taxpayers when patients are homeless and uninsured. Strengthening homeless services to prevent hospital admissions may not just be the right thing to do; it might also save money.

Due to our social workers’ advocacy, we were able to find a place for Leslie in medical respite. This was such an unlikely outcome that our case manager called her our “Christmas miracle.” The security to take life-saving medications shouldn’t have to be a miracle.

In this case, we were able to safely discharge her, but all too often these women never make it into appropriate support systems. In some cases, this is a matter of life or death.

This op-ed was published in the San Francisco Chronicle on June 30, 2016, as part of the SF Homelessness Project.


How Academia and Publishing are Destroying Scientific Innovation: A Conversation with Sydney Brenner

I recently had the privilege of speaking with Professor Sydney Brenner, Professor of Genetic Medicine at the University of Cambridge and winner of the Nobel Prize in Physiology or Medicine in 2002. I had originally intended to ask him about Professor Frederick Sanger, the two-time Nobel Prize winner famous for his work on the structure of proteins and his development of DNA sequencing methods, who passed away in November. I wanted to do the classic tribute by exploring his scientific contributions and getting a firsthand account of what it was like to work with him at Cambridge’s Medical Research Council (MRC) Laboratory of Molecular Biology (LMB) and at King’s College, where they were both fellows. What transpired instead was a fascinating account of the LMB’s quest to unlock the genetic code and a critical commentary on why our current scientific research environment makes this kind of breakthrough unlikely today.

It is difficult to exaggerate the significance of Professor Brenner and his colleagues’ contributions to biology. Brenner won the Nobel Prize for establishing Caenorhabditis elegans, a type of roundworm, as a model organism for cellular and developmental biology research, which led to discoveries in organ development and programmed cell death. He made his breakthroughs at the LMB, where, beginning in the 1950s, a remarkable succession of innovations elucidated our understanding of the genetic code: the process by which cells in our body translate information stored in our DNA into proteins, vital molecules essential to the structure and functioning of cells. It was here that James Watson and Francis Crick discovered the double-helical structure of DNA. Brenner was one of the first scientists to see this ground-breaking model, driving from Oxford, where he was then working in the Department of Chemistry, to Cambridge to witness the breakthrough. This young group of scientists, considered renegades at the time, made a series of revolutionary discoveries that ultimately led to the creation of a new field called molecular biology.

To begin our interview, I asked Professor Brenner to speak about Professor Sanger and what led him to his Nobel Prize winning discoveries.

Sydney Brenner: Fred realized very early on that if we could sequence DNA, we would have direct contact with the genes. The problem was that you couldn’t get hold of genes in any way. You couldn’t purify what was a gene. That is why right from the start in 1954, we decided we would do this by using Fred’s method of sequencing proteins, which he had achieved [proteins are derived from the information held in DNA]. You have to realise it was only on a small scale. I think there were only forty-five amino acids [the building blocks of proteins] that were in insulin. We thought even scaling that up for proteins would be difficult. But finally DNA sequencing was invented. Then it became clear that we could directly approach the gene, and it produced a completely new period in science.

He was interested in the method and interested in getting the methods to work. I was really clear in my own mind that what he did in DNA sequencing, even at the time, would cause a revolution in the subject, which it did. And of course we immediately, as fast as possible, began to use these methods in our own research.

ED: This foundational research ushered in a new era of biological science. It has formed the basis of nearly all subsequent discoveries in the field, from understanding the mechanisms of diseases, to the development of new drugs for diseases such as cancer. Imagining the creative energy that drove these discoveries was truly inspirational, and so, I asked Professor Brenner what it felt like to be part of this scientific adventure.

SB: I think it’s really hard to communicate that because I lived through the entire period from its very beginning, and it took on different forms as matters progressed. So it was, of course, wonderful. That’s what I tell students. The way to succeed is to get born at the right time and in the right place. If you can do that then you are bound to succeed. You have to be receptive and have some talent as well.

ED: Today, the structure of DNA and how genetic information is translated into proteins are established scientific canon, but in the 1950s, the hypotheses generated at the LMB were dismissed as inconceivable nonsense.

SB: To have seen the development of a subject, which was looked upon with disdain by the establishment from the very start, actually become the basis of our whole approach to biology today. That is something that was worth living for.

I remember Francis Crick gave a lecture in 1958, in which he discussed the adapter hypothesis at the time. He proposed that there were twenty enzymes, which linked amino acids to twenty different molecules of RNA, which we call adapters. It was these adapters that lined up the amino acids. The adapter hypothesis was conceived I think as early as 1954 and of course it was to explain these two languages: DNA, the language of information, and proteins, the language of work.

Of course that was a paradox, because how did you get one without the other? That was solved by discovering that a molecule from RNA could actually have function. So this information on RNA, which happened much later really, solved that problem as far as origins were concerned.

ED: (Professor Brenner was far too modest here, as it was he who discovered RNA’s critical role in this translation from gene to protein.)

SB: So he [Crick] gave the lecture and biochemists stood up in the audience and said this is completely ridiculous, because if there were twenty enzymes, we biochemists would have already discovered them. To them, the fact that they still hadn’t went to show that this was nonsense. Little did the man know that at that very moment scientists were in the process of finding the very first of these enzymes, which today we know are the enzymes that combined amino acids with transfer RNA. And so you really had to say that the message kept its purity all the way through.

What people don’t realise is that at the beginning, it was just a handful of people who saw the light, if I can put it that way. So it was like belonging to an evangelical sect, because there were so few of us, and all the others sort of thought that there was something wrong with us.

They weren’t willing to believe. Of course they just said, well, what you’re trying to do is impossible. That’s what they said about crystallography of large molecules. They just said it’s hopeless. It’s a hopeless task. And so what we were trying to do with the chemistry of proteins and nucleic acids looked hopeless for a long time. Partly because they didn’t understand how they were built, which I think we molecular biologists had the first insight into, and partly because they just thought they were amorphous blobs and would never be able to be analysed.

I remember when going to London to talk at meetings, people used to ask me what am I going to do in London, and I used to tell them I’m going to preach to the heathens. We viewed most of everybody else as not doing the right science. Like one says, the young Turks will become old Greeks. That’s the trouble with life. I think molecular biology was marvellous because every time you thought it was over and it was just going to be boring, something new happened. It was happening every day.

So I don’t know if you can ride on the crest of a wave; you can ride on it, I believe, forever. I think that being in science is the most incredible experience to have, and I now spend quite a lot of my time trying to help the younger people in science to enjoy it and not to feel that they are part of some gigantic machine, which a lot of people feel today.

ED: I asked him what inspired them to maintain their faith and pursue these revolutionary ideas in the face of such doubt and opposition.

SB: Once you saw the light you were just certain that you had to be right, that it was the right way to do it and the right answer. And of course our faith, if you like, has been borne out. 

I think it would have been difficult to keep going without the strong support we had from the Medical Research Council. I think they took a big gamble when they founded that little unit in the Cavendish. I think all the early people they had were amazing. There were amazing personalities amongst them.

This was not your usual university department, but a rather flamboyant and very exceptional group that was meant to get together. An important thing for us was that with the changes in America then, from the late fifties almost to the present day, there was an enormous stream of talent and American postdoctoral fellows that came to our lab to work with us. But the important thing was that they went back. Many of them are now leaders of American molecular biology, who are alumni of the old MRC.

ED: The 1950s to 1960s at the LMB were a renaissance of biological discovery, when a group of young, intrepid scientists made fundamental advances that overturned conventional thinking. The atmosphere and camaraderie reminded me of another esteemed group of friends at King’s College – the Bloomsbury Group, whose members included Virginia Woolf, John Maynard Keynes, E.M. Forster, and many others. Coming from diverse intellectual backgrounds, these friends shared ideas and attitudes that inspired their writing and research. Perhaps there was something about the nature of the Cambridge college system that allowed for such revolutionary creativity?

SB: In most places in the world, you live your social life and your ordinary life in the lab. You don’t know anybody else. Sometimes you don’t even know other people in the same building, these things become so large.

The wonderful thing about the college system is that it’s broken up again into a whole different unit. And in these, you can meet and talk to, and be influenced by and influence people, not only from other scientific disciplines, but from other disciplines. So for me, and I think for many others as well, that was a really important part of intellectual life. That’s why I think people in the college have to work to keep that going.

Cambridge is still unique in that you can get a PhD in a field in which you have no undergraduate training. So I think that structure in Cambridge really needs to be retained, although I see so often that rules are being invented all the time. In America you’ve got to have credits from a large number of courses before you can do a PhD. That’s very good for training a very good average scientific work professional.  But that training doesn’t allow people the kind of room to expand their own creativity. But expanding your own creativity doesn’t suit everybody. For the exceptional students, the ones who can and probably will make a mark, they will still need institutions free from regulation.

ED: I was excited to hear that we had a mutual appreciation of the college system, and its ability to inspire interdisciplinary work and research. Brenner himself was a biochemist also trained in medicine, and Sanger was a chemist who was more interested in chemistry than biology.

SB: I’m not sure whether Fred was really interested in the biological problems, but I think the methods he developed, he was interested in achieving the possibility of finding out the chemistry of all these important molecules from the very earliest.

ED: Professor Brenner noted that these scientific discoveries required a new way of approaching old problems, which resist traditional disciplinary thinking.

SB: The thing is to have no discipline at all. Biology got its main success by the importation of physicists that came into the field not knowing any biology and I think today that’s very important.

I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant.  Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.

ED: But he felt that young people today face immense challenges as well, which hinder their ability to creatively innovate.

SB: Today the Americans have developed a new culture in science based on the slavery of graduate students. Now graduate students of American institutions are afraid. He just performs. He’s got to perform. The post-doc is an indentured labourer. We now have labs that don’t work in the same way as the early labs where people were independent, where they could have their own ideas and could pursue them.

The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.

But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow. 

There’s no exploration any more except in a very few places. You know like someone going off to study Neanderthal bones. Can you see this happening anywhere else? No, you see, because he would need to do something that’s important to advance the aims of the people who fund science.

I think I’ve often divided people into two classes: Catholics and Methodists. Catholics are people who sit on committees and devise huge schemes in order to try to change things, but nothing’s happened. Nothing happens because the committee is a regression to the mean, and the mean is mediocre. Now what you’ve got to do is good works in your own parish. That’s a Methodist. 

ED: His faith in young, naïve (in the most positive sense) scientists is so strong that he has dedicated his later career to fostering their talent against these negative forces.

SB: I am fortunate enough to be able to do this because in Singapore I actually have started two labs and am about to start a third, which are only for young people. These are young Singaporeans who have all been sent abroad to get their PhDs at places like Cambridge, Stanford, and Berkeley. They return, and rather than work five years as a post-doc for some other person, I’ve got a lab where they can work for themselves. They’re not working for me and I’ve told them that.

But what is interesting is that very few accept that challenge, providing what I think is a good standard deviation from the mean. Exceptional people, the ones who have the initiative, have gone out and got their own funding. I think these are clearly going to be the winners. The eldest is thirty-two. 

They can have some money, and of course they’ve got to accept the responsibility of execution. I help them in the sense that I oblige them and help them find things, and I can also guide them and so on. We discuss things a lot because I’ve never believed in these group meetings, which seems to be the bane of American life; the head of the lab trying to find out what’s going on in his lab. Instead, I work with people one on one, like the Cambridge tutorial. Now we just have seminars and group meetings and so on.

So I think you’ve got to try to do something like that for the young people and if you can then I think you will create. That’s the way to change the future. Because if these people are successful then they will be running science in twenty years’ time.

ED: Our discussion made me think about what we consider markers of success today. It reminded me of a paragraph in Professor Brenner’s tribute to Professor Sanger in Science:

“A Fred Sanger would not survive today’s world of science. With continuous reporting and appraisals, some committee would note that he published little of import between insulin in 1952 and his first paper on RNA sequencing in 1967 with another long gap until DNA sequencing in 1977. He would be labelled as unproductive, and his modest personal support would be denied. We no longer have a culture that allows individuals to embark on long-term—and what would be considered today extremely risky—projects.”

I found this particularly striking given that another recent Nobel Prize winner, Peter Higgs, who predicted the particle that bears his name, the Higgs boson, similarly remarked in an interview with the Guardian that he doubts a similar breakthrough could be achieved in today’s academic culture because of the expectation on academics to collaborate and keep churning out papers: “It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964.”

It is alarming that so many Nobel Prize recipients have lamented that they would never have survived this current academic environment. What is the implication of this on the discovery of future scientific paradigm shifts and scientific inquiry in general? I asked Professor Brenner to elaborate.

SB: He wouldn’t have survived. It is just the fact that he wouldn’t get a grant today because somebody on the committee would say, oh those were very interesting experiments, but they’ve never been repeated. And then someone else would say, yes and he did it a long time ago, what’s he done recently?  And a third would say, to top it all, he published it all in an un-refereed journal.

So you know we now have these performance criteria, which I think are just ridiculous in many ways. But of course this money has to be apportioned, and our administrators love having numbers like impact factors or scores. Singapore is full of them too. Everybody has what are called key performance indicators. But everybody has them. You have to justify them. 

I think one of the big things we had in the old LMB, which I don’t think is the case now, was that we never let the committee assess individuals. We never let them; the individuals were our responsibility. We asked them to review the work of the group as a whole. Because if they went down to individuals, they would say, this man is unproductive. He hasn’t published anything for the last five years. So you’ve got to have institutions that can not only allow this, but also protect the people that are engaged on very long term, and to the funders, extremely risky work.

I have sometimes given a lecture in America called “The Casino Fund”. In the Casino Fund, every organisation that gives money to science gives 1% of that to the Casino Fund and writes it off. So now who runs the Casino Fund? You give it to me. You give it to people like me, to successful gamblers. People who have done all this who can have different ideas about projects and people, and you let us allocate it. 

You should hear the uproar. No sooner do I sit down than all the business people stand up and say, how can we ensure payback on our investment? My answer was, okay make it 0.1%. But nobody wants to accept the risk. Of course we would love it if we were to put it to work. We’d love it for nothing. They won’t even allow 1%. And of course all the academics say we’ve got to have peer review. But I don’t believe in peer review because I think it’s very distorted and, as I’ve said, it’s simply a regression to the mean.

I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard from many committees, that we won’t consider people’s publications in low impact factor journals.

Now I mean, people are trying to do something, but I think it’s not publish or perish, it’s publish in the okay places [or perish]. And this has assembled a most ridiculous group of people. I wrote a column for many years in the nineties, in a journal called Current Biology. In one article, “Hard Cases”, I campaigned against this [culture] because I think it is not only bad, it’s corrupt. In other words it puts the judgment in the hands of people who really have no reason to exercise judgment at all. And that’s all been done in the aid of commerce, because they are now giant organisations making money out of it. 

ED: Subscriptions to academic journals cost British universities between £4 million and £6 million a year. In this time of austerity, when university staff face deep salary cuts and redundancies, and adjunct faculty are forced to live on food stamps, do we have the resources to pour millions of pounds into the coffers of publishing giants? Shouldn’t these public monies be put to better use, funding important research and paying researchers liveable wages? To add insult to injury, many academics are forced to relinquish ownership of their work to publishers.

SB: I think there was a time, and I’m trying to trace the history when the rights to publish, the copyright, was owned jointly by the authors and the journal. Somehow that’s why the journals insist they will not publish your paper unless you sign that copyright over. It is never stated in the invitation, but that’s what you sell in order to publish. And everybody works for these journals for nothing. There’s no compensation. There’s nothing. They get everything free. They just have to employ a lot of failed scientists, editors who are just like the people at Homeland Security, little power grabbers in their own sphere.

If you send a PDF of your own paper to a friend, then you are committing an infringement. Of course they can’t police it, and many of my colleagues just slap all their papers online. I think you’re only allowed to make a few copies for your own purposes. It seems to me to be absolutely criminal. When I write for these papers, I don’t give them the copyright. I keep it myself. That’s another point of publishing, don’t sign any copyright agreement. That’s my advice. I think it’s now become such a giant operation. I think it is impossible to try to get control over it back again.

ED: It does seem nearly impossible to institute change to such powerful institutions. But academics have enthusiastically coordinated to strike in support of decent wages. Why not capitalise on this collective action and target the publication industry, a root cause of these financial woes? One can draw inspiration from efforts such as that of the entire editorial board of the journal Topology, who resigned in 2006 due to pricing policies of their publisher, Elsevier.

Professor Tim Gowers, a Cambridge mathematician and recipient of the Fields Medal, announced last year that he would not be submitting publications to, nor peer reviewing for, Elsevier, which publishes some of the world’s top journals in an array of fields. Thousands of other researchers have followed suit, pledging that they would not support Elsevier via an online initiative, the Cost of Knowledge. This “Academic Spring” is gathering force, with open access publishing as its flagship call.

SB: Recently there has been an open access movement and it’s beginning to change. I think that even Nature, Science and Cell are going to have to begin to bow. I mean in America we’ve got old George Bush who made an executive order that everybody in America is entitled to read anything printed with federal funds, tax payers’ money, so they have to allow access to this. But they don’t allow you access to the published paper. They allow you I think what looks like a proof, which you can then display.

ED: On board is the Wellcome Trust, one of the world’s largest funders of science, who announced last year that they would soon require that researchers ensure that their publications are freely available to the public within six months of publication. There have also been proposals to make grant renewals contingent upon open access publishing, as well as penalties on future grant applications for researchers who do not comply.

It is admirable that the Wellcome Trust has taken this stance, but can these sanctions be enforced without harming their researchers’ academic careers? Currently, only 55% of Wellcome-funded researchers comply with open access publishing, a testament to the fact that stronger motivators are at play that trump this moral high ground. For this to be successful, funders and universities will have to demonstrate collective leadership and commitment by judging research quality not by publication counts, but on individual merit.

Promotion and grant committees would need to clearly commit both on paper and in practice to these new standards. This is of course not easy. I suspect the reason impact factors and publication counts are the currency of academic achievement is because they are a quick and easy metric. Reading through papers and judging research by its merit would be a much more time and energy intensive process, something I anticipate would be incompatible with a busy academic’s schedule. But a failure to change the system has its consequences. Professor Brenner reflected on the disillusioning impact this reality has on young scientists’ goals and dreams.

SB: I think that this has now just become ridiculous and it’s one of the contaminating things that young people in particular now have to contend with. I know of many places in which they say they need this paper in Nature, or I need my paper in Science because I’ve got to get a post doc. But there is no judgment of its contribution as it is.

ED: Professor Brenner hit upon several hot topics amongst academics in all disciplines. When Randy Schekman won his Nobel Prize in Physiology or Medicine this year, he announced his boycott of “luxury” journals such as Nature, Science, and Cell, declaring that their distorting incentives “encouraged researchers to cut corners and pursue trendy fields of science instead of doing more important work.”

Because publications have become a proxy for research quality, publications in high impact factor journals are the metric used by grant and promotion committees to assess individual researchers. The problem is that impact factor, which is based on the number of times a journal’s papers are cited, does not necessarily correlate with good science. To maximize impact factor, journal editors seek out sensational papers that boldly challenge norms or explore trendy topics, and ignore less spectacular but equally important work such as replication studies or negative results. As a consequence, academics are incentivised to produce research that caters to these demands.

Academics are slowly awakening to the fact that this dogged drive to publish rubbish seriously degrades the quality of the science they produce, with far-reaching consequences for public policy, costs, and human lives. One study found that only six out of 53 landmark studies in cancer research were replicable. In another study, researchers were only able to repeat a quarter of 67 influential papers in their field.

Only the most successful academics can afford to challenge these norms by boycotting high impact journals. Until we win our Nobel Prizes, or grant and promotion structures change, we are shackled to this “publish or perish” culture. But together with leaders in science and academia such as Professor Brenner, we can start to change the structure of academic research and the language we use to judge quality. As Brenner emphasised, it was the culture of the LMB and the scientific environment at the time that permitted him and his colleagues to uncover the genetic basis of life. His belief that scientists like Professor Sanger would not have survived today is a cautionary one, providing new urgency to the grievances we have against the unintended consequences of the demands required to achieve academic success.

Originally published in the King’s Review on February 24, 2014


Why is qualitative interdisciplinary research so difficult to take seriously?

I was recently corresponding with a professor discussing one of the “Big Five” medical journals. My research is primarily qualitative and he remarked that the particular editor he was talking to, “doesn’t believe that qualitative research is research.” It is unfortunate that this perception exists in academic medicine and in particular with journal editors, the gatekeepers of scientific knowledge. I’d like to address the arguably widespread perception in academia that interdisciplinary research is generally of poor quality and more specifically, challenges that qualitative research faces in academic medicine. In order to answer this question, I thought it was necessary to address a more fundamental question: What is the definition of quality and who defines it?

Any scientific exploration must include an understanding of the research’s epistemological framework. Those with a realist ontology seek an objective truth that exists independently of an individual’s understanding of the world, whereas qualitative researchers tend towards a more interpretive lens.

The challenge with interdisciplinary research is that it operates at the intersection of these different theoretical frameworks. Researchers are thus confronted by the debates between these diverse worldviews in ways that disciplinarily focused researchers are not. Due to unequal funding streams and leadership structures, dominant frameworks emerge within interdisciplinary departments, which dictate definitions of quality.

Because publication counts factor so highly in evaluation metrics such as the REF, the academic publishing industry has a tremendous influence on this interdisciplinary research agenda. A drive to publish in high impact journals incentivises researchers to conform to these journals’ definitions of quality, even if those definitions reflect a framework different from the researcher’s own mode of inquiry.

Traditional quantitative medical sciences, for example, judge research quality by its generalizability and validity. Because of this, they are less accepting of approaches such as phenomenology, which focus instead on understanding the subjective experiences of individuals in a specific setting. Checklists have emerged to conform qualitative research to positivist understandings of validity and generalizability. Standards such as double coding to ensure objectivity and consistency are required for publication in reputable medical journals. One checklist even recommends that “interpretation must be grounded in ‘accounts’ and semi-quantified if possible or appropriate.”

I have spoken to social scientists working in medically based departments who felt that the need to adapt to the principal discipline posed challenges to their intellectual self-identity. They expressed angst over their inability to produce research true to their home discipline’s definition of quality. This might affect their own employment prospects if they decide to move back into their native discipline. In my own research, I have realized that manuscripts I submit to medical journals will need to be written through a more objectivist mindset, rather than through the interpretive framework that seems more appropriate for my project.

This perception of poor quality reflects not only intrinsic prejudices against interdisciplinary research, but also systematically ingrained biases in the publication process. A recent study elucidated factors that contribute to this perception by showing that journal rankings inherently disadvantaged this type of research. They found that top journals “span a less diverse set of disciplines than lower ranked journals,” resulting in systematic bias against interdisciplinary research. Because publications in high impact journals are a proxy for quality and determine REF evaluation and financing, this becomes a disincentive against engaging in interdisciplinary research.

Many have warned me that it is difficult to publish qualitative research in the best medical journals. Particularly discouraging is a study which showed that over a span of ten years, only 0-0.6% of articles in the top ten medical journals were qualitative. As an early career researcher, this means that I will have a more difficult time distinguishing myself amongst my quantitative colleagues, since evaluation for jobs, promotions, and funding, are primarily based upon where we have published.

This is also disheartening if one thinks about the real world impact of requiring interdisciplinary research to conform to sweeping definitions of quality (impact is after all a REF priority!). These overwhelming structural incentives promote further siloing into individual disciplinary camps. As a medic transitioning into the social sciences, I have been thoroughly impressed by the ability of social scientists to provide a deeper understanding into key problems in health care. Social scientific inquiry in medicine has the potential to apply alternative insights towards positively informing health care practice. Cross-fertilization of ideas will remain limited unless we redefine quality to include all relevant modes of inquiry, and lower the barriers to publishing interdisciplinary research.

Bringing emotions back into medicine

I had always thought I could never be a great doctor because I felt too emotionally bound to my patients. It was impossible for me to hold back tears when feeling that gut wrenching empathy for families mourning the passing of their loved ones. Because it always seemed as if I were the only resident moved by these scenes, I reasoned that this was an unprofessional impulse that prevented me from being the calm, scientific, quick thinking doctor that exemplified the model physician.

This perception is so exalted in medicine that it was the motto of my medical school’s residency program: Aequanimitas. Based on an essay of that title by Sir William Osler, it means imperturbability. They urged us to become that ideal doctor who had that “coolness and presence of mind under all circumstances, calmness amid storm, clearness of judgment in moments of grave peril, immobility, impassiveness.” It was these doctors who had the expertise to heal their patients, not ones who were so “weak” as to weep with a patient during life and death situations.

My last day of residency was my most memorable. I was rotating on the intensive care unit where a 21-year-old Mexican immigrant boy was dying of end-stage testicular cancer that had spread throughout his body. His stomach was swollen from liver failure, he was in a deep coma from insults to his brain, and infection had spread throughout his blood and body. For days, we struggled to keep him alive — tethered to life support, with virtually every organ maintained through artificial means. He was the sole breadwinner of his large family, and they were completely unprepared to let him go. His mother threatened to kill herself if her son died, and his brother begged me every day to let him donate half his liver to his sibling.

He died during rounds on my last day. Seared into my mind is the image of his mother throwing herself down, beating her hands and head against the floor, howling with sorrow. His father swept their daughter into his arms and ran out of the ICU in unbridled anguish.

Every doctor in the room stood there watching in silence. I tried with all my might to control my tears. Blinking frequently. Thinking about hard medical facts. Staring at the ceiling. But it was not possible, so I quickly excused myself to run into the supply closet to weep in private. When I returned, rounds went on as if nothing had happened.

A recent essay by David Bornstein, “Medicine’s Search for Meaning,” resonated tremendously and brought these memories back to the fore. In it, he describes the need for a culture change within medicine to embrace emotions and provide compassionate care. His article and others have noted how a lack of humanism in medicine contributes to burnout and low physician satisfaction. Disillusioned physicians who initially pursued medical careers to connect with and help people instead find themselves in a health care infrastructure dominated by bureaucracy, with little time for patient interactions.

In order for programs such as the Healer’s Art to counter the dehumanizing aspects of medicine across a physician’s professional life course, they must consider expanding into residency. Residency is the time when young doctors experience for the first time that terrifying sense of responsibility for making life and death decisions in the middle of the night, and having to tell a parent that their child has died.

These unforgettable experiences are watershed events in a doctor’s life, as they are the moments where their actions directly impact patients, rather than in medical school where they are chiefly the apprentice watching it being done. I had taken the Healer’s Art in medical school, but by the time residency rolled around there was little opportunity to circle back and reflect upon those lessons.

There is little time and space in the harried life of an intern to think about these sorts of things. Non-clinical time is packed with curricular essentials on the fundamentals of medicine. Yes, it is critical that a doctor understand how to treat high blood pressure and manage liver failure. But as the article stated, it is arguably as important to the therapeutic relationship to cultivate caring doctors who not only feel compassion, but also feel comfortable expressing it.

Perhaps more importantly, creating no-judgment spaces for dialogue amongst residents allows for mutual understanding of common experiences, something that was completely alien in my own residency. I had always felt like there was something wrong with me for feeling emotions. I felt like everyone around me had a confidence and assurance that I never had, and did not experience the self-doubt I did when patients did poorly. Only upon reading Bornstein’s article did I realize that I might not be alone.

This article was originally published on 28 October 2013 on

Dying with dignity – what next after the Liverpool care pathway?

Controversy and opposition over the Liverpool care pathway (LCP) has prompted the Department of Health to commission an independent review into the end-of-life system. A report released on Monday revealed “numerous examples of poor implementation and worrying standards in care,” prompting the commission to recommend phasing out the pathway over the next six to 12 months.

Used correctly, the LCP allows people to die with dignity – surrounded by loved ones rather than machines and in peace, rather than in the violent throes of CPR. But instead, examples of LCP-induced distress flooded the news over the past year.

The implementation of the LCP was so deeply flawed that rather than facilitating a good death, in some cases it worsened the emotional burden and created missed opportunities for proper goodbyes. Much of this can be attributed to a critical failure of communication between doctors and patients and their families.

Communication is integral to successful treatment in almost every aspect of medicine, but nowhere is it as evident as in care at the end of life. As such, no end-of-life intervention will be successful unless doctors fully embrace family discussions as a required component of treatment. The LCP guidelines emphasise communication as part of the pathway, but too often that falls to the wayside.

Most importantly, this is the time for the dying person to say their final farewells and get their affairs straight, but also for their family to begin coming to terms with their impending loss. A family member of a patient on the LCP lamented: “My mum didn’t even get to say goodbye to her husband of 51 years because she was too traumatised.” Beginning the process of bereavement is impossible if the family is kept in the dark about imminent death and implementation of the LCP.

This failure to communicate with both patients and the public turned the LCP into a “‘barbaric’ end of life pathway” where people were “starved to death”. These phrases highlight fears and misunderstandings about the end of life, when the cessation of eating and drinking is actually a wholly natural sign that death is approaching.

We have an innate desire to nurture and feed the weak, but hydration can potentially do more harm than good. Often the heart is too weak to pump blood, causing water to accumulate in the lungs and resulting in breathlessness and a horrible feeling of drowning. Fluid can also accumulate in the flesh causing pain and discomfort. What should never be withheld in the LCP are pain medications and other therapies such as oxygen that help relieve distressing symptoms.

Palliative care and the LCP should not be thought of as causing death. When the end of life is inevitable, it is (depending on what you believe) God or nature who decides the moment you go, not doctors or families, and it is certainly not determined by whether food or drink is given. Care at the end of life is as active and intensive as any other treatment – just with a different goal in mind. Indeed, a focus on comfort and symptom management early on has been shown to extend life, decrease depression, and improve quality of life. As doctors make the most informed patients, it should be reassuring to know that 90% of doctors would be happy to be placed on the pathway themselves if they were dying.

Many challenges inherent in the healthcare system hinder a doctor’s ability to communicate. Time pressures and heavy workloads make it difficult to have lengthy conversations, especially those that address topics the doctor may not be personally comfortable with. A recent survey by the Royal College of Physicians showed that 37% of medical registrars felt their workload was “unmanageable”. In a system described as “unsafe” and at a “crisis point,” important conversations are crowded out by the need to acutely stabilise other ill patients under the doctor’s care.

This is further exacerbated by the taboos British society holds against discussing death. According to one study, approximately 75% of the public and GPs agree that British people are uncomfortable talking about death and dying and only 30% of people have talked to their loved ones about their own wishes.

Future efforts to replace the LCP must include system-wide changes, which address these structural challenges. Medical education will need to intensify efforts to train junior doctors and medical students in end of life discussions and expose them frequently to issues of death and dying. The medical profession will need a shift in mindset to reprioritise the importance of interpersonal relationships and communication into the practice of medicine. Although logistical and budgetary constraints limit the NHS’s ability to lighten workloads, alternative programmes could be considered to bring in social workers, chaplains, nurses and community volunteers to improve patient and family empowerment regarding end-of-life issues.

The Liverpool care pathway has met its end, but the need to promote a peaceful and dignified death has never been more important.

This piece was originally published in “The Guardian” on 16 July, 2013

How to Truly Inspire Interdisciplinarity: Lessons From a Cambridge College

Studying at Cambridge, I have had the privilege of drawing inspiration from the familiar and routine. I am fortunate that my workspace is across from the breathtaking King’s College Chapel. Just looking up and seeing the calmness and majesty of this architectural masterpiece washes away any momentary PhD setback. My walk to lectures takes me by the pub where Watson and Crick celebrated their discovery of DNA and J.J. Thomson discovered the electron. My bike ride home runs through the field where the rules of football were invented.

But what makes Cambridge an incredible place is the casual and natural interdisciplinarity that is unavoidable and fully ingrained into its social fabric. Student life is focused around the colleges; the dining hall, bar, and common areas are the nexus of the Cambridge experience. This is unique amongst post-graduate experiences in higher education. In most universities, social interactions are centred around one’s academic department.

In Cambridge, students hang out with people from every discipline, from the sciences to the social sciences to the arts. Graduates and faculty at Cambridge colleges live, eat, and work together. They have offices both in their department and in their college, and they work with colleagues in other disciplines on college committees and councils. This creates a community that fully embodies the notion of working together as academics and intellectuals, rather than as a biologist, an architect, or a historian. When disciplines mix organically, you develop new ideas and perspectives, as well as a flexibility and acceptance for other ways of thinking.

Looking back over the past few years, my most profound and influential educational moments occurred not in the classroom, but over meals, casual conversations, and at the pub. It was here in Cambridge that a good friend with opposing political views, but a ready willingness to engage in thought provoking debates, taught me how to hold my own in casual conversational debates on current affairs. It was in the King’s College bar where I first understood the fundamentals of Marxism over a pint with an anthropologist friend. Random discussions with a friend studying architecture resulted in our creation of a medical hypothesis to explain stigmata in Christianity. The most engaging discussion I’ve ever had about my PhD project on Do-Not-Resuscitate decision-making was with my philosopher and historian friends debating the moral philosophical aspects of autonomy and suffering. And a central influence in my desire to engage the public with my academic research developed through conversations with a friend who studies the Sociology of intellectuals.

These defining moments left me unsatisfied with a purely clinical career path. Residency training following my MPhil was at times a frustrating experience, as I could not help but think about all the structural problems that hindered my ability to properly care for patients. It was infuriating to be able only to put plasters on acute issues, temporising deeper pathologies that could not be fixed unless we addressed underlying problems such as lack of insurance, poverty, and skewed economic incentives in medicine.

I found it repressive to be so inspired by events such as the Arab Spring, only to go to work in the hospital and not hear a single mention of these events. I yearned to be back in Cambridge, where I could discuss the implications of the day’s events with my friends in Middle Eastern Studies and International Relations. It also reminded me of how far I had come. I recalled walking into work one day during medical school and someone telling me, “Al Zarqawi is dead.” I looked up at the patient census and asked, “What?! Which patient was he? What did he die of?” I was so immersed in medicine at the time that I had no conception of what was going on in the outside world.

Interdisciplinarity is the new buzzword in academic research and education. But few universities are able to pay more than just lip service to this concept. Indeed, the very nature of academia resists interdisciplinarity. We are trained to become experts on the most minute aspects of our subject, and are chastised for being too broadly focused or having too many interests. As Simon Goldhill, Director of the Cambridge Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) states, “We have people who know more and more about less and less. That’s the definition of a PhD, isn’t it?” This intense specialisation prevents us from seeing the forest for the trees. It is thus completely unsurprising that cross-disciplinary efforts and inter-departmental initiatives often fail and do not go beyond initial superficial connections. I have heard gripes at more than one prestigious institution about the impenetrable silos that separate departments and communities within the university.

A common reminiscence about college is the spontaneous philosophical conversations that occur in dorm hallways until 3 am. It is unfortunate that in most institutions of higher learning, we stop talking to each other after undergrad. Instead, we compartmentalise ourselves within our departments, talking to people who think the same way we do. We begin preaching to the choir and feel affirmed that our style of thought is the only right way. The way in which we are trained and specialise shapes our identity and how we process the world beyond our academic disciplines.

Not only do we develop specialised knowledge, but we also become inculcated in a particular way of examining and talking about the world that breeds distrust against other approaches and a belief that our methodology is the best. I have felt for example, that some academic physicians on the surface embrace the concept of applying social sciences to medicine, but are unable to accept non-positivist ways of understanding the world and dismiss it as insufficiently rigorous. Joe Henrich, an anthropologist, used game theory rather than the more traditional ethnography to elucidate cross-cultural differences in gift giving and human behavior. Rather than embracing the capacity for other fields to enhance understanding, many anthropologists felt threatened by this methodological promiscuity, finding it “unethical,” “heavy-handed and invasive.”

I think a key aspect of achieving substantive interdisciplinarity is the intentional design of physical and social spaces. Creating spaces where people continuously come into contact with people outside their discipline in natural, casual social settings helps develop social networks that eventually become the source of intellectual inspiration and creativity. Here is where “nudge” can be applied: calculated use of space has the power to change human behavior and promote unconventional social interactions and networks.

Stanford has been a pioneer in designing physical spaces to foster the mixing of ideas and philosophies. The Center for Clinical Sciences Research (CCSR) building has few walls. Instead of lab bench space being allocated by research group, where all members of a lab are grouped together, scientists are interspersed around the entire building, promoting collegiality and discussion amongst members of different labs and disciplines. Its intentions were clear from the start: “its design responds to emerging trends for interdisciplinary biomedical research, [where] interaction between disciplines and individuals is encouraged.” Bio-X’s Clark Center is another example of interdisciplinary space, where “warehouse like lab spaces and shared facilities” foster collaboration between engineers, scientists, doctors and others to develop technologies and solutions to a common problem.

At Stanford, I participated in a program called the Biodesign Innovation Program, which brought together engineers, business students, and medical students in small teams to come up with solutions to medical problems. My team invented a device, which we have since patented, to minimally invasively cool the heart during a heart attack. The experience emphasised the ability of different perspectives to develop innovative solutions to existing problems. The Stanford d.school (Institute of Design) is the latest example of innovation in interdisciplinarity, where students from any department are able to take classes in applying concepts of design to their specific areas of research.

Michael Bloomberg recently announced a $350 million donation to the Johns Hopkins University, the largest donation of its kind to a university. He stipulated that a portion of this donation go towards endowing professorships focused on collaborative interdisciplinarity. I would urge Mr. Bloomberg to also encourage Johns Hopkins to think about new ways to nudge scholars out of their comfortable silos through design strategies that bring researchers of different subjects together. It would be amazing if new developments on campus grouped people in innovative ways, perhaps by problems to be solved or themes to explore, rather than by discipline. Programs similar to Stanford’s Biodesign Innovation Program would further bring together researchers, but perhaps more importantly, social spaces should be created which foster collegiality, trust and personal connections.

Many medical school campuses, including Johns Hopkins’, are miles away from the main campus, preventing easy interaction between the two. Obviously it would be unfeasible to change this, but future buildings could be strategically located in ways that foster cross-disciplinary interactions. The Hopkins Bio Park is currently under development. Why not introduce buildings that house academics from medicine, the humanities, and the social sciences, who work and research together, as equals, on the intersection between medicine and the social sciences?

Princeton and Yale have collegiate systems modeled on those of Oxford and Cambridge colleges, where undergraduates live and socialize together in colleges. Neither graduate students nor faculty members are integrated into the college system. The colleges primarily provide residential and social support, rather than academic enrichment. I believe that these institutions have missed out on a critical aspect of the “Oxbridge” college system. Integrating the remainder of the university population into the college structure would enhance interdisciplinary interactions at the graduate and potentially faculty level. This is key because it is at the graduate level where we start becoming specialised and indoctrinated into the academic mindset.

I am leading a conference in Cambridge called the Global Scholars Symposium, which brings together students for three days of cross-disciplinary discussion with leaders in the field to discuss global problems, and how we can apply creative solutions to these issues. Participation in this conference in past years inspired me to continue looking outwards beyond my field to think about what we as young individuals can do to make the world a better place. Creating more opportunities that bring together scholars from different fields would hopefully inspire academics to look outwards beyond publication counts and grant writing to see how their research can be applied to solving real world problems.

Taking the interdisciplinary path has not been easy. Residency would have been far easier if I wasn’t always frustrated by the social and political problems which got my patients in the hospital in the first place, and hospital financing practices which at times seemed to prioritize the bottom line over patient care. I sometimes envied my colleagues who were singularly focused on becoming cardiologists so that they could focus on repairing valves. In my PhD research, I am constantly admonished for being too unfocused, and the desire to meld divergent discourses and epistemological stances has been fraught with challenges and misunderstandings. Hopefully in the end, I will be able to say that it was worth it and there will be a role for someone like me when I’m done with this chapter in my intellectual development.

A version of this essay was published in the Guardian on 15 March, 2013