“MODERN SCIENTIFIC CAPABILITY has profoundly altered the course of human life. People live longer and better than at any other time in history. But scientific advances have turned the processes of aging and dying into medical experiences, matters to be managed by health care professionals. And we in the medical world have proved alarmingly unprepared for it.”
I work in an industry that’s dedicated to improving or extending patients’ lives. Every single protocol I read, in one way or another, declares that the goal of the product or device (if the trial provides promising data on its effectiveness) is to improve the quality of life of a particular part of the population (depending on the indication/disease being studied). While scientific research is hopeful and mostly necessary, I’m always deeply aware of its ultimate limitations: death is an inevitable part of the human experience. At the age of 18, I walked into my first bioethics class. The course centered on questions of medical ethics: what is right and wrong in the context of medical care? At one point during the course, we entered the ultimate debate: that of life and death. We discussed long-term care, the financial implications, social views of death and dying, and the morality of continuously performing heroic and invasive procedures on people we knew had little time left to live. Was simply keeping a patient alive the goal? Was the psychological aspect of a “good life” – how productive, autonomous, and fulfilled an individual felt – just as important as simply keeping someone breathing for another day? How can we begin societal conversations about our collective fear of death? Is death really the worst thing that can happen?
“IN THE PAST, when dying was typically a more precipitous process, we did not have to think about a question like this. Though some diseases and conditions had a drawn-out natural history—tuberculosis is the classic example—without the intervention of modern medicine, with its scans to diagnose problems early and its treatments to extend life, the interval between recognizing that you had a life-threatening ailment and dying was commonly a matter of days or weeks. Consider how our presidents died before the modern era. George Washington developed a throat infection at home on December 13, 1799, that killed him by the next evening. John Quincy Adams, Millard Fillmore, and Andrew Johnson all succumbed to strokes and died within two days. Rutherford Hayes had a heart attack and died three days later. Others did have a longer course: James Monroe and Andrew Jackson died from progressive and far longer-lasting (and highly dreaded) tubercular consumption. Ulysses Grant’s oral cancer took a year to kill him. But, as end-of-life researcher Joanne Lynn has observed, people generally experienced life-threatening illness the way they experienced bad weather—as something that struck with little warning. And you either got through it or you didn’t.”
In the Atlantic article “Why I Hope to Die at 75,” Dr. Ezekiel Emanuel explains why he considers 75 a good age to die. He writes, “But here is a simple truth that many of us seem to resist: living too long is also a loss. It renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.” (Source) I think in modern, wealthy nations we often forget that our life expectancy and quality of life have increased due to advances in hygiene (such as having our trash collected and transported to a landfill) as well as medical science and technology (such as vaccines and antibiotics). If we had indeed allowed “nature to take its course” without the widespread use of the healthcare advances of the past 200 years, most of us would not have lived as long as we have; our family lines wouldn’t have made it this far. Society has come a long way from the days of the father of epidemiology, John Snow (read about his story: here and here). If there’s anything this COVID-19 (an infectious disease caused by a novel virus) experience is teaching us, it’s that we still have a long way to go. It’s my hope that in future conversations we have as a society, we start discussing end-of-life care and its realistic expectations. I’m all for advancement and improvement, but I think understanding our limitations is just as wise as looking to break ceilings.
“Technological society has forgotten what scholars call the “dying role” and its importance to people as life approaches its end. People want to share memories, pass on wisdoms and keepsakes, settle relationships, establish their legacies, make peace with God, and ensure that those who are left behind will be okay. They want to end their stories on their own terms. This role is, observers argue, among life’s most important, for both the dying and those left behind. And if it is, the way we deny people this role, out of obtuseness and neglect, is cause for everlasting shame. Over and over, we in medicine inflict deep gouges at the end of people’s lives and then stand oblivious to the harm done.”