06/19/2023 / By Ethan Huff
Actual human beings are getting phased out of health care in favor of artificial intelligence (AI) robots that are now reportedly overruling nurses at hospitals.
The life-or-death decisions that have long been made by real people at health care facilities are now being made by computers that have been programmed with who-knows-what to do God-only-knows-what to patients.
One oncology nurse, Melissa Beebe, who relies on her observational skills to help patients in need of emergency care, spoke with The Wall Street Journal about the changes she is seeing in the way care is administered due to the AI infiltration.
“I’ve been working with cancer patients for 15 years so I know a septic patient when I see one,” Beebe said about an alert she recently received in the oncology unit at UC Davis Medical Center in California that she knew was wrong. “I knew this patient wasn’t septic.”
The alert Beebe received had been generated by AI based on an elevated white blood cell count it observed in the patient, which it correlated with a septic infection. What the AI system failed to recognize is that the patient in question also had leukemia, which can cause a similarly elevated white blood cell count.
“The algorithm, which was based on artificial intelligence, triggers the alert when it detects patterns that match previous patients with sepsis,” the Journal reported. “The algorithm didn’t explain its decision.”
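To see how a pattern-based alert like this can misfire, consider the following minimal, purely hypothetical sketch in Python. The thresholds, field names, and logic are illustrative assumptions only and are not the actual algorithm used at UC Davis or any other hospital; the point is simply that a crude rule keyed to a high white blood cell count has no way of knowing that leukemia, not infection, is driving the number.

```python
# Purely hypothetical sketch of a threshold-style sepsis alert.
# Thresholds, field names, and logic are illustrative assumptions,
# not the actual algorithm used at any hospital.

def sepsis_alert(patient):
    """Flag a patient as possibly septic based on crude vital-sign patterns."""
    wbc = patient["white_blood_cell_count"]  # cells per microliter
    temp = patient["temperature_f"]
    heart_rate = patient["heart_rate_bpm"]

    # Naive rule: elevated WBC plus fever or a fast heart rate -> alert.
    # Nothing here accounts for leukemia, which can raise the WBC
    # for reasons unrelated to infection -- the gap Beebe describes.
    if wbc > 12_000 and (temp > 100.4 or heart_rate > 90):
        return "SEPSIS ALERT: pattern matches previous septic patients"
    return "no alert"

# Example: a leukemia patient with a high WBC but no infection
# still trips the alert, and the rule offers no explanation.
leukemia_patient = {
    "white_blood_cell_count": 45_000,
    "temperature_f": 99.1,
    "heart_rate_bpm": 95,
}
print(sepsis_alert(leukemia_patient))
```

In this toy example the alert fires on the leukemia patient anyway, and, like the system Beebe describes, it gives the nurse no reasoning to push back against.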
(Related: If leftists are able to program the ChatGPT AI robot into promoting anti-white hate, can the same thing be done to the AI robots that are in charge of people’s medical care at hospitals?)
The rules at the hospital where Beebe works stipulate that she and all other nurses must follow certain protocols whenever a patient is flagged for sepsis – even if the flag is a mistake based on wrong assumptions made by AI.
The only way to override the AI’s decision is to get a doctor to approve the override – though if that overridden decision ends up being wrong, the nurses involved can face disciplinary action. The threat of this causes most of them to simply follow orders, even when they know those orders are wrong.
“When an algorithm says, ‘Your patient looks septic,’ I can’t know why,” said Beebe, a representative of the California Nurses Association. “I just have to do it.”
“I’m not demonizing technology,” she added while noting that, in the case of the aforementioned cancer patient, she was right and the AI was wrong. “But I feel moral distress when I know the right thing to do and I can’t do it.”
While there are arguably some things that AI can maybe, possibly do better than a human being, relying on AI systems to control the direction of medicine and care at hospitals is dangerous business.
Who is to say that the AI machines will not suddenly start targeting certain patients for early elimination if their names come up on a government-created “agitator” list, as one hypothetical dystopian outcome? What about when the AI machines are just plain wrong and hospital staff are too tired, ambivalent, or even apathetic to try to override it and risk their own careers in the process?
“AI should be used as clinical decision support and not to replace the expert,” warns Kenrick Cato, a professor of nursing at the University of Pennsylvania and a nurse scientist at the Children’s Hospital of Philadelphia. “Hospital administrators need to understand there are lots of things an algorithm can’t see in a clinical setting.”
The AI takeover of the world is perhaps the biggest threat to civilization besides globalism. Learn more at Transhumanism.news.