Washington: Although artificial intelligence is considered one of today's best problem-solving tools, it can also prove unreliable in an emergency.
AI technology has shown great potential to assist doctors with paperwork and, in some cases, diagnosis, but a new study shows that it can be unhelpful, and even harmful, in an emergency setting.
OpenAI's ChatGPT has yielded inconsistent results for patients with chest pain in the emergency setting. In one experiment, the AI produced different assessments for the same patient, which could put that patient at risk.
Lead researcher Dr. Thomas Heston, an associate professor at Washington State University, said ChatGPT does not behave consistently. Given the same data, it sometimes rates a patient's risk as low, sometimes as medium, and occasionally as high, which is dangerous in an emergency.