Health News

AI estimates a person’s risk of dying from heart disease

AI can estimate a patient’s risk of dying from heart disease by analysing just 15 minutes of ECG readings, scientists claim

  • Massachusetts Institute of Technology created a system called RiskCardio
  • The technology was made for patients with acute coronary syndrome
  • Draws on sample of ECG data to sort a patient into a ‘risk category’ 

Scientists in the US are using artificial intelligence (AI) to gauge a patient’s risk of dying from heart disease.

A team from the Massachusetts Institute of Technology created a system called RiskCardio.

The technology was made for patients with acute coronary syndrome (ACS), which covers a range of conditions that suddenly reduce blood flow to the heart.

RiskCardio works off just 15 minutes of a patient’s ‘raw electrocardiogram (ECG) signal’, which records the heart’s rhythm and electrical activity.

It then draws on a sample of ECG data to sort that particular patient into a ‘risk category’, and predicts how likely they are to die within time frames ranging from the next 30 days to a year.
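In code, the 15-minute window is simple to picture. The sketch below assumes the raw ECG is a one-dimensional array sampled at a known rate; the 250 Hz figure, the function name and the fake signal are illustrative assumptions, not details from the study.

```python
import numpy as np

# A minimal sketch: keep only the first 15 minutes of a raw ECG trace.
# The sampling rate is an assumption -- it varies by ECG hardware.
SAMPLE_RATE_HZ = 250
WINDOW_SECONDS = 15 * 60

def first_15_minutes(ecg_signal):
    """Return the leading 15-minute slice of a 1-D raw ECG signal."""
    return ecg_signal[: SAMPLE_RATE_HZ * WINDOW_SECONDS]

ecg = np.random.default_rng(1).normal(size=SAMPLE_RATE_HZ * 3600)  # fake 1-hour trace
window = first_15_minutes(ecg)
print(f"{len(window) / SAMPLE_RATE_HZ / 60:.0f} minutes of signal retained")
```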


The scientists believe RiskCardio could help doctors quickly assess the best treatment for an ACS patient based on their risk of death. 

‘We’re looking at the data problem of how we can incorporate very long time series into risk scores,’ lead author Divya Shanmugam said.

‘And the clinical problem of how we can help doctors identify patients at high risk after an acute coronary event.

‘The intersection of machine learning and healthcare is replete with combinations like this – a compelling computer science problem with potential real-world impact.’

ACS can refer to a heart attack or angina. Cardiovascular events like these are responsible for a quarter of all deaths in the US and UK, statistics show.

RiskCardio works by separating a patient’s ECG signal into sets of consecutive beats, the scientists said. Variability between adjacent beats suggests risk.
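A rough sketch of that beat-grouping step is shown below. It uses RR intervals (the time between consecutive beats) as a stand-in for the raw ECG signal; the set size of four, the function names and the synthetic data are assumptions for illustration, not the researchers’ actual choices.

```python
import numpy as np

def beat_sets(rr_intervals, set_size=4):
    """Group consecutive RR intervals (seconds between beats) into sets."""
    n = max(len(rr_intervals) - set_size + 1, 0)
    return [rr_intervals[i:i + set_size] for i in range(n)]

def adjacent_variability(beat_set):
    """Mean absolute difference between adjacent intervals -- a crude
    stand-in for the beat-to-beat variability described above."""
    return float(np.abs(np.diff(beat_set)).mean())

# Fake 15 minutes of intervals centred on 0.8 s (roughly 75 beats per minute)
rr = np.random.default_rng(0).normal(0.8, 0.05, size=1100)
sets = beat_sets(rr)
scores = [adjacent_variability(s) for s in sets]
print(f"{len(sets)} beat sets, mean variability {np.mean(scores):.4f} s")
```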

It was ‘trained’ using data from a study of past ACS patients. To get the technology up and running, the team first separated each of these patients’ signals into a collection of adjacent heartbeats.

Each set of beats was assigned a ‘label’, i.e. whether the patient died or not.

The scientists then trained the technology to classify heartbeats from patients who died as ‘risky’ and beats from those who survived as ‘normal’.
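The labelling-and-training step could be sketched as follows. The hand-picked features and the logistic-regression classifier are illustrative stand-ins rather than the study’s actual model, and the synthetic data simply mimics the idea that non-survivors show more beat-to-beat variability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def beat_sets(rr, set_size=4):
    """Consecutive windows of RR intervals, as in the earlier sketch."""
    return [rr[i:i + set_size] for i in range(max(len(rr) - set_size + 1, 0))]

def featurise(beat_set):
    """Reduce one set of consecutive intervals to a small feature vector."""
    diffs = np.diff(beat_set)
    return [np.mean(beat_set), np.std(beat_set), np.mean(np.abs(diffs))]

def build_training_data(patients):
    """patients: iterable of (rr_intervals, died) pairs from a historical study.
    Every beat set inherits its patient's outcome as its label."""
    X, y = [], []
    for rr, died in patients:
        for s in beat_sets(rr):
            X.append(featurise(s))
            y.append(1 if died else 0)  # 1 = 'risky', 0 = 'normal'
    return np.array(X), np.array(y)

# Synthetic stand-in data: non-survivors get noisier (more variable) intervals.
rng = np.random.default_rng(0)
patients = [(rng.normal(0.8, 0.02, 300), False) for _ in range(5)] + \
           [(rng.normal(0.8, 0.10, 300), True) for _ in range(5)]
model = LogisticRegression(max_iter=1000).fit(*build_training_data(patients))
```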

When presented with a new patient, RiskCardio creates a risk score by comparing their sets of heartbeats to those of the patients in the past study.

‘Using just the first 15 minutes of a patient’s raw ECG signal, the tool produces a score that places patients into different risk categories,’ the scientists wrote.  
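Scoring a new patient might then look like the sketch below, which averages the per-beat-set ‘risky’ probabilities over the first 15 minutes and buckets the result. It reuses the beat_sets, featurise and model objects from the previous sketches, and the category names and cut-offs are placeholders, not the thresholds used in the study.

```python
import numpy as np

def risk_score(model, rr_first_15_min):
    """Average the model's per-beat-set 'risky' probability over a patient's
    first 15 minutes of intervals (beat_sets/featurise from the sketch above)."""
    X = np.array([featurise(s) for s in beat_sets(rr_first_15_min)])
    return float(model.predict_proba(X)[:, 1].mean())

def risk_category(score, low=0.25, high=0.75):
    """Bucket the continuous score; these cut-offs are placeholders."""
    if score >= high:
        return 'high risk'
    if score <= low:
        return 'low risk'
    return 'intermediate risk'

# Continuing from the training sketch:
new_patient_rr = np.random.default_rng(2).normal(0.8, 0.06, 1100)
print(risk_category(risk_score(model, new_patient_rr)))
```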

RiskCardio was put to the test on around 1,250 people with ACS. It predicted 28 of these patients would die within a year. It is unclear how accurate this estimation was.

The patients in the ‘high risk’ group were found to be seven times more likely to die than those who were deemed low risk.

This is compared to ‘the most common existing risk metrics’, which found high-risk patients to be three times more likely to die than their low-risk counterparts.

Results were presented in full at the Machine Learning for Healthcare Conference at the University of Michigan. 

The researchers believe RiskCardio could help doctors quickly assess the best treatment for an ACS patient when they arrive at a hospital.

Medics currently have to ‘use medical data and lengthy tests’ to estimate a patient’s risk of death and then choose the best treatment, they said. 

‘Machine learning is particularly good at identifying patterns, which is deeply relevant to assessing patient risk,’ Ms Shanmugam said. 

‘Risk scores are useful for communicating patient state, which is valuable in making efficient care decisions.’ 

RiskCardio also only requires a patient’s ECG data, with no need for information like age or weight.  

Going forward, the scientists hope to make the technology ‘more inclusive’ by adding information for different ages, ethnicities and genders. 

WHY ARE PEOPLE SO WORRIED ABOUT AI?

It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.

SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.

He believes super intelligent machines could use humans as pets.

Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.

They could steal jobs 

More than 60 percent of people fear that robots will lead to fewer jobs within the next ten years, according to a 2016 YouGov survey.

And 27 percent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.

A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade.

As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.

They could ‘go rogue’ 

Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.

If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.

This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.

For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.

They could wipe out humanity 

Some people believe AI will wipe out humans completely.

‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.

He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.

Musk warned that AI poses more of a threat to humanity than North Korea.

‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.

‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’

Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.

He has argued that controls are necessary in order to prevent machines from advancing beyond human control.
