
AI development must be guided by ethics, human wellbeing and responsible innovation

Photo: IEEE SA

The topic of ethics and artificial intelligence is not new, but businesses and policy creators should prioritize human wellbeing and environmental flourishing – also known as societal value – in the discussion, says John C. Havens, director of emerging technology and strategic development at the IEEE Standards Association.

Ethical concerns tied to AI typically focus on risk, harm and responsibility; bias around race and gender; unintended consequences; and cybersecurity and hackers.

These are important concerns, but Havens contends that as AI systems are created, they must directly address human-centric, values-driven issues as key performance indicators of success to build trust with end users.

Havens further says that AI systems must also prioritize human wellbeing (specifically, aspects of caregiving, mental health and physiological needs not currently included in the GDP) and environmental flourishing as the ultimate metrics of success for society, along with fiscal prosperity. Flourishing means restoration of the ecosystem, not just avoiding harm.

Healthcare IT News sat down with Havens, author of “Heartificial Intelligence: Embracing Humanity to Maximize Machines,” to discuss these and other important issues surrounding AI and ethics.

Q. Why and how should ethics play an important role in artificial intelligence technologies?

A. Whether it’s a simple form of machine learning or a more complex application, the ethics related to artificial intelligence technologies are complex. There are multiple reasons for this, including the speed and volume at which AI systems can function, oftentimes faster and with fewer errors than humans.

In addition, the intentions of designers or manufacturers are irrelevant when people don’t understand how their personal data is being used in association with an AI system, or when the functionality of the AI itself is not clearly understood.

Ethics are the principles that govern a person’s behavior and interpretation of the world, and what AI manufacturers sometimes forget is that defining trust for a system must involve two-way communication with users. Because algorithms are invisible, and because robotic devices shaped like humans can influence a person’s response in ways they don’t understand, disclosure and participatory design are key to ethically aligned design.

In a healthcare-related example, in March 2019, Ernest Quintana, a 78-year-old patient suffering from chronic lung disease, was told by a doctor over a video call that he would likely die within days.

“If you’re coming to tell us normal news, that’s fine,” noted his daughter Catherine Quintana as reported by WTOL.com, “but if you’re coming to tell us there’s no lung left, and we want to put you on a morphine drip until you die, it should be done by a human being and not a machine.”

This example provides an excellent case for the use of what are known as applied ethics methodologies at the outset of design. This logic is what drove the IEEE SA to create the foundational document Ethically Aligned Design (EAD), which is free and Creative Commons-licensed, developed with more than 700 global experts across three versions from 2016 to 2019.

While an organization’s code of ethics governing its behavior is a critical tool to help guide employees and provide trust for external stakeholders, applied ethics as presented across multiple chapters of EAD provides a pragmatic road map to explore how end users’ values will affect their impression of a particular AI system when in use.

Here’s a quote from the Methods to Guide Ethical Research and Design chapter of EAD along these lines:

“Ethics are often treated as an impediment to innovation, even among those who ostensibly support ethical design practices. In industries that reward rapid innovation in particular, it is necessary to develop ethical design practices that integrate effectively with existing engineering workflows. 

“Those who advocate for ethical design within a company should be seen as innovators seeking the best outcomes for the company, end users and society. Leaders can facilitate that mindset by promoting an organizational structure that supports the integration of dialogue about ethics throughout product life cycles.”

This logic of applied ethics and its practitioners leading to innovation is key. Rather than only trying to address risk or harm, understanding end users’ values also leads to positive product research and development.

In the Quintana example, a simple geofencing technology could have been used to alert the patient’s family that a robotic device (versus a human doctor) was entering their loved one’s room. The geofence could have triggered a text or other alert, giving the family an automated way to indicate (before the robotic device was ever used with their loved one) that only minor health-based reports should be delivered directly to their father by the device.

This same alert could also be sent to the doctor before he or she addressed the patient, so the doctor would recognize that the patient had requested only certain information be shared along these lines. Besides the fact that this form of explicit and meaningful consent is at the heart of legislation like the General Data Protection Regulation (GDPR), which regulates the exchange of human data, this type of technology could be sold to hospitals or care facilities, or be offered as a service to patients.
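
To make that flow concrete, here is a minimal sketch (in Python) of how a geofence-triggered consent check like the one Havens describes might be wired up. Everything here (the names, the sensitivity levels, the notification functions) is an illustrative assumption, not part of any existing hospital system or IEEE standard.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, Optional, Set


class NewsSensitivity(Enum):
    ROUTINE = "routine"   # e.g., vitals check, minor status updates
    SERIOUS = "serious"   # e.g., prognosis or end-of-life discussion


@dataclass
class ConsentPreference:
    """A family's stated preference, captured before any robotic visit."""
    patient_id: str
    robot_allowed_for: Set[NewsSensitivity]  # news the device may deliver
    family_contact: str                      # where the geofence alert goes


def on_geofence_entry(device_id: str,
                      patient_id: str,
                      planned_sensitivity: NewsSensitivity,
                      prefs: Dict[str, ConsentPreference]) -> str:
    """Handle a robotic device crossing the geofence into a patient's room.

    Returns a routing decision: the device may proceed, or the visit
    should be escalated to an in-person clinician.
    """
    pref: Optional[ConsentPreference] = prefs.get(patient_id)
    if pref is None:
        # No recorded preference: default to the most cautious path.
        return "escalate_to_human"

    notify_family(pref.family_contact,
                  f"Device {device_id} is entering your loved one's room.")

    if planned_sensitivity in pref.robot_allowed_for:
        return "device_may_proceed"

    notify_care_team(patient_id,
                     "Family has asked that serious news be delivered in person.")
    return "escalate_to_human"


def notify_family(contact: str, message: str) -> None:
    print(f"[alert to {contact}] {message}")       # stand-in for SMS/app push


def notify_care_team(patient_id: str, message: str) -> None:
    print(f"[care-team note, patient {patient_id}] {message}")


# Example: the family has consented to the robot for routine updates only.
prefs = {"patient-001": ConsentPreference("patient-001",
                                          {NewsSensitivity.ROUTINE},
                                          "+1-555-0100")}
print(on_geofence_entry("robot-7", "patient-001", NewsSensitivity.SERIOUS, prefs))
# -> "escalate_to_human"
```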

Whether or not patients and their families opted to buy it, they would immediately know that this type of technology exists and could prepare themselves and their loved ones accordingly.

This type of two-way communication is what’s recommended, and it is at the heart of ethically aligned, values-oriented design methodologies prioritized at the outset of all AI system manufacturing.

Q. You’ve said businesses and policy creators should prioritize human wellbeing and environmental flourishing (or as you say, societal value) when using AI. Please elaborate.

A. The subtitle of “Ethically Aligned Design” since it was released is “A Vision for Prioritizing Human Wellbeing with Autonomous and Intelligent Systems.”

A key question to ask for any technology design is, “What are the metrics for success for what we’re building?” While this may seem like a natural question to ask, without specifying these metrics at the design level, the de facto metrics of success often end up being the following:

  • A product or service must make profit for the organization creating it.
  • A product or service should not cause (physical) harm.
  • A product or service should provide value for end users, customers and stakeholders.
  • If possible, a product or service should not harm the environment.

For the four bullet points above, metrics to measure success or conformity include the following:

  • Profits are measured by quarterly or annual monetary measures, including metrics of growth.
  • Harm is measured by adherence to regulation and to accepted design standards in the country of origin.
  • Value for customers is indicated by sales numbers, sentiment in social or other media, and usage.
  • Typically, after a product or service is in production, metrics of corporate compliance are considered.

You’ll note that in the bullet about harm, the word “physical” was put in parentheses. This is because mental health is by and large not measured in metrics like gross domestic product (GDP) or in many corporate settings. This is not to say governments or companies don’t care about mental health, but rather that, at the design level at the outset of production, there are no uniform global standards regarding how AI or other technology could harm or help end users.

Here again is an opportunity for innovation in AI systems design, especially for tools driven by algorithms that are invisible by nature and hard for the general public to understand. Where customers are likely to be confused when dealing with an AI-enabled technology, providing tools to engage with mental health issues is an initial step toward better identifying metrics to support human wellbeing.

One fun, non-AI-based example of educating the general public about the benefits of positive mental health comes from the famous toy brand LEGO and its focus on mindfulness. As the company points out on its site in “Mindfulness and LEGO Bricks”: “Despite the growing popularity of mindfulness, there remains some confusion surrounding it. Mindfulness isn’t just some pseudo-scientific popular fad, but can actually physically alter the brain of depressed patients for the better, as an increasing number of scientific studies are showing.”

Where AI products or systems could follow this trend to help end users or customers alleviate depression, think what an amazing metric of success that would be for any organization, especially given the isolation brought about by the COVID-19 pandemic.

Even where organizations create products or services that provide value for a customer in one domain, if their design processes don’t include metrics regarding mental health, manufacturers may unwittingly create something that harms end users.

This same logic applies to the environment. Building an AI system and then working to make sure it doesn’t harm the planet is a completely different design constraint than requiring that no product or service go into production unless it improves long-term ecological sustainability.

If in fact the planet’s ecosystems are being harmed at a rate that can’t be fixed without pronounced change, and if human mental health and wellbeing are also finite and diminishing due to a lack of focus or standardized metrics, then AI (and all technology) design needs to prioritize these issues and work out how innovation and profits follow from them. Otherwise, in a world not putting the sustainability of people and planet first, who will any profits benefit?

Q. You’ve also said that as AI systems are created, they must directly address human-centric, values-driven issues as key performance indicators of success to build trust with end users. Why is this so?

A. Having metrics for “people and planet” is essential for companies to identify the KPIs they can easily communicate to their shareholders and stakeholders. In terms of trust, end users and customers will not trust (or will not continue to trust) any organization or government that doesn’t address issues of human or planetary sustainability with pragmatic and holistic approaches to bring about genuine change.

Otherwise, they will assume, perhaps rightly so, that an organization is simply paying lip service in its corporate social responsibility or environmental, social and governance (ESG) reporting to avoid regulation, rather than changing the company’s processes at an operational level.

At IEEE, there is a new standard, IEEE 7000-2021, the “Standard Model Process for Addressing Ethical Concerns During System Design,” which provides methodologies for organizations to address human-centric, values-driven issues in AI systems, in the hope they will be used as KPIs now and in the near future.

Created to integrate ethical and functional requirements to mitigate risk and increase innovation in all systems engineering product design, the standard provides a methodology for anyone creating an AI system (or other technology) to ask a series of questions about the values and ethical biases of the end users who may buy or use that system. Once these answers are identified, they are then “translated” into design characteristics and then assimilated into traditional systems engineering and design processes.
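
As a rough illustration of that “translation” chain (an elicited value becomes a value requirement, which becomes a testable design characteristic), here is a minimal Python sketch. The class names and fields are illustrative assumptions, not taken from the text of IEEE 7000-2021.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class StakeholderValue:
    """A value elicited from end users (e.g., privacy, autonomy, dignity)."""
    name: str
    stakeholder_group: str   # e.g., "patients", "family members"
    rationale: str           # why this group holds the value


@dataclass
class ValueRequirement:
    """The 'translation' step: the value restated as something the system must honor."""
    value: StakeholderValue
    statement: str           # e.g., "The system shall disclose when a robot is used"


@dataclass
class DesignRequirement:
    """A concrete, testable characteristic handed to systems engineering."""
    derived_from: ValueRequirement
    specification: str       # e.g., "Geofence alert sent to family on room entry"
    verification: str        # how conformance is checked and reported as a KPI


def trace(design_requirements: List[DesignRequirement]) -> None:
    """Print the traceability chain from elicited value to design characteristic."""
    for req in design_requirements:
        v = req.derived_from.value
        print(f"{v.name} ({v.stakeholder_group}) -> "
              f"{req.derived_from.statement} -> {req.specification}")


# Example: tracing a 'dignity' value, as in the Quintana case, to a design spec.
dignity = StakeholderValue("dignity", "patients and families",
                           "serious news should be delivered by a person")
value_req = ValueRequirement(dignity,
                             "The system shall not deliver serious news via a device")
design_req = DesignRequirement(value_req,
                               "Robot routes end-of-life discussions to a clinician",
                               "Audit log of escalations, reviewed quarterly")
trace([design_req])
```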

Put simply, the logic is to know your customers or end users so well that you know their values. Fortunately, this is already common practice for advertisers and marketers who base design and sales plans on the demographics of who they’re trying to reach.

Moms in Denver, for instance, versus men ages 60-70 in Tokyo: If you recognize that these two audiences will have different values that will affect the use of anything you build, you’re already getting a sense of why having a methodology to analyze values is so critical in the age of the algorithm. Basically, if you don’t know what someone’s values are, you don’t know how they’ll respond to a product or service you build for them.

An example provided in the standard has to do with airport body-scanning machines. In the past, these were built in a uniform way that optimized speed and efficiency over privacy, which meant passengers often considered the scans of their bodies invasive.

Using a methodology like the one in IEEE 7000, the value of “privacy” is examined through the lens of multiple end users, so that the updated scanner design shows only a passenger’s body outline rather than other physical characteristics.

As organizations adopt these values-based methodologies as KPIs for success moving forward, there will also be opportunities for conformance assessment, certification and marks based on these kinds of results.

Meaning, in the same way that people will buy a certain toothpaste because it has a mark from the American Dental Association (in the U.S.), end users and customers will begin to buy things like voice-assisted audio systems because they have been certified as “trustworthy AI” by an accredited organization.

Q. What is the definition of “responsible innovation” and why should companies prioritize these methodologies for AI and all operations?

A. There are many definitions of responsible innovation, the majority of which include a focus on environmental sustainability and societal issues such as mental health. The focus is to think about design in a systems or holistic fashion and recognize that, for instance, building an electric car using coal and claiming you’re helping the environment is likely irresponsible as well as dangerously ironic. Here’s how the Journal of Responsible Innovation defines what they do:

“JRI intends responsible innovation to be inclusive of such terms as responsible development and sustainable development, and the journal invites comparisons and contrasts among such concepts. While issues of risk and environmental health and safety are relevant, JRI especially encourages attention to the assessment of the broader and more subtle human and social dimensions of innovation – including moral, cultural, political, and religious dimensions, social risk, and sustainability addressed in a systemic fashion.”

Here IEEE provides another standard to help organizations address responsible innovation: IEEE 7010-2020, the “Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Wellbeing.”

To identify KPIs relating to planet and people as well as profit, the standard provides dozens of globally respected metrics and indicators organizations can use to design AI systems with a responsible-innovation mindset, rather than the de facto “rush to market” sensibility that prioritizes market and growth considerations in isolation.

While it’s assumed no organization wants to build its AI in an irresponsible fashion, it is only by defining what “responsible” means that anyone can know the specifics of what’s involved in design.

By prioritizing human wellbeing and environmental sustainability, and by demanding metrics that show how AI design will improve these two areas, organizations not only build trust with end users and customers but quite literally build our responsible, innovative and purpose-driven future.

Twitter: @SiwickiHealthIT
Email the writer: [email protected]
Healthcare IT News is a HIMSS Media publication.
