    AI Transparency Will Soon Become a Major Issue in 2019

    The biggest inhibitor to pervasive adoption of artificial intelligence (AI) in enterprise IT environments in 2019 may have little to do with the technology itself. Rather, as more governments around the world come to appreciate what can really be achieved using AI, committees are now actively studying how both new and existing legislation might be applied.

    For example, the U.S. Congress has created a bipartisan AI caucus that is generally supportive of AI as a set of technologies that will enhance competitiveness. Speaking at a Transformers: Artificial Intelligence event hosted by The Washington Post last week, the chairman of the caucus, Rep. Pete Olson (R-Texas), made it clear that the issue is no longer whether AI will be employed, but rather understanding its impact.

    “It’s time to get on the train or get off the rails,” says Olson. “It’s coming down.”

    Narrow AI, Strong AI and Superintelligence

    Not fully appreciated, however, is the different impact various forms of AI will have. At a high level, there are essentially three flavors of AI. The one most commonly employed today is known as narrow AI, in which a set of algorithms is trained to automate one specific task. The second is known as strong AI, or general intelligence, which loosely describes a machine capable of performing any task that would previously have required a human. The third is known as superintelligence, which in theory will exceed the brightest and most gifted human minds as AI models continue to learn. This last form of AI is what gives rise to fears of a self-aware “Skynet” one day trying to wipe out humanity.

    As a practical matter, the IT industry is at least a decade away from having the capability to build a truly self-aware AI system. But instances of narrow AI are already broadly employed across a wide variety of consumer applications. Now many of those same concepts are being applied to business applications, thanks mainly to a sharp reduction in the cost of collecting enough data in the cloud to train an AI model.

    AI Transparency

    However, the issue that many businesses will soon find themselves wrestling with is transparency. AI models are created by humans who have biases, and it is relatively simple for those biases to be coded into the models those developers create. Businesses that operate in highly regulated industries, for example, are going to be required to document what data they are using to drive the machine and deep learning algorithms embedded in their AI models, to make sure suspect or tainted data is not being relied on to automate a process.
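    To make the idea concrete, the sketch below computes a simple demographic-parity gap — the difference in approval rates between groups — over a model's scored output. It is a minimal illustration only: the field names, the "approved" outcome and the toy data are assumptions, not any regulator's or vendor's actual methodology.

    ```python
    from collections import defaultdict

    def approval_rates(records):
        """Compute the approval rate for each group in the scored records."""
        totals = defaultdict(int)
        approvals = defaultdict(int)
        for rec in records:
            group = rec["group"]                 # hypothetical protected attribute
            totals[group] += 1
            approvals[group] += rec["approved"]  # 1 if the model approved, else 0
        return {g: approvals[g] / totals[g] for g in totals}

    def demographic_parity_gap(records):
        """Largest difference in approval rates between any two groups."""
        rates = approval_rates(records)
        return max(rates.values()) - min(rates.values())

    # Toy scored output from a hypothetical loan-approval model.
    scored = [
        {"group": "A", "approved": 1},
        {"group": "A", "approved": 1},
        {"group": "A", "approved": 0},
        {"group": "B", "approved": 1},
        {"group": "B", "approved": 0},
        {"group": "B", "approved": 0},
    ]

    print(approval_rates(scored))          # roughly {'A': 0.67, 'B': 0.33}
    print(demographic_parity_gap(scored))  # roughly 0.33 -- a gap worth documenting
    ```

    A gap like that does not by itself prove bias, but recording metrics of this kind for each training and scoring run is the sort of evidence a regulated business could be asked to produce.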

    For example, Dr. Theresa Zayas Caban, chief scientist for the Office of the National Coordinator for Health Information Technology, an arm of the U.S. Department of Health and Human Services, told conference attendees that AI models being applied to health care will need to be rigorously inspected.

    “These models will require immense validation,” says Zayas Caban.

    That validation is going to be required to drive market acceptance of AI models in health care anyway, adds Dr. Michael Abramoff, CEO of IDx, a provider of diagnostic tools that apply AI to the detection of diabetic retinopathy. A lack of transparency could result in a backlash against the use of AI in health care, Abramoff told conference attendees.

    “We worry about any potential pushback,” says Abramoff. “We need to be very transparent.”

    The transparency issue, however, goes beyond highly regulated industries. Any organization that deploys a consumer-facing application enabled by AI will soon have to prove that biases are not unduly influencing its AI models. At the Washington Post event, Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense and Educational Fund, Inc., made it clear that organizations that rely on so-called “black box” approaches to building AI models that interact with the public should expect to land in court. The rise of AI does not in any way mean that laws enacted to protect the public from unscrupulous business practices no longer apply, says Ifill.

    “There is going to be litigation,” says Ifill. “We have laws for a reason.”

    The truth of the matter is that organizations are moving to apply AI within legal frameworks that are subject to change, Beena Ammanath, vice president of AI for Hewlett Packard Enterprise (HPE), told attendees.

    “We don’t necessarily have all the laws and rules figured out yet,” says Ammanath.

    It may take some time for government officials to unravel how a specific set of algorithms might be biased in one way or another. But the prospect of data scientists having to detail in open court which algorithms were applied to which data to create an AI model should give organizations cause for concern, especially once the other side starts to call in its own expert witnesses to examine those models.
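    One practical hedge against that scenario is to record, at training time, exactly which data and which algorithm produced each model. The sketch below shows one minimal way to do that in Python; the field names, the hashing of the training file and the example model name are illustrative assumptions, not an industry standard.

    ```python
    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class ModelProvenance:
        """Minimal audit record describing how a model was produced."""
        model_name: str
        algorithm: str
        training_data_path: str
        training_data_sha256: str
        trained_at: str

    def sha256_of_file(path: str) -> str:
        """Hash the training data so the exact input can be verified later."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_provenance(model_name: str, algorithm: str, data_path: str) -> str:
        """Return a JSON audit record for the given model and training data."""
        record = ModelProvenance(
            model_name=model_name,
            algorithm=algorithm,
            training_data_path=data_path,
            training_data_sha256=sha256_of_file(data_path),
            trained_at=datetime.now(timezone.utc).isoformat(),
        )
        return json.dumps(asdict(record), indent=2)

    # Hypothetical usage: store the record alongside the saved model artifact.
    # print(record_provenance("loan_scoring_v3", "gradient_boosting", "data/loans.csv"))
    ```

    Keeping records like this does not settle any legal question, but it gives an organization something concrete to point to when asked how a given model was built.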

    Because AI can be applied for both good and ill, there is a growing focus on applying ethics to AI, an effort led by various academic institutions with support from vendors such as Google, IBM, Microsoft and SAP.

    The trouble is that ethics are far from uniform around the globe. What is deemed ethical in a democratic society tends to be very different from what might be considered the norm in a totalitarian society. There can be no doubt that governments around the world are actively researching ways to weaponize AI technologies.

    In the meantime, it’s clear that as organizations embrace AI, many of them are entering uncharted legal territory. Just because something can be done doesn’t necessarily mean it should be done. There’s a definite risk that whatever there is to be gained by applying AI could easily be wiped out if the way that AI is being applied becomes controversial. While organizations should actively pursue AI research and development, any actual application of AI technologies should be closely vetted by senior leadership. After all, the team that created the AI model is not necessarily the same as the individuals who might find themselves standing in the dock trying to defend it.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.