Privacy Concerns Deepen as AI and Machine Learning Grow More Powerful

Thursday Jul 6th 2017 by Carl Weinschenk

Even legitimate uses of data as fodder for AI and machine learning are too intrusive for many people. As time passes, ever stricter policies and safeguards become necessary. In the relatively near future, however, even these may not be enough.

Artificial intelligence (AI) and machine learning could be boons to law enforcement personnel trying to predict and thereby prevent criminal behavior. But, according to a report at ZDNet from Interpol World 2017, held in Singapore, tricky privacy boundaries and limitations on the treatment of this data exist.

The report details the great advantages such technologies offer and points out, somewhat ominously, that lawbreakers are starting to use the same technologies to stay hidden and to learn as much about their targets as possible. The last section of the story focuses on the balance between AI and machine learning on one side and people’s right to privacy on the other. There seemed to be no conclusive answer to the issue of balance and boundaries. These issues are certain to proliferate as AI and machine learning become cheaper and more easily available.

This is the sort of question that will become more common:

A delegate then asked how this could affect tensions between privacy and security, especially as governments, the US in particular, sometimes overstep boundaries, while the European Union is especially sensitive about the need to protect data privacy.

It is not an academic question. As the conference was going on in Singapore, the U.S. Presidential Advisory Commission on Election Integrity, which was created via an executive order by President Trump in May, was largely being rebuffed in efforts to collect names, addresses, birth dates, political party affiliation, and the last four digits of Social Security numbers from the 50 states. At this point, 44 states have refused or limited the type of information provided, and the overall reaction has been harshly critical of the initiative.

It is unclear whether there is a direct tie between the commission and the use of AI and/or machine learning. The main points are that such treatment of the data is possible and could happen. The states' reactions show a heightened awareness of citizens' privacy rights and a sensitivity to the dangers of misusing that information. CNN offers a detailed state-by-state rundown.

It is clear that the public is growing more aware and concerned. A Pegasystems survey found that 68 percent of about 6,000 respondents think that AI could improve their lives – but only 27 percent are willing to provide information to customer service that would make those improvements possible. Pegasystems pointed out that this ambivalence – people want the convenience AI brings but don't want to surrender the information that would make it possible – could be an opportunity for savvy companies capable of threading that needle. The survey found that online retailers (at 34 percent) are the most trusted segment to handle AI. Health care (at 27 percent) is second and banks (20 percent) are third. The government was named as trusted by a paltry 11 percent of respondents.

Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at cweinsch@optonline.net and on Twitter at @DailyMusicBrk.
