University of Toronto graduate student Avishek “Joey” Bose, under the supervision of associate professor Parham Aarabi in the school’s department of electrical and computer engineering, has created an algorithm that dynamically disrupts facial recognition systems.

Why does this matter? The project has privacy- and even safety-related implications for systems that use so-called machine learning — and for all of us whose data may be used in ways we don’t realize.

Major companies such as Amazon, Google, Facebook and Netflix are today leveraging machine learning. Financial trading firms and health care companies are using it, too — as are smart car manufacturers.

What is machine learning, anyway? When you upload personal photos, videos, text and other data to platforms like Amazon or Facebook, their systems learn more and more about who you are over time. They amass data to discover what you look like, where you travel, with whom you spend time, what you like to buy, what your political preferences are — and on and on in terms of “useful” (to them, of course) personal details about you.

Joey Bose is concerned about the privacy issues these machine learning models present.

“We should have control over the data we create, and it should not be used to train machine learning models without our explicit consent,” he told LifeZette in an interview.

“By data, I mean images, text and so on that we generate through our use of social media. And it’s not OK for some companies to harvest that information and send us targeted ads without informing us” that they’re doing it — or why they’re doing it.

Bose explained he's been interested in this topic for the past year or so, and in computer vision for the past three years. "Adversarial" machine learning essentially sets up a battle between competing artificial intelligence systems. In this case, one artificial intelligence (AI) system fights to identify faces, while the other battles to disrupt that very task, as the University of Toronto Engineering News explained.

In its current form, Bose said, his project successfully disrupted a face detector based on Faster R-CNN, an object-detection algorithm developed by Shaoqing Ren, Kaiming He, Ross Girshick and Jian Sun at Microsoft Research. But Bose's work cannot yet be used on Facebook or other social media platforms; the research to enable that is still ongoing.
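How might that "battle" look in code? Below is a minimal sketch of the two-network game, offered for illustration only and not drawn from Bose's implementation: a small generator network learns perturbations that stay visually tiny while lowering a face detector's confidence. The `detector.confidence()` call is a hypothetical stand-in for a differentiable face score from a Faster R-CNN-style detector.

```python
# Illustrative sketch of the adversarial "game" described above.
# NOT the project's code: `detector.confidence()` is a hypothetical
# stand-in for a differentiable face score from a detector.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Tiny convolutional net that proposes an image perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, image):
        return 0.03 * self.net(image)  # scale keeps changes visually tiny

def train_step(generator, detector, image, optimizer):
    """One round of the game. `optimizer` should wrap
    generator.parameters(); the generator learns to lower the
    detector's face confidence while barely changing the image."""
    delta = generator(image)
    adv_image = (image + delta).clamp(0.0, 1.0)
    # Push the detector's face score down, and penalize large
    # perturbations so the image still looks unchanged to people.
    loss = detector.confidence(adv_image) + 10.0 * delta.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return adv_image
```

Run over many such rounds, the two networks push against each other, and the generator gradually learns perturbations that reliably fool the detector.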

Eventually, the technology could let users apply a "filter" of sorts that would prevent Facebook and others from detecting faces in users' photos or harvesting information from them without their express consent.

“I started learning about computer vision [as an] undergraduate and was first interested in it because I have a medical condition that affects my ability to see,” Bose explained. “Unfortunately, there is no cure for it and it gets progressively worse over time.”

Bose added, “I saw computer vision as a way to augment my normal abilities.” He noted he’s had a few surgeries and is “A-OK” right now.

“Our project is about crafting adversarial attacks on face-detection algorithms,” he said, by way of further explanation. “An adversarial attack is any attack on a machine learning model, such that if you perturb the inputs slightly — in my case, images of faces — they appear almost unchanged. [Those nearly undetectable changes] cause the underlying machine learning systems to fail.”

So the project changes images in ways so tiny that human beings won’t notice, but machines definitely will. The net result: Systems that harvest information will become “confused” and malfunction.
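One well-known illustration of this principle from the research literature is the fast gradient sign method (FGSM). It is not the method behind Bose's project, but it shows how little an image needs to change to throw a model off. In this sketch, `model` and `label` are assumed placeholders for any image classifier and the image's true label.

```python
# Generic FGSM sketch (after Goodfellow et al.), for illustration only;
# `model` and `label` are assumed placeholders, not project code.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.007):
    """Shift each pixel by at most `epsilon` in the direction that
    most increases the model's loss: imperceptible to people,
    disruptive to the model."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # epsilon = 0.007 is roughly 0.7 percent of the pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

The perturbed image is numerically almost identical to the original, yet the model's prediction can flip entirely.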

“Adversarial machine learning is super-important, as we’re going into a world that is increasingly more reliant on machine learning models. It’s incredibly important to know where and when we can trust them, as these have real human consequences,” said Bose.

“In the case of self-driving cars, for example, deploying prematurely can cause actual loss of human life,” he said. He said some research has shown “it’s possible to manipulate stop sign images adversarially, such that the self-driving car that relies on computer vision systems cannot recognize it.”
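That stop-sign scenario is typically described as a targeted attack: rather than merely confusing the model, the attacker steers it toward a specific wrong answer, such as reading a stop sign as a speed-limit sign. The sketch below shows a generic iterative version of that idea; `classifier` and `wrong_class` are illustrative assumptions, not details from the research Bose referenced.

```python
# Hedged sketch of a targeted, iterative attack in the spirit of the
# stop-sign example; `classifier` and `wrong_class` are assumptions.
import torch
import torch.nn.functional as F

def targeted_attack(classifier, image, wrong_class,
                    steps=40, alpha=0.002, eps=0.03):
    """Iteratively push a batched `image` toward being classified as
    `wrong_class`, while keeping every pixel within `eps` of the
    original so the change stays subtle."""
    original = image.clone().detach()
    adv = original.clone()
    target = torch.tensor([wrong_class])
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(classifier(adv), target)
        loss.backward()
        # Step *down* the loss toward the wrong label...
        adv = adv - alpha * adv.grad.sign()
        # ...then project back so no pixel drifts more than eps.
        adv = (original + (adv - original).clamp(-eps, eps)).clamp(0.0, 1.0)
    return adv.detach()
```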

Bose’s project is one step in the direction of his larger goal to prevent companies such as Amazon, Facebook and Google from leveraging users’ personal information without permission. “We believe you have the right to own your face and to distribute it,” he said.

Bose said his project probably won't be enough to do the job on its own. He believes legislation to rein in tech giants' power and control over users' information may be necessary. "Legislation is needed as all these companies rely on your data to create revenue."

Bose’s project is part of his master’s degree work; he expects to graduate this summer. He will be presenting the results of his team’s project at the 2018 IEEE International Workshop on Multimedia Signal Processing in Vancouver, Canada, in August.

Michele Blood is a Flemington, New Jersey-based freelance writer and a regular contributor to LifeZette.

(photo credit, homepage image: courtesy of Avishek “Joey” Bose)