
AI photography is going viral — and raising concerns about safety, privacy and ethics

Here's what to do if you've used the service and want to delete your data.
Source: TODAY

Digital portraits made with artificial intelligence are all over the internet and look vibrant, imaginative and enticing ... but they also may be used to compile user data, experts say.

Social media is covered in fantastical portraits created by Lensa AI, a photo-editing app that released a "magic avatar" feature in November. Its popularity landed it the No. 1 spot in the iOS App Store's "Photo & Video" category in early December.

The app needs at least 10 pictures of a person to generate a portrait of them. The system then assigns numerical values to different facial points and combines those values to create new digital images of the person.

The required facial scan has consequences, though.

Companies "try to entice you to give your data away and you get something in return, which are pleasurable experiences," Juergen Schmidhuber, an internationally recognized computer scientist and leader in the AI field, tells TODAY.

"At the moment, it’s just about faces and selling ads and so on," he continues. "But it’s going to be much crazier than that."

'Your face data'

Lensa's opt-in system requires people to pay $7.99, submit pictures and agree to its terms and conditions before it creates portraits.

According to Lensa's privacy policy, the company does "collect and store your Face Data for online processing function," but says the data are "automatically deleted within 24 hours after being processed by Lensa. In case of using Magic Avatars feature, the photos are automatically deleted after the AI results are generated."

Mari Galloway, an AI and cybersecurity specialist, tells TODAY that users should still be concerned, despite the claims in Lensa's privacy policy.

“We don’t know what they’re going to do with that data, that information,” she says. “They don’t keep the photos and videos for longer than 24 hours. But do we really know what they’re doing with that? How are they deleting it? How is the data encrypted? We don’t really know those details because they don’t really share that information with us.”

Schmidhuber notes that simply deleting something from Facebook "is very, very difficult,” and he questions how Lensa could be deleting data so easily and regularly.

The company did not respond to TODAY's request for comment. Prisma Labs, the company behind Lensa, did tweet on Dec. 6 that users' photos are "erased permanently from our servers" as soon as the avatars are generated, echoing the claims in its privacy policy.

Galloway flags another concern: What if Lensa has a change in ownership?

“Whatever information is still in their database, in their system, goes to the next person,” she says.

Personal data is typically provided when users create an account and can include information like their name, email or even a home address, Galloway says, adding that this type of information can be used for tracking purposes or potentially to open fake bank accounts or credit cards if it ends up in the wrong hands. To sign up and create an account on Lensa, users only have to provide an email address and give the app permission to charge them through the app store.

Galloway adds that it's "really important" we guard our personal data.

"We have to be very, very careful about what we put on our phone and what we share with those applications," she shares.

'All kinds of propaganda purposes'

AI images can only be created using photographs of real people, Schmidhuber explains. The more images databases have, the smarter the artificial intelligence becomes. It can help companies "use the data for generating additional faces for training networks to be even better," he says.

Once the database has enough images, it can create from that and does not need new submissions, Schmidhuber says. At that point, the dangers of AI include creating digital people who can talk and impersonate real people.

"You can use that for all kinds of purposes that are not good," he says. "The neural network takes really long to generate sequences of frames that then look like a video of this person speaking, and maybe this person is saying something that in reality, she has never said so that’s what is known as fake videos."

From there, it becomes a slippery slope.

"If you have something that is able to generate really convincingly, you can also use it for all kinds of propaganda purposes, of course, and you can use it to entrust the opinions of other people," Schmidhuber says, adding that the military already uses facial scans to find targets.

Schmidhuber and Galloway both say AI networks will likely have racial bias. The people who create and develop the technology, consciously or not, also choose which images and data points it recognizes.

"It's mostly white folks or Asian folks that typically will create this technology," Galloway says. "And so it leaves out a whole entire group of people when it comes to identifying what folks look like."

Because of that, she says the technology is often encoded to read "lighter skin people."

"It gives them an advantage of looking more human, more normal than it does for darker-skinned people," she says. "This facial recognition technology (targets) minorities and Black people. There's been three or four different guys that have been targeted incorrectly because of this facial facial recognition technology."

I already got portraits made. Now what?

Galloway says it is not too late to remove data from Lensa's database.

"You only get rid of that information when you delete your account," she explains. "So just deleting the app doesn't mean that they're not still storing that information on you. If a breach happened, there's that information that's out there.

"You have to actually delete your entire account and have requests that they wipe all of that information for you."

Despite the many concerns regarding AI, Schmidhuber says the field has so much good to offer that it is worth continued investment and public confidence.

He compares AI to fire: It produces warmth and heat, but it can also burn. It's just a matter of controlling it.