Whether it’s an unflattering shot taken at a bar or a scenic photo revealing someone’s location, privacy control has become an increasing concern for social media users.
Currently, platforms like Facebook have limited options for individuals to dictate where their face ends up online, especially in photos taken without their consent or knowledge.
When looking through the settings options on social media apps, users are forced to go either all in or all out: either entirely open your profile to the public, or selectively filter through your followers to maintain privacy within your inner circle.
But what if there was a way for users to find a happy medium by replacing their face with a synthetic facial image in the backgrounds of images posted by others?
MU Associate Professor Dan Lin’s work in the I-Privacy Research Lab aims to expand the realm of possibilities for facial image recognition and privacy protection.
Lin and her team in the Department of Electrical Engineering and Computer Science received a grant of more than $700,000 from the National Science Foundation to pursue their work in using a face-swapping method to replace the faces of users found in the backgrounds of photos without compromising the integrity of the photos.
The goal of the project is to let users customize their privacy settings by selecting keywords for locations or settings in which they do not want to appear.
From there, social media platforms would use the facial-recognition software to replace those users' faces whenever someone uploads an image taken in one of those locations or settings.
“What (social media platforms) can do is just protect other users from uploading or sharing their faces in photos,” Lin said. “So, we say instead of posting the user’s true face, someone who is in the background, we can use synthetic faces to replace that person. And that’s like a win-win situation: the ones who upload the photos will be happy, they can still share things they would like to share, and people in the background will not be affected by any privacy breach.”
Lin describes these synthetic faces as so natural that they are not only fine-grained and high resolution but detailed enough to show wrinkles around one’s eyes or spots on one’s cheeks. The synthetic face swapping draws inspiration from the phenomenon of deepfakes, the focus of another of Lin’s projects, in which she is working toward detectors for fake images used to spread fake news.
With the two projects going hand in hand, Lin and her team aim to create a system using artificial imaging that would allow users to enjoy safely posted photos without consuming fake news.
The research team also includes professor Chris Clifton from Purdue University, who specializes in the privacy metrics of the project. One of the challenges Clifton is working on solving is the lack of effective privacy protection for minority groups and women.
“There are advertising and marketing uses. There are so many things people are using social media for that kind of go beyond our initial expectations,” Clifton said. “And this is giving people the ability to have greater control over how the data that the imagery that they’re present in is used.”
As with any project, Lin and Clifton expect pushback and challenges, particularly from news and media companies that may not want to replace or edit any aspect of a published image.
Brett Johnson, associate professor of journalism studies, sees the concept of face swapping as a double-edged sword.
“I can appreciate the allure behind it as perhaps a means to reestablish trust, although I can also see how some in the business may be afraid of backlash, because if we do this, we may put a veneer on reality and that could further damage trust with audiences,” Johnson said.
However, Lin explains that the goal is not to convince companies to adopt the tool, but to offer individuals a way to maintain their privacy without blurring out faces, a technique that can be deblurred and that spoils the look of the original photo.
While the tool will certainly be available to news organizations, it is mainly targeted towards social media users and platforms.
“We’re getting the idea across that in almost all places, someone can take a picture without your knowledge, leading to unknown exposure,” said Ph.D. student Joshua Morris, who is in charge of system implementation and testing the ideas of preserving location privacy for people who are in the background of images. “People don’t realize that there may be many cases where they are identifiable in pictures they have no knowledge of.”
Lin expects to complete the project with a prototype ready for demonstration in October 2024. After that, social media users can look forward to seeing the new feature on mobile apps during the image uploading process.