Deepfakes and other digital creations show that we have reached a scenario previously imagined only in science fiction films: computer-generated images of humans are as convincing as real ones, or more so, raising growing doubts and increasing the need for policies to control the misuse of technologies such as artificial intelligence.
A new study suggests that these image-generation techniques carry an additional problem: racial bias, with images of white people judged more believable than those of other people.
“Strikingly, AI-generated white faces can convincingly appear more real than human faces – and people do not realize they are being fooled,” the report said. The research team was spread across Australia, the United Kingdom, and the Netherlands.
The researchers, who published the study in the journal Psychological Science, noted that the problem has important real-world implications, as the “hyper-realism” of AI images could facilitate identity theft and other crimes, leaving people more likely to be deceived by digital scammers.
However, this hyper-realism appeared only in the case of white people. This suggests that image-generation algorithms are largely trained on images of white people, and therefore associate them more strongly with the idea of humanity in general.
In the study, 124 white participants were shown an equal number of real and AI-generated white faces. The researchers say this approach was chosen “to avoid potential biases in how faces of one’s own race are recognized compared to faces of other races.”
Participants were then asked to say whether each face was AI-generated or real, and how confident they were in their decisions. Overall, 66% of AI-generated images were classified as human, compared with 51% of real images.
“These results suggest that AI-generated faces look more real than real faces. We call this effect ‘hyperrealism.’ They also suggest that, on average, people are not very good at detecting AI-generated faces,” the researchers wrote.
The team then compared these results with a previous study that used photos of Black and Asian people. In that study, the success rate was similar for real and AI-generated images – supporting the claim that people are poor judges of the authenticity of human images, but showing that for Black and Asian faces the “hyper-realism” effect does not occur.
The researchers asked participants in the study involving white faces which elements of the images led them to believe they were real, without revealing which images came from artificial intelligence. Participants pointed to “features that were proportional and familiar to them, with no distinctive features that could be considered ‘foreign,’” as the researchers described. “Participants misinterpreted these characteristics as signs of ‘humanity,’” they add.
The dangers of excessive realism
Zak Witkower, co-author of the research and a researcher at the University of Amsterdam, said the detected racial bias could have implications for various online activities that use human faces as a basis. “It will produce more realistic situations for white faces than for faces of other races,” he said. The team cites parallel cases, such as self-driving cars being less likely to detect Black people, putting them at greater risk than white people.
Conflating perceptions of race and humanity can also perpetuate social biases, including in searches for missing persons, an area where the technology may have a growing impact.
Clare Sutherland, co-author of the study and a researcher at the University of Aberdeen, said the work highlights the importance of tackling bias in artificial intelligence. “As the world changes so rapidly with the advent of artificial intelligence, it is critical that we ensure no one is left behind or disadvantaged in any situation – whether because of race, gender, age or any other protected characteristic,” she said.