In 2022, I organized a panel on this topic at the United Nations Internet Governance Forum, with the participation of neuroscientist Lisa Feldman Barrett, one of the leading experts on human emotion, representatives of Microsoft, and members of civil society.
At the end of the session, a consensus emerged: it is unwise to use AI to infer human subjectivity. That same year, Microsoft, which offered one of the leading systems for recognizing emotions from facial expressions, discontinued it due to a lack of scientific evidence.
If you've gotten this far, you're probably thinking: “What a negative text. It would be better for science to abandon artificial intelligence altogether.”
That is not the idea. I firmly believe that AI can bring great benefits to scientific progress and new discoveries. One example is AlphaFold, which revolutionized protein research.
What we need is to alert the scientific community to use these tools critically, so that we can rethink scientific practice. Today's model of science, in which everything is measured by the number of publications, reinforces the tendency to use artificial intelligence as a way of outsourcing parts of the scientific process. Is another model possible?
Recently, Master's student Matthews Petroni defended his thesis on the potential consequences of artificial intelligence for science. We used a speculative design approach to study future scenarios for the use of AI in scientific practice. After interviews and workshops, two scenarios for the next ten years emerged: