I was recently correcting a student’s work, and the use of ChatGPT in writing part of the text was evident to me. It will not be possible to magically eliminate the use of Artificial Intelligence (AI) by higher education students, so the learning system will have to be reformulated to take this kind of instrument into account. For example, reading, improving and criticizing texts produced by AI, comparing them with other scientific texts and information sources, could be a viable strategy with secondary and higher education students. With younger children who are still mastering reading and beginning to produce texts, the early use of artificial intelligence tools will undoubtedly have negative consequences for the mastery of literacy skills. This does not invalidate the use of AI to design pedagogical activities adjusted to each child’s profile of difficulties. The challenge for the teacher and the pedagogue is precisely to master the nature of these difficulties in order to ask Artificial Intelligence for help in creating better pedagogical tools.
The real problem that AI can bring was already stated in Macbeth. In Macbeth, killing the king is killing the spirit; it is man wanting to be God. The modern world is starting to look like a huge play populated by Macbeths, to the extent that it will be difficult to control a tool that its own creators have not completely mastered. Artificial Intelligence is supported by the vast amount of data processed during training and by the exponential increase in the capacity of computers, whose parameters now number in the billions. In this case, we are talking about generative intelligence with the capacity to produce new algorithms, which may allow devices and artificial intelligence to become autonomous. AI is not just another new technological tool; it is an instrument capable of inventing new content, and one that is far from being properly regulated.
We are, therefore, entering an era in which Artificial Intelligence can cover countless domains. There are already, for example, mental health applications in which patients have meaningful conversations with a chatbot. In this and other dimensions, studies are beginning to emerge on the nature of the relationships established with these “digital therapists”. “Only things touched by the love of others have a voice”, said Fiama Hasse Brandão. Will it really be like this in the future? Or rather, what will the love of the future be like when patients take five days to establish a connection with these help algorithms? This is an area of research to which clinical psychology will necessarily have to respond.
The weight of algorithms in all of our lives began to be felt with apps and digital platforms. We no longer shop or interact in the same way, and many of us have come to believe that our lives only exist when recorded in photos and videos on Facebook or TikTok. In this context, people are gradually losing the ability to talk to one another. Technology, instead of connecting us, has distanced us.
One might think that ease of access and the amount of information in circulation would be associated with more knowledge. What recent years have demonstrated is that this assertion is simply false, all the more so because much of the information circulating on the internet is based on misrepresentations. The algorithms of Facebook and TikTok are, in essence, primitive forms of AI that model the world by conditioning what we see first and repeatedly. These primitive algorithms were designed to make people spend more time online in order to do more business with data or ads. In his book Nexus, Harari says that the easiest way to create engagement is to create content that generates fear and hatred, and with these algorithms the world has been invaded by conspiracy theories fuelled by hatred and fear. The prejudices and stereotypes that permeate algorithms begin to impose themselves in the dissemination of xenophobic, racist and discriminatory ideas. This is reflected in the way people construct beliefs, make judgments about others or make political choices. But not only that: when AI analyzes CVs for jobs or sets the criteria by which a bank grants loans, the same stereotypes end up being decisive, as some studies have demonstrated. Here too, research into social cognition is essential to counter this trend.
In Mary Shelley’s classic, the creature created by Victor Frankenstein claims that he was once benevolent and good, but that unhappiness turned him into a demon. If we do not want demons to spread indiscriminately, their nature should be clearly understood, and research in the field of psychology could then prove particularly relevant.
Writer and Professor at Ispa – University Institute