Before we get into this week's topic, I would like to highlight a news story suggesting that much of the controversy about artificial intelligence can be dissolved by understanding the right way to use it. Some people use it fraudulently. But the best defense is creativity.
Since January, New York City schools have banned the use of ChatGPT because students were using artificial intelligence to write research papers and complete writing assignments given by teachers. Playing with words, one might say that at school artificial intelligence has become CheatGPT. (Vice)
The impact of ChatGPT in a context that does not adapt to change can indeed be devastating. If the educational concept being pursued is to ask students to do research assignments, and if the interaction between students and faculty is essentially limited to students handing professors a text to prove they have done the research, then ChatGPT is a perfect way to destroy the value of that approach.
Instead, the interaction between faculty and students should be primarily a creative dialogue and discussion that develops as the research progresses. In this way, the final text becomes the natural conclusion of a process during which students have already demonstrated their progress and completed a learning journey.
If anything, it remains to be clarified how ChatGPT might come into play in a way that enhances the experience. And since this is precisely the point that needs further study, it could itself become a research and teaching theme: "What are the most productive and creative uses of generative artificial intelligence to augment the capabilities of humans?"
Paolo Granata has a very clear idea of a university teacher's task. A lecturer at the University of Toronto and an expert in "media ecology," he teaches, among other courses, one about the book, its past and its future. In that context, he has implemented a course whose face-to-face lectures are produced entirely by artificial intelligence, because the course has to teach students the best ways to use artificial intelligence. In his view, the lecturer's job is not to transfer knowledge: it is to design experiences that can have important learning consequences for students. Granata is an explorer of possibilities. During the pandemic, finding lectures on a videoconferencing platform unengaging, he created a course entirely in virtual reality to make the experience more immersive. And by using artificial intelligence he intends to prepare students to creatively face the challenges of a 21st century in which artificial intelligence will be an essential element.
That's why Paolo Granata has designed the seminar "AI as a Classroom" for the fourth year of the Book and Media Studies Program (BMS) at St. Michael's College, University of Toronto, in which classes will be almost entirely taught by artificial intelligence. It will address a range of issues concerning artificial intelligence and its influence on society, including the ethics of artificial intelligence and its impact on cultural and media landscapes. It will also address provocative questions about the professor's role in creating and curating the learning experience and AI's potential for enhancing learning and promoting proactive thinking. Using the most advanced technologies in the field, including generative AI and LLMs, the course will feature a customized version of ChatGPT expressly trained on the course's research questions. Throughout the course, students will develop skills in using AI to produce cutting-edge critical analyses of AI from a variety of ethical, practical, and philosophical perspectives.
And now this week’s problems
After the Italian Data Protection Authority initiated its action against OpenAI, ChatGPT was blocked in Italy. The block was widely criticized in Italy and attracted some interest internationally; Canada and Germany have also considered the Italian authority's arguments. Meanwhile, a dialogue has begun between the Garante and OpenAI, and we are awaiting the outcome of the negotiations. But new stories about the impact of artificial intelligence on society are emerging all the time. Some of them are really bad. (LDB)
Opportunities and risks are not opposites
The creative work of exploring the opportunities for citizens and businesses to use artificial intelligence has just begun. The risks have only been glimpsed. A very bad dilemma opens up: on the one hand, the temptation to start with the opportunities and forget the risks; on the other, the temptation to block everything for fear of the risks and thus prevent anyone from seizing the opportunities. Many companies fear that in Europe artificial intelligence regulation, coupled with privacy regulation, will act as a decisive brake. And because such concerns are fewer in America, some companies are thinking about leaving. The solution is to stop making snap judgments about laws and technology, set the biases aside, and stop thinking of laws as brakes and technologies as risks: consider the whole. This will not be easy, so the regulatory adjustment will not be quick. But it must be done.
These are the problems that emerged, in order of priority (that is, by how precisely the problem can be stated and how readily the measures to address it can be defined):
1. The information that users enter into chats, or otherwise provide as prompts to generative AIs, is potentially sensitive data that, for now, platforms can use at will, with potentially very negative consequences for users.
2. Child protection is totally absent from the platforms. Clearer warnings would be a trivial first step, but they will probably not be enough, so platforms will have to make firmer commitments, perhaps opening up to independent researchers.
3. The information about people that platforms use to train their artificial intelligences is evidently on the net for other reasons, even when it is old, false, or defamatory. The appropriate measures are rather complex and probably not too dissimilar to those that apply to search engines, which already deal with the "right to be forgotten" and more. Some suggest that platforms should let citizens opt out, or even require them to opt in.
As one can see, this will be a long-running issue. It will involve possible GDPR reform, a renewed emphasis on antitrust measures, a strong policy for launching and supporting alternatives to the artificial intelligence oligopoly, promoting distributed architectures, opening up data to scientific research, and educating citizens. (LDB)
Facts for thought
We are facing a reality that is very difficult to deal with. Here are some cases that demonstrate it.
Snapchat's My AI warns: don't share secrets with the chatbot.
“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! All conversations with My AI will be stored and may be reviewed to improve the product experience. Please do not share any secrets with My AI and do not rely on it for advice.” (Arstechnica)
Samsung employees share trade secrets with ChatGPT
“Oops: Samsung Employees Leaked Confidential Data to ChatGPT. Employees submitted source code and internal meetings to ChatGPT just weeks after the company lifted a ban on using the chatbot.” (Gizmodo)
Experts: don't tell a chatbot anything you want to keep private.
“As the tech sector races to develop and deploy a crop of powerful new AI chatbots, their widespread adoption has ignited a new set of data privacy concerns among some companies, regulators and industry watchers. Some companies, including JPMorgan Chase (JPM), have clamped down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, due to compliance concerns related to employees’ use of third-party software. It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history. The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post. And just last week, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns after OpenAI disclosed the breach.” (CNN)
A scandal invented by an artificial intelligence can have real consequences
“ChatGPT invented a sexual harassment scandal and named a real law prof as the accused. The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence.” “One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list. The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student. A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record. “It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.” Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.” (Washington Post)
Please take a look at Reimagine Europa. A Media Ecology Research Network is being built in Brussels, and it is growing every day. I will report more on it in the next issues. Reimagine Europa.
European-style innovation
European companies should try to seize the opportunity of being in a place that thinks about the consequences of innovation. They should not see themselves as penalized by legislation that differs from that of the United States; even Americans are beginning to realize this. But at the heart of it all is the fact that Europe needs to find a way to balance its proper focus on rights with a policy that incentivizes European companies to innovate in a direction that serves the good of all. Europe is innovating in regulation. It must be able to innovate in overall policy as well. Otherwise it risks being only half understood, and losing innovative companies instead of gaining them. (LDB)
Podcasts in Italian, by me
L’altra metà del verso. Rai Radio 3
Media Ecology. Intesa Sanpaolo on air
Eppur s’innova. Luiss University Press
Ecology of screens
On the occasion of the International Conference Vivre par(mi) les écrans: entre passé et avenir, held in Lyon at the end of May, the newsletter of the International Research Group Vivre par(mi) les écrans and the Media Ecology newsletter agreed to point their respective readers to each other's content, inviting them to subscribe and to share it with their contacts. So please visit Vivre par(mi) les écrans and subscribe to the newsletter. This collaboration stems from a common project: promoting, developing, and sharing highly qualified knowledge aimed at creating tools for guidance, critique, and intervention in the field of media ecology and of our current and future life among screens, as well as fostering the social dissemination of that knowledge and those tools.