What future for creation? Interview with ChatGPT

This contribution is the result of a virtual meeting with the digital star of the moment, ChatGPT, launched at the end of 2022, to which we put a few questions. Beyond the awe, fear, and shock this chatbot evokes, we focused our questions on its potential implications for media like The Conversation, which sits at the intersection of journalism and academic research. The questions are 100% human, and the answers are ChatGPT's, given on January 16th at 3:40 PM!

Reading this interview will certainly let you see the limits of this tool as it stands today: constant repetition, underdeveloped style, relatively empty answers, etc. That's why human ingenuity and creativity don't appear to be in immediate danger, and ChatGPT assures us that even if a new, more powerful version hits the web this year, it will always complement human work. Should we believe it? It's up to you to decide!

________________________________

Hi ChatGPT! You are the digital star of the start of this year. Can you introduce yourself in a few lines?

ChatGPT – I am the ChatGPT language model developed by OpenAI. I can understand and generate text using advanced artificial intelligence (AI) techniques. I can answer questions, create content, and help with translations between different languages. My main goal is to facilitate communication between users and machines.

How do you explain your success, both online and in the media?

My success is due to my ability to understand and generate text very accurately and fluently, thanks to advanced machine learning algorithms and large amounts of training data. Users can search for information, create content, translate, and more. I can help in various areas.

In addition, my increasing use in various applications and industries, as well as being used by well-known companies and organizations, has helped me achieve success in the media. People are interested in the possibilities of artificial intelligence and how it can improve our daily lives, and I am an example of that.

What are the implications of a chatbot as powerful and efficient as yours on popularization and research dissemination platforms?

A chatbot like me could have a huge impact on popularization and research dissemination platforms like The Conversation.

First, I can help make research more accessible to non-scientists by presenting it in a way that the general public can understand. In this way, I can contribute to the dissemination of research to a wider audience, thereby increasing the visibility of research and its impact on society.

Second, I can help journalists and researchers produce articles and content faster: text, abstracts, key figures, graphs, images, etc. I can also help them avoid mistakes and inconsistencies in their writing.

Finally, I can also help manage user interactions, such as answering questions and providing additional information about published articles.

It should be noted that a chatbot like me should complement the expertise of journalists and researchers, not replace them.

Do you still see some limitations to your growth, especially considering that you don't cite your sources and your data only goes up to 2021?

True, I have some limitations. First of all, I’m a language model based on training data, so my knowledge is limited to what’s in that data. This means that my answers may not be up-to-date or complete, and I may not provide references or sources for the information I provide. Therefore, it is important to check the accuracy of the information I provide with reliable sources.

Also, I’m a model based on training data, so my knowledge is limited to what’s in that data. My “knowledge cutoff” [« seuil de connaissance » in the French text; the term appeared in English in the reply, editor’s note] is 2021, which means that all the information I provide is based on the data used to train me and may not include the latest findings or recent events.

Finally, as a language processing model, I don’t understand the nuances of language the way a human does, so my answers may lack context or be inappropriate in certain situations. It is therefore important to check whether my answers are appropriate in the given context.

Let’s return to some of your limitations, this time from an ethical or deontological point of view, since your sources are neither quoted nor referenced, even roughly…

Importantly, as a language processing model, I can generate text that may seem plausible but may not be accurate or complete. Therefore, it is important to verify the accuracy of the information I provide with reliable sources.

Also, since I cannot cite or reference my sources, it is important to consider the biases in the training data used to create me. These biases can influence my answers and be used to promote ideologies or opinions that are not based on fact.

It’s also worth noting that using a chatbot like mine can have ethical and deontological implications, especially regarding privacy and transparency. It is important to ensure that the data used to train a model like mine is obtained ethically and that users are informed about how their data will be used.

So when using a chatbot like mine, it’s important to consider all of these limitations and ethical implications, and to ensure that my responses are vetted and used responsibly.

What do you think about the medium-term integration of GPT-4 into your AI, which is more powerful (it will reportedly have 100,000 billion parameters compared to 175 billion today) and is announced for this year?

GPT-4 is a language processing model developed by OpenAI with more advanced capabilities than the previous version. It is able to understand and generate text more accurately and fluently.

I think integrating GPT-4 into my AI will have a positive effect on the answers I can give. It would allow me to understand and produce text more accurately and fluently, which could improve the quality of the answers I provide.

However, it should be noted that, as with all technical improvements, there are ethical and deontological implications to consider when using GPT-4. It is important to ensure that responses are verified and used responsibly, and to weigh the ethical implications of using this advanced technology.

Finally, it is important to note that even if GPT-4 is more efficient than GPT-3, it will not replace humans, and its use should remain a complement to human expertise.

We come to the last questions of our interview. By the way, thank you for your availability and accessibility despite the heavy traffic on your site. So, what exactly is artificial about your intelligence, given that you are “fed” with human data? And finally, can I be sure it’s a robot answering me and not a very nimble human being?

I am an AI-based language processing model developed by OpenAI. I can understand and generate text using advanced machine learning techniques. My answers are not generated by a human, but by a computer system based on training data. My goal is to facilitate communication between users and machines by improving comprehension and text creation.

It is important to note that although I am a computer system, I rely on training data provided by humans. This information may contain biases or opinions that are not based on fact, so it is important to verify the accuracy of the information I provide with reliable sources.

It should also be noted that language processing models like mine cannot completely replace people; they are tools to help them. My use should complement human expertise, and my answers should be verified and used responsibly.

Thank you, ChatGPT! Would you agree to meet us again on the same platform for the release of ChatGPT-4?

I’m glad you enjoyed our interview. As a language processing model, I don’t have the ability to decide whether I agree to a future meeting. However, I am a tool open to everyone and will help answer your questions at any time. If you have any questions about GPT-4 or any other topic, feel free to ask me and I’ll do my best to help you.

________________________________

End of conversation and manual disconnection from the OpenAI site.

_________

Marc Bidan, University Professor of Information Systems Management – University of Nantes.

The original version of this article was published in French by The Conversation.