«Advanced AI systems must not be able to do everything on their own»

Interview with Domenico Talia, Professor of Data Processing Systems at the University of Calabria, on the implications of an autonomous AI scenario

Experts and superforecasters remain divided on AI’s future, torn between expectations of unprecedented progress and fears of catastrophic outcomes. At the center of the debate lies a crucial question: what would happen if AI became capable of accelerating its own evolution? A recent pilot study by METR has begun exploring this possibility, assessing probabilities and impacts through judgmental forecasting techniques.

To delve deeper into the implications of these scenarios, we interviewed Domenico Talia, Professor of Data Processing Systems at the University of Calabria. He is an expert in the field and author of several books on Big Data.

Photo: Professor Domenico Talia of the University of Calabria
If an AI could truly accelerate its own evolution, what would be the most immediate practical benefits (e.g., in medicine, materials science, or energy)? And what are the most likely risks?

We are already seeing the benefits of AI systems in many fields and across all industrial sectors. In practice, AI systems are already used to identify diseases and define new treatments, but also to drive innovation in public administration, save energy, support precision agriculture, and much more. Further advances, which will certainly come, will allow us to solve other problems concerning people’s lives and the organization of society.

However, the risks are considerable when we face systems with greater autonomy and greater independent problem-solving capability. We must consider that the most advanced AI systems are in the hands of a few players who hold enormous advantages, even over entire nations. Furthermore, when the use of these systems is not oriented toward the good of people and communities, it introduces new inequalities and allows an uncontrolled exercise of their new power of manipulation.

What would be the main risks linked to an AI that develops new capabilities without human supervision? And what governance or control tools might be necessary to regulate AI if it becomes capable of evolving autonomously?

The “replacement” approach that some big players have chosen in creating AI systems is highly risky, both for the role of humans on Earth and for the organization of democratic societies. There is a strong push for systems, including robotic ones, that completely replace human beings, whereas AI systems should be an aid to people. We know they are certainly a great help, but their potential autonomy calls into question the way societies have functioned so far.

For this reason, clear and public rules are needed to define the limits of autonomy for the AI systems that are developed and sold. At the same time, those who develop these systems should also set ethical and professional rules for themselves, leaving room for human control in the operation and use of these systems.

Is it technically possible to create an “emergency brake” on systems that self-modify?

I wouldn’t talk about an “emergency brake”, but rather about ethical and human-centric development. Advanced AI systems must not be able to do everything on their own: they should not, “by design”, have the capacity for free self-evolution without human control. In short, I believe it’s not a matter of inserting an emergency exit into these systems, but of designing and building them so that they must still refer to the people using them whenever they need to make decisions about themselves or others. This is a complex issue that must be addressed to avoid negative effects in the future.
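As a rough illustration of the human-in-the-loop principle Talia describes, the sketch below gates a self-modification step behind explicit operator approval. This is a minimal sketch for illustration only; the names ProposedChange, request_human_approval, and apply_change are hypothetical and not taken from any real system.

```python
# Minimal sketch (hypothetical): a human-approval gate that an AI system
# must pass before applying any self-modifying change. By design, the
# system cannot act on itself without explicit human consent.

from dataclasses import dataclass


@dataclass
class ProposedChange:
    description: str  # human-readable summary of the proposed change
    risk_level: str   # e.g. "low", "medium", "high"


def request_human_approval(change: ProposedChange) -> bool:
    """Present the proposed change to a human operator and wait for a decision."""
    print(f"Proposed change: {change.description} (risk: {change.risk_level})")
    answer = input("Approve this change? [y/N] ").strip().lower()
    return answer == "y"


def apply_change(change: ProposedChange) -> None:
    print(f"Applying: {change.description}")


def self_update(change: ProposedChange) -> None:
    # The decision point is deliberately outside the system's own control.
    if request_human_approval(change):
        apply_change(change)
    else:
        print("Change rejected by human operator; nothing applied.")


if __name__ == "__main__":
    self_update(ProposedChange("retrain ranking model on new data", "medium"))
```

The point of the pattern is that the approval step is structural, not optional: the system has no code path that modifies itself while bypassing the human decision.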

If AI could accelerate its own evolution, would we see a democratization of research (broader access to discoveries) or, on the contrary, a concentration of power in a few hands?

The concentration of power in the hands of a few entities that develop and market AI systems is already a reality, and one that needs to be limited. These entities have an extraordinary competitive advantage. This also gives them a political role in influencing large masses of people, which is unacceptable in a democracy. As for the democratization of research, it can be achieved if the results of new AI-supported research activities are made available to everyone and are reproducible by everyone. In this direction, the use of open AI systems and the public availability of data are fundamental elements.

Regarding scientific research, can we talk about research autonomy without the AI having the ability to define and evaluate scientific hypotheses? How far are we from this milestone?

AI systems have already effectively “won” two Nobel Prizes in 2024, for physics and chemistry. This is formidable proof of how AI is changing the scientific research process. We are not at all far from the ability of machine learning and generative AI systems to formulate new hypotheses and seek innovative solutions based on the enormous wealth of scientific content they can access. There are several projects in the world to create “virtual scientists” capable of conducting research and finding new scientific results autonomously. Naturally, we must be very careful with the results they produce. AI systems are not perfect. They still have significant error rates, and they could lead to major mistakes without human oversight.

How could we guarantee the robustness and transparency of results produced by an AI that modifies its own models? And how plausible is it that AI will develop credible scientific research and theories?

On the possibility of AI systems developing credible scientific research and theories, I have already addressed this, and I confirm it: AI is significantly changing the way research is done in laboratories and universities.

Robustness and transparency require serious investment in the development of safe and responsible AI systems. Safety and accountability are important research and development topics in the field of AI, and those who develop machine learning systems will need to increase their investment in these areas. At the same time, the laws and regulations being passed by governments and parliaments must place particular emphasis on these aspects and compel companies to create safe and reliable systems.
