AI hype is loud. Accountability is louder

Between narrative marketing and systemic risk, companies face a strategic choice: adopt critically or fall behind

The hype has died down a bit, but a recent post on X about AI by American entrepreneur Matt Shumer—titled “Something big is happening”—continues to spark discussion and debate. Shumer frames the current moment as eerily reminiscent of February 2020. Back then, a small group of people warned of an impending crisis, while most looked the other way, dismissing it as alarmist. Now, he suggests, history is repeating itself—except this time the upheaval won’t be temporary, and its scale will be far greater.

What truly resonated with readers, however, was his take on who will feel the impact next. Over the past year, those in tech have watched AI evolve from a convenient assistant into something that often outperforms them at their own jobs. In Shumer’s view, that same reckoning is about to spread across almost every field, and not years from now, but in the very near future.

A matter of judgment—or just marketing?

Is AI developing taste, judgment, and decision-making ability—or does it just look that way? That’s the question at the heart of Shumer’s reflections on the release of GPT-5.3 Codex. His take is bold: the technology has evolved past the point of merely executing instructions. Critics, however, see something else. To them, Shumer’s framing reads as narrative marketing—a compelling story, but not an accurate one. The real story, they counter, is one of refinement and scale, not a change in kind. What feels like a leap forward is, in essence, experiential rather than ontological.

The deeper friction comes down to a single claim: Shumer’s assertion that the line between human and machine decision-making has become irrelevant. For many, that line hasn’t blurred at all. On the contrary, it matters now more than ever. That is the argument put forward by Fortunato Costantino, a legal scholar and professor of General Theory of Sustainability and Social Innovation at the European School of Economics. In a recent article published in Fortune Italia, he examines the most problematic aspects of Shumer’s rhetoric—specifically, the claim that the distinction between human and machine decision-making no longer matters.

For the legal scholar, it does matter—immensely so. It matters for decision-making accountability, for governance, for use in critical contexts ranging from business and public administration to geopolitics. To claim that the distinction is no longer relevant means, implicitly, shifting decision-making authority from the human sphere to the algorithmic one. And this is precisely where AI risks ceasing to be a tool and instead becoming an alibi: an elegant way to deflect responsibility for choices that remain, ultimately, human.

The blurring line between support and delegation

From this perspective, as Costantino notes, something truly significant is unfolding. It lies in the growing ability of machines to obscure their instrumental nature, making the line between cognitive support and delegation increasingly thin. Rather than seizing cognitive capital by force, AI absorbs it incrementally, offering in exchange greater efficiency, faster processing, and a lighter cognitive load. The result is a deep-seated reliance on external systems for tasks that have long been considered the bedrock of intellectual autonomy. Mastering the operation of AI tools is no longer sufficient. What is required is a deeper literacy: the ability to grasp how these systems function, where they fall short, what biases they carry, and how they are designed to persuade. The persuasive dimension of AI is not an incidental flaw; it has become an inherent feature of today's technological landscape.

Matt Shumer’s post delivers a troubling message—one that prompts a closer look at why anxiety-driven narratives resonate so powerfully right now. The language he employs taps into something real: fear and uncertainty, two feelings now widespread across the world.

What stands out is how his essay moves from collective unease to individual action, bypassing community entirely. But it also sidesteps the different levels of responsibility and the power dynamics at play. Rather than placing accountability on those who actually own, develop, and distribute technologies with significant societal impacts, his rhetoric shifts the entire weight of the solution onto the individual. What remains is a solitary figure—afraid, disconnected, searching—and a clear directive: follow him, subscribe to ChatGPT. In that dynamic, the isolated individual becomes the ideal customer. The one with a community? Far less receptive to the pitch.

When AI begins to build itself

This tension between individual responsibility and systemic power finds a concrete echo in a concept introduced in a March 2026 paper: the oversight gap. The study in question, Measuring AI R&D Automation (by Alan Chan, Ranay Padarath, and Joe Kwon of GovAI, together with Hilary Greaves and Markus Anderljung of the University of Oxford), focuses specifically on the use of AI to conduct research and development on AI itself, a phenomenon the authors term AIRDA, for AI R&D automation. At the core of their analysis lies a governance problem: as AI begins to automate its own advancement, the gap between necessary oversight and actual control threatens to widen. And governance, the authors argue, begins with measurement.

Server room at CERN (Switzerland), an example of the physical infrastructure enabling artificial intelligence to begin conducting R&D on itself.

Measuring automation: metrics for AI accountability and governance

Automation could either deepen this gap, as systems grow more complex and human involvement shrinks, or help close it, if AI is harnessed to monitor and govern other AI. To address this uncertainty, the authors propose fourteen concrete metrics. These span several categories: experimental metrics measuring how well AI performs AI research, comparing humans, AI, and mixed teams; organizational metrics tracking how much time researchers delegate to AI and how many critical decisions involve it; operational metrics documenting errors in AI-generated results and instances where AI subverts controls; and economic metrics examining budget allocation between computation and human labor, as well as shifts in researcher headcount. No single metric tells the whole story. Together, they offer a framework for understanding a phenomenon that is otherwise easy to overlook.
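To make the framework concrete, here is a minimal sketch in Python of how an organization might log observations across the four categories the paper describes. The category names follow the article above; every field name, metric name, and value below is an illustrative assumption for this sketch, not something taken from the paper itself.

```python
# A minimal sketch, assuming an organization wants to track observations
# across the four metric categories described in the article. Metric names
# and values are hypothetical illustrations, not the paper's definitions.
from dataclasses import dataclass, field
from enum import Enum


class Category(Enum):
    EXPERIMENTAL = "experimental"      # how well AI performs AI research
    ORGANIZATIONAL = "organizational"  # delegation of time and decisions to AI
    OPERATIONAL = "operational"        # errors and control subversion
    ECONOMIC = "economic"              # budget allocation and headcount shifts


@dataclass
class Observation:
    """One recorded data point for a single metric."""
    metric: str         # e.g. "fraction_of_critical_decisions_involving_ai"
    category: Category
    value: float
    period: str         # reporting period, e.g. "2026-Q1"


@dataclass
class MetricLog:
    """Collects observations so trends can be compared across categories."""
    observations: list[Observation] = field(default_factory=list)

    def record(self, metric: str, category: Category,
               value: float, period: str) -> None:
        self.observations.append(Observation(metric, category, value, period))

    def by_category(self, category: Category) -> list[Observation]:
        return [o for o in self.observations if o.category == category]


if __name__ == "__main__":
    log = MetricLog()
    # Hypothetical readings, one per category, for a single quarter.
    log.record("ai_vs_human_benchmark_score", Category.EXPERIMENTAL, 0.72, "2026-Q1")
    log.record("researcher_hours_delegated_to_ai", Category.ORGANIZATIONAL, 0.35, "2026-Q1")
    log.record("ai_generated_result_error_rate", Category.OPERATIONAL, 0.08, "2026-Q1")
    log.record("compute_share_of_rd_budget", Category.ECONOMIC, 0.61, "2026-Q1")
    for obs in log.by_category(Category.OPERATIONAL):
        print(obs.metric, obs.value, obs.period)
```

Keeping the observations side by side in one log, rather than reducing them to a single score, mirrors the paper's point that no single metric tells the whole story: it is the joint picture across categories that reveals whether automation is deepening or closing the oversight gap.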

The paper does not pretend to have all the answers. Instead, it issues a call to action: begin measuring the automation of AI research now, before it runs ahead of our ability to govern it. Without data, decisions are made in the dark. With data, there is at least a chance to steer development intentionally.

A strategic crossroads for business

One narrative warns of an imminent upheaval, framing it as an individual reckoning to be met with subscription and adaptation. Another insists that the line between human and machine decision-making still matters – profoundly – and that blurring it serves to absolve those who hold real power. Meanwhile, AI begins to automate its own advancement, accelerating the very systems that companies are being asked to trust.

For businesses, the stakes are clear. Adopting AI solutions without critical scrutiny risks outsourcing judgment to systems whose inner workings remain opaque, whose biases are often unexamined, and whose persuasive design can mask the transfer of responsibility. But the opposite approach, retreating entirely and refusing to engage, carries its own risks: falling behind, ceding competitive ground, and losing the organizational literacy needed to operate in a landscape that is rapidly being reshaped. What is required is neither blind adoption nor wholesale rejection, but informed, deliberate engagement.
