Human priorities, business context, and ethical principles shape AI by design, ensuring continuous value throughout its lifecycle
Innovation vs impact
Today, AI industry giants are racing to develop advanced Large Language Models (LLMs) and increasingly capable agentic AI. Mistral’s Le Chat, OpenAI’s ChatGPT, and Google’s Gemini all help users find information, generate content, and automate tasks. Adoption of these systems is accelerating quickly: the share of enterprises using AI jumped from 8.0% in 2023 to 13.5% in 2024, an increase of 5.5 percentage points in just one year.
Yet AI itself is not new, so what explains this acceleration? Two key shifts stand out. The first is the Transformer architecture: a groundbreaking approach to processing sequential data. Combined with increased computational power, this architecture has driven much of the recent progress in AI. The second shift is the volume of data (e.g. text, video, and audio) on the internet. This data fuels AI training, enabling systems to learn statistical patterns, for example that “king” relates to “queen” as “man” relates to “woman”.
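The “king”/“queen” relation can be made concrete with a toy sketch. The four-dimensional vectors below are hand-crafted for illustration only (real embeddings are learned from data and have hundreds of dimensions), but they show how vector arithmetic can recover such an analogy:

```python
import math

# Hypothetical toy embeddings; dimensions loosely encode
# [royalty, masculinity, femininity, commonness].
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.1],
    "queen": [0.9, 0.1, 0.8, 0.1],
    "man":   [0.1, 0.8, 0.1, 0.9],
    "woman": [0.1, 0.1, 0.8, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The classic analogy: king - man + woman should land near queen.
analogy = [k - m + w for k, m, w in zip(
    embeddings["king"], embeddings["man"], embeddings["woman"])]

closest = max(embeddings, key=lambda word: cosine(analogy, embeddings[word]))
print(closest)  # with these toy vectors: queen
```

Production systems learn such vectors from massive corpora rather than hand-crafting them, but the geometric intuition is the same.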
As such, the focus on architecture, data, and other technical advancements is justified. Cutting-edge generative AI requires ingenious algorithms and massive datasets, along with sophisticated models, computational power, and specialized hardware. However, technical innovation alone doesn’t guarantee impact or value. In fact, research reveals a stark reality: most AI projects never reach production, and those that do often fail to deliver significant return on investment (ROI).
We argue that while foundational technical innovation in AI offers enormous opportunities, it is not enough. Translating technological breakthroughs into real-world value remains a major challenge. Focusing too heavily on technical aspects can result in impractical solutions that never create real impact.
Beyond architectures, models and data
How can we ensure AI creates an impact? Focusing only on architecture and data won’t do. It creates a critical blind spot: business and user needs. Beyond the technical aspects, AI systems are embedded in organizational processes and shaped by diverse stakeholders who influence their development, impact, and performance.
For instance, data annotators contextualize information for AI models, teaching them to recognize images or sounds. Regulators, like those in the EU, set boundaries, such as banning AI social scoring through the AI Act. Executives and decision-makers determine which AI systems are built by controlling budgets. All these groups play a crucial role in shaping AI.
Ultimately, however, end users decide whether an AI system is practical and valuable. Even the most advanced systems fail if they don’t meet user needs. Without adoption, the technology’s potential impact disappears. This aligns with a key finding from MIT’s 2025 report The GenAI Divide: generative AI projects often fail because they ignore how people interact with the technology.
History shows the path forward
To move forward, we can take inspiration from the past. Thirty years ago, as computers and the internet became mainstream, the focus was on technical advancements: processing power, storage, and operating systems. This was natural; the first challenge was making the technology work.
But as their use grew, so did questions about their value. In 1997, Harvard Business Review published “The Real Problem with Computers”, questioning whether computers truly boosted business productivity. Similarly, in 1995, Newsweek’s “Why the Web Won’t Be Nirvana” argued that the internet’s potential was overhyped.
These questions on value led to the development of human-centered design, a discipline that prioritized users over technology. The focus expanded beyond technical functionality to ask: What real value do these systems offer? How do they align with how people actually work? How can we protect users from unintended harm?
We believe AI is now undergoing a similar transformation. The focus is shifting from technical capabilities to users, though with added complexity. AI systems are not static; they learn, adapt, and make autonomous decisions. Our questions must reflect this evolution. Businesses must ask: What value can AI truly provide to users? How can we ensure these systems align with team workflows as they learn and adapt? What safeguards are needed to protect users from automated decisions?
Users across the AI development lifecycle
A clear understanding of the development process is the first step to building human-centered AI solutions. One widely used framework is the Cross-Industry Standard Process for Data Mining (CRISP-DM), introduced in 1999 and now a field standard. CRISP-DM emphasizes an iterative approach, with data as the core driver of each phase. However, as discussed in this article and further explored in The Power of WHY in Data Science, this data-centric focus can limit AI projects.
CRISP-DM 2.0: placing business challenges and users at the core
In “CRISP-DM 2.0 for the Semiconductor Industry and Other Complex Domains,” the authors propose an updated version. Here, business challenges and research problems (not data) are placed at the core, giving users greater emphasis. The framework also introduces two key improvements: all phases are interconnected, reflecting the dynamic nature of AI projects, and ethical, moral, and legal considerations are explicitly integrated throughout the AI lifecycle.


Although the number of phases and the technical activities within each remain unchanged, the two frameworks are fundamentally different. The updated version requires a distinct skill set, mindset, and approach, because business objectives and user needs are considered throughout the project rather than only at the beginning.
User-centered considerations across each phase of the AI lifecycle
The following overview illustrates how each phase incorporates these considerations, ensuring that AI initiatives deliver real-world value and remain aligned with stakeholder expectations.
- Business Understanding – Business objectives and user needs are explicitly defined, establishing the foundation for the entire project.
- Data Understanding – Data is analyzed to ensure it is relevant, complete, and capable of addressing the defined research questions, solving business challenges, and meeting user needs.
- Data Preparation – Data is processed and structured to meet both technical requirements and the practical needs of users and stakeholders.
- Modeling – Model design and training decisions are guided by business goals and user expectations, not just technical performance metrics.
- Evaluation – Models are assessed based on their ability to deliver real-world value and satisfy user requirements, alongside standard technical evaluation.
- Deployment – Implementation is aligned with workflows and processes, ensuring that the AI system provides actionable insights and tangible benefits for users.
At every stage, feedback loops with stakeholders ensure that evolving user needs and business priorities are continuously addressed, while ethical, moral, and legal considerations remain aligned with best practices, laws, and regulations.
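The phases and feedback loops above could be sketched as follows. This is a rough illustration, not an artifact of the CRISP-DM 2.0 paper itself; the phase-level questions and the `next_phase` helper are our own simplification:

```python
# Illustrative sketch: each lifecycle phase paired with a user-facing question,
# plus a transition helper in which stakeholder feedback can hold the project
# in place or send it back to any earlier phase (all phases are interconnected).
PHASES = [
    ("Business Understanding", "Are business objectives and user needs explicit?"),
    ("Data Understanding",     "Can this data answer the defined research questions?"),
    ("Data Preparation",       "Does the prepared data serve users and stakeholders?"),
    ("Modeling",               "Do design choices reflect business goals, not just metrics?"),
    ("Evaluation",             "Does the model deliver real-world value to users?"),
    ("Deployment",             "Does the system fit existing workflows?"),
]

def next_phase(current, stakeholder_signoff, revisit=None):
    """Advance the lifecycle, stay put, or jump back based on stakeholder feedback."""
    names = [name for name, _ in PHASES]
    if revisit is not None:
        return revisit            # feedback can reopen any phase, not just the previous one
    if not stakeholder_signoff:
        return current            # stay until users and stakeholders agree
    i = names.index(current)
    return names[min(i + 1, len(names) - 1)]

print(next_phase("Modeling", stakeholder_signoff=True))   # Evaluation
print(next_phase("Evaluation", False, revisit="Business Understanding"))
```

The point of the sketch is the control flow: progression is gated on stakeholder sign-off rather than on technical completion alone, and a dead end can legitimately return the project to business understanding, as the case study below illustrates.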
Case study: predictive maintenance
Many AI projects fail to deliver lasting value, but one manufacturing company’s experience offers a valuable lesson (for confidentiality reasons, certain details, names, and events have been modified). The company launched a predictive maintenance project to increase equipment uptime and cut costs. Initially, the goal was to predict product quality by monitoring equipment performance. However, a feasibility study showed that this approach would not deliver the expected impact: the business and data understanding phases, along with light data preparation to test the available data, revealed that these measurements lacked sufficient predictive power for quality.
The project returned to the starting point with valuable lessons learned. The team shifted focus to monitoring the health of equipment components, technically a different challenge from equipment performance. While product quality improved indirectly thanks to better-performing equipment, the primary goal was increasing uptime to boost efficiency, productivity, and cost savings.
It is important to note that this success was achieved only through iterative collaboration with users, domain experts, and stakeholders across the phases. By engaging the right people throughout the process, the team was able not only to identify the right problem to solve but also to pivot quickly when the project reached a dead end. This uncovered new opportunities grounded in a deep understanding of business objectives and user needs, which mattered all the more because stakeholders had never explicitly requested predictive maintenance. This human-centered approach enabled the project to create meaningful impact, including an 18% reduction in cumulative downtime, which translates into direct cost savings in the low millions of euros per year.
Challenges
Shifting the focus of AI projects from technical aspects to user needs presents major challenges. This approach demands a mindset and skill set centered on understanding user needs, business goals, and context: areas often overlooked in traditional AI education, which prioritizes technical skills. As a result, many AI professionals lack the tools to drive human-centered outcomes and create lasting value.
The challenge is compounded by organizational and leadership gaps. Many senior leaders and organizations lack the awareness to advocate for human-centered AI. Structural barriers, such as viewing AI as a plug-and-play solution rather than an integrated enabler, further complicate adoption. Misaligned incentives, silos, and limited cross-functional collaboration make it difficult to prioritize users, reducing the chances of delivering meaningful, lasting impact.
These challenges can be addressed by reshaping education to balance technical skills with user understanding, business context, and collaborative problem-solving. Organizations should support this by fostering cross-functional collaboration and aligning incentives with real value, ensuring AI solutions create lasting impact.
The EU leading the way
It is interesting to note that the EU has been pioneering human-centered AI since as early as 2019, when its AI strategy placed “people at the centre of the development of AI”. Today, this vision is tangible: EU funding, such as Horizon Europe grants, now often requires projects to consult end users during development. The EU AI Act is the strategy’s cornerstone, embedding human-centric principles into law.
Some organizations view European AI regulations skeptically, seeing them as obstacles rather than opportunities. While these policies, focused on safety, transparency, and human-centric design, are sometimes criticized for slowing innovation or adding bureaucracy, they ultimately codify what this article argues is needed to create value in any case: a focus on people.
By promoting human-centered practices, EU regulations can help ensure AI solutions are not only innovative, but valuable, ethical, and responsible. And this approach seems to be gaining traction. Globally, the EU’s approach is influencing standards, with countries like Brazil, Canada, and India adopting similar frameworks.
Building AI with people, not for people
In today’s AI landscape, where value, trust, and safety are under intense scrutiny, a human-centered focus is essential. Involving people in AI development creates understanding, builds trust, and promotes ethical outcomes, creating systems that deliver real value for businesses, individuals, and society.
Success in AI requires more than developing AI for people; it demands developing AI with them. Embedding users and fostering collaboration throughout the process ensures alignment with real needs, business goals, and societal impact. The challenge is clear: organizations must adopt human-centered AI practices, while AI practitioners need to expand their skills beyond technical expertise to build ethical, trustworthy, and value-driven solutions.
References:
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (Vol. 30, pp. 5998–6008).
- Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025, July). The GenAI Divide: State of AI in Business 2025 (Report v0.1). Project NANDA, MIT.
- Gonzalez Huesca, J. M., & Pechenizkiy, M. (2025). CRISP-DM 2.0 for the Semiconductor Industry and Other Complex Domains. In Proceedings of the AI2ASE Workshop. Association for the Advancement of Artificial Intelligence (AAAI).




