
Beyond the Hype: How to Harness AI Without Losing the Human Touch

95% of AI projects fail. Here’s how to land in the 5% - and still do work that feels human

Omri Pick

September 4, 2025

In the past year, we’ve heard a consistent ask from clients:
Can we deliver this faster - maybe with AI?

For CMOs, it’s about hitting launch dates.
For Heads of Product, it’s about de-risking the roadmap.
For research teams, it’s about scaling capacity without losing confidence.

The pressure to integrate AI is real - but unless it’s done carefully, it risks undermining the very trust and insight you’re trying to build. And while AI can augment tasks across the board - from research to design delivery - it’s easy to get caught up in the hype and lose the integrity of the work.

A recent MIT study puts this into stark perspective. It found that 95% of AI projects are failing to generate meaningful business value, largely because they’re poorly integrated and misaligned with real human workflows. Only 5% succeed, and they do so by staying focused, behaviourally aware, and strategically grounded.

What’s going wrong, and why it matters

The MIT research draws on hundreds of deployments and executive interviews. The big takeaway? Many organisations chase shiny AI projects like chatbots, dashboards, or sales automation layers that rarely survive past the pilot stage.

The real wins come from embedding AI where it fits naturally, by streamlining manual tasks, accelerating early analysis, and supporting human judgment rather than trying to replace it.

Smaller, nimble agencies are leading the charge - not because they have more AI capability, but because they have better integration, clearer workflows, and faster learning loops.

What Behavjor is seeing (and doing)

At Behavjor, we see the same challenge across sectors:

  • Pressure to deliver quickly, without compromising quality
  • Curiosity about AI’s role in design and research
  • Concern about losing the behavioural depth that makes work meaningful

Our view? AI has a place - but only when it serves the process rather than disrupting it.

We call this being human-led, or running an AI-augmented process. We believe in taking advantage of these intelligent tools for their strengths - but every task and decision must remain human-led, because experience matters, and the subtleties of expression and emotional context are critical to deep human insight.

Without personal empathy and emotional investment in understanding the problem space, it’s genuinely difficult to be creatively inspired and to deliver new, innovative solutions. That’s why we often call this part of our process ‘immersion’ - it’s a creative prerequisite for good ideas.

3 ways we use AI to augment HCD

1. Research acceleration without insight dilution

We use AI to support - not shortcut - our research. That might mean:

  • Structuring and tagging large volumes of qualitative data.
  • Transcribing interviews with built-in sentiment and theme detection.
  • Spotting behavioural patterns faster so researchers can focus on deeper synthesis.
  • Helping co-ordinate the project schedule, process, and comms.

The thing you can’t replace? Human judgment. Here are our human-led activities:

  • We always do our own synthesis - AI outputs are useful as corroborating evidence later, but not a replacement for human judgment and context.
  • Researchers decide what matters - AI can provide supporting evidence, but not conclusions.
  • We never ask AI to generate ‘insights’ - it lacks emotional and experiential context. The subtleties that lead to deep human truths can’t be derived from word frequency alone. Crafting a powerful insight still requires human understanding, intuition, and intent.

2. Faster, broader design ideation

Across ideation, prototyping, and testing, AI helps our teams move quickly by:

  • Generating early idea starters or concepts.
  • Rapidly mocking up variant prototypes.
  • Running simulations or usability benchmarks.

But we never let tools dictate direction. Designers remain in control - making judgment calls based on user context, goals, and behavioural cues.

Human-led activities:

  • Don’t let the machine lead you to conceptual mediocrity - AI tends to average out ideas, while humans go to the edges and think laterally.
  • Designers must still engage deeply with research - AI outputs aren’t a shortcut for empathy.
  • AI can’t replace creative quality. We don’t use it to write final copy or generate brand imagery - its outputs often miss tone, context, and conceptual accuracy.
  • Leaving creativity to AI is like asking it to guess your intent - quick, but rarely right. Human designers go further, connecting ideas laterally and shaping concepts with emotional and cultural accuracy.

3. Kick-starting activities across classic frameworks

AI works best when aligned with proven HCD or CCD frameworks. We use it to break the ice - getting past the blank canvas phase that often stalls progress at the start of a project. Its methodical structure helps get us to a good starting point, so that the real human thinking and doing can start where it matters most.

  • Discover: AI clusters feedback and transcripts, helping us spot emotional cues and patterns
  • Define: it drafts early problem framings, customer summaries, and positioning statements
  • Ideate: it delivers an initial wide spread of ideas that we can build on
  • Prototype: it generates variants and helps with low-fidelity assets or rapid low-code prototypes
  • Test: it helps structure tests, recruitment, and schedules, analyse feedback, and prioritise refinements

Human-led activities:

  • Humans must interpret and reframe AI-suggested patterns within real-world context - something no model can reliably do.
  • We guide problem definition based on behavioural nuance, not just text patterns - AI lacks the intent or clarity to identify what truly matters.
  • Creative direction and concept shaping stay in human hands. AI might generate options, but only humans can judge relevance, tone, and impact.
  • We apply ethical and cultural filters throughout - AI doesn’t know what’s off-brand, inappropriate, or too abstract. We do.


The result?

More time on what matters: decision-making, behavioural insight, and iteration.

But isn't everyone doing this?

Most firms say they ‘use AI responsibly.’ At Behavjor, our differentiator is behavioural expertise. We don’t just speed things up - we connect AI outputs to lived human behaviour through immersion, synthesis, and context. That way, insights remain powerful, actionable, and inspiring, not just efficient.

One quote from the MIT study captures this beautifully:
‘Most failures stem not from the tech itself, but from poor integration into the organisation’s daily workflows.’

It’s not about adding AI on top. It’s about embedding it into the way we already work - with behavioural integrity and human oversight at every stage.

Even pioneers like IDEO agree: AI can help expand creative possibilities, but real impact comes from human context. In their recent experiments, AI helped teams generate more ideas, faster - but the quality and framing of those ideas still relied on people. That’s exactly how we see it too.

Final thoughts: how we stay in the successful 5%

According to the MIT study, only 5% of AI initiatives break through and scale.

We believe they succeed because they are:

  • Chosen appropriately, with a deep understanding of users, workflows, and objectives
  • Integrated tightly into the parts of a workflow that can be viably accelerated
  • Focused on integrating into teams and shifting behaviours positively, not just on efficiency
  • Human-led, not driven by automation, hype, and hope.

That’s our approach at Behavjor, and it’s how we help clients - from enterprise brands to public sector teams - deliver better, faster, and more grounded outcomes.

We don’t just run the work for our clients - we work with them. Our process is deliberately collaborative. We embed your teams into key phases, openly share how we’re using AI, and actively transfer that knowledge into your world. That way, the benefits extend beyond the project - building internal confidence, capability, and clarity.

Because in the end, it’s not just about delivering a great outcome - it’s about enabling your team to repeat it, build on it, and scale it with confidence.

Curious how we do it? Or where AI might make sense in your design or research pipeline? Let’s chat.

Link to MIT report:
https://www.artificialintelligence-news.com/wp-content/uploads/2025/08/ai_report_2025.pdf
