
At the Design Leadership Summit 2025 in Toronto this fall, my colleague Darcy Reaume and I facilitated an interactive session for over 50 of Toronto’s top design leaders. We spoke with professionals like us who are struggling in the trenches, under pressure to navigate the black hole of possibility that is AI. From our conversations we gleaned an honest snapshot of where the tech community is right now, and where people are genuinely stuck.

If you’re a leader feeling pressure to "do something with AI" while simultaneously unsure what that something should be, you're not alone. Here's what we heard.

Everyone is Moving, But No One Knows Where

The overwhelming sense across our sessions was motion without clear direction. Organizations are running AI experiments, proliferating features, exploring new tools, and analysing what competitors are doing. There's a lot of activity. And to be fair, some of it is working. Rapid prototyping is faster, Model Context Protocol (MCP) means faster context reviews and less grind in tracking tasks, idea generation has more range, and AI is helping designers articulate their thinking in ways that bridge communication gaps. The problem isn't that people aren't trying things! It's that all this exploration is happening without a coherent sense of what success looks like or where any of this is headed.

Design leaders described feeling caught between the pressure to move quickly and the absence of any framework for deciding what to move toward. Teams are busy, but it's an anxious kind of busy. The kind where you're not sure if you're making progress or just making a mess. When everything feels like it might be important, it becomes nearly impossible to distinguish signal from noise.

This scattershot approach isn't necessarily wrong - exploration is how you learn, after all - but it's taking its toll. People are tired. And underneath the exhaustion a more uncomfortable question is starting to surface: what are the unforeseen consequences of building things we can't fully see because we have to move at lightning speed? Frankly: are we breaking things beyond repair in the rush to realize some kind of AI, fast?

The Foundation Isn't Holding

As organizations push further into AI implementation, they're discovering that the infrastructure they assumed would support this work isn't holding up the way they expected. This isn't about organizational immaturity or lack of preparation. It's more fundamental than that.

Design systems exist. But AI tools aren't reliably ingesting them. Or when they do, they don't treat them as inviolable. And even when they do ingest them properly, if documentation is unclear or inconsistent, AI hallucinates. The foundation problem runs both directions: AI doesn't respect what's there, and what's there often isn't good enough for AI to work with reliably. Processes exist, but there is a resurfacing perception that Design Thinking fundamentally slows progress. The guardrails that designers have spent years building - human-centred research insights, design systems, user-centricity, accessibility, brand integrity, to name a few - are being bypassed or ignored by the very tools that are supposed to accelerate design work.

This creates a strange dynamic. Organizations have invested heavily in design infrastructure, but that investment isn't paying off in the AI context the way anyone expected. The tools that promise to make product teams more efficient are simultaneously making it harder to maintain the standards that define good design: not because that infrastructure is poorly built, but because AI isn't respecting it.

And there's a generational tension emerging. Younger designers are increasingly looking for ways to bypass AI tools altogether, choosing manual approaches to preserve the integrity of their craft and avoid work that feels compromised by automation. They can spot AI-generated content, and they don't want their work associated with it. What seemed like a shortcut to efficiency is becoming a liability for brand equity and team culture. Organizations are investing in AI to accelerate design work while the next generation of their talent pool is actively resisting it to protect the quality and authenticity of what they create.

No One Can Agree on What We're Trying to Accomplish

Beneath the readiness question sits an even thornier problem: conflicting objectives. Design leaders described working in environments where different stakeholders want fundamentally different things from AI, and no one has successfully aligned those competing visions.

Some organizations are chasing speed. Others are focused on cost reduction. Some are trying to use AI to free up designers for more strategic work, while others are expecting AI to be a magic key that unlocks infinite productivity. There are initiatives driven by genuine user needs sitting alongside initiatives that feel like solutions in search of problems. Investment is happening, tools are being purchased, teams are being asked to adopt new workflows, but the "why" behind any of it remains murky.

Without clear goals, everything becomes harder. How do you measure success when success hasn't been defined? How do you prioritize when the goal posts are always moving? How do you make thoughtful decisions about where AI fits and where it doesn't when you haven't agreed on what problem you're trying to solve? The absence of coherent objectives isn't just making AI work difficult. It's raising more fundamental questions about what design is supposed to be doing in the first place.

What is Design's Role in All of This, Anyway?

This is where things get existential. Multiple design leaders voiced versions of the same concern: Is AI transformation offering value to humans, or just to businesses?

There's a real anxiety underneath this question. As organizations rush to implement AI, design leaders are finding themselves pressured to move quickly on initiatives that don't feel connected to design's actual value. The goals feel more like mandated tech adoption than solving problems. There's a nagging sense that while everyone is focused on AI capabilities, the fundamental design challenges - understanding user needs, creating coherent experiences, building systems that actually work for people - are getting lost.

Leaders are being asked to make decisions about AI readiness, tool selection, and implementation strategy, but many don't feel equipped to evaluate these things. They know the value of design thinking, of investigating the problem space, of pushing to understand the problem before making sweeping decisions. But they’re worried about slowing things down; about being seen as obstacles rather than enablers. Mostly, they're worried that in the rush to adopt AI, organizations are losing sight of what design is supposed to be optimizing for in the first place: the human experience.

How We're Thinking About This

At Architech, we've been working with organizations navigating these exact challenges, and we've developed an approach that starts with three philosophies:

1. Assess AI maturity first

How ready are you to deploy AI in the first place? Answering that means three things. First, assess your organization's actual AI maturity - not what you wish it were, but where you honestly are right now. Second, triage where AI genuinely fits versus where it's being forced into spaces it doesn't belong. And third, evaluate whether your foundation - your systems, your processes, your ways of working - can actually support what you're trying to build.

It’s the evergreen design thinking process: understand your problem space first. Don’t sacrifice the human-centred research. Aim before you fire. Be sure the investment you're making will pay off.

2. Define the constitution and governance

Instead of jumping straight to implementation, we're recommending that teams start by collaborating to spec out, together, what they're trying to build, what success looks like, and what guardrails need to be in place. It sounds simple, but it changes the conversation. It forces alignment on objectives before resources get committed. It surfaces gaps in readiness before they become expensive problems. And it creates an inviolable rule set for any AI tools that get integrated.

We're also developing practical approaches to the transparency and governance challenges that came up repeatedly when working with AI-generated code: understanding when to trust it, when to question it, and how to maintain quality standards in a workflow that increasingly involves non-human contributors.

3. Execute as partners, not consultants

The partnership model matters too. At Architech we don't fly in, drop in a framework or a piece of software, and disappear. We work through the messy implementation alongside your team. We're partners in figuring out what works, not vendors selling a pre-packaged solution. Because honestly, no one has this fully figured out yet. We're all building the future together.

Where We Go From Here

The design community, and the tech community at large, is in a genuinely uncertain moment. The exploration is exhausting without clear direction. The infrastructure we built isn't holding up the way we expected. Objectives remain misaligned. And the fundamental question of what AI should be optimizing for - humans or business - sits unresolved. But uncertainty doesn't mean paralysis. It means being thoughtful about where you are, honest about what you don't know, and deliberate about how you move forward.

If any of this resonates - if you're feeling the same pressures, wrestling with the same questions, or stuck in the same places - we're here for that conversation whenever you're ready to have it.


Nick Alexander is the Design & Innovation Lead at Architech, where he focuses on practical approaches to AI integration and design practice evolution. The interactive session referenced in this piece was co-facilitated with Darcy Reaume at the Design Leadership Summit in Toronto.
