Practical perspectives on digital delivery, drawn from our work and the industries we support. This edition explores AI-led workflows, evolving search behavior, and the tools we believe are worth paying attention to.

Welcome to the first Quarterly Briefing Note from the Wyoming Interactive team.
To say the last 12 months in digital marketing have been “dynamic” is a wild understatement. While the advent of AI has left the industry spoiled for choice when it comes to new tools and platforms, many teams have found that the time and effort needed to learn and apply them often outweigh the benefits they promise.
There’s no question that AI offers significant benefits for digital delivery, but it certainly isn’t a panacea. With that in mind, we thought it might be useful to share an honest view of our own successes and failures, highlighting where emergent AI tools and technologies have delivered real value - and where they haven’t.
In addition to keeping you up to speed on the latest AI tools across UX, engineering, analytics, and content strategy and creation, we’ll also be curating recent digital insights and best practices that have caught our eye.
We hope you enjoy the briefing, and please do let us know if there’s anything you’d like more (or less) of in future editions.
Until next time,
Naomi Gibson
AI is a priority for digital leaders, but enthusiasm alone rarely translates into tangible benefits. The real question isn’t whether to use AI, but how to use it in ways that genuinely support teams and outcomes.
At Wyoming Interactive, we take a grounded view. AI isn’t replacing roles; it’s reshaping the workflows that sit within them. Used deliberately, AI can accelerate low-risk, repetitive tasks, freeing teams to focus on work that demands judgment, creativity, and domain expertise. Used without a plan, it can just as easily introduce risk, noise, and misplaced confidence.
In this article, we share how we think about AI in practice: why we start with workflows rather than tools, where AI adds value today, and why verification and human interpretation remain essential.
Drawing on examples from UX research, engineering, and digital marketing delivery, we outline a practical framework for using AI responsibly - without falling into the trap of using it for its own sake.
Generative Engine Optimization (GEO) is becoming a powerful tool for marketers. As customers’ online behaviors shift toward AI-powered search experiences, marketers are increasingly incorporating large language models into their plans to ensure they reach customers as and when those new search behaviors emerge.
Some statistics illustrate why it’s so important to consider GEO strategies: 84% of users report a significant increase in their use of AI, with 35% now using AI tools every day. In search specifically, websites that previously ranked on page one of search results have seen a 16-64% decline in organic traffic. While a 16% decline may signal disruption, a 64% reduction is potentially existential.
As we explore in this explainer piece, this doesn’t mean SEO strategies can simply be abandoned. Instead, as our SXO specialist Holly Ellis explains, the most effective approach is to make SEO and GEO work in tandem - capturing consumers at different points in their discovery journey.
We’re continually testing tools that claim to improve how digital teams work. Each quarter, we take a closer look at one tool we’ve used in practice, and share where it fits into real workflows, where it helps, and where it doesn’t.
As AI coding tools mature, the question is no longer whether they can generate code, but how they fit into real delivery environments with security, governance, and long-term maintainability constraints.
Over the past quarter, our engineering team has been trialing the AI coding assistant Tabnine as part of day-to-day development work on client projects. The goal was simple: to understand where AI genuinely accelerates delivery and where human judgment remains essential.
Where it worked well
In its current (assistant-led) form, Tabnine proved most effective as an intelligent coding companion, using its analysis of the full codebase to give our engineers context-aware support during day-to-day development.
For teams working across multiple languages and IDEs, the consistency of suggestions was a practical benefit.
Where limitations emerged
The team found the gains were incremental rather than transformative. Senior-level human review was still required to ensure changes aligned with architectural standards, internal conventions, and long-term supportability. In higher-security environments, local deployment also introduced additional infrastructure overhead that teams needed to plan for.
More importantly, though, the trial highlighted something broader: the tooling landscape itself is shifting.
From assistants to agents
Tabnine, like many tools in this space, is moving beyond chat-style assistance. It is heading toward agent-based models, where the AI is given an objective and allowed to modify files, run tests, and iterate autonomously before human review.
This represents a fundamental change in how teams work. While it promises further efficiency gains for well-scoped tasks, it also raises the bar for governance. Clear acceptance criteria, architectural boundaries, and human oversight become non-negotiable.
AI coding tools are already valuable accelerators when used to support experienced engineers. The next wave - agent-led delivery - is promising in theory, but demands stronger process discipline in practice.
For enterprise teams, success will depend less on choosing the 'right' tool and more on how AI is embedded within accountable, reviewable delivery models.
So much is happening across the industries we cover that it can be hard to keep up. Here's what our expert teams consider to be the most important updates.
California’s 2026 CPRA updates don’t just add new rules; they change how consent must be designed.
Dark patterns are explicitly invalid, opt-out flows must be symmetrical, and users must see confirmation that their preferences have been honored. Many U.S. enterprises are responding by standardizing UX rather than fragmenting experiences by state.
To do so, organizations need to think more carefully about how consent is experienced - not just how it's documented.
As consent choices increase and identifiers disappear, analytics platforms like Google Analytics 4 rely more heavily on modeling to fill data gaps.
While this currently applies primarily to higher-traffic sites and remains optional, it signals a broader shift toward blended reporting that combines observed and inferred behavior.
For smaller sites especially, users declining consent results in data loss rather than modeling.
The bigger risk isn’t modeling itself, but teams misreading what their data represents. For some, it’s inferred performance; for others, it’s missing data entirely.
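The mechanics behind this split are worth seeing. Google's public Consent Mode API sets a default "denied" state before any tags fire and updates it when the user makes a choice; with analytics_storage denied, GA4 can fall back to cookieless signals that feed behavioral modeling on eligible properties, while smaller sites simply see a gap. A minimal sketch - the gtag consent calls are the standard public API, but the surrounding setup is simplified for illustration:

```javascript
// Minimal Consent Mode sketch. In the browser, dataLayer is the global
// queue that gtag.js reads; declared here so the sketch is self-contained.
var dataLayer = dataLayer || [];
function gtag() { dataLayer.push(arguments); }

// Default state, set before any measurement tags fire:
// storage is denied until the user decides.
gtag('consent', 'default', {
  analytics_storage: 'denied',
  ad_storage: 'denied'
});

// Later, if the user accepts analytics via the consent banner,
// the state is updated and full measurement resumes.
gtag('consent', 'update', {
  analytics_storage: 'granted'
});
```

Whether the "denied" window shows up in reports as modeled behavior or as missing data depends on the property's eligibility for behavioral modeling - which is exactly why teams need to know which of the two they are looking at.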
Angeliki Alvanou
Analytics Lead
Google’s AI Mode can now infer needs from contextual signals such as Gmail booking confirmations, order history, Google Photos, location data, and past behavior. That context is used to surface relevant services before a user actively searches, with recommendations emerging from real-world activity rather than explicit queries.
As this evolves, it raises important questions about how products and services are discovered - particularly when systems anticipate need before users articulate it.