Celebrating Measurement: Ruepoint at the AMEC Awards and AI Day 2025

This week in London, I had the privilege of joining two events that perfectly capture where our profession is heading – the AMEC Global Communication Effectiveness Awards and AMEC AI Day. One celebrated the best of what communication measurement can achieve today; the other explored how we’ll define it tomorrow. Between both, one message stood out to me: our industry isn’t just keeping up with change – we’re leading it, through evidence, curiosity, and collaboration. 

Four Wins at the 2025 AMEC Awards

This year, Ruepoint received four awards at the 2025 AMEC Awards, a moment of pride for our entire team.

🏆 Best Crisis Communication Measurement and Reporting

🏆🏆 2 x Most Effective Planning, Research and Evaluation (Europe) 

🏆 Step Change Award for Best Improvement of a Measurement Journey

The awards were received on-site by Raina Lazarova, Kevin Fagan, Susan Ryan, Esma Riza, and me (Iskren Lilov) – a small team representing a much larger collective effort across Ruepoint and Muck Rack.

In her role as AMEC Chair, Raina Lazarova also had the honour of presenting the Don Bartholomew Award to Aseem Sood, CEO of Impact Research and Measurement, in recognition of his outstanding contribution to the global measurement community – a reminder that this field continues to grow through shared learning and leadership.

From Awards to Action: Insights from AMEC AI Day

The previous day, at AMEC AI Day, measurement leaders, researchers, and practitioners explored one central question: how do we measure impact in the age of generative AI?

The day’s discussions underscored that AI visibility and generative search aren’t future trends; they’re present realities reshaping how audiences discover, trust, and act on information.

Generative Pulse: A New Framework for AI-Driven PR Success

In her session, Raina Lazarova introduced Generative Pulse, Muck Rack’s new tool for understanding and improving AI visibility. Based on analysis showing that 95% of AI-cited links come from non-paid sources, the model helps teams benchmark their AI footprint, identify influential niche publications, and connect insights directly to outreach strategies.
The practical takeaway:

– Benchmark your brand’s AI visibility across engines and competitors.
– Prioritise earned coverage that is recent, frequent, and relevant.
– Sync journalist targeting and pitching workflows to trusted AI sources.
– Track impact over time – in authority gains, narrative lift, and answer share.

Agency Panel: PR agencies on GEO, AI visibility and responsible measurement

Matt Oakley (Hotwire), Giuseppe Polimeno (Ketchum), James Crawford (PR Agency One), and Johna Burke (AMEC) agreed that Generative Engine Optimisation (GEO) and AI visibility can offer valuable new context, but only when treated as supplementary indicators, not replacement metrics. Used responsibly, GEO helps track how brand narratives surface in AI-generated answers; used blindly, it risks becoming the next AVE. The consensus: experiment, but with purpose. Test hypotheses, be transparent about data and models, and work toward shared frameworks so the industry leads responsibly rather than deferring to opaque, proprietary systems.

The Answer Is Changing: How Brands Are Adapting To AI Search

Jonny Bentwood (Golin) highlighted that generative search has already changed how people, particularly Gen Z and B2B audiences, find information.
AI answers lean heavily on earned citations from mid-tier, high-utility sources, rather than only top-tier outlets. For communicators, that means new priorities: create content that’s machine-readable, credible, and useful. Press releases, FAQs, and expert Q&As are once again powerful, not just for human readers, but for the algorithms that now shape brand perception. The message was simple: start now. Small, consistent improvements across earned and owned media can compound over time to position your brand within the “first answer.”

Don’t Just Build Agents, Build Teammates

Cien Solon (Launch Lemonade) brought the conversation back to business outcomes. Her framework for creating AI teammates starts with writing a “job spec” defining the Role, Context, Objective, Task, and Expected Output for each system; then onboarding it with your own knowledge base using RAG, testing fast, refining often, and aligning every build to your North Star metric – whether that’s growth, efficiency, or revenue. Her closing line captured it perfectly: “Treat your AI like a teammate — give it clarity, context, and purpose, and it will multiply what your team can do.”

Measuring What Matters: AI, Agentic Tools, and the Next 6 Months of Comms Analytics

To close the day, Andrew Bruce Smith offered a grounded roadmap for where to focus next. His message: match the right model to the right task. Use reasoning models for qualitative coding, lighter assistants for summaries, and integrate code-based tools for quantitative accuracy.

The biggest opportunity ahead lies in multimodal analysis – blending text, visuals, and voice to extract richer insights from coverage. And his final advice was perhaps the most valuable: build small, useful tools that solve real pain points. Whether it’s a message consistency tracker or an executive share-of-voice explainer, start small, iterate fast, and keep insights tied to decisions, not dashboards.
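To make the “small, useful tools” idea concrete, here is a minimal sketch of what an executive share-of-voice explainer could start from. Everything in it is hypothetical: the `share_of_voice` function, the sample articles, and the executive names are illustrative only, and a real version would pull coverage from a media monitoring export rather than hard-coded strings.

```python
from collections import Counter

def share_of_voice(coverage, names):
    """Count how often each name appears across coverage items
    and return each name's share as a percentage of all mentions."""
    counts = Counter({name: 0 for name in names})
    for text in coverage:
        lower = text.lower()
        for name in names:
            counts[name] += lower.count(name.lower())
    total = sum(counts.values())
    if total == 0:
        return {name: 0.0 for name in names}
    return {name: round(100 * counts[name] / total, 1) for name in names}

# Hypothetical coverage snippets standing in for a monitoring export.
articles = [
    "CEO Jane Doe announced the merger; Jane Doe will lead the new group.",
    "Analysts credited CTO John Smith with the platform's turnaround.",
    "Jane Doe and John Smith presented the results together.",
]
print(share_of_voice(articles, ["Jane Doe", "John Smith"]))
# → {'Jane Doe': 60.0, 'John Smith': 40.0}
```

Even a toy like this embodies the advice above: it answers one decision-relevant question (who is being heard?) without waiting for a full dashboard build.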

The Common Thread

Across both events, one theme stood out: AI may be transforming how we measure communication, but people still define what matters.

From awards that celebrate evidence-based storytelling to conversations that push our industry forward, the future of measurement is human-led, data-informed, and purpose-driven – exactly where Ruepoint aims to be.

Iskren Lilov


Head of Marketing and Communications at Ruepoint. Passionate about communication and about replacing "growth hackers" with sustainable brand engineers – it may not sound as catchy, but it makes up for it in effectiveness and long-term recognition. Let's connect on LinkedIn!
