11 Takeaways From Clearscope’s Roundtable on the Future of Search in 2026
Topic: AI Content
Written by: Liz Biscevic
Search isn’t dying, but the job is changing fast.
In Clearscope’s roundtable on the future of search, Lily Ray, Kevin Indig, Ross Hudgens, and Steve Toth broke down what’s actually shifting across SEO, AI search, AEO, and GEO.
Below are 11 of the most useful takeaways, including the tactical “do this next” ideas the panel shared.
1) SEO isn’t dead. The goalposts moved from clicks to outcomes.
The panel pushed back on the “SEO is dying” narrative for a simple reason: search behavior isn’t disappearing. People still wake up with problems, questions, comparisons to make, and purchases to research. What is changing is how often that research turns into a click.
We’re already seeing impressions and clicks drift apart (the “crocodile chart” effect): users can get a usable answer directly in the results or an AI interface, even when your content helped create that answer. So the work isn’t becoming irrelevant—it’s becoming less directly tied to “sessions” as the scoreboard.
Practical shift: stop selling “traffic” as the primary win. Start selling revenue-adjacent outcomes—qualified leads, demo requests, pipeline influence, trials, purchases, and brand lift—and update reporting to match. That means combining classic SEO metrics with AI visibility indicators (mentions/citations, branded demand, assisted conversions), and setting expectations up front that “visibility” may increasingly arrive without a click, but can still translate into business results.
2) Freshness has an outsized impact right now, but don’t fake it.
The panel strongly agreed: recency signals can disproportionately influence visibility, especially in systems that prioritize up-to-date info.
But there was an important warning: “artificial refreshing” (updating dates without meaningful updates) can backfire, especially in Google ecosystems.
Best practice: update content when you have something real to add (new data, new examples, new screenshots, changed recommendations, new definitions).
3) Off-site signals are rising: brand mentions and “being talked about” increasingly shape visibility.
A repeated theme from the panel was that large language models don’t behave like traditional link graphs. They’re not simply counting backlinks or passing PageRank in the way SEOs are used to. Instead, they appear to place more weight on contextual references—where your brand or product is mentioned, how often it’s discussed, and whether those mentions come from sources the model already trusts.
That’s why being referenced across credible, relevant sources can sometimes matter more than classic link-building tactics alone. If you’re suddenly seeing citations or mentions coming from places you would’ve historically deprioritized—industry blogs, business publications, software roundups, or niche review sites—that’s not noise. It’s a signal. Those sources are actively informing how models understand your category and which brands they surface as recommendations. Treat that data as intelligence: it tells you which publishers, formats, and narratives are shaping AI-generated answers—and where your off-site strategy should double down next.
4) Listicles, affiliate, and sponsored placements work… but the window may not last.
The panel was candid: listicles and paid placements can influence visibility today. But there was a strong warning: tactics that work in a “new frontier window” often get regulated later.
What to do with this:
If you pursue placements, prioritize reputable publishers.
Assume enforcement tightens over time.
Build durable authority alongside any short-term boosts.
5) “AI-only pages” can help, but don’t forget humans.
There was interest in “LLM pages” that act like grounding documents (clear who you are, what you do, what you’re best at, where you operate, what you integrate with, etc.). But the more durable recommendation was simple: publish essential business facts in plain language, for humans and machines.
A good rule: if a customer would find it clarifying, an LLM probably will too.
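A grounding page like the one described above can be as simple as a plain-markdown fact sheet. The company, facts, and numbers below are hypothetical; if you also maintain an llms.txt file, the community proposal at llmstxt.org uses a similar shape (an H1, a short blockquote summary, then linked sections):

```markdown
# Acme Analytics

> Acme Analytics is a B2B reporting platform for mid-market SaaS
> finance teams in North America and the EU. (All facts hypothetical.)

## What we do
- Automated revenue reporting and board-ready dashboards
- Native integrations: Salesforce, HubSpot, Stripe, NetSuite

## Who we serve
- Finance and RevOps teams at 100–1,000 employee SaaS companies

## Proof points
- Founded 2019; 400+ customers; 97% annual retention
```

The point isn’t the file format; it’s that every essential business fact appears in one declarative, crawlable place instead of being implied across the site.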
6) Homepage clarity is back, in a very 2010 kind of way.
A practical point from the discussion: models can miss things that feel “obvious” if they’re buried in navigation or implied. Clear statements on your homepage and key pages help.
Examples of what to spell out:
Who you serve (ICP)
What you do (primary category + use cases)
Where you operate
Key trust signals (proof points, retention, outcomes)
7) ICP mapping pages are not optional in B2B anymore.
Multiple panelists described a “mapping” approach as a way to make your offering legible to both buyers and AI systems. The idea is to clearly spell out who you serve, in what contexts, and why you’re a good fit—rather than assuming models or users will connect the dots on their own. Common mapping page types:
Industry pages
Company-size pages
Role-based pages
Use-case pages
Comparison and alternative pages
This isn’t new, but it’s becoming more valuable because AI experiences naturally push users toward comparison and vendor evaluation.
8) Technical basics still matter, plus a few AI-specific gotchas.
On llms.txt, the group was mostly “sure, if you have time, but don’t expect miracles.” There’s no strong evidence it’s a top lever yet, and no major model has confirmed that it meaningfully influences retrieval or rankings today. For most teams it’s a nice-to-have, not something worth deprioritizing higher-impact work for.
More actionable technical takeaways:
Rendering matters more when models struggle with client-side experiences.
Server response time can be a silent killer (LLMs are sensitive to friction fetching content).
Avoid hiding key content in accordions/tabs if you want models to reliably parse it.
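The rendering and response-time points above are easy to spot-check yourself. Below is a minimal sketch (the URL, user agent, and key phrase are hypothetical placeholders) that fetches raw HTML the way many AI crawlers do — without executing JavaScript — measures rough time-to-first-byte, and checks whether key content is actually server-rendered:

```python
import time
import urllib.request

def audit_page(url: str, key_phrase: str, timeout: float = 10.0):
    """Fetch raw HTML (no JS execution, like many AI crawlers) and report
    rough time-to-first-byte plus whether key content is server-rendered."""
    req = urllib.request.Request(url, headers={"User-Agent": "content-audit/0.1"})
    start = time.monotonic()
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        ttfb = time.monotonic() - start  # rough: includes connection setup
        html = resp.read().decode("utf-8", errors="replace")
    return ttfb, key_phrase in html

def key_content_visible(html: str, key_phrase: str) -> bool:
    """The raw-HTML check on its own: is the phrase in the markup at all?"""
    return key_phrase in html

# Content injected client-side never appears in the raw HTML,
# so a JS-only page fails this check even though humans see the content.
server_rendered = "<main><h2>Pricing plans</h2><p>From $20/mo</p></main>"
client_rendered = "<main><div id='root'></div><script src='app.js'></script></main>"
print(key_content_visible(server_rendered, "Pricing plans"))  # True
print(key_content_visible(client_rendered, "Pricing plans"))  # False
```

If the phrase only appears after JavaScript runs, assume some AI systems never see it; the same goes for content that is only inserted into the DOM when an accordion is expanded.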
9) Automation in SEO is about better output, not replacing people.
The panel agreed that AI is improving throughput and quality, not eliminating human roles. Strategy, judgment, and prioritization remain firmly human-led.
Where automation is working best:
Expanding keyword and topic research volume without sacrificing quality
Speeding up content refreshes and data updates
Making line editing and surface-level refinements increasingly commoditized
Where humans still matter most:
Strategy, prioritization, and judgment
Original insights, positioning, and narrative
Deciding what to publish and why, not just generating text
The consensus: teams that treat automation as a force multiplier (not a content factory) will pull ahead fastest.
10) Grounding-friendly content wins: FAQs, declarative language, and named sources.
Several “classic SEO” tactics are showing up again, but for a different reason than before. As AI systems synthesize answers from multiple sources, they favor content that is easy to parse, unambiguous, and self-contained—content that doesn’t require inference or heavy interpretation to extract meaning. In practice, that’s pushing best practices from the featured snippet era back into the spotlight.
FAQs perform well, especially when they’re visible (not hidden in accordions).
Clear headings and direct answers help.
Declarative writing (“X is…” “You should…”) tends to be easier for systems to extract.
Don’t just link sources. Name them in the copy (study name, author, year) so the citation survives extraction.
This is very similar to featured snippet optimization, but applied to AI answers.
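A before/after sketch of what grounding-friendly copy looks like in practice (the copy, study, and figures below are entirely hypothetical, used only to show the structure):

```markdown
<!-- Harder for systems to extract: vague, hedged, no named source -->
Research suggests faster pages may help conversions.

<!-- Easier to extract: question heading, direct declarative answer,
     source named in the copy (study and numbers are hypothetical) -->
## Does page speed affect conversions?
Yes. Faster pages convert better. A 2024 Example Labs benchmark of
5,000 B2B sites found that pages loading in under 2 seconds converted
20% more visitors than slower pages.
```

The second version survives extraction: the claim, the answer, and the attribution all live in one self-contained block, so an AI answer can quote it without needing the surrounding page.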
11) Measurement is the hardest part. Define success with clients, then build your own KPI stack.
The panel didn’t pretend there’s a universal KPI set yet. Tool metrics vary, prompt volume data is limited or opaque, and referral traffic alone misses a large portion of AI-driven discovery—especially unlinked mentions and brand influence that happen without a click. For now, teams have to be comfortable operating with directional signals rather than perfectly clean attribution.
Practical measurement stack ideas discussed:
Use traditional keyword demand as a proxy for “prompt demand.” Search data is still the best large-scale indicator of what people care about, even if prompts are longer and more conversational.
Group intent themes and track performance at the topic level (not single prompts). AI systems operate on intent and concepts, not exact phrasing, so measuring clusters is more durable than chasing individual queries.
Track what LLMs say about your brand versus what your sales team would say (accuracy + positioning). This surfaces gaps, misconceptions, or missed differentiation that directly affect buying decisions.
Add self-attribution to forms (“How did you hear about us?”) to capture AI-driven discovery. It’s imperfect, but today it’s one of the clearest ways to connect AI visibility to real pipeline and revenue impact.
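The “group intent themes and track at the topic level” idea above can be sketched in a few lines. This is a deliberately simple substring-matching rollup (the themes, trigger terms, and query numbers are hypothetical; real pipelines often use embeddings or Search Console exports instead):

```python
from collections import defaultdict

# Hypothetical intent themes mapped to trigger terms (tune to your category)
THEMES = {
    "comparison": ["vs", "alternative", "compare"],
    "pricing": ["price", "pricing", "cost"],
    "how-to": ["how to", "guide", "tutorial"],
}

def theme_for(query: str) -> str:
    """Assign a query to the first theme whose trigger term it contains."""
    q = query.lower()
    for theme, terms in THEMES.items():
        if any(t in q for t in terms):
            return theme
    return "other"

def rollup(rows):
    """Aggregate (query, clicks, impressions) rows to the topic level."""
    totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
    for query, clicks, impressions in rows:
        t = theme_for(query)
        totals[t]["clicks"] += clicks
        totals[t]["impressions"] += impressions
    return dict(totals)

# Sample Search Console-style export (hypothetical brand and numbers)
rows = [
    ("acme vs widgetco", 120, 4000),
    ("acme pricing", 90, 2500),
    ("how to automate reporting", 60, 8000),
    ("acme reviews", 30, 1500),
]
print(rollup(rows))
```

Measured at this level, a drop in one query matters less than whether the “comparison” or “pricing” cluster as a whole is gaining or losing visibility — which is closer to how AI systems treat intent.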
What this means for teams planning 2026 search strategy
If you only change one thing, change what you measure and what you optimize for. Traditional rankings still matter, but the center of gravity is shifting toward:
Visibility inside AI answers
Off-site reputation and mentions
Content designed for extraction (clear, fresh, structured, sourced)
First-party audience building so you’re not dependent on any one channel (especially publishers)
Teams that adapt fastest won’t be the ones chasing every new tactic; they’ll be the ones aligning search, content, PR, and brand around how discovery actually happens now. In an AI-first search world, sustainable advantage comes from being consistently understood, trusted, and recommended, whether or not a click ever happens.
Expert Tips: One Thing to Focus On Now
Steve Toth
Double down on comparison and alternative page strategies for B2B SaaS. LLMs naturally push users toward vendor comparisons.
Lily Ray
E-E-A-T remains critical: expert, authoritative brands get content picked up faster by search engines and LLMs.
Ross Hudgens
Assume AI Mode becomes the default in Gemini; plan for the worst-case scenario while hoping for better outcomes.
Kevin Indig
Trust is the most underrated factor: building audience trust through any channel provides significant advantages across all platforms.