Primary Position SEO NYC https://primaryposition.com/ Wed, 10 Sep 2025 04:22:10 +0000 en-US

LLMO vs SEO: An Inconvenient Truth https://primaryposition.com/blog/llmo-seo/ Wed, 10 Sep 2025 04:22:10 +0000

The post LLMO vs SEO: An Inconvenient Truth appeared first on Primary Position SEO NYC.


It seems every time the digital marketing landscape shifts, a new acronym emerges to tell us that everything we knew is wrong. First, it was AEO (Answer Engine Optimization). Then, GEO (Generative Engine Optimization). The latest buzzword to hit our feeds and LinkedIn timelines is LLMO: Large Language Model Optimization.

We’ve read the think-pieces. We’ve seen the gurus promise they have the secret sauce. The pitch is compelling: AI models are replacing Google Search, so you need to stop doing SEO and start doing LLMO to get your brand cited in AI-generated answers.

Here at Primary Position, our business is built on delivering real results, not chasing shiny new objects. And the inconvenient truth is this: LLMO isn’t a new discipline. It’s just a new, unproven name for what good SEO professionals have been doing for years.

The Three “Pillars” of LLMO Are a Redundant List

The advocates of LLMO will tell you it has three core pillars:

  1. Create Authoritative Content (E-E-A-T): They argue that AI models prioritize content from trusted, expert sources.
  2. Structure Content for Clarity: They emphasize using clear headings, bullet points, and answering questions directly.
  3. Optimize for Mentions and Citations: They suggest the goal is to get your brand mentioned in an AI answer, not just linked to.

Does any of this sound new to you? It shouldn’t. This is the exact philosophy that has defined successful SEO since Google’s Panda and Helpful Content updates.

  • E-E-A-T is SEO: Google’s own quality rater guidelines have hammered home the importance of Experience, Expertise, Authoritativeness, and Trustworthiness for years. This isn’t a new concept for an LLM; it’s the fundamental principle of a post-keyword-stuffing internet. If your content demonstrates E-E-A-T, you’re already optimizing for any system that values quality.
  • Structured Content is SEO: We’ve been telling clients to use clear headings, concise paragraphs, and structured data (Schema markup) since forever. This makes content easier for Google’s crawlers to understand and helps with featured snippets. It’s also just good UX for human readers. It’s not a secret strategy for a chatbot.
  • Citations are SEO: While we can’t measure an “AI citation rate,” we’ve been measuring “brand mentions” and building entity authority for years. Getting your brand cited in a major news outlet, a reputable blog, or a Wikipedia page is a time-tested SEO tactic. It builds the exact kind of real-world authority that makes both Google’s algorithm and an LLM’s model trust you as a source.
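To make the structured-content point concrete, Schema markup is usually emitted as a JSON-LD object embedded in the page. Here is a minimal sketch of an Article schema (all field values are placeholders, and this is an illustration rather than a complete schema):

```python
import json

# A minimal Article schema as JSON-LD, the structured-data format search
# engines read from a <script type="application/ld+json"> tag.
# All field values below are placeholders, not real publication data.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example Article Title",
    "author": {"@type": "Organization", "name": "Example Publisher"},
    "datePublished": "2025-01-01",
}

json_ld = json.dumps(article_schema, indent=2)
print(json_ld)
```

Dropping a block like this into a page’s head is one common way to spell out, unambiguously, what the page is about; it has been standard on-page practice for years, not a chatbot trick.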


The Elephant in the Room: How Do You Measure It?

This is where the LLMO narrative completely falls apart.

Successful SEO is a measurable science. We can track keyword rankings, organic traffic, impressions, and conversions. We have a direct line of sight from our work to the results.

LLMO, on the other hand, is a black box. How do you measure success?

  • Do you know when an AI model’s training data was last updated? No.
  • Can you track how often your content is cited by ChatGPT? No.
  • Is there a dashboard that shows your “AI citation rank”? No, and there likely never will be.

Chasing an unmeasurable metric is a fantastic way to spend a lot of time and money without ever knowing if your efforts are paying off.

The Verdict: LLMO is Redundant, SEO is Resilient

The digital world is indeed changing. AI models are a powerful new interface, but they aren’t creating a new rulebook. They are simply rewarding the same high-quality, trustworthy, and well-structured content that good SEO has always championed.

The best way to “optimize” for LLMs is to simply do your job well. Focus on creating the most helpful, accurate, and authoritative content on the internet. Build a strong brand presence. Earn trust from both humans and algorithms.

The goal isn’t to play a new game called “LLMO.” The goal is to keep playing the same game you’ve always played, but to do it with integrity. LLMO isn’t a new truth; it’s a slick new label for the inconvenient truth we’ve known all along: there are no shortcuts to high-quality content, and no amount of hype can change that.

What’s in a Name? The Website Title https://primaryposition.com/blog/website-title/ Wed, 10 Sep 2025 03:35:30 +0000

The post What’s in a Name? The Website Title appeared first on Primary Position SEO NYC.



Stop Asking Why Your Site Isn’t Ranking—Here’s the Real Reason


New York, NY – You’ve built a great website, you’ve got fantastic products or services, but you’re stuck on page two of Google. You’ve probably asked yourself, “Why isn’t my website showing up?”

While many factors influence search rankings, the answer often lies in the fundamentals—the very things you can control on your own website. We call it on-page SEO, and it’s the foundation of everything we do. It’s not just about what you say, but how you structure it.


The Big Three of On-Page SEO


To get your site to its primary position, you need to nail the trifecta of on-page elements: the page title, the H1, and the URL slug. They might seem simple, but when done right, they tell search engines exactly what your content is about.

1. The Page Title: Your First Impression

Think of your page title as your ad in the search results. It’s the clickable blue link that appears on Google. This is your chance to make a powerful first impression and convince a searcher to choose your site over a competitor’s.

It needs to be compelling and concise, but most importantly, it needs to include your main keyword. A strong title tag doesn’t just help you rank higher; it’s a direct invitation to click.
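On length, one common rule of thumb (a community heuristic, not an official Google limit) is to keep titles at roughly 60 characters or fewer so they are less likely to be truncated in the results page. A quick sketch:

```python
def title_fits_serp(title, max_chars=60):
    """Rough check that a title tag won't be truncated in search results.
    Google actually truncates by pixel width; ~60 characters is a common proxy."""
    return len(title) <= max_chars

print(title_fits_serp("On-Page SEO Basics: Titles, H1s, and Slugs"))  # fits
print(title_fits_serp("A" * 80))  # too long to display in full
```

A check like this only catches the mechanical problem; whether the title actually earns the click is still a writing job.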

2. The H1: The On-Page Headline

The H1 (or Heading 1) is the main headline on your actual web page. While your page title is for Google, your H1 is for the visitor who lands on your site. Its job is to confirm they are in the right place and clearly state the page’s topic.

While your H1 and page title should be related and share keywords, they don’t have to be identical. The H1 is your chance to grab the reader’s attention and encourage them to keep scrolling.

3. The Slug: A Clean, Keyword-Rich URL

The slug is the part of the URL that describes the content of the page—for example, yourwebsite.com/blog/on-page-seo-basics. A well-crafted slug is a clean, readable version of your page title.

Why is this so important? A keyword-rich slug helps both users and search engines understand what the page is about before they even click. It makes the link more trustworthy and can even show up directly in search results. Always use hyphens to separate words and keep it short and relevant.
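The slug rules above are simple enough to automate. A minimal sketch of a slug generator (a toy version; CMS slugifiers also handle accented characters and other edge cases):

```python
import re

def slugify(title):
    """Turn a page title into a clean, keyword-rich URL slug:
    lowercase, punctuation stripped, words separated by hyphens."""
    slug = title.lower().replace("'", "")    # drop apostrophes entirely
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # everything else becomes a hyphen
    return slug.strip("-")

print(slugify("What's in a Name? The Website Title"))
# -> whats-in-a-name-the-website-title
```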


Putting It All Together


These three elements aren’t a checklist; they’re a team. A well-optimized page title grabs attention in the search results, a strong H1 confirms the topic on the page, and a clean URL slug reinforces it all.

By paying close attention to these fundamentals, you’re not just optimizing for Google—you’re building a more organized and user-friendly website. You’re telling both search engines and potential customers, “This is exactly what you’re looking for.”

Ready to finally get your website to its primary position? Get in touch with us to see how our expertise can transform your on-page SEO.

SEO Checklists: It’s a System, Stupid https://primaryposition.com/blog/seo-checklist/ Tue, 09 Sep 2025 18:40:22 +0000

The post SEO Checklists: It’s a System, Stupid appeared first on Primary Position SEO NYC.

Ditch the SEO Checklist: Why a Systemic Approach Trumps Generic Tasks

For years, SEO checklists have been a staple for businesses and marketers looking to improve their search engine rankings. From “20-point on-page SEO audits” to “the ultimate local SEO checklist,” these resources promise a straightforward path to success. However, relying solely on a checklist for your SEO strategy is a fundamentally flawed approach.

The truth is, effective SEO isn’t about ticking boxes; it’s about understanding and engaging with a complex, ever-evolving system. Drawing insights from industry experts on Reddit’s r/seo and the philosophy advocated by primaryposition.com, let’s explore why it’s time to ditch the checklist mentality and embrace a more dynamic approach.

The Problem with One-Size-Fits-All: Lack of Adaptability

One of the most significant drawbacks of SEO checklists is their inherent assumption that what works for one website will work for another. This couldn’t be further from the truth.

Your Website’s Unique “DNA”

Every website has its own unique characteristics – its age, backlink profile, content quality, brand authority, and target audience. A site with years of accumulated authority and a strong backlink profile will benefit from vastly different SEO actions than a brand new domain trying to establish itself. As one expert aptly puts it, a checklist only works if a site’s “DNA” matches that of a checklist built for its topical authority. What helps a well-established e-commerce giant might be a complete waste of time for a local service provider, and vice-versa.


Misplaced Focus: Tasks Over Strategic Outcomes

Checklists can inadvertently shift your focus from meaningful results to mere task completion. This can be a dangerous trap, leading to busywork rather than impactful gains.

The Illusion of Progress

When you’re working through a checklist, there’s a natural tendency to feel productive simply by marking off items. This “completion bias” can create a false sense of momentum. You might complete all 50 items on your “ultimate SEO checklist,” yet see no tangible improvement in your rankings, organic traffic, or conversions. This is because the checklist encourages a focus on doing rather than on achieving.

Absence of Prioritization and Clear Goals

Generic checklists often present a long list of actions without any guidance on which tasks are most critical for your specific situation. This leads to a nebulous set of actions, where every item is treated as equally important. Without clear prioritization tied to your business objectives, you risk expending valuable time and resources on tasks that yield minimal returns, failing to move the needle on key performance indicators.

The Ever-Changing Landscape: Outdated and Insufficient

The world of SEO is anything but static. What was a best practice last year might be irrelevant or even detrimental today.

Rapid Algorithmic Shifts

Search engine algorithms, particularly Google’s, are constantly updated. What was considered a crucial ranking factor five years ago might now be a minor signal or completely obsolete. An SEO checklist, by its very nature, struggles to keep pace with these rapid changes. Relying on an outdated checklist means you could be implementing strategies that no longer work or, worse, are actively penalized.

Measuring Presence, Not Effectiveness

Many checklist items focus on the presence of an element rather than its effectiveness. For example, a checklist might tell you to “add a meta description.” You can tick that box. However, the checklist won’t tell you if that meta description is compelling, accurately reflects your page’s content, encourages clicks, or is even being used by Google. Automated dashboards can validate that an element exists, but they cannot assess its quality or impact on user experience and search performance.
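The presence-versus-effectiveness gap is easy to see in code. In the toy sketch below, the first check is all a checklist or dashboard can truly verify; the other two gesture at quality and are themselves only crude heuristics (the 50–160 character range is a common rule of thumb, not a Google requirement):

```python
def meta_description_checks(description):
    """Presence is machine-checkable; effectiveness is not.
    The 'quality' checks below are rough proxies at best."""
    present = bool(description and description.strip())
    return {
        "present": present,  # the only thing a checklist truly measures
        "plausible_length": present and 50 <= len(description) <= 160,
        "not_placeholder": present and "lorem ipsum" not in description.lower(),
    }

print(meta_description_checks("Lorem ipsum dolor sit amet."))
```

Even a description that passes every check here can still be dull, inaccurate, or simply rewritten by Google in the results; no script can confirm that it earns the click.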


Embracing the System: The Path to Sustainable SEO

Instead of a rigid checklist, think of SEO as a dynamic system. This approach is about continuous analysis, adaptation, and strategic execution tailored to your specific goals.

Building Authority and Trust

Successful SEO is about systematically building authority and trust with search engines and, most importantly, with users. This involves:

  • Deep Understanding: Knowing your audience, competitors, and the intricacies of your industry.
  • Data-Driven Decisions: Constantly analyzing performance data, user behavior, and search trends to inform your strategy.
  • Iterative Optimization: Viewing SEO as an ongoing process of testing, learning, and refining, rather than a one-time project.

An effective SEO strategy requires a holistic view, where every action is a piece of a larger puzzle designed to achieve specific business outcomes. It’s a marathon, not a sprint, and certainly not a list of chores.


So, next time you’re tempted to reach for an SEO checklist, remember that true success lies in understanding your unique context and adopting a flexible, systemic approach to search engine optimization. Your website will thank you for it!

SEO Case Study: Wikipedia vs Reddit https://primaryposition.com/blog/seo-case-study-wikipedia/ Fri, 05 Sep 2025 01:50:02 +0000

The post SEO Case Study: Wikipedia vs Reddit appeared first on Primary Position SEO NYC.

WEST PALM BEACH, Florida: Wikipedia was once Google’s favorite answer to almost everything. But in the past five months, Google has reduced Wikipedia’s monthly clicks by about 3 billion, down from 15 billion and an all-time high of 17 billion.


The Rise of the Forum: How Google is Trading Wikipedia for Reddit


The End of Wikipedia’s Golden Era


For over a decade, Google and Wikipedia were the perfect power couple of the internet. Google, the all-knowing guide, would point you to Wikipedia, the wise and trusted source. The partnership was a beautiful, symbiotic dance built on a shared commitment to providing accurate, accessible information. Wikipedia’s neutral, meticulously sourced articles were the ideal answer to almost any query, and its dominance in organic search traffic was a testament to that. It was the internet’s reliable librarian, and Google was its biggest fan.

But in the ever-shifting world of search, loyalty is a fleeting concept.


The Great Google Shuffle: Why the Game Changed


The ground beneath Wikipedia’s feet began to shake not because of a single catastrophic event, but because of a series of subtle shifts in Google’s philosophy. The search giant’s new strategy, driven by the rise of AI and a mission to deliver “helpful, human-first content,” fundamentally changed the rules.

This new vision is a two-sided coin. On one side, Google is building a future where you never have to click a single link to get an answer. On the other, it’s decided that for a growing number of queries, the best answer isn’t a neutral fact, but a raw, unedited opinion from a real person.


The Fall of the Old Guard: Wikipedia’s Billion-Visit Problem


For the first time in its history, Wikipedia is facing a sustained and significant decline in traffic from Google. Data shows the site has lost billions of visits per month, a slide that’s both staggering and deliberate.

  • The “Zero-Click” Problem: In its quest to answer your questions instantly, Google has built AI Overviews and other on-page tools that use Wikipedia’s content to provide a direct answer. It’s a textbook case of having your cake and eating it, too: Google gets Wikipedia’s authority and knowledge base without ever sending traffic to the site.
  • The Pay-to-Play Underbelly: While Google’s algorithms were evolving, Wikipedia was grappling with its own internal conflicts. Despite strict policies against conflicts of interest, a thriving industry of PR firms and consultants emerged to get their clients a favorable Wikipedia page. They exploit the platform’s notability rules, manufacturing external press coverage to create a seemingly legitimate reason for a page. This turns the encyclopedia into a quiet battleground for corporate spin.
Figure: Wikipedia ranking distribution. Wikipedia lost 35 million keywords in one month.


The Renaissance of the Forum: Why Reddit is Winning


As Wikipedia’s star wanes, Reddit’s is burning brighter than ever. For a long time, Reddit was the wild west of the internet, a sprawling network of forums where users could find a genuine, unfiltered conversation on literally anything. And Google, finally, noticed.

  • The Power of Real Experience: Google’s algorithms now reward content that demonstrates firsthand experience. A Wikipedia article on “the best hiking boots” is neutral, but a Reddit thread is packed with messy, authentic, and often humorous discussions from hikers who have worn out dozens of pairs. It’s the kind of “human-first” content Google is desperate to find.
  • A Lucrative Partnership: This new preference isn’t accidental. Google and Reddit struck a multi-million dollar licensing deal, giving Google full access to Reddit’s content for training its AI models. In return, Reddit has seen a massive and unprecedented jump in its organic search visibility, becoming the new darling of the algorithm.

The world of search has been turned on its head. Google is no longer just pointing you to the most authoritative source; it’s actively deciding that, for many queries, the most authentic one is a better fit.

The Backlink Imperative: Deconstructing Modern Search Engine Positioning https://primaryposition.com/blog/search-engine-positioning/ Sun, 31 Aug 2025 19:41:11 +0000

The post The Backlink Imperative: Deconstructing Modern Search Engine Positioning appeared first on Primary Position SEO NYC.

For years, some in the SEO industry have peddled the tired cliché, “Content is king.” It’s a convenient, feel-good mantra that has led countless businesses down a path of content creation without results.

We operate on a more strategic principle: “Content is your foundation, but PageRank measures the authority that secures your ranking position.”

This is not a debate of theory versus practice. It is a direct analysis of how Google’s algorithms actually function. If you’re creating what you believe to be exceptional content yet see no upward movement in the SERPs, it’s because you’ve overlooked the singular most powerful ranking signal.


The Foundational Role of Backlinks

Dismiss the notion that content alone will earn you top rankings. In the algorithmic hierarchy, the backlink remains the most influential endorsement a page can receive. A backlink from a high-authority, topically relevant website is a direct signal of trust to Google, far outweighing any on-page optimization effort. This is why our core client strategies are not just about publishing content, but about a systematic and relentless pursuit of powerful, editorially-earned links.

Domain Authority: Your Site’s Algorithmic Trust Score

Consider your website’s Domain Authority as its credit score in the eyes of Google. This metric, a composite of your overall link profile, determines your site’s credibility and its potential to rank for competitive keywords. A strong domain authority ensures that your content is not only indexed but also deemed authoritative enough to appear prominently in search results. Without it, you are attempting to compete on an unequal playing field.

Brand Point of View: A Catalyst for Link Acquisition

While Google cannot discern the subjective “quality” of your content, it can and does measure the signals that your unique point of view generates. A distinctive or contrarian perspective is a powerful catalyst for virality and, most critically, for earning backlinks. A piece of content that breaks from the conventional narrative is far more likely to be cited and linked to by others in the industry. Your unique voice is not for the algorithm; it is for the people who create the links that the algorithm values.

The Critical Brand-to-SEO Bridge

Many businesses fail to understand the relationship between branding and search positioning. While a strong brand can lead to an increase in high-intent branded searches, this alone will not grant you organic visibility for non-branded, high-volume keywords. Branding and SEO are not interchangeable. Your brand’s reputation builds trust with users, but it is a data-driven link acquisition strategy that builds trust with the algorithm. Successful SEO leverages brand strength to secure the authoritative backlinks that drive true organic growth.

 

The Global Leading SEO Influencers https://primaryposition.com/blog/leading-seo-influencers/ Sun, 24 Aug 2025 15:40:40 +0000

The post The Global Leading SEO Influencers appeared first on Primary Position SEO NYC.

This list draws from a number of sources, including Favikon, Reddit, and Perplexity.

These experts are frequently invited to speak at conferences, publish extensively, and have significant followings on platforms such as LinkedIn, Twitter (X), and YouTube. Following their newsletters, blogs, and social media is an excellent way to stay current with cutting-edge SEO developments.

The 2025 list of Global Top SEO Experts

  1. AJ Ghergich
  2. Aleyda Solís
  3. David Quaid
  4. Barry Schwartz
  5. Brian Dean
  6. Carrie Rose
  7. Michael King
  8. Nathan Gotch
  9. Grumpy SEO Guy
  10. Charles Floate
  11. Ann Smarty
  12. Danny Sullivan
  13. Glenn Gabe
  14. Harry Sanders
  15. John Mueller
  16. Joost de Valk
  17. Kevin Indig
  18. Marie Haynes
  19. Matt Diggity
  20. Rand Fishkin

Why LLMs are not good at displacing Search Engines https://primaryposition.com/blog/llms-not-search-engines/ Sat, 23 Aug 2025 17:55:23 +0000

The post Why LLMs are not good at displacing Search Engines appeared first on Primary Position SEO NYC.


Why Large Language Models Excel at Language—But Not Search

As a researcher specializing in large language models (LLMs), I am constantly asked: Why can’t these impressive AI systems simply replace search engines like Google? To answer that, we must understand both what LLMs do exceptionally well—and where they fall short compared to classic web search approaches. Exploring this gap reveals crucial lessons about how we find, trust, and use digital information today.

Generation vs. Retrieval: The Fundamental Difference

LLMs are fundamentally generative tools. Given a prompt, they construct plausible language by modeling which phrase is most likely to come next, based on vast training data. What they do not do is search a library of real-time documents, evaluate their authority, and present results the way a search engine does.

For search, this distinction is critical. Google’s founding insight (manifested in PageRank) was to treat the web as a network of sources, filtering and ordering results according to their connectivity and reference by other sites. Every search query is, at its heart, a ranking exercise—a process of evaluating which public documents are most likely to contain the answer, and providing transparent links for user inspection.
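That ranking exercise can be illustrated with a toy version of PageRank (a deliberately simplified sketch, nothing like Google’s production system): each page’s score is distributed across its outgoing links, so pages referenced by well-linked pages accumulate the most authority.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration.
    links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Page C is linked by both A and B, so it ends up with the highest score.
toy_web = {"A": ["C"], "B": ["C"], "C": ["A"]}
scores = pagerank(toy_web)
print(max(scores, key=scores.get))  # C
```

The point of the sketch is the philosophy: authority comes from the link graph, not from the text of the page itself.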

Why LLM Responses May Sound Right—but Be Wrong

Because LLMs are designed to model statistical language patterns, their core “skill” is fluency, not accuracy. They may stitch together plausible-sounding answers using fragments from their training data, even when those fragments don’t come from authoritative or up-to-date sources. This produces what practitioners call hallucinations: answers that appear correct, but are fabricated, incomplete, or misleading.

In contrast, a classical search engine retrieves documents, preserves their original context, and allows users to judge primary sources for themselves. By surfacing links, a search engine enables cross-verification—a methodological safeguard that generative outputs lack by default.

The PageRank Philosophy and Why Trust Matters

Google’s PageRank algorithm revolutionized search by prioritizing documents that were cited by many other authoritative sources. The central idea—importance by reference—let the system find not just any answer, but the answer most “trusted” by the medium of the web itself.

This is the opposite approach to the LLM’s. Where PageRank seeks diversity of sources and allows users to judge evidence, LLMs synthesize information internally and produce a single, often source-less narrative. This loss of provenance—a record of where an answer comes from—deeply affects the reliability and transparency of information presented to users, especially in critical fields like health, science, or finance.

Updates, Timeliness, and the Infinite Web

Traditional search engines have another crucial advantage: continuous crawling and indexing of the ever-changing web. This grants them access to up-to-the-minute facts, evolving news, and newly published research. In contrast, an LLM’s knowledge is anchored to the static datasets and time period it was trained on, sometimes months or years out of date.

Techniques like “retrieval-augmented generation” (RAG) aim to bridge this gap, enabling LLMs to fetch supporting documents from external indexes and ground their responses in current facts. But the reliability of such hybrid approaches remains subject to prompt design, document choice, and integration limitations.
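In outline, a RAG pipeline retrieves candidate documents first and only then asks the model to answer from them. The sketch below substitutes naive word overlap for the vector search a real system would use, and omits the actual LLM call entirely, since those APIs vary; it exists only to show where retrieval grounds the prompt.

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query.
    Real RAG systems use vector embeddings, but the idea is the same."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Ground the model's answer in retrieved text: RAG's core move."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "PageRank scores pages by the links pointing at them.",
    "Large language models predict the next token.",
    "Hyphens separate words in a URL slug.",
]
prompt = build_prompt("how does pagerank score pages", docs)
# The prompt now contains the PageRank document; it would then be sent
# to an LLM (the call is omitted here because provider APIs differ).
```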

Bias, Trust, and the Efficiency-Reliability Trade-off

One of the least appreciated subtleties in LLM-powered search is the efficiency-reliability trade-off. Synthesized answers can save users time, but may limit depth, diversity, and the very ability to evaluate or compare sources. Worse, LLMs (being trained mostly on web-scraped content) risk amplifying biases and narrowing viewpoints—sometimes giving one overconfident answer to what is actually a controversial or multi-perspective question.

Traditional search, despite flaws and historical biases, exposes users to a range of primary sources. The user can compare, contrast, and make up their own mind—a process fundamentally changed by the “answer synthesis” approach of LLM systems.

Looking Forward: Hybrid Models and Responsible Search

Both LLMs and search engines have their strengths: LLMs are powerful at summarization and conversational language; search engines excel at retrieving, ranking, and referencing real-world content. The most promising research now blends both philosophies: using search to ground, justify, and verify generative outputs, and using LLMs to synthesize and contextualize the flood of data retrieved.

However, for tasks where truth, transparency, or up-to-date results are paramount, classic search engines (powered by principles like PageRank) are still the gold standard. Until LLMs can robustly link their claims to trustworthy, live sources—and update those links as the digital world changes—the gap between generation and true search will remain central to our experience of information online.

Begging the Question: Why LLMs Can’t Simply Take Content at Face Value

A unique limitation of LLMs—often overlooked outside research—is their inability to truly “rank” content based on the intrinsic validity or trustworthiness of the claims made within that content. This connects to the classical logical fallacy of “begging the question,” where an argument assumes the truth of what it’s supposed to prove, rather than critically evaluating the underlying evidence.

Traditional search engines like Google, especially through heuristics like PageRank, attempt to address this by considering not just the textual content of a page, but the contextual network of links, citations, and references that help establish relative authority. In effect, the search engine does not take a claim at face value. Instead, it cross-references that claim with the wider web, using collective endorsement to infer quality and trustworthiness.

LLMs, by contrast, operate fundamentally differently. They are not claim evaluators, but pattern mimickers. When an LLM produces an answer, it draws from distributions in its training data but does not inspect the original context, intent, editorial standards, or factual basis of individual statements. This means an LLM cannot “doubt” a claim in the rigorous, third-party sense; it can only echo or synthesize information that sounds correct by historical statistical association.

Critically, this means LLMs can inadvertently perpetuate circular assumptions and even amplify incorrect or misleading statements, especially if those statements were prevalent in their training data. Unlike Google, which measures endorsement and can “downgrade” sites or sources shown to be low quality, LLMs are blind to the real-world processes of evidence, contradiction, and reputational trust. Their outputs, no matter how fluent, are not the product of critical vetting but of surface-level aggregation.
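A toy bigram model makes the amplification point concrete: with no access to evidence, the most frequent pattern in the training data wins, even when it is false. Everything here is illustrative; real LLMs are vastly larger, but they share this frequency-driven character.

```python
# Toy bigram "language model": it reproduces whatever word sequences were
# frequent in its training text, with no mechanism to check whether the
# claims in that text are true. Frequency, not evidence, drives the output.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def complete(prompt, follows, length=3):
    words = prompt.lower().split()
    for _ in range(length):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])  # echo the most frequent pattern
    return " ".join(words)

# A falsehood repeated three times outranks a correction stated once.
corpus = ["the moon is cheese", "the moon is cheese",
          "the moon is cheese", "the moon is rock"]
model = train_bigrams(corpus)
print(complete("the moon", model))  # → "the moon is cheese"
```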

For domains where the distinction between what is claimed and what is proven matters (science, health, news, legal analysis), this is a foundational limitation. Effective content ranking demands more than just language modeling—it requires a philosophical and technical infrastructure for consistently challenging and contextualizing claims, something LLMs alone cannot provide. In this sense, LLM-driven answers may "beg the question" far more often than search engine-derived results—and without meaningful recourse for the user to interrogate the validity of the information provided.

White Papers & Sources

While LLMs excel at synthesizing information and providing conversational answers, they lack the reliability, transparency, and real-time capabilities that users expect from search engines, especially for research, news, and fact-based queries.

  1. OpenAI whitepaper on LLMs struggling to be search engines
  2. https://newsletter.ericbrown.com/p/strengths-and-limitations-of-large-language-models
  3. https://lumenalta.com/insights/understanding-llms-overcoming-limitations
  4. https://promptdrive.ai/llm-limitations/
  5. https://www.projectpro.io/article/llm-limitations/1045
  6. https://www.nature.com/articles/s41598-025-96508-3
  7. https://arxiv.org/html/2412.04503v1
  8. https://methods.sagepub.com/ency/edvol/the-sage-encyclopedia-of-communication-research-methods/chpt/content-analysis-advantages-disadvantages
  9. https://aclanthology.org/2025.acl-long.1009.pdf
  10. https://www.deepchecks.com/how-to-overcome-the-limitations-of-large-language-models/
  11. https://pmc.ncbi.nlm.nih.gov/articles/PMC11756841/
  12. https://www.nature.com/articles/s41746-025-01546-w
  13. https://arxiv.org/html/2407.00128v1
  14. https://www.sciencedirect.com/science/article/pii/S0099133324000600
  15. https://www.cip.uw.edu/2024/02/18/search-engines-chatgpt-generative-artificial-intelligence-less-reliable/
  16. https://www.reddit.com/r/MachineLearning/comments/1gjoxpi/what_problems_do_large_language_models_llms/
  17. https://jswve.org/volume-21/issue-1/item-12/
  18. https://pmc.ncbi.nlm.nih.gov/articles/PMC11923074/
  19. https://mobroadband.org/trust-but-verify-how-llms-differ-from-search-engines-and-why-they-sometimes-hallucinate/

The post Why LLMs are not good at displacing Search Engines appeared first on Primary Position SEO NYC.

]]>
https://primaryposition.com/blog/llms-not-search-engines/feed/ 0
AI SEO Frequently Asked Questions in 2025 https://primaryposition.com/blog/ai-seo-questions/ https://primaryposition.com/blog/ai-seo-questions/#respond Fri, 22 Aug 2025 16:01:50 +0000 https://primaryposition.com/?p=7738 Can SEO be done by AI? Yes, SEO (Search Engine Optimization) can be assisted—and in some cases, partially automated—by AI. AI-powered tools now handle various SEO tasks, including keyword research, content optimization, site audits, competitor analysis, link building, and even drafting SEO-friendly content. These tools can quickly process large data sets, use machine learning to […]

The post AI SEO Frequently Asked Questions in 2025 appeared first on Primary Position SEO NYC.

]]>

Can SEO be done by AI?

Yes, SEO (Search Engine Optimization) can be assisted—and in some cases, partially automated—by AI. AI-powered tools now handle various SEO tasks, including keyword research, content optimization, site audits, competitor analysis, link building, and even drafting SEO-friendly content. These tools can quickly process large data sets, use machine learning to identify emerging search trends, and deliver actionable insights to improve website rankings.
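As a toy illustration of one such task, emerging-term detection can be as simple as comparing word frequencies across two periods of a hypothetical query log. Commercial tools do this at far larger scale with richer models; the sample queries and the growth threshold below are illustrative only.

```python
# Surface emerging query terms by comparing term frequencies between two
# periods of a (hypothetical) search-query log.
from collections import Counter

def emerging_terms(last_month, this_month, min_growth=2.0):
    old = Counter(w for q in last_month for w in q.lower().split())
    new = Counter(w for q in this_month for w in q.lower().split())
    # Keep terms whose count grew by at least `min_growth` times.
    return sorted(w for w, c in new.items()
                  if c >= min_growth * max(old.get(w, 0), 1))

last_month = ["seo audit", "keyword research"]
this_month = ["llm seo", "llm optimization", "keyword research", "llm audit"]
print(emerging_terms(last_month, this_month))  # → ['llm']
```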

What is the AI SEO?

AI SEO is a nickname for ordinary SEO practiced with the help of AI tools, or for optimizing content so that it surfaces in AI-generated answers. It is not a separate discipline from SEO itself.

What’s the best AI SEO?

The best AI SEO tools depend on the specific needs and budget of the user. Examples include platforms like Semrush, Ahrefs, Surfer SEO, Jasper AI, and Clearscope. These tools vary in their focus—some excel at content optimization, others at technical analysis or link building. However, no tool fully replaces the strategic thinking and human judgment needed to manage complex SEO campaigns.

Can ChatGPT do SEO?

Not really – we’d give it a 2/10, as it is so poisoned by misinformation that it cannot yet reason out what Google has debunked, or filter commonly spread myths such as E-E-A-T being something “real” or tangible. ChatGPT can help with SEO tasks such as brainstorming keywords, generating SEO-friendly titles and meta descriptions, reviewing content for readability and relevance, and suggesting improvements. However, it is not integrated directly with search engine analytics or website data, so it cannot perform technical site audits, track ranking changes, or automate large-scale optimizations. It is most useful as a content and strategy assistant.

Is SEO going away with AI?

Definitely not! SEO is not going away with AI; instead, AI is changing how SEO is practiced. Search engines themselves rely on AI to evaluate content and user intent, making SEO more dynamic. Human expertise still matters to guide strategy, interpret data, and create genuinely valuable content—especially as algorithms evolve.

Can Google detect AI SEO?

It is highly unlikely that Google can detect AI-generated content, or that it even tries to, and it does not automatically penalize it. Google’s main concern is quality, relevance, and utility for users—not whether a machine or human wrote the text. Spammy, low-quality, or manipulative “AI SEO” tactics can still be flagged by Google’s algorithms.


]]>
https://primaryposition.com/blog/ai-seo-questions/feed/ 0
Understanding and Comparing Gen AI vs Google Search https://primaryposition.com/blog/ai-vs-google/ https://primaryposition.com/blog/ai-vs-google/#respond Wed, 20 Aug 2025 16:26:44 +0000 https://primaryposition.com/?p=7720 The rise of large language models (LLMs) tools or simply “AI” like ChatGPT, Gemini, Perplexity, and Claude has led to substantial confusion in the tech and SEO communities. Chief among these misconceptions is the belief that LLMs themselves act as search engines and that they “prefer” specific writing styles, brands, or schemas when ranking or referencing […]

The post Understanding and Comparing Gen AI vs Google Search appeared first on Primary Position SEO NYC.

]]>

The rise of large language model (LLM) tools, or simply “AI”, like ChatGPT, Gemini, Perplexity, and Claude has led to substantial confusion in the tech and SEO communities. Chief among these misconceptions is the belief that LLMs themselves act as search engines and that they “prefer” specific writing styles, brands, or schemas when ranking or referencing content. This misunderstanding can lead to suboptimal SEO strategies and misguided content development efforts.

LLM Tools vs Search Engines

At their core, LLMs are not search engines. Language models, such as OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Perplexity, are designed to generate human-like text based on probabilistic models built from vast corpora of data. In contrast, traditional search engines like Google, Bing, and Brave Search utilize carefully crafted algorithms (notably PageRank and its variants) to crawl, index, and rank the vast expanse of the web. Their results are determined through multiple layers of ranking signals, including backlinks, content relevance, authority, and, crucially, search intent.

When a user queries an LLM—especially in modern, web-connected implementations—the model does not directly “search” the web using its own web-crawling algorithms. Instead, it outsources the actual searching process to established search engines such as Google, Bing, or Brave Search. For instance:

  • ChatGPT powered by “Browse with Bing” fetches search results, which are then summarized and presented via the LLM.

  • Google Gemini may directly integrate Google search results into its responses, blending LLM summarization with classic search.

  • Perplexity and Claude similarly blend language model synthesis with real-time queries to engines like Bing or Brave.

Thus, LLMs act as advanced synthesis and summarization tools, layering natural language understanding atop traditional search results, rather than functioning as stand-alone indexers and rankers of the open web.
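That division of labor can be sketched as a simple handoff, with stub functions standing in for the real search engine and LLM. Note that the citations the “LLM” surfaces are exactly the engine’s ranking; the example URLs are hypothetical.

```python
# Sketch of the handoff described above: the "LLM" layer never crawls or
# ranks the web itself; it delegates retrieval to a search engine and only
# summarizes whatever that engine returns. Both callables are stand-ins.

def answer_with_search(prompt, search_engine, llm_summarize):
    results = search_engine(prompt)            # engine returns ranked results
    summary = llm_summarize(prompt, results)   # LLM synthesizes from them
    return summary, [r["url"] for r in results]  # citations = engine's ranking

# Stub engine: whatever it ranks first is what the "LLM" will cite first.
def fake_engine(query):
    return [{"url": "https://example.com/a", "snippet": "Top-ranked answer."},
            {"url": "https://example.com/b", "snippet": "Second result."}]

def fake_summarizer(query, results):
    return " ".join(r["snippet"] for r in results)

summary, cited = answer_with_search("what is pagerank",
                                    fake_engine, fake_summarizer)
print(cited[0])  # the citation order mirrors the engine's ranking
```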

Understanding PageRank

PageRank is the objective classification and ranking of content. Many people think LLMs can make human “judgments” about content, but what they really mean is “preferences”. Frankly put, preferential ranking runs into an old logical fallacy, “begging the question”, better known today as “circular logic”. Read more about this in “Why LLMs are not search engines”.

The Fallacy of LLM “Preferences” for Writing Style and Schema

A persistent myth is that these language models inherently “prefer” certain kinds of writing—such as particular headline formulas, content schemas, or brand names—and that tweaking content to these supposed preferences will improve rankings or LLM references. This myth is categorically false.

Why LLMs Don’t “Prefer” Content Patterns

LLMs lack any native schema or format in their training or serving logic that would make them favor one style over another. Their responses are shaped by:

  • The relevance and prominence of information returned by the underlying search engine.

  • The quality of the search results, as determined by ranking algorithms like Google’s PageRank.

  • The distribution of information on the web, which is itself a product of SEO best practices— but this is at the level of the search engine, not at the LLM.

As detailed on industry analysis blogs such as PrimaryPosition.com, DejanSEO, and iPullrank, and by experts on forums like Weblinkr on Reddit, claims that LLMs “prefer” particular writing schemas, templates, or branding are rooted in a misunderstanding. The actual reference or citation of content by an LLM is almost always an artifact of how that content ranks in the search engine’s results for a given query.

The True Influence is PageRank

PageRank, and its more modern variations now deeply integrated into Google and Bing’s ranking systems, ultimately decide which sites surface for a given query. LLMs overlay their “understanding” atop these results, selecting and summarizing based on what is already ranked. If a writing style or brand seems to feature more frequently in LLM outputs, it is almost certainly because that content is already well-indexed and highly ranked by the underlying search mechanism.

DejanSEO and iPullrank have discussed in depth how LLM-based assistants do not have intrinsic preferences for how information is presented. Their decisions are reactive, mirroring the quality, authority, and positioning conferred by search engines.

LLMs are just Outsourcing Search

Both primary research and hands-on experimentation confirm that LLMs operate in the following manner:

  1. User submits a prompt.

  2. LLM hands off the search request to an integrated engine (Google, Bing, Brave, etc.).

  3. Engine returns ranked web results, shaped by SEO and PageRank.

  4. LLM summarizes or selectively cites from those results.

The LLM does not, at any point, apply its own crawl, rank, or “preference” logic to the raw web. This is vital for SEOs, content creators, and marketers to understand—PageRank and SEO best practices determine web visibility, not an LLM’s assessment of prose style or brand gravitas.

Conclusion

The belief that LLMs themselves “prefer” certain styles, structures, or schemas is unfounded. Instead, LLMs reflect and synthesize what is most visible in the major search engines—meaning that classic search engine optimization and PageRank authority remain the true gatekeepers to appearing in LLM responses. Brands and creators should focus on traditional SEO and content quality, not on chasing imaginary LLM “preferences”.

For a deep dive on this subject, see discussions from Ann Smarty, Dejan SEO, and iPullrank, where seasoned SEOs clarify these distinctions and warn against misallocating resources to mythical LLM preference hacks.



]]>
https://primaryposition.com/blog/ai-vs-google/feed/ 0
A Strategy from a Top Enterprise SEO Agency https://primaryposition.com/blog/a-strategy-from-a-top-enterprise-seo-agency/ https://primaryposition.com/blog/a-strategy-from-a-top-enterprise-seo-agency/#respond Sun, 17 Aug 2025 04:51:14 +0000 https://primaryposition.com/?p=7714 In the dynamic world of digital marketing, an enterprise SEO agency needs to be more than just a service provider; it needs to be a strategic partner. At PrimaryPosition.com, we’ve built our reputation on this very principle. We are a top-tier enterprise SEO agency specializing in helping B2B, SaaS, and technology companies achieve extraordinary growth […]

The post A Strategy from a Top Enterprise SEO Agency appeared first on Primary Position SEO NYC.

]]>

In the dynamic world of digital marketing, an enterprise SEO agency needs to be more than just a service provider; it needs to be a strategic partner. At PrimaryPosition.com, we’ve built our reputation on this very principle. We are a top-tier enterprise SEO agency specializing in helping B2B, SaaS, and technology companies achieve extraordinary growth through search.

Our approach is rooted in two decades of experience and a relentless focus on measurable results. We understand that enterprise-level SEO isn’t just about keywords and rankings; it’s about driving a tangible return on investment, increasing organic traffic, and boosting conversions. We’ve helped our clients achieve remarkable outcomes, including a 400% increase in organic traffic and over 1700% growth in inbound conversions.

A Data-Driven and AI-Enhanced Approach

The SEO landscape is constantly evolving, and staying ahead of the curve is our priority. That’s why we have embraced the power of AI to supercharge our strategies. We don’t just adapt to change; we lead it. Our AI-driven services include:

  • Generative Search Optimization (GSO): We optimize your content to be a trusted source for AI-powered search results, making your brand a cited authority.
  • Predictive Analytics: We use AI to forecast keyword performance and identify new opportunities before your competitors even know they exist.
  • Enhanced Data Analysis: Our data-driven decisions are powered by deep insights into user behavior, ensuring every action we take is smart and effective.

More Than Just Rankings: Building Authority and Trust

While many agencies focus on vanity metrics, we focus on what truly matters: your bottom line. We work closely with you to create a clear and concise search marketing plan that promotes your website and improves your visibility where it counts. Our founder, David Quaid, is a recognized authority in the SEO industry, and his insights are regularly featured in leading publications and discussions, including on platforms like Reddit.

We believe that true success comes from a combination of a solid foundation and innovative strategies. We meticulously handle all aspects of your SEO, including technical SEO, content strategy, and authority building. We also understand the critical role of backlinking, and we build your brand’s authority through a combination of high-quality backlinks and strategic outreach.

At Primary Position, our mission is not just to be the best in our field, but to work with the best clients to create the most successful campaigns. Our end-to-end understanding of the web allows us to develop a unique, tailor-made digital strategy that ensures you achieve your goals and stand out in your market.


]]>
https://primaryposition.com/blog/a-strategy-from-a-top-enterprise-seo-agency/feed/ 0