Robots Meta Tag Generator
Control search engine indexing and crawling behaviour with robots meta tags
Generated Code
<meta name="robots" content="index, follow">
<meta property="og:type" content="website">
<meta name="twitter:card" content="summary_large_image">
<meta name="theme-color" content="#000000">
<meta name="viewport" content="width=device-width, initial-scale=1">
💡 Tip: Copy these tags and paste them inside the <head> section of your HTML document.
About Robots Meta Tag Generator
Robots meta tags give you fine-grained control over how search engines interact with individual pages on your website. While robots.txt controls which pages crawlers can visit, robots meta tags control what they do with those pages after visiting: whether to index them, follow their links, show snippets in search results, or cache them.
Getting robots meta tags right is critical for technical SEO. A misplaced noindex tag on an important page can remove it from search results entirely. Missing noindex tags on pages like thank-you pages, admin screens, and duplicate content can waste crawl budget and dilute your site's quality signals. Both mistakes are common and both have real consequences for search visibility.
The robots meta tag generator creates correctly formatted directives for every scenario, from standard index/follow configurations for public pages to complex combinations of noindex, nofollow, nosnippet, and noimageindex for pages requiring precise crawl control.
Robots Meta Tags vs robots.txt
Robots meta tags and robots.txt work at different levels and serve complementary purposes. robots.txt controls access: it tells crawlers which URLs they're allowed to visit. Robots meta tags control behaviour: they tell crawlers what to do with pages they've already visited.
This distinction has an important implication: if you block a page in robots.txt, search engines can't read the noindex tag on that page. For pages you want to prevent from appearing in search results, noindex meta tags are usually the correct approach, not robots.txt blocking. Use robots.txt to block pages you never want crawled, and noindex tags for pages you don't want indexed.
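A concrete illustration of the wrong and right combination for de-indexing a page (the paths here are placeholders, not recommendations for any specific site):

```
# robots.txt — WRONG for de-indexing: blocking the URL means
# Google can never see the noindex tag on /thank-you
User-agent: *
Disallow: /thank-you

# Right: leave /thank-you crawlable in robots.txt and put this
# in the page's <head> instead:
#   <meta name="robots" content="noindex">
```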
The X-Robots-Tag HTTP header is an alternative to meta robots tags that works for non-HTML files like PDFs and images. For HTML pages, the meta robots tag is the standard approach.
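Since this page's framework examples use Next.js, here is one way to send the X-Robots-Tag header for non-HTML assets; a sketch using the Next.js `headers()` option in next.config.ts, where the `/downloads` path pattern is an assumption for illustration:

```typescript
// next.config.ts — X-Robots-Tag for non-HTML files
// (path pattern is a hypothetical example)
const nextConfig = {
  async headers() {
    return [
      {
        // Keep downloadable files (e.g. PDFs) out of the index
        source: "/downloads/:path*",
        headers: [{ key: "X-Robots-Tag", value: "noindex, nofollow" }],
      },
    ];
  },
};

export default nextConfig;
```

The header reaches crawlers on the HTTP response itself, so it works for PDFs and images that have no <head> to carry a meta tag.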
Key Considerations
index vs noindex
noindex tells search engines not to include the page in their index: it won't appear in search results. Use noindex for thank-you pages, checkout flows, login pages, admin interfaces, duplicate content, and paginated pages beyond page 1. Only index pages you'd be happy for any user to find via search.
follow vs nofollow
nofollow tells search engines not to follow the links on a page or pass link equity through them. A noindex page can still have its links followed. Use nofollow sparingly; it's most appropriate for pages with untrusted third-party links or paid link pages.
nosnippet and max-snippet
nosnippet prevents Google from showing a text snippet in search results. max-snippet:[n] limits the snippet to n characters. These are useful for paywalled content where showing too much in snippets reduces subscription conversions.
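For paywalled pages, Google also supports the data-nosnippet HTML attribute, which excludes specific elements from snippets while leaving the rest of the page snippet-eligible. A sketch (the markup around the attribute is a placeholder):

```
<!-- Page stays indexed, with snippets capped at 150 characters -->
<meta name="robots" content="index, follow, max-snippet:150" />

<!-- Only this section is excluded from snippets -->
<div data-nosnippet>
  Subscriber-only article body…
</div>
```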
Per-Bot Directives
You can target specific crawlers by using the bot's name as the meta name: <meta name='googlebot' content='noindex'> applies only to Google, while <meta name='robots'> applies to all compliant crawlers. Combining the two lets you set different indexing behaviour per search engine.
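When both a general and a bot-specific tag apply to the same crawler, it helps to compute the effective result. This sketch applies the most-restrictive rule Google documents for conflicting directives; it only resolves index/follow and assumes the tag content values have already been extracted:

```typescript
// Resolve the effective index/follow state for one bot, given the
// content values of every robots meta tag that applies to it.
// Most restrictive wins: any noindex/nofollow/none overrides index/follow.
const resolveDirectives = (contents: string[]) => {
  const all = contents.flatMap((c) =>
    c.toLowerCase().split(",").map((d) => d.trim())
  );
  return {
    index: !all.includes("noindex") && !all.includes("none"),
    follow: !all.includes("nofollow") && !all.includes("none"),
  };
};
```

For example, with <meta name="robots" content="index, follow"> and <meta name="googlebot" content="noindex"> on the same page, Googlebot's effective state is resolveDirectives(["index, follow", "noindex"]), i.e. index: false, follow: true.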
Common Robots Tag Issues
Accidental noindex
- noindex left on pages after migrating from staging to production
- A CMS default that adds noindex to new pages, which is then never removed
- noindex inherited from a page template and applied site-wide
- A plugin or theme unintentionally adding noindex to category or archive pages
Missing noindex
- Thank-you and confirmation pages appearing in search results
- Paginated pages beyond page 1 indexed without unique content
- URL parameter variants indexed alongside canonical versions
- Development and staging pages accidentally accessible and indexed
Configuration Conflicts
- noindex in the meta tag but the page blocked in robots.txt, so Google cannot read the noindex
- Conflicting robots directives from multiple plugins
- Googlebot-specific and general robots tags conflicting with each other
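The plugin-conflict cases above can be caught automatically by scanning a page's HTML for contradictory directives. A minimal sketch, assuming tags follow straightforward attribute ordering (a full parser would be more robust):

```typescript
// Flag pages carrying both index-allowing and index-blocking robots
// directives, e.g. from two plugins each writing their own tags.
const findRobotsConflicts = (html: string): string[] => {
  const tags =
    html.match(/<meta[^>]*name=["'](?:robots|googlebot)["'][^>]*>/gi) ?? [];
  const joined = tags.join(" ").toLowerCase();
  const conflicts: string[] = [];
  // Strip the negative form before testing for the positive one,
  // so "noindex" alone doesn't register as "index"
  if (/\bnoindex\b/.test(joined) && /\bindex\b/.test(joined.replace(/noindex/g, ""))) {
    conflicts.push("both index and noindex present");
  }
  if (/\bnofollow\b/.test(joined) && /\bfollow\b/.test(joined.replace(/nofollow/g, ""))) {
    conflicts.push("both follow and nofollow present");
  }
  return conflicts;
};
```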
Implementation Guide
Common Robots Meta Tag Configurations
Ready-to-use robots meta tags for the most common scenarios:
<!-- Standard public page -->
<meta name="robots" content="index, follow" />
<!-- Page to exclude from search results -->
<meta name="robots" content="noindex, follow" />
<!-- Completely exclude β no index, no link following -->
<meta name="robots" content="noindex, nofollow" />
<!-- No snippet, but still indexed -->
<meta name="robots" content="index, follow, nosnippet" />
<!-- Limit snippet length -->
<meta name="robots" content="index, follow, max-snippet:150" />
<!-- Google-specific directives -->
<meta name="googlebot" content="noindex, nofollow" />
<!-- Full control with all directives -->
<meta name="robots" content="index, follow, max-snippet:200, max-image-preview:large, max-video-preview:-1" />
Next.js 15 Robots Metadata
Configure robots directives using the Next.js metadata API:
// app/your-page/page.tsx
export const metadata = {
  // Standard public page
  robots: {
    index: true,
    follow: true,
    googleBot: {
      index: true,
      follow: true,
      "max-image-preview": "large",
      "max-snippet": -1,
    },
  },
};

// Page to exclude from search — Next.js only reads the `metadata`
// export, so use this shape as `metadata` in that page's own page.tsx:
export const metadataNoIndex = {
  robots: { index: false, follow: true },
};

// Thank-you page:
export const metadataThankYou = {
  robots: { index: false, follow: false },
};
Audit Robots Tags Across Your Site
Find accidental noindex tags before they harm rankings:
// Simple check for noindex on a URL
const checkRobotsMeta = async (url: string) => {
  const response = await fetch(url);
  const html = await response.text();
  const robotsTags =
    html.match(/<meta[^>]*name=["']robots["'][^>]*>/gi) ?? [];
  return {
    url,
    // Look for noindex inside robots meta tags rather than the whole
    // document, so page copy mentioning "noindex" isn't a false positive
    hasNoIndex: robotsTags.some((tag) => /noindex/i.test(tag)),
    robotsTags,
  };
};

// Run against your most important pages after
// any CMS update, plugin change, or deployment
Common Use Cases
- Preventing thank-you, checkout, and admin pages from appearing in search
- Removing duplicate or paginated content from Google's index
- Protecting paywalled content from being fully shown in snippets
- Auditing a site migration to catch accidental noindex directives
- Configuring staging environments to prevent accidental indexing
Pro Tip
After any major site update, CMS migration, or plugin change, audit your robots meta tags. Accidental noindex tags are one of the most common causes of mysterious ranking drops: a plugin update or template change can silently add noindex to thousands of pages. Catching it quickly minimises the damage.