Google robots.txt noindex

Google no longer supports noindex in robots.txt

Although Google never officially supported a ‘noindex’ directive in robots.txt, that unofficial support ends on September 1st, 2019.

As part of its effort to standardize the robots.txt protocol, Google has announced that it will stop supporting noindex and other unpublished rules in robots.txt as of September 1st, 2019.
“In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options,” the company said.

What are the Alternatives to noindex?

The recommended and easiest way to keep content out of Google for good is to password-protect the page, post, or site; in WordPress there are easy-to-use plugins for that. Google's announcement also points to the noindex robots meta tag, the X-Robots-Tag HTTP header, 404 and 410 HTTP status codes, Disallow rules in robots.txt, and the Search Console Remove URL tool, as sketched below.
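A minimal sketch of the two supported noindex mechanisms (the placement comments are illustrative):

<!-- In the <head> of any HTML page that should not be indexed -->
<meta name="robots" content="noindex">

# Or sent as an HTTP response header, useful for non-HTML files such as PDFs
X-Robots-Tag: noindex

Note that the page must remain crawlable (not blocked in robots.txt), otherwise Googlebot never sees the tag or header.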


What is the robots.txt file?

A robots.txt file tells search engine crawlers which pages or files the crawler can and cannot request from your site. It is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a page out of Google's index.

Source: https://www.seo-suedwest.de/

Example

# robots.txt for example.com
# Exclude these web crawlers entirely
User-agent: Sidewinder
Disallow: /

User-agent: Microsoft.URL.Control
Disallow: /

# These directories/files should not be crawled
User-agent: *
Disallow: /default.html
Disallow: /Temp/ # search engines will not re-crawl this content; whether already-indexed content is removed is unspecified
Disallow: /Privat/Family/Birthdays.html # not secret, but should not be crawled by search engines
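For contrast, a sketch of the kind of unofficial rule this announcement retires (the path is a hypothetical example):

User-agent: *
# Unpublished 'noindex' rule: ignored by Googlebot as of September 1, 2019
Noindex: /old-page.html

After that date, such pages should instead carry the robots meta tag or the X-Robots-Tag header shown above; Disallow only blocks crawling and does not reliably remove a page from the index.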
