How is it possible that my page /admin/login.asp is found in Google with the query "inurl:admin/login.asp" while it isn't with the "site:www.domain.xx" query?

I have these lines in my robots.txt:

User-agent: *
Disallow: /admin/ 

And this in the HTML code of the page:

<meta name="robots" content="noindex, nofollow" /> 

Any ideas?

  • Did you add the "noindex" meta tag at a later time? w3d over 6 years ago

2 answers

I assume this is a link-only reference shown by Google, with no title or description? Otherwise there could be a problem with your robots.txt file (or it's a Google bug?!)

Having a disallow rule in your robots.txt file just means that robots should not crawl the page and index its contents. If anything links to this page, Google may still show a URL-only entry in the SERPs - this is by design. However, a "noindex" robots meta tag in the page itself (like the one you appear to have) should prevent even that bare link from appearing in the SERPs - although bear in mind that a crawler can only obey the meta tag if it's allowed to fetch the page, and your robots.txt disallow blocks exactly that fetch. It also makes me wonder whether the meta tag was added later? Or the page is new and Google hasn't re-checked it yet?!
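If the aim is to get the URL dropped entirely, one common approach (a sketch only, assuming nothing else under /admin/ actually needs blocking) is to stop disallowing the page so crawlers can fetch it and see the meta tag:

User-agent: *
Disallow:

<meta name="robots" content="noindex, nofollow" />

Once Google has re-crawled the page and registered the "noindex", you could reinstate the Disallow rule if you want to keep robots out of /admin/ generally.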

If the meta tag was a more recent addition then it may disappear with time, or you can actively request this URL to be removed.
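Alternatively - just as an option, not something you've mentioned - the same directive can be sent as an HTTP response header instead of a meta tag; Google documents this as the X-Robots-Tag header (the page must still be crawlable for the header to be seen):

X-Robots-Tag: noindex, nofollow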

Answered over 6 years ago by w3d
  • You say there's a difference between the disallow rule in the robots.txt file and a "noindex" robots meta tag? waanders over 6 years ago

No, both were uploaded at the same time, 4 months ago. See also my question on Stack Overflow.

Answered over 6 years ago by waanders