Google must be peeking at pages they are expressly told not to index in robots.txt, on purpose, just to see what in the heck is being hidden. There is no technical reason imaginable that they can't read the same robots.txt everyone else reads: neither Yahoo, MSN, nor Teoma has ever crawled the pages marked off limits, yet Google just can't seem to control themselves and keep their damned bots off those pages.
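For reference, the directives in question are about as simple as a config gets; any compliant crawler reads this file and skips the disallowed paths entirely (the /private/ path below is just a placeholder, not the actual directory being hit):

    User-agent: *
    Disallow: /private/

Two lines: the first says the rule applies to every bot, the second says stay out of /private/. Yahoo, MSN, and Teoma all manage to honor it.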
So which is it, Google?
- Everything at Google is still in BETA, so what do you expect?
- Our engineering dept. just can't get all the bugs out; get over it.
- We peek regardless, because we're Google and we can.
1 comment:
Are you still catching the Googlebots where they shouldn't be?
Don't bother asking Google questions, as they don't have any answers, and my 15-month-old granddaughter can write better code than Google can.