Robots are programs that automatically crawl the Web and retrieve documents. Unlike web browsers such as Internet Explorer or Firefox, which are operated by humans and do not automatically follow references to other documents, robots retrieve referenced documents recursively. Robots are most often referred to as crawlers, bots, or spiders. These robots visit sites by requesting documents from them. Search engines such as Google, Yahoo!, and MSN Search employ robots to crawl web documents so that they can be indexed and returned as search results.
Robots decide which sites to visit based on a historical list of URLs, favoring documents with many links elsewhere. A directory or any web page that lists external links is a candidate for a robot visit. Most search engines also allow you to submit URLs manually; these are queued and later visited by the robot. Robots select URLs to visit and parse them as a source of new URLs. Most robots (benevolent robots, at least) routinely check for a special file called “robots.txt”, which can be installed by the server administrator of any web site. There are several reasons why a webmaster might want to exclude a robot from visiting a site. One very common reason is the large amount of bandwidth that robots consume. A webmaster may also want to keep robots away from sensitive information, images, or other files.
To prevent all robots from visiting your site, put these two lines into the /robots.txt file that lives in the root directory of the server:

User-agent: *
Disallow: /
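Assuming a compliant crawler, the effect of this blanket exclusion can be checked with Python’s standard-library urllib.robotparser module (the file contents are inlined below for illustration):

```python
# Verify a catch-all robots.txt exclusion using only the standard library.
from urllib.robotparser import RobotFileParser

# The two-line file that disallows every URL for every robot.
rules = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# No compliant robot may fetch any path on the server.
print(parser.can_fetch("AnyBot", "/"))            # False
print(parser.can_fetch("AnyBot", "/index.html"))  # False
```

Because the User-agent line is “*”, the result is the same no matter which robot name is queried.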
But rarely does a webmaster want to exclude robots from visiting an entire site. More often, webmasters write a structured text file instructing robots to stay away from certain areas of the server, and they can even choose which robots to allow or disallow. Below is an example of how such exclusions may be written inside a robots.txt file:
# /robots.txt file for http://www.google.com
# mail webmaster@google.com for constructive criticism

User-agent: Googlebot
Disallow:

User-agent: sillycrawler
Disallow: /

User-agent: *
Disallow: /tmp
Disallow: /cgi-bin
The first two lines, starting with ‘#’, are comments.
The first example specifies that the robot called “Googlebot” is allowed to go anywhere.
The second example indicates that the robot called “sillycrawler” is disallowed from all relative URLs starting with ‘/’. Because every relative URL on a server starts with ‘/’, this means the entire site is off limits to it.
The third example indicates that all other robots should not visit URLs starting with /tmp or /cgi-bin. The “*” is a special token that refers to “any other User-agent”; wildcard patterns or regular expressions cannot be used in either User-agent or Disallow lines.
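As a sketch of how these records are interpreted in practice, Python’s standard-library urllib.robotparser applies exactly the per-robot rules described above (the file contents and robot names are taken from the example; “OtherBot” is a made-up name standing in for any robot not listed):

```python
# Parse the example robots.txt and query it per robot, using only the stdlib.
from urllib.robotparser import RobotFileParser

rules = """\
# /robots.txt file for http://www.google.com

User-agent: Googlebot
Disallow:

User-agent: sillycrawler
Disallow: /

User-agent: *
Disallow: /tmp
Disallow: /cgi-bin
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot has an empty Disallow line, so it may go anywhere.
print(parser.can_fetch("Googlebot", "/cgi-bin/search"))  # True
# sillycrawler is disallowed from '/', i.e. from the entire site.
print(parser.can_fetch("sillycrawler", "/index.html"))   # False
# Every other robot is kept out of /tmp and /cgi-bin only.
print(parser.can_fetch("OtherBot", "/tmp/cache"))        # False
print(parser.can_fetch("OtherBot", "/index.html"))       # True
```

A robot looks for the first User-agent record that matches its own name and obeys only that record; the “*” record is used only when no named record matches.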
Source: Robots.txt FAQ by Martijn Koster.