13.3. robotparser — Parser for robots.txt

Note

The robotparser module has been renamed urllib.robotparser in Python 3. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.

This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the Web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.

class robotparser.RobotFileParser(url='')

This class provides methods to read, parse and answer questions about the robots.txt file at url.

set_url(url)

Sets the URL referring to a robots.txt file.

read()

Reads the robots.txt URL and feeds it to the parser.

parse(lines)

Parses the lines argument.
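As a sketch of feeding parse() directly (the rules and example.com URLs below are made up for illustration), the lines can come from anywhere, such as a local cache rather than a live fetch; the import fallback follows the renaming note above:

```python
try:
    import robotparser                         # Python 2
except ImportError:
    import urllib.robotparser as robotparser   # renamed in Python 3

# Hypothetical robots.txt content; in practice these lines might
# come from a cache or a download performed elsewhere.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "http://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "http://example.com/index.html"))         # True
```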

can_fetch(useragent, url)

Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.

mtime()

Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.

modified()

Sets the time the robots.txt file was last fetched to the current time.
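The periodic-refresh pattern mentioned above can be sketched as follows. This is a minimal illustration, assuming a made-up example.com URL and an arbitrary one-hour refresh interval; the parse() call stands in for an earlier read():

```python
import time

try:
    import robotparser                         # Python 2
except ImportError:
    import urllib.robotparser as robotparser   # renamed in Python 3

REFRESH_SECONDS = 3600  # arbitrary interval chosen for this sketch

rp = robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")     # hypothetical URL
rp.parse(["User-agent: *", "Disallow: /tmp/"])  # stand-in for an earlier read()

# Loading the rules records the fetch time, so mtime() tells us how
# stale the cached rules are; refetch only past the chosen interval.
if time.time() - rp.mtime() > REFRESH_SECONDS:
    rp.read()  # re-downloads robots.txt from the URL set above
```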

The following example demonstrates basic use of the RobotFileParser class.

>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True