urlparse — Parse URLs into components ¶
Note
The urlparse module is renamed to urllib.parse in Python 3. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.
Source code: Lib/urlparse.py
This module defines a standard interface to break Uniform Resource Locator (URL) strings up into components (addressing scheme, network location, path etc.), to combine the components back into a URL string, and to convert a "relative URL" to an absolute URL given a "base URL".
The module has been designed to match the Internet RFC on Relative Uniform Resource Locators. It supports the following URL schemes: file, ftp, gopher, hdl, http, https, imap, mailto, mms, news, nntp, prospero, rsync, rtsp, rtspu, sftp, shttp, sip, sips, snews, svn, svn+ssh, telnet, wais.
New in version 2.5: Support for the sftp and sips schemes.
The urlparse module defines the following functions:
urlparse.urlparse(urlstring[, scheme[, allow_fragments]]) ¶
Parse a URL into six components, returning a 6-tuple. This corresponds to the general structure of a URL: scheme://netloc/path;parameters?query#fragment. Each tuple item is a string, possibly empty. The components are not broken up into smaller parts (for example, the network location is a single string), and % escapes are not expanded. The delimiters as shown above are not part of the result, except for a leading slash in the path component, which is retained if present. For example:
>>> from urlparse import urlparse
>>> o = urlparse('http://www.cwi.nl:80/%7Eguido/Python.html')
>>> o
ParseResult(scheme='http', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
params='', query='', fragment='')
>>> o.scheme
'http'
>>> o.port
80
>>> o.geturl()
'http://www.cwi.nl:80/%7Eguido/Python.html'
Following the syntax specifications in RFC 1808, urlparse recognizes a netloc only if it is properly introduced by '//'. Otherwise the input is presumed to be a relative URL and thus to start with a path component.
>>> from urlparse import urlparse
>>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html')
ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
params='', query='', fragment='')
>>> urlparse('www.cwi.nl/%7Eguido/Python.html')
ParseResult(scheme='', netloc='', path='www.cwi.nl/%7Eguido/Python.html',
params='', query='', fragment='')
>>> urlparse('help/Python.html')
ParseResult(scheme='', netloc='', path='help/Python.html', params='',
query='', fragment='')
If the scheme argument is specified, it gives the default addressing scheme, to be used only if the URL does not specify one. The default value for this argument is the empty string.
If the allow_fragments argument is false, fragment identifiers are not recognized and are instead parsed as part of the preceding component, even if the URL's addressing scheme normally does support them. The default value for this argument is True.
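For instance, with allow_fragments disabled the `#` and everything after it stay attached to the preceding component. A minimal sketch (the example URL is illustrative; the try/except import follows the Python 2/3 rename noted above):

```python
# Python 2 module; renamed to urllib.parse in Python 3 (see note above).
try:
    from urlparse import urlparse
except ImportError:
    from urllib.parse import urlparse

# Default allow_fragments=True: the fragment is split out.
r1 = urlparse('http://example.com/doc#intro')
print(r1.fragment)  # 'intro'

# allow_fragments=False: '#intro' remains part of the path component.
r2 = urlparse('http://example.com/doc#intro', allow_fragments=False)
print(r2.fragment)  # ''
print(r2.path)      # '/doc#intro'
```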
The return value is actually an instance of a subclass of tuple. This class has the following additional read-only convenience attributes:
| Attribute | Index | Value | Value if not present |
|---|---|---|---|
| scheme | 0 | URL scheme specifier | scheme parameter |
| netloc | 1 | Network location part | empty string |
| path | 2 | Hierarchical path | empty string |
| params | 3 | Parameters for last path element | empty string |
| query | 4 | Query component | empty string |
| fragment | 5 | Fragment identifier | empty string |
| username | | User name | None |
| password | | Password | None |
| hostname | | Host name (lower case) | None |
| port | | Port number as integer, if present | None |
See section Results of urlparse() and urlsplit() for more information on the result object.
Characters in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of /, ?, #, @, or : will raise ValueError. If the URL is decomposed before parsing, or is not a Unicode string, no error will be raised.
Changed in version 2.5: Added attributes to return value.
Changed in version 2.7: Added IPv6 URL parsing capabilities.
Changed in version 2.7.17: Characters that affect netloc parsing under NFKC normalization will now raise ValueError.
urlparse.parse_qs(qs[, keep_blank_values[, strict_parsing[, max_num_fields]]]) ¶
Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a dictionary. The dictionary keys are the unique query variable names and the values are lists of values for each name.
The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.
The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception.
The optional argument max_num_fields is the maximum number of fields to read. If set, a ValueError is raised if more than max_num_fields fields are read.
Use the urllib.urlencode() function to convert such dictionaries into query strings.
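A short sketch of both behaviors described above (the query string is illustrative; the try/except import follows the Python 2/3 rename noted at the top):

```python
try:
    from urlparse import parse_qs   # Python 2
except ImportError:
    from urllib.parse import parse_qs  # Python 3 (module renamed)

qs = 'name=guido&tag=python&tag=bdfl&empty='

# By default, blank values are dropped entirely; repeated names
# accumulate into a list under one key.
print(parse_qs(qs))
# {'name': ['guido'], 'tag': ['python', 'bdfl']}

# keep_blank_values=True retains blanks as empty strings.
print(parse_qs(qs, keep_blank_values=True))
# {'name': ['guido'], 'tag': ['python', 'bdfl'], 'empty': ['']}
```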
New in version 2.6: Copied from the cgi module.
Changed in version 2.7.16: Added the max_num_fields parameter.
urlparse.parse_qsl(qs[, keep_blank_values[, strict_parsing[, max_num_fields]]]) ¶
Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a list of name, value pairs.
The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.
The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception.
The optional argument max_num_fields is the maximum number of fields to read. If set, a ValueError is raised if more than max_num_fields fields are read.
Use the urllib.urlencode() function to convert such lists of pairs into query strings.
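Unlike parse_qs(), the pair-list form preserves the order of repeated names. A minimal sketch (illustrative query string):

```python
try:
    from urlparse import parse_qsl   # Python 2
except ImportError:
    from urllib.parse import parse_qsl  # Python 3 (module renamed)

# Repeated names stay in order as separate (name, value) pairs.
pairs = parse_qsl('tag=python&tag=bdfl&name=guido')
print(pairs)  # [('tag', 'python'), ('tag', 'bdfl'), ('name', 'guido')]
```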
New in version 2.6: Copied from the cgi module.
Changed in version 2.7.16: Added the max_num_fields parameter.
urlparse.urlunparse(parts) ¶
Construct a URL from a tuple as returned by urlparse(). The parts argument can be any six-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent).
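A brief sketch of both uses (illustrative URLs; try/except import per the Python 2/3 rename noted above):

```python
try:
    from urlparse import urlparse, urlunparse   # Python 2
except ImportError:
    from urllib.parse import urlparse, urlunparse  # Python 3

# Round-trip a parsed result back into a string.
parts = urlparse('http://www.example.com/path?q=1#top')
print(urlunparse(parts))
# 'http://www.example.com/path?q=1#top'

# Any six-item iterable works; empty components are simply omitted.
print(urlunparse(('https', 'example.com', '/a', '', '', '')))
# 'https://example.com/a'
```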
urlparse.urlsplit(urlstring[, scheme[, allow_fragments]]) ¶
This is similar to urlparse(), but does not split the params from the URL. This should generally be used instead of urlparse() if the more recent URL syntax allowing parameters to be applied to each segment of the path portion of the URL (see RFC 2396) is wanted. A separate function is needed to separate the path segments and parameters. This function returns a 5-tuple: (addressing scheme, network location, path, query, fragment identifier).
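The difference from urlparse() can be seen with a segment parameter; a minimal sketch (illustrative URL):

```python
try:
    from urlparse import urlsplit   # Python 2
except ImportError:
    from urllib.parse import urlsplit  # Python 3 (module renamed)

# The ';type=a' segment parameter stays inside path; urlparse()
# would split params off only from the last path segment.
s = urlsplit('http://example.com/dir;type=a/file?q=1#frag')
print(s.path)      # '/dir;type=a/file'
print(s.query)     # 'q=1'
print(s.fragment)  # 'frag'
```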
The return value is actually an instance of a subclass of tuple. This class has the following additional read-only convenience attributes:
| Attribute | Index | Value | Value if not present |
|---|---|---|---|
| scheme | 0 | URL scheme specifier | scheme parameter |
| netloc | 1 | Network location part | empty string |
| path | 2 | Hierarchical path | empty string |
| query | 3 | Query component | empty string |
| fragment | 4 | Fragment identifier | empty string |
| username | | User name | None |
| password | | Password | None |
| hostname | | Host name (lower case) | None |
| port | | Port number as integer, if present | None |
See section Results of urlparse() and urlsplit() for more information on the result object.
Characters in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of /, ?, #, @, or : will raise ValueError. If the URL is decomposed before parsing, or is not a Unicode string, no error will be raised.
New in version 2.2.
Changed in version 2.5: Added attributes to return value.
Changed in version 2.7.17: Characters that affect netloc parsing under NFKC normalization will now raise ValueError.
urlparse.urlunsplit(parts) ¶
Combine the elements of a tuple as returned by urlsplit() into a complete URL as a string. The parts argument can be any five-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent).
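A one-line sketch of assembling a URL from its five parts (illustrative values):

```python
try:
    from urlparse import urlunsplit   # Python 2
except ImportError:
    from urllib.parse import urlunsplit  # Python 3 (module renamed)

# Order is (scheme, netloc, path, query, fragment).
print(urlunsplit(('https', 'example.com', '/a', 'x=1', 'top')))
# 'https://example.com/a?x=1#top'
```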
New in version 2.2.
urlparse.urljoin(base, url[, allow_fragments]) ¶
Construct a full ("absolute") URL by combining a "base URL" (base) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. For example:
>>> from urlparse import urljoin
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html', 'FAQ.html')
'http://www.cwi.nl/%7Eguido/FAQ.html'
The allow_fragments argument has the same meaning and default as for urlparse().
Note
If url is an absolute URL (that is, starting with // or scheme://), the url's host name and/or scheme will be present in the result. For example:
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html',
... '//www.python.org/%7Eguido')
'http://www.python.org/%7Eguido'
If you do not want that behavior, preprocess the url with urlsplit() and urlunsplit(), removing possible scheme and netloc parts.
urlparse.urldefrag(url) ¶
If url contains a fragment identifier, returns a modified version of url with no fragment identifier, and the fragment identifier as a separate string. If there is no fragment identifier in url, returns url unmodified and an empty string.
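Both cases above in a minimal sketch (illustrative URLs; the result unpacks as a (url, fragment) pair):

```python
try:
    from urlparse import urldefrag   # Python 2
except ImportError:
    from urllib.parse import urldefrag  # Python 3 (module renamed)

# With a fragment: it is split off and returned separately.
url, frag = urldefrag('http://example.com/doc#section2')
print(url)   # 'http://example.com/doc'
print(frag)  # 'section2'

# Without a fragment: the URL comes back unmodified, plus an empty string.
url, frag = urldefrag('http://example.com/doc')
print(url)   # 'http://example.com/doc'
print(frag)  # ''
```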
See also
RFC 3986 - Uniform Resource Identifiers
This is the current standard (STD66). Any changes to the urlparse module should conform to this. Certain deviations could be observed, which are mostly for backward compatibility purposes and for certain de-facto parsing requirements as commonly observed in major browsers.
RFC 2732 - Format for Literal IPv6 Addresses in URL's
This specifies the parsing requirements of IPv6 URLs.
RFC 2396 - Uniform Resource Identifiers (URI): Generic Syntax
Document describing the generic syntactic requirements for both Uniform Resource Names (URNs) and Uniform Resource Locators (URLs).
RFC 2368 - The mailto URL scheme
Parsing requirements for mailto URL schemes.
RFC 1808 - Relative Uniform Resource Locators
This Request For Comments includes the rules for joining an absolute and a relative URL, including a fair number of "Abnormal Examples" which govern the treatment of border cases.
RFC 1738 - Uniform Resource Locators (URL)
This specifies the formal syntax and semantics of absolute URLs.
Results of urlparse() and urlsplit() ¶
The result objects from the urlparse() and urlsplit() functions are subclasses of the tuple type. These subclasses add the attributes described in those functions, as well as provide an additional method:
ParseResult.geturl() ¶
Return the re-combined version of the original URL as a string. This may differ from the original URL in that the scheme will always be normalized to lower case and empty components may be dropped. Specifically, empty parameters, queries, and fragment identifiers will be removed.
The result of this method is a fixpoint if passed back through the original parsing function:
>>> import urlparse
>>> url = 'HTTP://www.Python.org/doc/#'
>>> r1 = urlparse.urlsplit(url)
>>> r1.geturl()
'http://www.Python.org/doc/'
>>> r2 = urlparse.urlsplit(r1.geturl())
>>> r2.geturl()
'http://www.Python.org/doc/'
New in version 2.5.
The following classes provide the implementations of the parse results:
urlparse.ParseResult(scheme, netloc, path, params, query, fragment) ¶
Concrete class for urlparse() results.
urlparse.SplitResult(scheme, netloc, path, query, fragment) ¶
Concrete class for urlsplit() results.
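Because the result classes are tuple subclasses, they can also be constructed directly and reassembled with geturl(); a minimal sketch (illustrative values):

```python
try:
    from urlparse import SplitResult   # Python 2
except ImportError:
    from urllib.parse import SplitResult  # Python 3 (module renamed)

# Build a SplitResult by hand; field names mirror the constructor signature.
r = SplitResult(scheme='http', netloc='example.com', path='/idx',
                query='a=1', fragment='')
print(r.geturl())  # 'http://example.com/idx?a=1'
print(r[1])        # tuple indexing still works: 'example.com'
```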