(Crawler Warm-up) 01: The Requests Module
1. What the Requests module does.
Requests acts as an HTTP client: it sends HTTP requests to a server and receives the corresponding response headers and body.
2. Supported request methods.
# Request methods:
#requests.post()
#requests.get()
#requests.delete()
#requests.head()
#requests.options()
3. Basic usage.
response = requests.get("https://www.baidu.com")  # send a GET request to the given URL
response.text         # the HTML document returned in the server's response
response.status_code  # the status code of this response
response.cookies      # the cookies returned by the server
(1) Basic GET requests.
# GET requests with parameters; there are two ways to pass them.
# First method: put the parameters directly into the URL's query string.
import requests
response = requests.get("http://httpbin.org/get?name=hamasaki&age=40")
# This GET request passes two parameters: name=hamasaki and age=40.
print(response.text)
# Second method (the more common one): build a dict and pass it as params.
import requests
data = {"name": "hamasaki", "age": 40}
response = requests.get("http://httpbin.org/get", params=data)
print(response.text)
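Both forms produce the same request. How the params dict is encoded into the query string can be checked without sending anything, by preparing the request locally (a minimal sketch; httpbin.org is just the placeholder URL from the example above):

```python
import requests

# Build (but do not send) the request to inspect how params are URL-encoded
data = {"name": "hamasaki", "age": 40}
req = requests.Request("GET", "http://httpbin.org/get", params=data).prepare()
print(req.url)  # http://httpbin.org/get?name=hamasaki&age=40
```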
(2) Fetching binary data with Requests.
import requests
response = requests.get("https://github.com/favicon.ico")
with open("favicon.ico", "wb") as f:
    f.write(response.content)  # response.content holds the raw bytes
(3) Adding headers.
import requests
headers = {
    'Content-Type': 'application/json;charset=utf-8',
    'Host': 'www.baidu.com'
}
response = requests.get(url="https://www.baidu.com", headers=headers)
print(response.text)
(4) Basic POST requests.
import requests
headers = {
    'Content-Type': 'application/json;charset=utf-8',
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'zh-CN,zh;q=0.9',
    'access-control-request-headers': 'content-type',
    #'access-control-request-method': 'POST',
    'origin': 'https://www.nike.com',
    'referer': 'https://www.nike.com/cn/zh_cn/e/nike-plus-membership',
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.96 Safari/537.36',
}
post_data = {
    "client_id": "HlHa2Cje3ctlaOqnxvgZXNaAs7T9nAuH",
    "grant_type": "password",
    "password": "Suhaozhi123",
    "username": "+861394227097",
    "ux_id": "com.nike.commerce.nikedotcom.web"
}
response = requests.post(url="https://unite.nike.com/login", headers=headers, json=post_data)
print(response.status_code)
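Since the headers above declare application/json, the body should be sent as JSON, which is what the json= argument does. The difference between json= and data= can be seen offline by preparing requests locally (a sketch with a made-up payload, not part of the original example):

```python
import requests

# json= serializes the dict to a JSON body and sets the Content-Type header
json_req = requests.Request("POST", "http://httpbin.org/post", json={"a": 1}).prepare()
print(json_req.headers["Content-Type"])  # application/json

# data= form-encodes the dict into a key=value body instead
form_req = requests.Request("POST", "http://httpbin.org/post", data={"a": 1}).prepare()
print(form_req.body)  # a=1
```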
(5) Uploading files with POST.
import requests
with open('ayumi.jpg', 'rb') as f:  # open in binary mode; the with-block closes the file
    files = {'file': f}
    response = requests.post("http://httpbin.org/post", files=files)
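Under the hood, files= builds a multipart/form-data body. This can be checked without a network call by preparing the request with an in-memory file object (a sketch; the byte content is made up to stand in for ayumi.jpg):

```python
import io
import requests

# An in-memory file object stands in for a real image file
files = {"file": ("ayumi.jpg", io.BytesIO(b"fake image bytes"))}
req = requests.Request("POST", "http://httpbin.org/post", files=files).prepare()
print(req.headers["Content-Type"])  # multipart/form-data; boundary=...
```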
(6) Using proxies.
import requests
proxy_dict = {  # plain HTTP/HTTPS proxies
    "http": "http://127.0.0.1:9743",
    "https": "https://127.0.0.1:9743"
}
response = requests.get("https://www.baidu.com",proxies=proxy_dict)
print(response.status_code)
# a proxy that requires a username and password
import requests
proxy_dict = {
    "http": "http://user:password@127.0.0.1:9743",
    "https": "https://user:password@127.0.0.1:9743"
}
response = requests.get("https://www.baidu.com",proxies=proxy_dict)
print(response.status_code)
# SOCKS proxy (first install the extra: pip install 'requests[socks]')
import requests
proxy_dict = {
    "http": "socks5://127.0.0.1:9743",
    "https": "socks5://127.0.0.1:9743"
}
response = requests.get("https://www.baidu.com",proxies=proxy_dict)
print(response.status_code)
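When many requests go through the same proxy, the dict can also be set once on a Session instead of being passed to every call (an alternative usage sketch, reusing the placeholder proxy address from above):

```python
import requests

session = requests.Session()
# every request made through this session now uses the proxies by default
session.proxies.update({
    "http": "http://127.0.0.1:9743",
    "https": "https://127.0.0.1:9743",
})
print(session.proxies["https"])  # https://127.0.0.1:9743
```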
4. Response usage.
response.status_code  # the status code
response.headers      # the response headers
response.cookies      # the cookies object
response.url          # the URL of the request
response.history      # the redirect history
(1) Getting cookies
import requests
response = requests.get("https://www.baidu.com")
for k, v in response.cookies.items():
    print(k + "=" + v)
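response.cookies is a RequestsCookieJar, which behaves like a dict. The same loop can be demonstrated offline by filling a jar by hand (the cookie names here are hypothetical, since the cookies baidu actually sets vary):

```python
from requests.cookies import RequestsCookieJar

# Build a jar manually instead of getting one from a response
jar = RequestsCookieJar()
jar.set("sessionid", "abc123", domain="example.com")
jar.set("token", "xyz", domain="example.com")
for k, v in jar.items():
    print(k + "=" + v)
```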
5. Exception handling:
import requests
from requests.exceptions import ReadTimeout,ConnectionError,RequestException
try:
    response = requests.get("http://httpbin.org/get", timeout=0.5)
    print(response.status_code)
except ReadTimeout:
    print("Timeout")
except ConnectionError:
    print("connection error")
except RequestException:
    print("error")
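The order of the except clauses matters here: ReadTimeout and ConnectionError are both subclasses of RequestException, so the catch-all must come last or it would swallow the specific cases. This hierarchy can be verified directly:

```python
from requests.exceptions import ReadTimeout, ConnectionError, RequestException

# Both specific exceptions derive from the catch-all RequestException,
# which is why RequestException appears as the final except clause
print(issubclass(ReadTimeout, RequestException))      # True
print(issubclass(ConnectionError, RequestException))  # True
```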