
proxyofxun's Introduction

讯代理 (Xun Proxy) Documentation

  • Sample code for bulk integration of dedicated proxies (in the DDproxy folder)

  • Sample code for integrating the dynamic forwarding proxy (in the ZFproxy folder)

  • Community members are welcome to contribute sample code; contributions will be rewarded

proxyofxun's People

Contributors

zhixunkeji2017

proxyofxun's Issues

Python sample code fails proxy authentication under requests 2.26

The sample works with requests releases before 2.26, but fails under 2.26, returning
{"code":200,"msg":"auth fail, no auth header"}

How can proxy authentication be completed under requests 2.26?

Checking the changelog, the regression likely stems from this improvement:
Session.send now correctly resolves proxy configurations from both the Session and Request. Behavior now matches Session.request. (#5681)

Custom Headers
If you’d like to add HTTP headers to a request, simply pass in a dict to the headers parameter.
For example, we didn’t specify our user-agent in the previous example:

import requests

url = 'https://api.github.com/some/endpoint'
headers = {'user-agent': 'my-app/0.0.1'}
r = requests.get(url, headers=headers)
Note: Custom headers are given less precedence than more specific sources of information. For instance:

  • Authorization headers set with headers= will be overridden if credentials are specified in .netrc, which in turn will be overridden by the auth= parameter. Requests will search for the netrc file at ~/.netrc, ~/_netrc, or at the path specified by the NETRC environment variable.

  • Authorization headers will be removed if you get redirected off-host.

  • Proxy-Authorization headers will be overridden by proxy credentials provided in the URL. (This is the item relevant to this issue.)

  • Content-Length headers will be overridden when we can determine the length of the content.
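Since requests 2.26 resolves proxy configuration differently, one possible workaround (a hedged sketch, not an official fix from the vendor or the requests project) is to inject the signed Proxy-Authorization at the HTTPAdapter level via its `proxy_headers` hook, so the header no longer depends on `headers=` being merged into the proxied request. The order number and secret below are placeholders:

```python
# Hedged sketch: inject the signed Proxy-Authorization at the adapter level,
# so it no longer relies on headers= being merged into the proxied request.
# order_no / secret values are placeholders, not real credentials.
import hashlib
import time

import requests
from requests.adapters import HTTPAdapter


def build_proxy_auth(order_no, secret):
    """Build the signed auth string in the format the service expects."""
    timestamp = str(int(time.time()))
    raw = "orderno=" + order_no + ",secret=" + secret + ",timestamp=" + timestamp
    sign = hashlib.md5(raw.encode()).hexdigest().upper()
    return "sign=" + sign + "&orderno=" + order_no + "&timestamp=" + timestamp


class ProxyAuthAdapter(HTTPAdapter):
    """HTTPAdapter that attaches Proxy-Authorization to proxied connections."""

    def __init__(self, auth, **kwargs):
        self._auth = auth
        super().__init__(**kwargs)

    def proxy_headers(self, proxy):
        # Start from the default headers (e.g. URL-embedded credentials),
        # then force our signed header in.
        headers = super().proxy_headers(proxy)
        headers["Proxy-Authorization"] = self._auth
        return headers


session = requests.Session()
adapter = ProxyAuthAdapter(build_proxy_auth("ZF20200000000000", "0123456789abcdef"))
session.mount("http://", adapter)
session.mount("https://", adapter)
session.proxies = {"http": "http://forward.xdaili.cn:80",
                   "https": "http://forward.xdaili.cn:80"}
```

Mounting the adapter on both schemes means the header is supplied wherever requests builds the proxy connection, including CONNECT tunnels for https targets.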

Mistake in /example/ZFProxy/Python/main.py

There is a mistake in /example/ZFProxy/Python/main.py:
line 33: proxy = {"http": "http://" + ip_port, "https": "https://" + ip_port}
An https proxy URL cannot use port 80.

Suggestion: proxy = {"http": "http://" + ip_port, "https": "https://" + ip}
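For reference, the usual convention in requests (a general sketch, not the vendor's official fix) is that the dict key names the scheme of the target URL, while the value describes how to reach the proxy itself, so both keys can point at the same plain-http proxy endpoint:

```python
# Placeholder proxy address; https traffic is tunneled through the same
# http endpoint via CONNECT, so no https:// proxy URL (or port 443) is needed.
ip_port = "10.0.0.1:8888"

proxy = {
    "http": "http://" + ip_port,
    "https": "http://" + ip_port,  # still http://, port preserved
}
```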

Contributed Python 3 code for the dynamic forwarding proxy

# _*_coding:utf8_*_
# Project: lin_mass_tools
# File: test.py
# Author: ClassmateLin
# Email: [email protected]
# Time: 2020/2/24 12:34 PM
# DESC:
import urllib3
import time
import random
import requests
import hashlib

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


class XunProxy:
    """
    Xun Proxy (讯代理) dynamic forwarding proxy client
    """
    user_agent_list = [
        'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1464.0 Safari/537.36',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.16 Safari/537.36',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.3319.102 Safari/537.36',
        'Mozilla/5.0 (X11; CrOS i686 3912.101.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116'
        ' Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 '
        'Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
        'Chrome/32.0.1667.0 Safari/537.36',
        'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0.6',
        'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36',
        'Mozilla/5.0 (X11; CrOS i686 3912.101.0) AppleWebKit/537.36 (KHTML, like Gecko)'
        ' Chrome/27.0.1453.116 Safari/537.36']

    def __init__(self, order_no='', secret=''):
        """
        :param order_no: order number issued with the proxy package
        :param secret: secret key used to sign requests
        """
        self._ip = "forward.xdaili.cn"
        self._port = "80"
        self._ip_port = self._ip + ":" + self._port
        self._order_no = order_no
        self._secret = secret
        self._proxy = {"http": "http://" + self._ip_port, "https": "https://" + self._ip_port}
        self._headers = self._get_headers()

    @property
    def proxy(self):
        """
        Return the proxies mapping.
        :return: proxies dict
        """
        return self._proxy

    @property
    def headers(self):
        """
        Return request headers with a freshly randomized user-agent.
        :return: headers dict
        """
        self._headers['user-agent'] = random.choice(type(self).user_agent_list)
        return self._headers

    def _get_headers(self):
        """
        Build request headers carrying the signed Proxy-Authorization.
        :return: headers dict
        """
        timestamp = str(int(time.time()))
        string = "orderno=" + self._order_no + "," + "secret=" + self._secret + "," + "timestamp=" + timestamp
        string = string.encode()
        md5_string = hashlib.md5(string).hexdigest()
        sign = md5_string.upper()
        auth = "sign=" + sign + "&" + "orderno=" + self._order_no + "&" + "timestamp=" + timestamp
        headers = {
            "Proxy-Authorization": auth,
            "user-agent": random.choice(type(self).user_agent_list)
        }
        return headers

    def test(self, url):
        """
        Test that url is reachable through the proxy.
        :param url: URL to fetch
        :return: True on HTTP 200, False otherwise
        """
        try:
            res = requests.get(url, headers=self._headers,
                               proxies=self._proxy, verify=False, allow_redirects=False, timeout=5)
            return res.status_code == 200
        except Exception as e:
            print(e.args)
            return False

    def get_cur_ip(self):
        """
        Fetch the current external IP to check that the proxy works.
        :return: True if an IP was returned, False otherwise
        """
        url = 'https://api.ipify.org/?format=json'
        try:
            res = requests.get(url, headers=self._headers,
                               proxies=self._proxy, verify=False, allow_redirects=False, timeout=5).json()
            if 'ip' in res:
                print(res['ip'])
                return True
            return False
        except Exception:
            return False


def get(url, headers, proxy):
    """
    :param url: URL to fetch
    :param headers: request headers
    :param proxy: proxies mapping
    :return:
    """
    res = requests.get(url, headers=headers, proxies=proxy, verify=False, allow_redirects=False, timeout=5)
    print(res.status_code)


if __name__ == '__main__':
    pro = XunProxy(order_no='ZF202022450xxxxx', secret='abb2878586f049b6xxxxx')
    print(pro.get_cur_ip())
    print(pro.test('https://www.baidu.com'))
    get('https://www.baidu.com', pro.headers, pro.proxy)
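The signing recipe in `_get_headers` can be checked deterministically by fixing the timestamp; the helper and all credential values below are illustrative placeholders:

```python
import hashlib


def make_auth(order_no, secret, timestamp):
    # Same recipe as _get_headers above, with the timestamp passed in so the
    # result is reproducible.
    raw = "orderno=" + order_no + ",secret=" + secret + ",timestamp=" + timestamp
    sign = hashlib.md5(raw.encode()).hexdigest().upper()
    return "sign=" + sign + "&orderno=" + order_no + "&timestamp=" + timestamp


auth = make_auth("ZF0000000000000", "0123456789abcdef", "1600000000")
```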

scrapy does not support disabling SSL certificate verification

Scrapy itself does not support ignoring HTTPS certificate verification, so I cannot reach https sites through the dynamic forwarding proxy. Please provide a solution that overrides HttpDownloadHandler.
