python_api's Issues

[IRIS INVESTIGATE] Limit the number of results

When I search for IP information, very generic domains like microsoft or google come back with an exception that there are too many results.
I don't want to specify other search parameters, but in that case, with Iris Investigate, is it possible to limit the number of results, e.g. return only 10?
Thanks

exceptions.ServiceException: Unknown Exception

While sending bulk lookups through the Iris Investigate API, I hit an exception in the Unknown category (exceptions.ServiceException: Unknown Exception).

I did some debugging, and it looks like the URI is too large: I am getting an HTTP Status 413: Payload Too Large response (https://tools.ietf.org/html/rfc7231#section-6.5.11). I have also gotten 414 URI Too Long.

The workaround is to reduce the number of domains sent to stay under the limit, but ideally I would love to be able to send 100 at a time to keep it nice and quick.

Thanks for taking a look.
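A workaround sketch until the library handles this itself: split the domain list into batches whose combined query length stays under a budget, and submit each batch separately. The helper below is hypothetical (not part of the library), and the 1500-character budget is a guess at a value safely below the server's 413/414 threshold.

```python
def batch_domains(domains, max_chars=1500):
    """Greedily group domains so the joined query string of each batch
    stays under max_chars (a guessed budget below the 413/414 limit)."""
    batches, current, current_len = [], [], 0
    for domain in domains:
        # +1 accounts for the separator between domains in the URI
        if current and current_len + len(domain) + 1 > max_chars:
            batches.append(current)
            current, current_len = [], 0
        current.append(domain)
        current_len += len(domain) + 1
    if current:
        batches.append(current)
    return batches

# Each batch would then be sent in its own request, e.g. (call shape assumed):
# for batch in batch_domains(all_domains):
#     results = api.iris_enrich(*batch).response()
```

This trades one large request for several small ones, so per-minute rate limits still apply to each batch.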

reverse_whois limit on results?

Hi devs,

While utilizing the reverse_whois endpoint it looks like I can only get the first 5,000 results when doing a call like so in a basic async function:

call = await self.api.reverse_whois(query=[email], mode='purchase')
results_obj = call.response() 
domain_list = results_obj.get('domains')

Is there documentation I have missed or a better way to implement it so that it will paginate or scan through the AsyncResults object? (Python 3.6 and domaintools-api 0.2.4)

parsed WHOIS flattened() exception when flattening IP address record

The ParsedWhois.flattened() method raises a KeyError exception when used to flatten the WHOIS record for an IP address, but it works fine when flattening domain info.

from domaintools.api import API

api = API("myuser", "mykey")
bad = api.parsed_whois("172.217.164.164").flattened()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/username/.pyenv/versions/huntlib/lib/python3.8/site-packages/domaintools/results.py", line 55, in flattened
    value = parsed[key]
KeyError: 'domain'

Since we're not looking up a domain, the returned value doesn't have a domain key.
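Until the library handles records without a domain key, a defensive wrapper (a hypothetical helper, not part of the API) can fall back to the raw response when flattened() raises:

```python
def safe_flattened(result):
    """Try the library's flattened() view; fall back to the raw
    response for records (e.g. IP addresses) that lack a 'domain' key."""
    try:
        return result.flattened()
    except KeyError:
        return result.response()

# Usage sketch: safe_flattened(api.parsed_whois("172.217.164.164"))
```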

My dev system is OS X 10.14.6 running Python 3.8.3 and domaintools-api version 0.5.2.

Allow for complex queries in API

As of my latest testing, I have found no way to perform complex queries within the Python API. For example, in the UI I can do a query where the registrant name is bob AND the mx ip is 1.2.3.4.

Or more specifically, I can do ANDs or ORs of the same field, for example mx_host = mx1.google.com AND mx_host = mx2.google.com.

While this is a specific example that I am struggling with now, at a higher level the API functionality should be at parity with the UI functionality, so that folks can interact with the product at both levels and not run into roadblocks.

aiohttp==3.4.4 dependency issue on Ubuntu 16.04.5

The issue is Python 3.5.2, which is the highest version that can be installed easily on Ubuntu 16.04 LTS. But:

    RuntimeError: aiohttp 3.x requires Python 3.5.3+

In a virtualenv, pip install fails with the following message:

Collecting aiohttp==3.4.4 (from domaintools_api->-r REQUIREMENTS (line 20))
  Could not find a version that satisfies the requirement aiohttp==3.4.4 (from domaintools_api->-r REQUIREMENTS (line 20)) (from versions: 0.1, 0.2, 0.3, 0.4, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.5.0, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.14.1, 0.14.2, 0.14.3, 0.14.4, 0.15.0, 0.15.1, 0.15.2, 0.15.3, 0.16.0, 0.16.1, 0.16.2, 0.16.3, 0.16.4, 0.16.5, 0.16.6, 0.17.0, 0.17.1, 0.17.2, 0.17.3, 0.17.4, 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.18.4, 0.19.0, 0.20.0, 0.20.1, 0.20.2, 0.21.0, 0.21.1, 0.21.2, 0.21.4, 0.21.5, 0.21.6, 0.22.0a0, 0.22.0b0, 0.22.0b1, 0.22.0b2, 0.22.0b3, 0.22.0b4, 0.22.0b5, 0.22.0b6, 0.22.0, 0.22.1, 0.22.2, 0.22.3, 0.22.4, 0.22.5, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.2.0, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 2.0.0rc1, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.6.post1, 2.0.7, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.3.0a1, 2.3.0a2, 2.3.0a4, 2.3.0, 2.3.1a1, 2.3.1, 2.3.2b2, 2.3.2b3, 2.3.2, 2.3.3, 2.3.4, 2.3.5, 2.3.6, 2.3.7, 2.3.8, 2.3.9, 2.3.10, 3.0.0b0)
No matching distribution found for aiohttp==3.4.4 (from domaintools_api->-r REQUIREMENTS (line 20))

403 NotAuthorizedException in code when Account Information endpoint disabled

I'm getting an unexpected 403 NotAuthorizedException on .whois() running in Python 3.
Installed with pip3 install --user domaintools_api

Seen on both macOS and Ubuntu.

from domaintools import API
import dtconfig

api = API(dtconfig.user, dtconfig.apiKey)
print(api.username, api.key)

#whois = api.whois('google.com')
#print(whois)

parsed_whois = api.parsed_whois("google.com")
print(parsed_whois)

output:

NotAuthorizedException                    Traceback (most recent call last)
<ipython-input-33-2391f6a41ef5> in <module>
      9 #print(whois)
     10 
---> 11 parsed_whois = api.parsed_whois("google.com")
     12 print(parsed_whois)

/usr/local/lib/python3.7/site-packages/domaintools/api.py in parsed_whois(self, query, **kwargs)
    125     def parsed_whois(self, query, **kwargs):
    126         """Pass in a domain name"""
--> 127         return self._results('parsed-whois', '/v1/{0}/whois/parsed'.format(query), cls=ParsedWhois, **kwargs)
    128 
    129     def registrant_monitor(self, query, exclude=[], days_back=0, limit=None, **kwargs):

/usr/local/lib/python3.7/site-packages/domaintools/api.py in _results(self, product, path, cls, **kwargs)
     57         """Returns _results for the specified API path with the specified **kwargs parameters"""
     58         if product != 'account-information' and self.rate_limit and not self.limits_set and not self.limits:
---> 59             self._rate_limit()
     60 
     61         uri = '/'.join(('{0}://api.domaintools.com'.format('https' if self.https else 'http'), path.lstrip('/')))

/usr/local/lib/python3.7/site-packages/domaintools/api.py in _rate_limit(self)
     51         """Pulls in and enforces the latest rate limits for the specified user"""
     52         self.limits_set = True
---> 53         for product in self.account_information():
     54             self.limits[product['id']] = {'interval': timedelta(seconds=60 / float(product['per_minute_limit']))}
     55 

/usr/local/lib/python3.7/site-packages/domaintools/base_results.py in __iter__(self)
    167 
    168     def __iter__(self):
--> 169         return self._items().__iter__()
    170 
    171     def has_key(self, key):

/usr/local/lib/python3.7/site-packages/domaintools/base_results.py in _items(self)
    150                 return self.items()
    151 
--> 152             response = self.response()
    153             for step in self.items_path:
    154                 response = response[step]

/usr/local/lib/python3.7/site-packages/domaintools/base_results.py in response(self)
    131     def response(self):
    132         if self._response is None:
--> 133             response = self.data()
    134             for step in self.response_path:
    135                 response = response[step]

/usr/local/lib/python3.7/site-packages/domaintools/base_results.py in data(self)
     82         if self._data is None:
     83             results = self._get_results()
---> 84             raise_after = self.setStatus(results.status_code, results)
     85             if self.kwargs.get('format', 'json') == 'json':
     86                 self._data = results.json()

/usr/local/lib/python3.7/site-packages/domaintools/base_results.py in setStatus(self, code, response)
    117             raise BadRequestException(code, reason)
    118         elif code == 403:
--> 119             raise NotAuthorizedException(code, reason)
    120         elif code == 404:
    121             raise NotFoundException(code, reason)

NotAuthorizedException: {'error': {'code': 403, 'message': 'Your API user account is not authorized to access this product.'}, 'resources': {'support': 'http://www.domaintools.com/support/'}}

dtconfig.py contains the following:

user = 'user'
apiKey = '[UUID lowercase]'
host = 'https://api.domaintools.com'

The same dtconfig user and key work correctly with manual api calls in the following code snippet:

uriFirst = '/v1/' + domain + '/whois'
signature = signer.sign(timestamp, uriFirst)
uriSecond = '?api_username=' + user + '&signature=' + signature + '&timestamp=' + timestamp
fullUrl = host + uriFirst + uriSecond
# print(fullUrl)
try:
    retrieve = requests.get(fullUrl).json()

Am I missing something?

async calls can block on sync calls made via requests

Issue

When reviewing recorded async API calls with VCR, I realized that a request was being made to the https://api.domaintools.com/v1/account URL before the requested call was performed via aiohttp. This is done for rate limiting. The request is made with a requests.Session.get() call, which blocks the ioloop for the current thread.

A brief call stack is the following:

# domaintools python_api code
whois_history, api.py:176
_results, api.py:59
_rate_limit, api.py:53
__iter__, base_results.py:169
_items, base_results.py:152
response, base_results.py:133
data, base_results.py:83
_get_results, base_results.py:65
_make_request, base_results.py:60
# Down into requests.Session code here
get, sessions.py:536
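One common fix (a sketch, not the library's actual implementation) is to push the blocking requests call onto a thread-pool executor so the event loop stays responsive while the rate-limit lookup runs:

```python
import asyncio
import functools

async def fetch_rate_limits(loop, blocking_get, url):
    """Run a blocking HTTP call (e.g. requests.Session.get) in the
    default executor instead of directly on the event loop."""
    return await loop.run_in_executor(None, functools.partial(blocking_get, url))

# Demonstration with a stand-in for the blocking call; in real use
# blocking_get would be a requests.Session().get wrapper.
def _fake_blocking_get(url):
    return {'url': url, 'status': 200}

async def _demo():
    loop = asyncio.get_running_loop()
    return await fetch_rate_limits(
        loop, _fake_blocking_get, 'https://api.domaintools.com/v1/account')

result = asyncio.run(_demo())
```

The alternative, of course, is to make the rate-limit request itself with aiohttp so everything stays on the loop.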

Environment

Python & os:

  • Python 3.7.0
  • Ubuntu 16.04

Relevant libraries:

  • aiohttp==3.4.4
  • domaintools-api==0.3.1
  • requests==2.20.0
  • vcrpy==1.13.0

No Error Handling of Limit Exceeded In Iris Investigate Queries

When doing a large Iris Investigate query that exceeds the 5000-domain response limit, you receive:

/usr/local/lib/python3.7/site-packages/domaintools/base_results.py in __iter__(self)
    167
    168     def __iter__(self):
--> 169         return self._items().__iter__()
    170
    171     def has_key(self, key):

/usr/local/lib/python3.7/site-packages/domaintools/base_results.py in _items(self)
    152             response = self.response()
    153             for step in self.items_path:
--> 154                 response = response[step]
    155             self._items_list = response
    156

KeyError: 'results'

Instead of what you would get from the base API call:

{"response":{"limit_exceeded":true,"has_more_results":true,"message":"Maximum 5000 returned - you may need to refine your query.","missing_domains":[]}}
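Until the library surfaces this condition, callers can inspect the raw response before iterating. A sketch (the helper is hypothetical; the key names come from the raw API response quoted in this issue):

```python
def check_limit(response):
    """Raise a descriptive error when an Iris Investigate response
    signals that the result cap was hit, instead of a bare KeyError."""
    body = response.get('response', response)
    if body.get('limit_exceeded'):
        raise RuntimeError(body.get('message', 'Result limit exceeded'))
    return body

# The raw payload from the issue above:
raw = {"response": {"limit_exceeded": True,
                    "has_more_results": True,
                    "message": "Maximum 5000 returned - you may need to refine your query.",
                    "missing_domains": []}}
```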

Async not Python 3.7 compatible

E       TypeError: 'async for' received an object from __aiter__ that does not implement __anext__: coroutine

TLDR: Python 3.7 does not allow `async __aiter__` implementations.

I'll have a PR up shortly.

Error message

I keep getting this: "from: can't read /var/mail/domaintools"

_rate_limit breaks when api returns None as the hour/minute rate limit.

Describe the bug
This package doesn't expect to receive None as a possible value for a product's rate limit, and sometimes that is what the API returns.
The error happens in domaintools/api.py, line 68, in _rate_limit:
self.limits[product['id']] = {'interval': timedelta(seconds=60 / float(product['per_minute_limit']))}
TypeError: float() argument must be a string or a number, not 'NoneType'

To Reproduce
Run any task like "iris enrich" with creds from an account that has None as the rate limit for a product.

Expected behavior
This value should be validated before being passed to float(), so the script can continue.
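A defensive version of the offending loop might look like this (a sketch only; it skips products whose per_minute_limit is None rather than crashing, which assumes such products need no client-side throttle):

```python
from datetime import timedelta

def build_limits(products):
    """Mimic _rate_limit's loop but tolerate a None per_minute_limit,
    which the API apparently started returning for some products."""
    limits = {}
    for product in products:
        per_minute = product.get('per_minute_limit')
        if per_minute is None:
            continue  # no client-side throttle for this product
        limits[product['id']] = {
            'interval': timedelta(seconds=60 / float(per_minute))}
    return limits
```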

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version: 22.04

Additional context
This started happening only recently; it seems something changed in the DomainTools API, which began returning these None values.

414 Request-URI Too Large exception when requesting a list of domains with long names

Description:
When using the iris_enrich function with a batch of domains smaller than the limit of 100, the package raises a 414 Request-URI Too Large exception. I suspect it is related to the total string length of the requested domains.

Can the API handle this situation? Or is it supposed to be managed by the end-user? If so, what is the limit?
The example below estimates a maximum total length of 1728 characters.

Tested on:

  • Python 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
  • DomainTools API 0.5.2

Reproducible example:

import os
from time import sleep
from domaintools import API, __version__

print(f"DomainTools API version: {__version__}")

API_USERNAME = os.getenv("API_USERNAME", "your_hardcoded_username_here")
API_KEY = os.getenv("API_KEY", "your_hardcoded_password_here")

dtools_api = API(API_USERNAME, API_KEY)

domains = [
    'ics-informationsystems.com', 'githubusercontent.com', 'familiaganadora.com.ar',
    'infrastructuremalta.com', 'instantlocaldates.com', 'saglikliadimlarprojesi.org',
    'rollersadnessstranded.com', 'finedininglovers.co.uk', 'sempliceassicurazione.it',
    'magazintraditional.ro', 'freeiworktemplates.com', 'starthealthystayhealthy.lk',
    'costcowaterdelivery.com', 'flowergardengirl.co.uk', 'theparentingindex.com',
    'zyjsmacznieizdrowo.pl', 'polandspringbornbetter.com', 'formation-et-expertise.fr',
    'freshlymadesimplyfrozen.com', 'jeanmarcmorandini.com', 'ws-cookie-manager.com',
    'universidadeeuropeia.pt', 'runative-syndicate.com', 'galderma-kundenservice.de',
    'entremamasnido.com.mx', 'sanpellegrinofestivals.com', 'atrium-innovations.com',
    'landlordmanoeuvre.com', 'animeunityserver30.cloud', 'microsoftonline-p.com',
    'nurturewellnessvillage.com', 'naturesheartsuperfoods.co.uk', 'cuidarseesdisfrutar.com.mx',
    'visualwebsiteoptimizer.com', 'novocraquedacozinha.com.br', 'restaurantbenjamins.ro',
    'anticagelateriadelcorso.com', 'beveragelcafootprint.com', 'saglikliadimlarprojesi.com',
    'zyjzdrowoisportowo.pl', 'birliktekoruyalim.com', 'microsoftstoreemail.com',
    'hotelmarshalgarden.ro', 'zdrowystartwprzyszlosc.pl', 'microsofttranslator.com',
    'pureliferippleeffect.com', 'pickyeatersarabia.com', 'globalallianceforyouth.org',
    'hidratatemejor.com.mx', 'promilnurturethegift.com.ph', 'youthparliament.com.pk',
    'starthealthystayhealthy.com.bd', 'wholeearthfarms.com.ar', 'secretsdegourmets.com',
    'microsoftonline-p.net', 'greengarden-events.ro', 'prosecutorcessationdial.com',
    'action-gegen-hellen-hautkrebs.de', 'nestealovethebeach.com.ph', 'my-acticol-nutritionist.com',
    'cloudflareinsights.com', 'rxdirectplussavings.com', 'reducecatallergens.com',
    'alpotellitlikeitis.com', 'galdermaorderform.com', 'assicurazionircaonline.com',
    '20questionsaboutwater.com', 'mowembarknegligence.com', 'projektpodklucz.com.pl',
    'datadoghq-browser-agent.com', 'nutricionyejercicio.es', 'mistressavouchdeity.com',
    'multipleintelligence.com.ph', 'electricfoldinggate.com', 'foodandeverythingelse.ng',
    'cetaphilfriends.com.sg', 'sculptraaesthetic.com', 'nqticketdorado.com.mx',
    'iyibuyusuniyiyasasin.com', 'lawiswiskawayanresort.com', 'cerealpartnersfoodservice.co.uk',
    'goldendrop-baby.co.kr', 'experienciadolce.com.uy', 'proplanveterinarydiets.ca',
    'llenalacalledevida.es', 'warsztaty-vitaflo-mpku-katowice.pl', 'astonishinglysimplecoffee.com',
    'healthybreakfast.com.cn', 'petsatworkalliance.com', 'smaspecialfeeds.co.uk',
    'chameleoncoldbrew.com', 'ristorantepizzeriamaghera.it', 'weekenddiscoveries.com.ph',
    'nurturenetwork.com.ph', 'wykanczaniewnetrz.com', 'agircontrelesrougeurs.lu',
    'impactinformation.com', 'centre-equestre-divonne.com', 'rcbconlinebanking.com',
    'specialfeedsforspecialneeds.co.uk'
]

# Example 1: 414 Request-URI Too Large Error with batch of 100 domains
# domains_response = dtools_api.iris_enrich(*domains).response()

# Example 2: Estimate the maximum number of characters allowed
for size in range(70, 75):
    print(f"Num domains: {size}, Concat size: {sum([len(d) for d in domains[:size]])}")
    dtools_api.iris_enrich(*domains[:size]).response()
    print(f"Response OK")
    sleep(1.5)

Result:

DomainTools API version: 0.5.2
Num domains: 70, Concat size: 1656
Response OK
Num domains: 71, Concat size: 1678
Response OK
Num domains: 72, Concat size: 1701
Response OK
Num domains: 73, Concat size: 1728
Response OK
Num domains: 74, Concat size: 1751
Traceback (most recent call last):
  File "C:/Users/erodriguez/PycharmProjects/gsoc-ds-ml-anomaly-detection/domaintools_414issue_example.py", line 55, in <module>
    dtools_api.iris_enrich(*domains[:size]).response()
  File "C:\Users\erodriguez\AppData\Local\Continuum\anaconda3\envs\gsoc-ds-ml-anomaly-detection-tests\lib\site-packages\domaintools\base_results.py", line 161, in response
    response = self.data()
  File "C:\Users\erodriguez\AppData\Local\Continuum\anaconda3\envs\gsoc-ds-ml-anomaly-detection-tests\lib\site-packages\domaintools\base_results.py", line 93, in data
    self.setStatus(results.status_code, results)
  File "C:\Users\erodriguez\AppData\Local\Continuum\anaconda3\envs\gsoc-ds-ml-anomaly-detection-tests\lib\site-packages\domaintools\base_results.py", line 155, in setStatus
    raise RequestUriTooLongException(code, reason)
domaintools.exceptions.RequestUriTooLongException: <html>
<head><title>414 Request-URI Too Large</title></head>
<body bgcolor="white">
<center><h1>414 Request-URI Too Large</h1></center>
<hr><center>nginx</center>
</body>
</html>

Error while accessing from Linux env

Issue:

I have a Python application which uses the domaintools API. My dev environment is my Windows PC, where it works fine, but when I deploy it to a Linux server it gives the following error:

Traceback (most recent call last):
  File "/home/myuser/myapplication/lib/work/work.py", line 98, in run
    execute_func(msg)
  File "/home/myuser/myapplication/lib/work/fraud/mywork.py", line 49, in execute
    riskscore = json.loads(str(self.domaintools_client.risk(url)))
  File "/home/myuser/myapplication/mylibrary/myclient/domaintools_client.py", line 24, in risk
    return self.api.risk(domain)
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/api.py", line 254, in risk
    return self._results('risk', '/v1/risk', items_path=('components',), domain=domain, cls=Reputation,
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/api.py", line 66, in _results
    self._rate_limit()
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/api.py", line 60, in _rate_limit
    for product in self.account_information():
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/base_results.py", line 197, in __iter__
    return self._items().__iter__()
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/base_results.py", line 180, in _items
    response = self.response()
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/base_results.py", line 161, in response
    response = self.data()
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/base_results.py", line 92, in data
    results = self._get_results()
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/base_results.py", line 74, in _get_results
    data = self._make_request()
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/domaintools/base_results.py", line 68, in _make_request
    return session.get(url=self.url, params=self.kwargs, verify=self.api.verify_ssl,
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/requests/sessions.py", line 555, in get
    return self.request('GET', url, **kwargs)
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/requests/sessions.py", line 636, in send
    kwargs.setdefault('proxies', self.rebuild_proxies(request, self.proxies))
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/requests/sessions.py", line 305, in rebuild_proxies
    username, password = get_auth_from_url(new_proxies[scheme])
  File "/home/myuser/myapplication/venv/lib64/python3.8/site-packages/requests/utils.py", line 948, in get_auth_from_url
    parsed = urlparse(url)
  File "/opt/rh/rh-python38/root/usr/lib64/python3.8/urllib/parse.py", line 372, in urlparse
    url, scheme, _coerce_result = _coerce_args(url, scheme)
  File "/opt/rh/rh-python38/root/usr/lib64/python3.8/urllib/parse.py", line 124, in _coerce_args
    return _decode_args(args) + (_encode_result,)
  File "/opt/rh/rh-python38/root/usr/lib64/python3.8/urllib/parse.py", line 108, in _decode_args
    return tuple(x.decode(encoding, errors) if x else '' for x in args)
  File "/opt/rh/rh-python38/root/usr/lib64/python3.8/urllib/parse.py", line 108, in <genexpr>
    return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'dict' object has no attribute 'decode'

For the same input, Windows gives the correct output while Linux gives the error.
If anyone has any suggestions, it would be a great help.

ModuleNotFoundError: domaintools_async

Using a Python virtualenv with Python 3.6.5. Following initial setup, on import of the API I receive this error:

Python 3.6.5 (default, Apr 25 2018, 14:26:36)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from domaintools import API
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/username/Envs/dt_env/lib/python3.6/site-packages/domaintools/__init__.py", line 3, in <module>
    from domaintools.api import API
  File "/Users/username/Envs/dt_env/lib/python3.6/site-packages/domaintools/api.py", line 7, in <module>
    from domaintools.results import GroupedIterable, ParsedWhois, Reputation, Results
  File "/Users/username/Envs/dt_env/lib/python3.6/site-packages/domaintools/results.py", line 19, in <module>
    from domaintools_async import AsyncResults as Results
ModuleNotFoundError: No module named 'domaintools_async'
>>>

Any additional setup steps needed? Verified that domaintools-api 0.2.2 is in the output of pip list.

Pagination improvements

Hello!

There are two aspects to this ticket.

Document that the API only returns the first 500 results

It seems like a pretty important thing to note, but the API (notably the iris_investigate API, your most used endpoint) only returns 500 results by default. This is not stated in the main places you might expect it to be:

https://github.com/DomainTools/python_api
https://github.com/DomainTools/python_api/blob/master/domaintools/api.py#L277

The documentation string for iris_investigate could even be construed to indicate that all of the results are returned:

 You can loop over results of your investigation as if it was a native Python list:
            for result in api.iris_investigate(ip='199.30.228.112'):  # Enables looping over all related results

Handle pagination natively within the library

Wanting to get all results for a query rather than just the first 500 seems like a common use case for users - I tried the most obvious method of adding limit=5000 as an argument to iris_investigate e.g.:

with domaintools_obj.iris_investigate(search_hash=SEARCH_HASH, limit=5000) as results:
    for result in results:
       ...

However, this appears to have no effect. Inspecting the library code, I think this isn't a valid argument.

If this is the case, it would be nice if pagination were handled within the library.
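If pagination has to live in user code for now, a generic helper can keep fetching pages until the server reports no more results. This is entirely a sketch: the 'results', 'has_more_results', and 'position' keys are an assumed response shape, and fetch_page stands in for whatever call the endpoint actually supports.

```python
def paginate(fetch_page):
    """Repeatedly call fetch_page(position) and yield results until the
    response says there are no more. fetch_page must return a dict with
    'results', 'has_more_results', and 'position' keys (assumed shape)."""
    position = None
    while True:
        page = fetch_page(position)
        yield from page.get('results', [])
        if not page.get('has_more_results'):
            break
        position = page.get('position')

# Demonstration with a fake two-page fetcher:
_pages = {None: {'results': [1, 2], 'has_more_results': True, 'position': 'p2'},
          'p2': {'results': [3], 'has_more_results': False}}
all_results = list(paginate(lambda pos: _pages[pos]))
```

Keeping the paging state in a closure or generator like this means caller code stays a plain for-loop, which is what the library's docstring already promises.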

Thanks,
Tom

Retrieve Shared Iris Queries via API

In Iris we can share our investigations with our organization. We'd like to be able to retrieve the investigation name, description, and latest search hash of an investigation via the API. This would save us from having to export every investigation and update the search_hash in our external application, which uses the DomainTools API on a cronjob; instead we could just edit a shared query, and the cronjob would automatically pull in the query logic from the shared investigation.

Python 3.7 ImportError: cannot import name 'API' from 'domaintools' (unknown location)

macOS 10.14.6
python 3.7.4 (brew 3.7.4_1)
domaintools-api 0.3.3

$ domaintools 
Traceback (most recent call last):
  File "/usr/local/bin/domaintools", line 7, in <module>
    from domaintools.cli import run
  File "/usr/local/lib/python3.7/site-packages/domaintools/cli.py", line 7, in <module>
    from domaintools import API
ImportError: cannot import name 'API' from 'domaintools' (unknown location)

No Documentation for Search Hash in Iris Investigate

The search_hash parameter in Iris Investigate is super useful for monitoring an investigation -- particularly when paired with the results_updated_after parameter. This is a note both for myself -- or whoever gets around to it -- to add that documentation for future reference.
