
thehive-project / hippocampe


Threat Feed Aggregation, Made Easy

Home Page: https://thehive-project.org

License: GNU Affero General Public License v3.0

Python 77.91% Shell 1.29% JavaScript 10.76% HTML 9.57% Dockerfile 0.47%
aggregator feed free free-software intel open-source python thehive threat-score threatintel

hippocampe's People

Contributors

garanews, jeromeleonard, ninsmith, norgalades, phpsystems, saadkadhi, zoomequipd


hippocampe's Issues

Can't seem to get the docker running

I'm not very familiar with Docker, but I wanted to give this a quick spin. I created an Ubuntu VM, allocated a 25 GB disk and 2 GB of memory, and installed Docker. I pulled the repository with git and created the Docker container per the instructions.

When I run this : curl -XGET localhost:5000/hippocampe/api/v1.0/shadowbook

I get a 500, and stderr/stdout shows the following inside the container. I can't find any ES logs, but Elasticsearch appears to be running:
[2017-02-05 19:27:40,138][INFO ][node ] [Adam II] started
[2017-02-05 19:27:40,178][INFO ][gateway ] [Adam II] recovered [0] indices into cluster_state
[2017-02-05 19:28:07,903][INFO ][cluster.metadata ] [Adam II] [hippocampe] creating index, cause [api], templates [], shards [5]/[1], mappings []
[2017-02-05 19:28:08,322][DEBUG][action.admin.indices.mapping.put] [Adam II] failed to put mappings on indices [[hippocampe]], type [jobs]
org.elasticsearch.index.mapper.MapperParsingException: No handler for type [keyword] declared on field [status]
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:288)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:214)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:136)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:211)
at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:192)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:449)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$4.execute(MetaDataMappingService.java:505)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:204)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:167)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
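The "No handler for type [keyword]" error suggests a version mismatch: "keyword" is an Elasticsearch 5.x field type, while the node names in the log ("Adam II") are characteristic of pre-5 releases. A minimal sketch of the two mapping styles, with illustrative field names rather than Hippocampe's actual mapping code:

```python
# Sketch: the "keyword" field type only exists in Elasticsearch >= 5.x.
# On 1.x/2.x the equivalent is a non-analyzed string. The function and
# field below are illustrative, not Hippocampe's actual code.

def status_field_mapping(es_major_version):
    """Return a mapping for a 'status' field compatible with the ES version."""
    if es_major_version >= 5:
        return {"status": {"type": "keyword"}}
    # Pre-5.x: "keyword" raises MapperParsingException ("No handler for
    # type [keyword]"); use a not_analyzed string instead.
    return {"status": {"type": "string", "index": "not_analyzed"}}
```

Either updating the ES container to 5.x or downgrading the mapping along these lines would avoid the MapperParsingException above.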

Can't use hippomore, server error


Work Environment

Question Answer
OS version (server) CentOS 7
Hippocampe version / git hash Current

Problem Description

I did a fresh install of TheHive, Cortex and Hippocampe on my server under CentOS 7.
I'm not able to use the hippomore module, even after updating my modules.
Under Cortex, when I try to analyse an IP address, Hippocampe returns the following:

Hippocampe: HTTP Error 405: METHOD NOT ALLOWED

Steps to Reproduce

  1. Install hippocampe and cortex on centos
  2. enable the hippomore module
  3. try to analyse an IP through cortex with hippomore

Possible Solutions

Possibly a problem with the URL provided in the configuration of the hippomore module. I tried many things; with the current error, I provided:
http://127.0.0.1:5000/hippocampe

Any ideas? Thank you very much in advance!
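An HTTP 405 usually means the client used the wrong verb for the route, not a wrong URL. Judging from the tracebacks elsewhere in this issue list, the hipposcore/more endpoints are Flask POST routes taking a JSON body, so a plain GET from the analyzer would return exactly this error. A sketch with the standard library; the payload shape is an assumption, not the documented Hippocampe API:

```python
import json
import urllib.request

# Sketch: build a POST request against the hipposcore endpoint.
# The observable payload below is illustrative only.
url = "http://127.0.0.1:5000/hippocampe/api/v1.0/hipposcore"
payload = json.dumps({"8.8.8.8": {"type": "ip"}}).encode()

req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",  # a GET against this route yields 405 METHOD NOT ALLOWED
)
```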

Hippocampe behind a proxy

Request Type

Bug

Work Environment

Question Answer
OS version (server) Ubuntu 16.04
OS version (client) Windows 7
Hippocampe version / git hash cf46107
Package Type From source

Problem Description

Shadowbook is failing because of the corporate proxy: it does not use the proxy environment variables that are set. It also states that a job is already running, all the time, even after the data is deleted and the feeds are re-indexed.
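For the proxy part of this report, one generic way a fetcher can honor corporate proxy settings is to read the standard http_proxy/https_proxy environment variables and install a proxy handler explicitly. This is an illustration of the technique, not Hippocampe's code:

```python
import urllib.request

# Sketch: build an opener that respects the http_proxy/https_proxy
# (and no_proxy) environment variables when fetching feed URLs.
def opener_with_env_proxies():
    proxies = urllib.request.getproxies()  # reads the proxy env variables
    return urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
```

The same effect is available in the requests library, which honors these variables by default unless a session overrides them.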

Hipposcore in error

When I select Hipposcore, I get an error:

[2017-01-26 15:39:55,685] ERROR in app: Exception on /hippocampe/api/v1.0/hipposcore [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "Hippocampe/core/app.py", line 121, in hipposcoreService
    result = more.main(request.json)
  File "/home/disit/Hippocampe/core/services/more.py", line 54, in main
    queue.put(pool.map(tellMeMore, chunks))
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 567, in get
    raise self._value
KeyError: 'type'

Problems with ES2.4.5 with Hippocampe

Request Type

Bug

Work Environment

Question Answer
OS version (server) Debian 8
OS version (client) Ubuntu
Hippocampe version / git hash 9888ed1
Package Type Binary
Browser type & version n.a.

Problem Description

Hippocampe seems not to work.

Steps to Reproduce

- Installation was done as described in the install procedure, on the same host TheHive runs on.
- Hippocampe was configured to run as a service with uWSGI.
- Additionally, Hippocampe was started from the console (as port 5000 was not bound by "python app.py").
- The first query was placed:
curl -XGET localhost:5000/hippocampe/api/v1.0/shadowbook
- The result on the console is:

[2017-05-12 11:57:45,218] ERROR in app: Exception on /hippocampe/api/v1.0/shadowbook [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "app.py", line 168, in shadowbookService
if 'error' in reportJob:
TypeError: argument of type 'NoneType' is not iterable
127.0.0.1 - - [12/May/2017 11:57:45] "GET /hippocampe/api/v1.0/shadowbook HTTP/1.1" 500 -

Complementary Information

Index seems to exist:
user1:~# curl 'http://localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open hippocampe 5 1 0 0 795b 795b
yellow open the_hive_8 5 1 4999 1 3.1mb 3.1mb

Problem in running Hippocampe

Hello,

Can anyone please help me?
I have installed Hippocampe on Ubuntu 16.04 LTS, 64-bit.
But I am not able to figure out how to make Hippocampe run and fetch feeds.
I have followed all the steps explained in the documentation as well.

Thanks in advance

All is empty

Re !

Now that Hippocampe works, I wanted to add data, but I cannot click anywhere. I cannot do anything with Hippocampe at the moment.


Upgrade to Elasticsearch v2/v5

Elasticsearch support for v1.7 is reaching its end in January: https://www.elastic.co/fr/support/eol
I tried to use v2.4, but got unexpected behavior: the job is never updated and stays ongoing forever.

{
  "AVkrJKJo4Y9D2h2GY9yi": {
    "startTime": "20161223T100538+0000",
    "status": "ongoing"
  }
}

Looking at the logs, it seems that something is silently failing:

services.shadowbook :: INFO :: shadowbook.startJob end
services.shadowbook :: INFO :: shadowbook.updateJob launched
services.shadowbook :: INFO :: updating with status: done

From /core/services/shadowbook.py

def updateJob(report, status):
	logger.info('shadowbook.updateJob launched')
	logger.info('updating with status: %s', status)
	#update job status, add the report and calculates the duration
	job = Job()
	job.updateStatus(report, status)
	logger.info('shadowbook.updateJob end')

And I never got this log:

services.shadowbook :: INFO :: shadowbook.updateJob end

Have you already tried to upgrade to a more recent version of ES (v2.x or v5.x)?
Did you encounter any blocking issues?

With Elasticsearch 5, distinct fails: 'size must be positive, got 0'

Hippocampe message:

{
  "error": "TransportError(400, u'search_phase_execution_exception', u'size must be positive, got 0')"
}

Elasticsearch log:

[2017-01-10T16:14:43,326][DEBUG][o.e.a.s.TransportSearchAction] [qLP6RLc] [hippocampe][4], node[qLP6RLc7Qw-gNa0FFpwoog], [P], s[STARTED], a[id=YZpE2OT_Qx-3ebqS3K4rqQ]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[hippocampe], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={
  "aggregations" : {
    "distinct" : {
      "terms" : {
        "field" : "ip",
        "size" : 0,
        "shard_size" : -1,
        "min_doc_count" : 1,
        "shard_min_doc_count" : 0,
        "show_term_doc_count_error" : false,
        "order" : [
          {
            "_count" : "desc"
          },
          {
            "_term" : "asc"
          }
        ]
      }
    }
  },
  "ext" : { }
}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [qLP6RLc][127.0.0.1:9300][indices:data/read/search[phase/query]]
Caused by: java.lang.IllegalArgumentException: size must be positive, got 0
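The cause here is a behavior change: in Elasticsearch 2.x, "size": 0 inside a terms aggregation meant "return all buckets", but ES 5 removed that shortcut and rejects zero. A sketch of the adjusted query; the field name and bucket bound are illustrative:

```python
# Sketch: ES 5 requires a positive "size" on terms aggregations.
# A large explicit upper bound replaces the old size:0 = "all buckets".
# Field name and bound are illustrative, not Hippocampe's actual query.

def distinct_terms_query(field, max_buckets=10000):
    return {
        "size": 0,  # top-level size 0 (return no hits) is still valid in ES 5
        "aggregations": {
            "distinct": {
                "terms": {"field": field, "size": max_buckets}  # must be > 0
            }
        },
    }
```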

With Elasticsearch 5, shadowbook fails: "script or doc is missing"

127.0.0.1 - - [10/Jan/2017 15:37:30] "GET /hippocampe/api/v1.0/shadowbook HTTP/1.1" 200 -
Exception in thread Thread-16:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/dc/dev/Hippocampe/core/services/shadowbook.py", line 117, in manageJob
    updateJob(report, status)
  File "/home/dc/dev/Hippocampe/core/services/shadowbook.py", line 106, in updateJob
    job.updateStatus(report, status)
  File "/home/dc/dev/Hippocampe/core/services/modules/shadowbook/objects/Job.py", line 105, in updateStatus
  File "/home/dc/dev/Hippocampe/core/services/modules/shadowbook/objects/ObjToIndex.py", line 76, in update
    qUpdate = self.es.update(index = self.indexNameES, doc_type = self.typeNameES, id = self.idES, body = self.docUpdate)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 452, in update
    doc_type, id, '_update'), params=params, body=body)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 327, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 109, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/base.py", line 113, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
RequestError: TransportError(400, u'action_request_validation_exception', u'Validation Failed: 1: script or doc is missing;')
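The "script or doc is missing" validation failure points at the body passed to es.update(): the Update API expects the partial document to be wrapped under a "doc" key (or a "script"), not passed as bare fields. A sketch of the wrapper; the field contents are illustrative:

```python
# Sketch: ES validates that an update body contains "doc" or "script";
# sending raw fields yields "Validation Failed: 1: script or doc is missing".

def make_update_body(fields):
    """Wrap a partial document the way es.update(body=...) expects."""
    return {"doc": fields}
```

If docUpdate in ObjToIndex.update is built without this wrapper, adding it would likely resolve the 400 above.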

Error when adding a new feed

Request Type

Bug

Work Environment

Question Answer
OS version (server) Alpine 3.4
OS version (client) Win10
Hippocampe version / git hash f4d8807
Package Type Docker
Browser type & version N/A
Elasticsearch version 5.5

Problem Description

When adding a new feed and launching shadowbook, I got this error on the new feed: ["TransportError(400, u'illegal_argument_exception', u"Can't parse [index] value [not_analyzed] for field [source], expected [true] or [false]")"] from Elasticsearch.

Steps to Reproduce

  1. Remove a feed from the core/conf/feeds directory
  2. Launch Hippocampe for the first time, on a new index and launch shadowbook
  3. Then, shutdown Hippocampe and restore the deleted feed
  4. Launch Hippocampe and launch Shadowbook again
  5. When shadowbook finishes, the new feed gets the error mentioned above

Possible Solutions

Looking at the Elasticsearch doc, the index parameter should be true or false (true by default).
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/keyword.html
In core/services/modules/shadowbook/objects/IndexIntel.py, the index parameter for the "source" field is "not_analyzed".
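The fix suggested above can be sketched as a small shim converting the legacy string values to the boolean that ES 5 expects. This is an illustration of the conversion, not a patch against IndexIntel.py:

```python
# Sketch: ES 5 keyword fields take a boolean "index" parameter; the legacy
# string values ("not_analyzed"/"analyzed"/"no") were removed and now raise
# illegal_argument_exception ("expected [true] or [false]").

def modernize_index_param(field_mapping):
    legacy = field_mapping.get("index")
    if legacy in ("not_analyzed", "analyzed"):
        field_mapping["index"] = True   # field is still indexed
    elif legacy == "no":
        field_mapping["index"] = False  # field is not indexed
    return field_mapping
```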

Internal Server Error for NGINX Setup

Request Type

Bug

Work Environment

Question Answer
OS version (server) Ubuntu 16.0.4
OS version (client) MacOS
Hippocampe version / git hash Latest
Package Type From source
Browser type & version N/A

Problem Description

The NGINX process fronting Hippocampe fails with an internal server error.

Steps to Reproduce

  1. Install hippocampe per docs
  2. Install nginx / uwsgi per docs
  3. Access the webserver http://ip/hippocampe
2017/12/23 23:09:49 [emerg] 5124#5124: invalid number of arguments in "try_files" directive in /etc/nginx/conf.d/hippo_nginx.conf:7
2017/12/23 23:12:44 [emerg] 5216#5216: invalid number of arguments in "try_files" directive in /etc/nginx/conf.d/hippo_nginx.conf:7
2017/12/23 23:15:18 [error] 5316#5316: *2 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 192.168.1.2, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.1.1", referrer: "http://192.168.1.1/"
2017/12/23 23:15:23 [error] 5316#5316: *2 open() "/usr/share/nginx/html/hippocamppe" failed (2: No such file or directory), client: 192.168.1.2, server: localhost, request: "GET /hippocamppe HTTP/1.1", host: "192.168.1.1"
2017/12/23 23:15:35 [error] 5316#5316: *2 open() "/usr/share/nginx/html/hippocampe" failed (2: No such file or directory), client: 192.168.1.2, server: localhost, request: "GET /hippocampe HTTP/1.1", host: "192.168.1.1"
2017/12/23 23:15:39 [error] 5316#5316: *2 "/usr/share/nginx/html/hippocampe/index.html" is not found (2: No such file or directory), client: 192.168.1.2, server: localhost, request: "GET /hippocampe/ HTTP/1.1", host: "192.168.1.1"
2017/12/23 23:16:59 [error] 5316#5316: *3 "/usr/share/nginx/html/hippocampe/index.html" is not found (2: No such file or directory), client: 192.168.1.2, server: localhost, request: "GET /hippocampe/ HTTP/1.1", host: "192.168.1.1"
2017/12/23 23:17:02 [error] 5316#5316: *3 open() "/usr/share/nginx/html/hippocampe" failed (2: No such file or directory), client: 192.168.1.2, server: localhost, request: "GET /hippocampe HTTP/1.1", host: "192.168.1.1"
2017/12/23 23:17:39 [error] 5316#5316: *3 open() "/usr/share/nginx/html/hippocampe" failed (2: No such file or directory), client: 192.168.1.2, server: localhost, request: "GET /hippocampe HTTP/1.1", host: "192.168.1.1"
2017/12/23 23:18:00 [error] 5316#5316: *4 open() "/usr/share/nginx/html/hippocampe" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /hippocampe HTTP/1.1", host: "localhost"
2017/12/23 23:25:48 [error] 1343#1343: *1 connect() to unix:/var/www/demoapp/demoapp_uwsgi.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.2, server: localhost, request: "GET /hippocampe HTTP/1.1", upstream: "uwsgi://unix:/var/www/demoapp/demoapp_uwsgi.sock:", host: "192.168.1.1"
2017/12/23 23:25:52 [error] 1343#1343: *1 connect() to unix:/var/www/demoapp/demoapp_uwsgi.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.2, server: localhost, request: "GET /hippocampe HTTP/1.1", upstream: "uwsgi://unix:/var/www/demoapp/demoapp_uwsgi.sock:", host: "192.168.1.1"
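Two distinct problems show in these logs: the "[emerg] invalid number of arguments in try_files" at hippo_nginx.conf:7 (the directive needs at least one file argument plus a final fallback, and a terminating semicolon), and later a "Connection refused" on the uWSGI socket, meaning the uWSGI process itself was not running. A config fragment of the usual shape, with illustrative paths rather than Hippocampe's shipped config:

```nginx
# Illustrative only: try_files requires >= 2 arguments; the last one is the
# fallback used when no file matches. Paths/socket are examples.
location /hippocampe {
    try_files $uri $uri/ @hippocampe;
}
location @hippocampe {
    include uwsgi_params;
    uwsgi_pass unix:/var/www/demoapp/demoapp_uwsgi.sock;
}
```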

Autoshun Feed Fails on "Empty Value Passed for a required argument body"

Request Type

Bug

Work Environment

Question Answer
OS version (server) Ubuntu 16.0.4
OS version (client) OS X
Hippocampe version / git hash Latest
Package Type Source
Browser type & version N/A

Problem Description

The AutoShun feed returns an error after ingestion is attempted.


shunlist_IP.conf | true | ["Empty value passed for a required argument 'body'."] | http://autoshun.org/files/shunlist.csv | 0 | 0 | 0 | 0

Steps to Reproduce

  1. Install hippocampe
  2. Start ingestion process
  3. Error returned (above)

Disabling of Feeds No Longer Active

Request Type

Bug

Work Environment

Question Answer
OS version (server) Ubuntu 16.04 LTS
OS version (client) OS X
Hippocampe version / git hash Latest
Package Type Source
Browser type & version N/A

Problem Description

Some of the feeds have been shut down / disabled.

abuse_palevo_IP.conf | true | ["403 Client Error: Forbidden for url: https://palevotracker.abuse.ch/blocklists.php?download=ipblocklist"] | https://palevotracker.abuse.ch/blocklists.php?download=ipblocklist

and

abuse_palevo_DOMAIN.conf | true | ["403 Client Error: Forbidden for url: https://palevotracker.abuse.ch/blocklists.php?download=domainblocklist"] | https://palevotracker.abuse.ch/blocklists.php?download=domainblocklist

and

openbl_IP.conf | true | ["404 Client Error: Not Found for url: http://www.openbl.org/lists/base_all.txt"] | http://www.openbl.org/lists/base_all.txt

Steps to Reproduce

  1. Install Hippocampe
  2. Initiate feed ingestion
  3. Errors returned.

Possible Solutions

Disable feed, replace with others.

First start failed because of missing directory

Traceback (most recent call last):
  File "app.py", line 383, in <module>
    file_handler = RotatingFileHandler(pathLog, 'a', 1000000, 1)
  File "/usr/lib/python2.7/logging/handlers.py", line 117, in __init__
    BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib/python2.7/logging/handlers.py", line 64, in __init__
    logging.FileHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib/python2.7/logging/__init__.py", line 913, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib/python2.7/logging/__init__.py", line 943, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/home/dc/dev/Hippocampe/core/logs/hippocampe.log'

Hippocampe Errors on ImportError: No module named configparser

Request Type

Bug

Work Environment

Question Answer
OS version (server) Ubuntu 16.4 LTS
OS version (client) OS X
Hippocampe version / git hash Latest
Package Type From source
Browser type & version N/A

Problem Description

When executing the app.py script, the following error is produced.

Traceback (most recent call last):
  File "./app.py", line 5, in <module>
    from services import shadowbook, hipposcore, more, distinct, typeIntel, new, sources, hipposched, freshness, schedReport, lastQuery, sizeBySources, sizeByType, monitorSources, jobs, lastStatus
  File "/opt/Hippocampe/core/services/shadowbook.py", line 8, in <module>
    from modules.shadowbook import createSessions
  File "/opt/Hippocampe/core/services/modules/shadowbook/createSessions.py", line 10, in <module>
    from getConf import getConf
  File "/opt/Hippocampe/core/services/modules/shadowbook/../../../services/modules/common/getConf.py", line 4, in <module>
    from configparser import ConfigParser
ImportError: No module named configparser

Steps to Reproduce

  1. Follow the install steps in the docs.
  2. Execute the app.py script.
  3. Error returned

Possible Solutions

Unknown. The module is installed for both Python 2 and Python 3.
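The traceback above is a Python 2/3 naming issue: "configparser" (lowercase) is the Python 3 module name, while Python 2 ships it as "ConfigParser". Running getConf.py under Python 2 therefore fails on the lowercase import. A common compatibility shim, shown as a sketch rather than a proposed patch:

```python
# Sketch: import the config parser under either Python version.
try:
    from configparser import ConfigParser   # Python 3 module name
except ImportError:
    from ConfigParser import ConfigParser   # Python 2 fallback
```

Alternatively, installing the "configparser" backport package makes the lowercase name available on Python 2.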

No jobs execution + error 500

Hello,

After reinstalling everything correctly, the error remains.
I noticed error 500 in the attached logs:

 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [26/Jan/2017 10:21:48] "GET /hippocampe HTTP/1.1" 200 -
127.0.0.1 - - [26/Jan/2017 10:21:49] "GET /hippocampe/api/v1.0/sizeByType HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:21:49] "GET /hippocampe/api/v1.0/sizeBySources HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:21:49] "GET /hippocampe/api/v1.0/monitorSources HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:21:49] "GET /hippocampe/api/v1.0/type HTTP/1.1" 200 -
127.0.0.1 - - [26/Jan/2017 10:21:49] "GET /hippocampe/api/v1.0/type HTTP/1.1" 200 -
127.0.0.1 - - [26/Jan/2017 10:21:58] "GET /hippocampe/api/v1.0/shadowbook HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:22:05] "GET /hippocampe/api/v1.0/jobs HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:22:06] "GET /hippocampe/api/v1.0/sizeByType HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:22:06] "GET /hippocampe/api/v1.0/sizeBySources HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:22:06] "GET /hippocampe/api/v1.0/monitorSources HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:22:06] "GET /hippocampe/api/v1.0/type HTTP/1.1" 200 -
127.0.0.1 - - [26/Jan/2017 10:22:07] "GET /hippocampe/api/v1.0/type HTTP/1.1" 200 -
127.0.0.1 - - [26/Jan/2017 10:22:07] "GET /hippocampe/api/v1.0/sources HTTP/1.1" 500 -
127.0.0.1 - - [26/Jan/2017 10:22:08] "POST /hippocampe/api/v1.0/hipposcore HTTP/1.1" 500 -

With Elasticsearch 5, shadowbook fails: "illegal_argument_exception"

127.0.0.1 - - [10/Jan/2017 15:38:08] "GET /hippocampe/api/v1.0/shadowbook HTTP/1.1" 200 -
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/dc/dev/Hippocampe/core/services/shadowbook.py", line 117, in manageJob
    updateJob(report, status)
  File "/home/dc/dev/Hippocampe/core/services/shadowbook.py", line 106, in updateJob
    job.updateStatus(report, status)
  File "/home/dc/dev/Hippocampe/core/services/modules/shadowbook/objects/Job.py", line 108, in updateStatus
    self.update()
  File "/home/dc/dev/Hippocampe/core/services/modules/shadowbook/objects/ObjToIndex.py", line 76, in update
    qUpdate = self.es.update(index = self.indexNameES, doc_type = self.typeNameES, id = self.idES, body = self.docUpdate)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 452, in update
    doc_type, id, '_update'), params=params, body=body)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 327, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 109, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/base.py", line 113, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
RequestError: TransportError(400, u'illegal_argument_exception', u'[qLP6RLc][127.0.0.1:9300][indices:data/write/update[s]]')

Problem in hippocampe aggregator feed retrieval

Work Environment

Question Answer
OS version (server) Ubuntu 16.04
OS version (client) MAC OS 10.12

Request Type:
Beginner's Problem

Description:
I installed Hippocampe recently, following its user guide. I built Hippocampe and Elasticsearch from scratch (not the Docker version).

When I try to retrieve the feeds from the shadowbook service (using the feed retrieval method mentioned in the documentation),
I always get the following:

{"AWSI0Z8MNNNpe3UggzPE":{"startTime":"20180711T150956+0500","status":"ongoing"}}

When I try to stop this process from the Ubuntu command-line terminal with:

curl -XDELETE http://MYIPADDRESS:5000/hippocampe/jobs

I get this output:

405 Method Not Allowed

The method is not allowed for the requested URL.

I need to fix this issue. Please guide me through this. Thanks!

Hippocampe as a service

Hello, I tried to run Hippocampe as a service following your documentation, but at the end I get the message "Internal Server Error".

Can you point me to a solution?

Thanks

Error 500 on MonitorSources

Bug

Question Answer
OS version (server) Ubuntu
OS version (client) 10
Hippocampe version / git hash Latest commit f4d8807
Package Type Binary
Browser type & version All Browsers

Problem Description

Unable to use the Monitor Sources resource in the WebUI.
The monitorSources page returns an error 500.

Steps to Reproduce

Just click on the home page.
No monitor sources are displayed.

Complementary information

2018-03-24 17:36:34,605 :: services :: INFO :: monitorSources service requested
2018-03-24 17:36:34,608 :: services.modules.common.ES :: INFO :: ES.checkES launched
2018-03-24 17:36:34,620 :: services.modules.common.ES :: INFO :: ES.checkData launched
2018-03-24 17:36:34,623 :: services.modules.common.ES :: INFO :: ['sourceType']
2018-03-24 17:36:34,658 :: services.modules.common.ES :: INFO :: index hippocampe and type source exist
2018-03-24 17:36:34,668 :: services.monitorSources :: INFO :: monitorSources.main launched
2018-03-24 17:36:34,669 :: services.lastQuery :: INFO :: lastQuery.main launched
2018-03-24 17:36:34,670 :: services.sources :: INFO :: sources.main launched
2018-03-24 17:36:34,740 :: services.sources :: INFO :: sources.main end
2018-03-24 17:36:34,746 :: services.lastQuery :: INFO :: lastQuery.main end
2018-03-24 17:36:34,752 :: services.schedReport :: INFO :: schedReport.main launched
2018-03-24 17:36:34,762 :: services.sources :: INFO :: sources.main launched
2018-03-24 17:36:34,810 :: services.sources :: INFO :: sources.main end
2018-03-24 17:36:34,827 :: services.schedReport :: INFO :: schedReport.main end
2018-03-24 17:36:34,830 :: services.sizeBySources :: INFO :: sizeBySources.main launched
2018-03-24 17:36:34,832 :: services.sources :: INFO :: sources.main launched
2018-03-24 17:36:34,881 :: services.sources :: INFO :: sources.main end
2018-03-24 17:36:35,369 :: services.sizeBySources :: INFO :: sizeBySources.main end
2018-03-24 17:36:35,371 :: services.freshness :: INFO :: freshness.main launched
2018-03-24 17:36:35,381 :: services.sources :: INFO :: sources.main launched
2018-03-24 17:36:35,429 :: services.sources :: INFO :: sources.main end
2018-03-24 17:36:35,445 :: services.freshness :: INFO :: freshness.main end
2018-03-24 17:36:35,448 :: services.lastStatus :: INFO :: lastStatus.main launched
2018-03-24 17:36:35,471 :: services.lastStatus :: ERROR :: lastStatus.main failed, no idea where it came from...
Traceback (most recent call last):
File "/home/thehive/Hippocampe/core/services/lastStatus.py", line 14, in main
globalReport = bag.getLastJob()
File "/home/thehive/Hippocampe/core/services/modules/jobs/BagOfJobs.py", line 108, in getLastJob
report = self.matchResponse['hits']['hits'][0]['_source']
IndexError: list index out of range
2018-03-24 17:36:35,490 :: services.monitorSources :: INFO :: monitorSources.main end
2018-03-24 17:36:35,491 :: services :: ERROR :: monitorSources failed
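The IndexError in BagOfJobs.getLastJob happens when no job has ever run: the search for jobs returns an empty hits list, and indexing [0] blows up, which then fails the whole monitorSources page. A defensive sketch mirroring the response shape from the traceback; the guard itself is an illustration, not the project's fix:

```python
# Sketch: guard against an empty hits list when no job document exists yet.
# The response shape ({"hits": {"hits": [...]}}) mirrors the traceback above.

def last_job_report(match_response):
    hits = match_response.get("hits", {}).get("hits", [])
    if not hits:
        return None  # no job indexed yet; caller can report "no jobs run"
    return hits[0]["_source"]
```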

Develop Dockerfile won't build: npm not installed as part of Node.js

Bug

Work Environment

Question Answer
OS version (server) Docker build
OS version (client) N/a
Hippocampe version / git hash 8055289 (develop branch)
Package Type Docker
Browser type & version N/a

Problem Description

Dockerfile build errors with /bin/sh: npm: not found

Steps to Reproduce

  1. Checkout develop branch
  2. docker build .

Possible Solutions

Install nodejs-npm package in Dockerfile (as per my commit)

Feature Request: Feed scheduling

Feature Request

While reading through the documentation I came across this section

This seems very brittle and transient. I think removing this as a hidden API call and making it a configuration option for the feed in '../Hippocampe/core/conf/feeds' might be a more useful feature, since that's where we configure the feeds we're going to download.

Keeping the API call while also making it configurable via the feed configurations also seems solid.

Error retrieving feeds: illegal_argument_exception

Hi, I'm running into trouble while retrieving feeds...
I installed Hippocampe some days ago.
I've configured Hippocampe to run as a service. When I started it everything went OK, but when I ran shadowbook and then checked the results, I got these errors:

TransportError(400, u'illegal_argument_exception', u'Mapper for [description] conflicts with existing mapping in other types:\n[mapper [description] is used by multiple types. Set update_all_types to true to update [fielddata] across all types.]')

Request Type

Possible Bug / Problem

Work Environment

Ubuntu 16.04
Hippocampe version installed: commit ce75aae
Elasticsearch: 5.6.2

Complementary information

Partial output for the command: curl -GET http://localhost:80/hippocampe/api/v1.0/jobs

"openphish_URL.conf": {
"activated": true,
"error": [
"TransportError(400, u'illegal_argument_exception', u'Mapper for [description] conflicts with existing mapping in other types:\n[mapper [description] is used by multiple types. Set update_all_types to true to update [fielddata] across all types.]')"
],
"link": "https://openphish.com/feed.txt",
"nbFailed": 0,
"nbIndex": 0,
"nbNew": 0,
"nbUpdate": 0
},
"phishtank_URL.conf": {
"activated": true,
"error": [
"TransportError(400, u'illegal_argument_exception', u'Mapper for [description] conflicts with existing mapping in other types:\n[mapper [description] is used by multiple types. Set update_all_types to true to update [fielddata] across all types.]')"
],
"link": "http://data.phishtank.com/data/online-valid.csv",
"nbFailed": 0,
"nbIndex": 0,
"nbNew": 0,
"nbUpdate": 0
},

hippocampe's log is saying:

2017-10-17 10:57:40,063 :: services.modules.shadowbook.processFeed :: INFO :: processFeed.main for binarydefense_IP.conf end
2017-10-17 10:57:40,063 :: services.modules.shadowbook.processFeed :: INFO :: processFeed.main launched for ET_IP.conf
2017-10-17 10:57:40,065 :: services.modules.shadowbook.objects.Source :: INFO :: Source.isActive starts
2017-10-17 10:57:40,065 :: services.modules.shadowbook.objects.Source :: INFO :: E scenario
2017-10-17 10:57:40,078 :: services.modules.shadowbook.processFeed :: ERROR :: processFeed.main failed for ET_IP.conf, no idea where it came from
Traceback (most recent call last):
File "./services/modules/shadowbook/processFeed.py", line 50, in main
source.indexSourceInES()
File "./services/modules/shadowbook/objects/Source.py", line 131, in indexSourceInES
indexSource.createIndexSource()
File "./services/modules/shadowbook/objects/IndexSource.py", line 80, in createIndexSource
self.create()
File "./services/modules/shadowbook/objects/Index.py", line 49, in create
indexES.put_mapping(doc_type = self.typeNameES, body = self.docMapping)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/indices.py", line 282, in put_mapping
'_mapping', doc_type), params=params, body=body)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
RequestError: TransportError(400, u'illegal_argument_exception', u'Mapper for [description] conflicts with existing mapping in other types:\n[mapper [description] is used by multiple types. Set update_all_types to true to update [fielddata] across all types.]')
2017-10-17 10:57:40,078 :: services.modules.shadowbook.processFeed :: INFO :: processFeed.main for ET_IP.conf end
2017-10-17 10:57:40,078 :: services.shadowbook :: INFO :: shadowbook.startJob end
2017-10-17 10:57:40,078 :: services.shadowbook :: INFO :: shadowbook.updateJob launched
2017-10-17 10:57:40,078 :: services.shadowbook :: INFO :: updating with status: done
2017-10-17 10:57:40,115 :: services.shadowbook :: INFO :: shadowbook.updateJob end
2017-10-17 10:57:40,115 :: services.shadowbook :: INFO :: shadowbook.manageJob end
2017-10-17 10:57:56,241 :: services :: INFO :: jobs service requested
2017-10-17 10:57:56,243 :: services.modules.common.ES :: INFO :: ES.checkES launched
2017-10-17 10:57:56,246 :: services.modules.common.ES :: INFO :: ES.checkData launched
2017-10-17 10:57:56,246 :: services.modules.common.ES :: INFO :: ['jobsType']
2017-10-17 10:57:56,250 :: services.modules.common.ES :: INFO :: index hippocampe and type jobs exist
2017-10-17 10:57:56,250 :: services.jobs :: INFO :: jobs.main launched
2017-10-17 10:57:56,255 :: services.jobs :: INFO :: jobs.main end

curl -PUT http://localhost:9200/hippocampe/_mapping?pretty
{
"hippocampe" : {
"mappings" : {
"jobs" : {
"properties" : {
"duration" : {
"type" : "float"
},
"endTime" : {
"type" : "date",
"format" : "basic_date_time_no_millis"
},
"report" : {
"properties" : {
"ET_IP" : {
"properties" : {
"conf" : {
"properties" : {
"activated" : {
"type" : "boolean"
},
"error" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"link" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"nbFailed" : {
"type" : "long"
},
"nbIndex" : {
"type" : "long"
},
"nbNew" : {
"type" : "long"
},
"nbUpdate" : {
"type" : "long"
}
}
}
}
},

It seems that ES is complaining about some conflicting mappings... How can I "set update_all_types to true to update [fielddata] across all types"?

Thank you :)
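One hedged workaround sketch, assuming Elasticsearch 5.x: the error message refers to the `update_all_types` query parameter, which can be sent along with the PUT mapping request (elasticsearch-py forwards extra query parameters via `params`). The helper below only illustrates the resulting URL; it is not Hippocampe's actual code:

```python
def put_mapping_url(index, doc_type, update_all_types=False):
    """Build an ES 5.x PUT mapping URL; update_all_types=true relaxes the
    cross-type field conflict check reported in this issue."""
    url = "http://localhost:9200/%s/_mapping/%s" % (index, doc_type)
    if update_all_types:
        url += "?update_all_types=true"
    return url

print(put_mapping_url("hippocampe", "openphish_URL", update_all_types=True))
```

Note this parameter was removed in Elasticsearch 6, where multiple mapping types were deprecated altogether.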

With Elasticsearch 5, indexing domain feed fails

2017-01-10 18:38:24,957 :: services.modules.shadowbook.processFeed :: ERROR :: processFeed.main failed for domain.conf, no idea where it came from
Traceback (most recent call last):
  File "/home/dc/dev/Hippocampe/core/services/modules/shadowbook/processFeed.py", line 67, in main
    resMsearch = searchIntel.littleMsearch(source.coreIntelligence, source.typeNameESIntel, parsedPage)
  File "/home/dc/dev/Hippocampe/core/services/modules/shadowbook/searchIntel.py", line 72, in littleMsearch
    res = es.msearch(body = req)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 803, in msearch
    raise ValueError("Empty value passed for a required argument 'body'.")
ValueError: Empty value passed for a required argument 'body'.
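The `ValueError` suggests the parsed feed page yielded no entries, so the msearch request body ends up empty. A defensive sketch of the body-building step (function and field names are illustrative, not Hippocampe's actual code): return `None` when there is nothing to search, and have the caller skip `es.msearch()` in that case.

```python
def build_msearch_body(field, doc_type, values):
    """Build an ndjson msearch body; return None when the parsed page is
    empty, so the caller never passes an empty body to es.msearch()."""
    lines = []
    for value in values:
        lines.append('{"type": "%s"}' % doc_type)
        lines.append('{"query": {"match": {"%s": "%s"}}}' % (field, value))
    return "\n".join(lines) + "\n" if lines else None

body = build_msearch_body("domain", "domainType", [])
# caller side: only issue es.msearch(body=body) when body is not None
```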

Shadowbook not running, throws 500 error to web server

Request Type

Bug

Work Environment

Ubuntu, 1604 Server
Hippocampe current
Python env 2.7.12
PIP 9.0.1

Problem Description

Can anyone help with the following error, same problem running command line or service in nginx

[2017-06-09 12:58:17,951] ERROR in app: Exception on /hippocampe/api/v1.0/shadowbook [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functionsrule.endpoint
File "./app.py", line 168, in shadowbookService
if 'error' in reportJob:
TypeError: argument of type 'NoneType' is not iterable
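The crash happens because `reportJob` comes back as `None` (the job initialisation failed earlier), and `'error' in None` raises `TypeError`. A defensive sketch of the check, not the project's actual code:

```python
def check_report(reportJob):
    """Return an (http_status, payload) pair, tolerating a None report
    instead of raising TypeError on the membership test."""
    if reportJob is None:
        return 500, {"error": "shadowbook job could not be initialised"}
    if "error" in reportJob:
        return 500, reportJob
    return 200, reportJob

status, payload = check_report(None)  # no longer raises
```

The underlying cause of `reportJob` being `None` still needs fixing, but the guard at least surfaces a proper JSON error instead of a 500 HTML page.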

Job blocked "ongoing"

Request Type

Bug

Work Environment

OS version (server): Debian 4.9.65-3+deb9u2 (2018-01-04) x86_64
Git hash: commit f4d8807
Package type: From source

Problem Description

Job blocked "ongoing"

Steps to Reproduce

  1. Install Hippocampe
  2. Run shadowbook
  3. Retrieve feeds automatically and periodically
  4. Wait for a few updates until one task stays blocked

Possible Solutions

N/A

Complementary information

From command: curl -GET http://localhost:5000/hippocampe/api/v1.0/jobs
=>
"AWFJifRJDaFPKWQ1ZZhx": {
"startTime": "20180131T010713+0100",
"status": "ongoing"
}
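A hedged cleanup sketch for this symptom: detect jobs that have been "ongoing" longer than some threshold so they can be re-marked as failed and stop blocking new runs. The job structure mirrors the JSON above; the threshold and the assumption that `startTime` has already been parsed into a `datetime` are mine, not Hippocampe's:

```python
from datetime import datetime, timedelta

def stale_job_ids(jobs, now, max_age=timedelta(hours=6)):
    """Return ids of jobs still 'ongoing' after max_age; a cleanup task
    could then mark them failed so new shadowbook runs are not blocked.
    `jobs` maps job id -> {'status': ..., 'startTime': datetime}."""
    return [job_id for job_id, job in jobs.items()
            if job["status"] == "ongoing"
            and now - job["startTime"] > max_age]

now = datetime(2018, 1, 31, 12, 0)
jobs = {"AWFJifRJDaFPKWQ1ZZhx": {"status": "ongoing",
                                 "startTime": datetime(2018, 1, 31, 1, 7)}}
print(stale_job_ids(jobs, now))  # the stuck job from the report above
```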

With Elasticsearch 5, sizeByType fails

{
  "error": "'NoneType' object has no attribute '__getitem__'"
}
2017-01-12 10:59:52,612 :: services.sizeByType :: ERROR :: sizeByType.main failed, no idea where it came from...
Traceback (most recent call last):
  File "/home/dc/dev/Hippocampe/core/services/sizeByType.py", line 15, in main
    for typeIntelligence in typeReport['type']:
TypeError: 'NoneType' object has no attribute '__getitem__'
2017-01-12 10:59:52,612 :: services :: ERROR :: sizeByType failed

Run as a service

Hello !

Do you have a solution for running Hippocampe as a service?

Thank you
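One common approach is a systemd unit, sketched below (the user, paths, and the plain `app.py` invocation are assumptions; adapt them to your install, e.g. swap in gunicorn behind nginx for production):

```ini
# /etc/systemd/system/hippocampe.service -- illustrative sketch only
[Unit]
Description=Hippocampe threat feed aggregator
After=network.target elasticsearch.service

[Service]
User=hippocampe
WorkingDirectory=/opt/Hippocampe/core
ExecStart=/usr/bin/python /opt/Hippocampe/core/app.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now hippocampe`.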

Feature Request: Alerting for Networks

Request Type

Feature Request

Problem Description

It would be great if I could add the network ranges of my internet-facing devices, and Hippocampe would alert (e.g. in a log file) if IPs from within those ranges were listed on a feed.
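A minimal sketch of the matching logic such a feature would need, using the Python 3 stdlib `ipaddress` module (the watch list, function names, and log destination are assumptions, not part of Hippocampe):

```python
import ipaddress

# hypothetical watch list of internet-facing ranges (RFC 5737 examples)
WATCHED_NETWORKS = [ipaddress.ip_network(u"198.51.100.0/24"),
                    ipaddress.ip_network(u"203.0.113.0/24")]

def watched_hits(feed_ips, networks=WATCHED_NETWORKS):
    """Return feed IPs that fall inside any watched network range."""
    hits = []
    for raw in feed_ips:
        try:
            ip = ipaddress.ip_address(raw)
        except ValueError:
            continue  # skip non-IP entries coming from the feed
        if any(ip in net for net in networks):
            hits.append(raw)
    return hits

print(watched_hits([u"198.51.100.23", u"192.0.2.1", u"not-an-ip"]))
```

Each hit could then be appended to a log file or pushed as an alert.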

[front-end] no notification when using hipposcore and shadowbook

No notifications show up when using the web interface with hipposcore and shadowbook

Error: e.get(...).success is not a function
this.$get</y@http://hippocam.pe/bower_components/angular-ui-notification/dist/angular-ui-notification.min.js:8:1068
this.$get</y.error@http://hippocam.pe/bower_components/angular-ui-notification/dist/angular-ui-notification.min.js:8:2671
$scope.hipposcore/<@http://hippocam.pe/app/hipposcore/module.js:50:7
processQueue@http://hippocam.pe/bower_components/angular/angular.js:16648:37
scheduleProcessQueue/<@http://hippocam.pe/bower_components/angular/angular.js:16692:27
$RootScopeProvider/this.$get</Scope.prototype.$eval@http://hippocam.pe/bower_components/angular/angular.js:17972:16
$RootScopeProvider/this.$get</Scope.prototype.$digest@http://hippocam.pe/bower_components/angular/angular.js:17786:15
$RootScopeProvider/this.$get</Scope.prototype.$apply@http://hippocam.pe/bower_components/angular/angular.js:18080:13
done@http://hippocam.pe/bower_components/angular/angular.js:12210:36
completeRequest@http://hippocam.pe/bower_components/angular/angular.js:12436:7
requestLoaded@http://hippocam.pe/bower_components/angular/angular.js:12364:9
consoleLog/<()angular.js (line 14328)
$ExceptionHandlerProvider/this.$get</<()angular.js (line 10837)
processChecks()angular.js (line 16674)
$RootScopeProvider/this.$get</Scope.prototype.$eval()angular.js (line 17972)
$RootScopeProvider/this.$get</Scope.prototype.$digest()angular.js (line 17786)
$RootScopeProvider/this.$get</Scope.prototype.$apply()angular.js (line 18080)
done()angular.js (line 12210)
completeRequest()angular.js (line 12436)
requestLoaded()angular.js (line 12364)
["Possibly unhandled rejection: {}"]

It seems to be linked with this issue: alexcrack/angular-ui-notification#106

Fresh install of hippocampe not working with Elasticsearch 5.3.x

Fresh install of hippocampe not working with Elasticsearch 5.3.x

Request Type

Bug

Work Environment

OS version (server): RedHat 7.5
Hippocampe version / git hash: f4d8807
Package type: From source
Browser type & version: curl

Problem Description

Unable to query shadowbook or enable sources due to the following error:

curl -XGET 10.x.x.x:5000/hippocampe/api/v1.0/shadowbook
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

Hippocampe is being executed via app.py, it shows the following error:

[2018-10-11 23:19:12,046] ERROR in app: Exception on /hippocampe/api/v1.0/shadowbook [GET]
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "app.py", line 168, in shadowbookService
    if 'error' in reportJob:
TypeError: argument of type 'NoneType' is not iterable
10.4.6.106 - - [11/Oct/2018 23:19:12] "GET /hippocampe/api/v1.0/shadowbook HTTP/1.1" 500 -

Adding a print_exc() at shadowbook.py#100 shows the following backtrace:

Traceback (most recent call last):
  File "/opt/Hippocampe/core/services/shadowbook.py", line 79, in initJob
    indexJob.createIndexJob()
  File "/opt/Hippocampe/core/services/modules/shadowbook/objects/IndexJob.py", line 71, in createIndexJob
    self.create()
  File "/opt/Hippocampe/core/services/modules/shadowbook/objects/Index.py", line 49, in create
    indexES.put_mapping(doc_type = self.typeNameES, body = self.docMapping)
  File "/usr/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/lib/python2.7/site-packages/elasticsearch/client/indices.py", line 282, in put_mapping
    '_mapping', doc_type), params=params, body=body)
  File "/usr/lib/python2.7/site-packages/elasticsearch/transport.py", line 318, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 124, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
RequestError: TransportError(400, u'illegal_argument_exception', u'mapper [status] cannot be changed from type [text] to [keyword]')

This is likely caused by https://github.com/TheHive-Project/Hippocampe/blob/master/core/services/modules/shadowbook/objects/IndexJob.py#L51

Steps to Reproduce

  1. Follow the Install instructions
  2. run 'python core/app.py'
  3. Curl an api url such as 'curl -XGET 127.0.0.1:5000/hippocampe/api/v1.0/shadowbook'
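Since Elasticsearch 5 forbids redefining a field (here `[status]`) with a different type across mapping types of the same index, one hedged mitigation is to compare the mapping about to be PUT with what already exists, and skip or surface the conflict instead of calling `put_mapping` blindly. A pure dict-comparison sketch, not Hippocampe's actual code:

```python
def mapping_conflicts(existing, desired):
    """Return field names whose 'type' differs between an existing ES
    mapping's properties and the mapping we are about to PUT."""
    conflicts = []
    for field, spec in desired.get("properties", {}).items():
        current = existing.get("properties", {}).get(field)
        if current and current.get("type") != spec.get("type"):
            conflicts.append(field)
    return conflicts

existing = {"properties": {"status": {"type": "text"}}}   # what ES holds
desired = {"properties": {"status": {"type": "keyword"}}}  # what IndexJob PUTs
print(mapping_conflicts(existing, desired))  # ['status']
```

On a fresh install, deleting the `hippocampe` index and letting it be recreated with the new, consistent mapping may also clear the conflict.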

Shadowbook keeps crashing ES

I have a DO box with 4 GB RAM that regularly runs at 60% utilization and keeps crashing when I run the shadowbook API call. Once it crashes, the job is stuck in "ongoing" and I can't run anything else. I tried limiting the number of threads per CPU and that didn't help. Any thoughts?

I did get a handful indexed, but it never completes. I tried deleting the hippocampe index and starting over a few times with no luck. This is the log right when it crashes. Any ideas? More memory isn't quite possible right now.

2017-02-05 17:22:46,259 :: services.modules.shadowbook.objects.Source :: INFO :: E scenario
2017-02-05 17:22:46,337 :: services.modules.shadowbook.objects.Source :: INFO :: http://mirror1.malwaredomains.com/files/domains.txt querried for the first time
2017-02-05 17:22:46,948 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.littleMsearch launched
2017-02-05 17:22:52,524 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.littleMsearch end
2017-02-05 17:22:52,541 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.littleSort launched
2017-02-05 17:22:52,574 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.littleSort end
2017-02-05 17:22:52,575 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.bigMsearch launched
2017-02-05 17:22:57,266 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.bigMsearch end
2017-02-05 17:22:57,294 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.bigSort launched
2017-02-05 17:22:57,315 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.bigSort end
2017-02-05 17:22:57,315 :: services.modules.shadowbook.bulkOp :: INFO :: bulkOp.indexNew launched
2017-02-05 17:23:00,153 :: services.modules.shadowbook.bulkOp :: INFO :: bulkOp.index res: (17406, [])
2017-02-05 17:23:00,153 :: services.modules.shadowbook.bulkOp :: INFO :: bulkOp.indexNew end
2017-02-05 17:23:00,263 :: services.modules.shadowbook.bulkOp :: INFO :: bulkOp.index launched
2017-02-05 17:23:05,199 :: services.modules.shadowbook.bulkOp :: INFO :: bulkOp.index res: (17482, [])
2017-02-05 17:23:05,199 :: services.modules.shadowbook.bulkOp :: INFO :: bulkOp.index end
2017-02-05 17:23:05,200 :: services.modules.shadowbook.processFeed :: INFO :: <type 'tuple'>
2017-02-05 17:23:05,200 :: services.modules.shadowbook.processFeed :: INFO :: processFeed.main for dnsbh_DOMAIN.conf end
2017-02-05 17:23:05,256 :: services.modules.shadowbook.processFeed :: INFO :: processFeed.main launched for openbl_IP.conf
2017-02-05 17:23:05,264 :: services.modules.shadowbook.objects.Source :: INFO :: Source.isActive starts
2017-02-05 17:23:05,264 :: services.modules.shadowbook.objects.Source :: INFO :: E scenario
2017-02-05 17:23:05,409 :: services.modules.shadowbook.objects.Source :: INFO :: http://www.openbl.org/lists/base_all.txt querried for the first time
2017-02-05 17:23:07,003 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.littleMsearch launched
2017-02-05 17:24:11,870 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.littleMsearch end
2017-02-05 17:24:11,997 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.littleSort launched
2017-02-05 17:24:12,320 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.littleSort end
2017-02-05 17:24:12,320 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.bigMsearch launched
2017-02-05 17:25:04,692 :: services.modules.shadowbook.searchIntel :: INFO :: searchIntel.bigMsearch end
2017-02-05 17:25:04,987 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.bigSort launched
2017-02-05 17:25:05,189 :: services.modules.shadowbook.processMsearch :: INFO :: processMsearch.bigSort end
2017-02-05 17:25:05,189 :: services.modules.shadowbook.bulkOp :: INFO :: bulkOp.indexNew launched
2017-02-05 17:25:05,197 :: services.modules.shadowbook.processFeed :: ERROR :: processFeed.main failed for openbl_IP.conf, no idea where it came from

With Elasticsearch 5, type fails

127.0.0.1 - - [12/Jan/2017 11:06:00] "GET /hippocampe/api/v1.0/sizeByType HTTP/1.1" 500 -
[2017-01-12 11:07:12,476] ERROR in app: Exception on /hippocampe/api/v1.0/type [GET]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1988, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1641, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1544, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1639, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1625, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "app.py", line 218, in typeIntelService
    if 'error' in result:
TypeError: argument of type 'NoneType' is not iterable

Unable to schedule hipposched - unable to determine the name of the local timezone

Request Type

Bug

Work Environment

OS version (server): RedHat
OS version (client): Seven
Package type: Docker

Problem Description

Unable to schedule hipposched as specified in docs. Error 500.

Steps to Reproduce

  1. made a curl request as specified in docs:
 curl -i -H "Content-Type: application/json" -X POST -d '{"time" : "* */12 * * *"}' "https://XXXX/hippocampe/api/v1.0/hipposched"
HTTP/1.0 500 INTERNAL SERVER ERROR
Content-Type: application/json
Content-Length: 287
Server: Werkzeug/0.14.1 Python/2.7.14
Date: Fri, 29 Jun 2018 08:10:21 GMT
Set-Cookie: XXXXXXXX; path=/; HttpOnly; Secure
Connection: keep-alive

{"error":"Unable to determine the name of the local timezone -- you must explicitly specify the name of the local timezone. Please refrain from using timezones like EST to prevent problems with daylight saving time. Instead, use a locale based timezone name (such as Europe/Helsinki)."} 
  1. Get error: Unable to determine the name of the local timezone
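This error comes from the scheduler's local-timezone lookup (APScheduler relies on tzlocal), which can fail inside minimal Docker images that have no timezone configured. A hedged workaround is to hand the container an explicit IANA zone name, e.g. via the `TZ` environment variable (`docker run -e TZ=Europe/Paris ...`), which tzlocal honours. Sketched as a defensive helper (the default zone here is an assumption):

```python
import os

def ensure_tz(default=u"Europe/Paris"):
    """Make sure an IANA timezone name is visible to libraries such as
    tzlocal; only sets TZ when the environment does not already define one."""
    os.environ.setdefault("TZ", default)
    return os.environ["TZ"]

print(ensure_tz())
```

Alternatively, the scheduler can be constructed with an explicit `timezone=...` argument instead of relying on autodetection.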

Data access and storage issue

Hi,
Hope you are doing well!

I am using Ubuntu 16.04 server.
I want to access the data Hippocampe stores in Elasticsearch, and I would also like to save that data in TXT or CSV format. Is that possible?
Elasticsearch version is 6.2.4.

Thanks in Advance
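Yes: everything lives in ordinary Elasticsearch indices, so the documents can be fetched (e.g. with `elasticsearch.helpers.scan`) and written out with the stdlib `csv` module. A sketch of just the CSV step, assuming the hits have already been retrieved (field names are illustrative):

```python
import csv
import io

def hits_to_csv(hits, fields):
    """Flatten ES hit dicts (their _source) into CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(fields)  # header row
    for hit in hits:
        src = hit.get("_source", {})
        writer.writerow([src.get(f, "") for f in fields])
    return buf.getvalue()

hits = [{"_source": {"ip": "198.51.100.1", "source": "openbl"}}]
print(hits_to_csv(hits, ["ip", "source"]))
```

Write the returned string to a `.csv` file, or join the values with newlines for a plain `.txt` export.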

Certain domains query cause Error 500

Request Type

Bug

Work Environment

Question Answer
OS version (server) RedHat
OS version (client) Seven,
Package Type Docker

Problem Description

Error 500 when asked for any of the domains doesok.top, baserpolaser.tk, letsgotohome.tk

Steps to Reproduce

  1. Query hipposcore for any of doesok.top, baserpolaser.tk, letsgotohome.tk
  2. Get error 500
curl -i -H "Content-Type: application/json" -X POST -d '{"ferasoplertyh.tk" : {"type" : "domain"} }' "https://domain.some/hippocampe/api/v1.0/hipposcore"
HTTP/1.0 500 INTERNAL SERVER ERROR
Content-Type: text/html
Content-Length: 291
Server: Werkzeug/0.14.1 Python/2.7.14
Date: Fri, 29 Jun 2018 07:44:28 GMT
Set-Cookie: XXXXXXX; path=/; HttpOnly; Secure
Connection: keep-alive

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

Logs:

2018-06-29 08:00:43,423 :: services.more :: INFO :: {u'ferasoplertyh.tk': {u'type': u'domain'}}
2018-06-29 08:00:43,540 :: services.hipposcore :: INFO :: hipposcore.calcHipposcore launched
2018-06-29 08:00:43,545 :: services.hipposcore :: ERROR :: hipposcore.calcHipposcore failed, no idea where it came from...
Traceback (most recent call last):
  File "/opt/Hippocampe/core/services/hipposcore.py", line 76, in calcHipposcore
    P = P + (n3 * n2 * n1)
TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
2018-06-29 08:01:53,198 :: services.modules.common.ES :: INFO :: ES.checkES launched
Traceback (most recent call last):
    resMsearch = searchIntel.littleMsearch(source.coreIntelligence, source.typeNameESIntel, parsedPage)
    res = es.msearch(body = req)
  File "/usr/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 76, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 1183, in msearch
    headers={'content-type': 'application/x-ndjson'})
  File "/usr/lib/python2.7/site-packages/elasticsearch/transport.py", line 314, in perform_request
    status, headers_response, data = connection.perform_request(method, url, params, body, headers=headers, ignore=ignore, timeout=timeout)
  File "/usr/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 175, in perform_request
    raise ConnectionError('N/A', str(e), e)
ConnectionError: ConnectionError(('Connection aborted.', error(104, 'Connection reset by peer'))) caused by: ProtocolError(('Connection aborted.', error(104, 'Connection reset by peer')))
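The first traceback shows the score loop multiplying a coefficient that is `None` (apparently a feed record without a configured value for these domains). A defensive sketch of the accumulation, skipping records with missing coefficients; the variable names follow the traceback, the rest is illustrative and not Hippocampe's actual code:

```python
def hipposcore_sum(records):
    """Accumulate P = sum(n3 * n2 * n1), ignoring records whose
    coefficients are missing instead of raising a TypeError."""
    P = 0.0
    for n1, n2, n3 in records:
        if None in (n1, n2, n3):
            continue  # feed record without a configured coefficient
        P = P + (n3 * n2 * n1)
    return P

print(hipposcore_sum([(1.0, 0.5, 2.0), (1.0, None, 2.0)]))  # 1.0
```

Finding out why the coefficient is `None` for these feeds is still the real fix; the guard just turns the 500 into a computable score.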

Error 404

Hi !

After launching Hippocampe's boot command, when I go to 127.0.0.1:5000, the browser returns error 404.

Why? How do I fix it?

Have a great day
