apache / kibble-1

Apache Kibble - a tool to collect, aggregate and visualize data about any software project

Home Page: https://kibble.apache.org/

License: Apache License 2.0

Languages: Python 44.13%, HTML 30.57%, CSS 6.96%, JavaScript 3.42%, CoffeeScript 14.91%, Shell 0.01%
Topics: python, big-data, kibble, open-source, visualization

kibble-1's Introduction

This is an old version of Apache Kibble

This repo contains archived code for Kibble v1. The current development for Apache Kibble happens at https://github.com/apache/kibble.

Apache Kibble

Apache Kibble is a tool to collect, aggregate and visualize data about any software project that uses commonly known tools. It consists of two components:

  • Kibble Server (this repository) - main database and UI Server. It serves as the hub for the scanners to connect to, and provides the overall management of sources as well as the visualizations and API end points.
  • Kibble scanners (kibble-scanners) - a collection of scanning applications each designed to work with a specific type of resource (git repo, mailing list, JIRA, etc) and push compiled data objects to the Kibble Server.

Documentation

For information about the Kibble project and community, visit our web site at https://kibble.apache.org/.

Live demo

If you would like to try Kibble without installing it on your own machine, try the online demo of the Kibble service: https://demo.kibble.apache.org/.

Installation

For installation steps see the documentation.

Contributing

We welcome all contributions that improve the state of the Apache Kibble project. For contribution guidelines, see CONTRIBUTING.md.

kibble-1's People

Contributors

aligoren, esrakarakas, humbedooh, jlrifer, michalslowikowski00, sharanf, thadguidry, treyyi, turbaszek


kibble-1's Issues

Proxy error

Hi @Humbedooh ,

I'm trying to set up organizations and I am getting a proxy error

[Wed Feb 14 19:40:01.187059 2018] [proxy:error] [pid 24131:tid 140417307170560] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8000 (localhost) failed
[Wed Feb 14 19:40:01.187103 2018] [proxy:error] [pid 24131:tid 140417307170560] AH00959: ap_proxy_connect_backend disabling worker for (localhost) for 60s
[Wed Feb 14 19:40:01.187106 2018] [proxy_http:error] [pid 24131:tid 140417307170560] [client 127.0.0.1:40210] AH01114: HTTP: failed to make connection to backend: localhost, referer: http://localhost/organisations.html?page=org

Any help is appreciated!

Installation update and a few thoughts

I'm at the point of adding the scanners, and kibble.localhost is up and running. Success. A few things I did along the way that may or may not be right:

  1. Installed PyYAML. There is no yaml package (at least that I could find) for Python 3.
  2. I think that the virtual host install could use a bit more description. Loading the modules, setting up proxy, and configuring /etc/hosts is a bit vague. I think I got it to work but it was with a bit of clawing.

Looking ahead to the scanner installs:

  1. Regarding git binaries and cloc -- are these required? "require the following optional components" is not clear
  2. Should I be putting (git clone https://github.com/apache/kibble-scanners.git) somewhere in particular?
  3. This is not clear: Then edit the conf/config.yaml file to match both your ElasticSearch database and the file layout you wish to use on the scanner machine --> I'm guessing that this conf folder is in the clone, but could you provide a bit of help on what 'match the db and file layout' means? I'm not very familiar with ES, so this is a bit new to me.

I'll add that I'm doing all of this on Ubuntu 16.04 in a Parallels VM.

Introduce DataSource class

Description
Currently all data sources supported by Kibble are defined in this single, long YAML file:
https://github.com/apache/kibble/blob/8904f39ca2b19aef3522455ec357294cc398c49e/kibble/api/yaml/sourcetypes.yaml#L1-L103

We should introduce DataSource base class and then rewrite (automatically) the yaml file into pythonic code. For example:

from typing import List

class GitDataSource(DataSource):
    title: str = "Plain git repository"
    description: str = "This is a plain git repository with no issues/PRs attached. For GitHub repositories, use the GitHub source type."
    regex: str = r"(?:https?|git)://.*/.+\.git"
    example: str = "git://example.org/repos/foo.git"
    optauth: List[str] = [
        "username",
        "password",
    ]

This will also bring us closer to refactoring the API: having a DataSource class, we can implement common update, delete, and create methods that will be used by the /api/sources endpoint.
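
A minimal sketch of what the base class could look like (the method and module layout here are assumptions, not existing Kibble code):

import re
from typing import List


class DataSource:
    """Hypothetical base class; attribute names mirror the YAML keys."""

    title: str
    description: str
    regex: str
    example: str
    optauth: List[str] = []

    @classmethod
    def matches(cls, url: str) -> bool:
        # Check whether a source URL belongs to this source type.
        return re.match(cls.regex, url) is not None

With all subclasses defined in one module, the /api/sources endpoint could discover them via DataSource.__subclasses__().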

Use case
To make it easier to work with data sources and keep them as Pythonic objects rather than YAML files.

Related Issues
N/A

Create kibble cli tool

Description
As discussed in #64, we would like to have a CLI tool that would allow users to manage their Kibble instance.

Use case
Users should be able to do:

# python kibble/setup/setup.py
kibble setup

# python kibble/setup/makeaccount.py
kibble create_account

# and in future
kibble scanners run github
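
A minimal sketch of how that could look with click (assuming click as the CLI library; the wiring to the existing setup code is illustrative):

import click


@click.group()
def cli():
    """Apache Kibble command line tool."""


@cli.command()
def setup():
    """Would wrap the logic currently in kibble/setup/setup.py."""
    click.echo("Setting up Kibble...")


@cli.command(name="create_account")
def create_account():
    """Would wrap the logic currently in kibble/setup/makeaccount.py."""
    click.echo("Creating account...")


if __name__ == "__main__":
    cli()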

Related Issues
#64
#78

Roadmap for adding new metrics

Hi,
@jlrifer and I plan on contributing a new metric to Kibble in the coming months. Is there an existing roadmap for contributing a new metric to Kibble? Also, are there any metrics that the community is looking to add?

Thanks,
Adam

Allow embedding Apache Kibble charts on external websites.

Description
Allow embedding Apache Kibble charts on external websites.
Thanks @Humbedooh for mentioning it on the devlist.

Use case

One thing we have been forgetting for a long time is embeds, akin to
what Snoot.io uses on the projects.apache.org site. It needs to be
possible to grab a chart from the standard UI, get an embed link, and
publish it somewhere (with the appropriate caching in place for such a
request)

https://projects.apache.org

Related Issues
N/A

Change CODE_OF_CONDUCT to point to Apache CoC

Hello all!

I would like to suggest changing the Apache Kibble CoC to point to the ASF CoC instead of using the full text. For example:

# Code of Conduct

The Apache Kibble project follows the [Apache Software Foundation code of conduct](https://www.apache.org/foundation/policies/conduct.html).

If you observe behavior that violates those rules please follow the [ASF reporting guidelines](https://www.apache.org/foundation/policies/conduct#reporting-guidelines).

By doing this we point users to the ASF CoC, which may change over time, so we avoid possible inconsistency, and we allow users to see that the Kibble project is part of something bigger :)

Happy to hear your opinion on this!

Adding Data Sources

@Humbedooh thanks again for the help with the install. Now moving on to adding data sources. The UI is certainly straightforward. Organization added. However, adding the data sources seems to hang.

(screenshot attached)

Should I be patient here?

Fix deprecation warning for yaml.load()

Description:
In multiple places we use yaml.load(), which raises:

YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.

We should fix this deprecation warning.
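
The usual fix is to pass an explicit loader or use the safe_load shortcut, e.g.:

import yaml

with open("kibble.yaml") as f:
    # safe_load is equivalent to yaml.load(f, Loader=yaml.SafeLoader)
    # and silences the YAMLLoadWarning.
    config = yaml.safe_load(f)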

Refactor the configuration methods of Apache Kibble

Hello,

Currently, Apache Kibble is configured via kibble.yaml, which is generated by running the setup.py script. This file is referenced in multiple places in the codebase, which is not necessary in my opinion.

I would like to propose using the standard Python library configparser instead of yaml.
https://docs.python.org/3.7/library/configparser.html

So instead of:

accounts:
  allowSignup: true
  verify: true
api:
  database: 2
  version: 0.1.0
elasticsearch:
  dbname: kibble
  host: elasticsearch
  port: 9200
  ssl: false
mail:
  mailhost: localhost
  mailport: 25
  sender: Kibble <[email protected]>

we would have:

[accounts]
allowSignup = True
verify = True

[api]
database = 2
version = 0.1.0

[elasticsearch]
dbname = kibble
host = elasticsearch
port = 9200
ssl = false

[mail]
mailhost = localhost
mailport = 25
sender = Kibble <[email protected]>

The main advantage of using configparser is that we will be able to parse the config file once and then access any value with something like config.get("section", "key").

Additionally, we may take some inspiration from Apache Airflow project and:

  • introduce a "default config" that will be used for fallback values not defined by the user
  • use env variables to override the config (dead useful for configuring environments); for example, export KIBBLE__MAIL__MAILPORT=34 would override the config value of mail.mailport (a sketch of both ideas follows below)
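
A sketch of how both ideas could fit together (the class and file names are illustrative, not existing Kibble code):

import configparser
import os


class KibbleConfigParser(configparser.ConfigParser):
    """Config values overridable via KIBBLE__SECTION__KEY env variables."""

    def get(self, section, option, **kwargs):
        env_var = f"KIBBLE__{section.upper()}__{option.upper()}"
        if env_var in os.environ:
            # Env variables always win; note they are returned as strings.
            return os.environ[env_var]
        return super().get(section, option, **kwargs)


config = KibbleConfigParser()
config.read("kibble.ini")  # hypothetical filename
mailport = config.get("mail", "mailport")  # "34" if KIBBLE__MAIL__MAILPORT=34 is set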

Installation steps don't work

It seems that I'm unable to install Apache Kibble from sources:

➜ pip install -e setup/.
Obtaining file:///Users/tomaszurbaszek/kibble/setup
    ERROR: Command errored out with exit status 2:
     command: /Users/tomaszurbaszek/.virtualenvs/kibble/bin/python3.6 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/tomaszurbaszek/kibble/setup/setup.py'"'"'; __file__='"'"'/Users/tomaszurbaszek/kibble/setup/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/_j/9tm1bndd0x909p5l34tsg4mw0000gn/T/pip-pip-egg-info-n31j6zo4
         cwd: /Users/tomaszurbaszek/kibble/setup/
    Complete output (5 lines):
    /Users/tomaszurbaszek/kibble/setup/setup.py:36: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
      myyaml = yaml.load(open("kibble.yaml.sample"))
    usage: setup.py [-h] [-e HOSTNAME] [-p PORT] [-d DBNAME] [-s SHARDS]
                    [-r REPLICAS] [-m MAILHOST] [-a] [-k]
    setup.py: error: unrecognized arguments: egg_info --egg-base /private/var/folders/_j/9tm1bndd0x909p5l34tsg4mw0000gn/T/pip-pip-egg-info-n31j6zo4
    ----------------------------------------
ERROR: Command errored out with exit status 2: python setup.py egg_info Check the logs for full command output.

➜ cd setup

➜ pip install setup.py
ERROR: Could not find a version that satisfies the requirement setup.py (from versions: none)
ERROR: No matching distribution found for setup.py

I'm happy to improve it, but first I would like to make sure that it's a real problem and not just me.

Widget design not found!

Clicking on "Exports" entry in Kibble Web UI shows "Widget design not found!", as per title. Any ideas?

Add Kibble Scanner for Bitbucket

Description
Kibble doesn't currently include a scanner for Bitbucket and a request for one has been made.
Create a new Kibble scanner to accept Bitbucket as a data source.

Use case
Kibble should try to cover as many of the popular sources as possible so that users who prefer using Bitbucket may be able to scan and report on their data.

Related Issues
Original Request [https://github.com/apache/kibble-scanners/issues/2#issue-407654689]

First attempt to log in always fails

Description:
The first login attempt always fails in a fresh Kibble instance.
After providing login credentials and hitting the Sign In button, nothing happens.
The second try is successful.

Reproduction steps:
Precondition: kill the server if you have one running

  1. Run server (follow these steps https://github.com/apache/kibble/blob/master/CONTRIBUTING.md)
  2. Go to http://127.0.0.1:8000/login.html (Chrome or any other web browser)
  3. Provide credentials
  4. Click the Sign In button

Actual result:
The first try ends without any effect; you won't be logged in.

OS:
macOS Catalina

Logs:
N/A

Other:
N/A

Any plans for GitHub API token support ?

According to this page, password auth has been deprecated and will cease working starting November 13, 2020 at 4:00 PM UTC.

Are there any plans to update the GitHub repository options on the sources page to take the new API authentication scheme into account?
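
For reference, the token-based scheme GitHub expects looks like this (a sketch using the requests library; the endpoint is just an example):

import requests

# A personal access token replaces the deprecated username/password auth.
headers = {"Authorization": "token <YOUR_PERSONAL_ACCESS_TOKEN>"}
resp = requests.get("https://api.github.com/repos/apache/kibble", headers=headers)
resp.raise_for_status()
print(resp.json()["full_name"])  # "apache/kibble"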

Add Kibble Scanner for Slack

Description
Kibble doesn't currently include a scanner for Slack and a request for one has been made.
Create a new Kibble scanner to accept Slack as a data source.

Use case
Kibble should try to cover as many of the popular sources as possible so that users who prefer using Slack may be able to scan and report on their data.

Related Issues
Original Request [https://github.com/apache/kibble/issues/87#issue-730783106]

503 Service Error after setup

Hello,

I have just finished going through the guide on installing kibble, and I feel that I am so close.
When I get to the Login page and try to register, I get a 503 service unavailable error. Any recommendation or assistance would be appreciated.

Thanks,
Adam

Add Kibble Scanner for Gitlab

Description
Kibble doesn't currently include a scanner for Gitlab and a request for one has been made.
Create a new Kibble scanner to accept Gitlab as a data source.

Use case
Kibble should try to cover as many of the popular sources as possible so that users who prefer using Gitlab may be able to scan and report on their data.

Related Issues
Original request [https://github.com/apache/kibble-scanners/issues/1#issue-406142739]

Link contributions to affiliations

Description
As suggested by @sharanf on the devlist:

I saw a talk by Myrle at ApacheCon@Home and she had done some initial work linking contributions to affiliation, and I mentioned seeing if that could be something we could look at including in Kibble. I think we tried to do it with Meta Pony Factor but don't think it is complete.

Use case
This is a highly useful indicator for observing how well the community is balanced.

Related Issues
N/A

Proxy Mod (Installation)

Hey,
Still working through the install as I have time. Are there proxy mods that I have to load to get access to the ProxyPass directive? Is there anything special that I need to know here?

Thanks!
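
For reference, the ProxyPass directive is provided by mod_proxy together with mod_proxy_http, so both typically need to be enabled (on Debian/Ubuntu, a2enmod proxy proxy_http) before the reverse proxy section of the install guide will work.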

Decimal value in trends widget

Hello,
I would like to add an issues closed over issues opened ratio to the trends widget in the Issue Trackers page. I have gotten it to show up, but the value is displayed as an integer when I would like for it to be a decimal.
(screenshot attached)

I edited the /api/pages/issue/trends.py file to add the ratio.

    trends = {
        "created": {
            'before': no_issues_created_before,
            'after': no_issues_created,
            'title': "Issues opened this period"
        },
        "authors": {
            'before': no_creators_before,
            'after': no_creators,
            'title': "People opening issues this period"
        },
        "closed": {
            'before': no_issues_closed_before,
            'after': no_issues_closed,
            'title': "Issues closed this period"
        },
        "closers": {
            'before': no_closers_before,
            'after': no_closers,
            'title': "People closing issues this period"
        },
        "ratio":{
            'before': no_issues_closed_before / no_issues_created_before,
            'after': no_issues_closed/no_issues_created,
            'title': "Ratio of Issues Closed / Issues Opened"
        }
    }

I have tried casting the variables to a float, but the UI still displays an integer.
Any help would be appreciated. Thanks.
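
One thing worth guarding against regardless of the display issue: the divisor can be zero on a fresh instance. A defensive version of the ratio entry might look like this (a sketch only; making the UI render decimals likely needs a frontend change in the trends widget):

def safe_ratio(closed, created):
    # Avoid ZeroDivisionError on periods with no opened issues,
    # and round so the intended precision is explicit.
    return round(closed / created, 2) if created else 0.0

trends["ratio"] = {
    "before": safe_ratio(no_issues_closed_before, no_issues_created_before),
    "after": safe_ratio(no_issues_closed, no_issues_created),
    "title": "Ratio of Issues Closed / Issues Opened",
}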

Broken UI on web browsers: Chrome & Opera

Description:
UI is broken on Chrome and Opera.
UI on Safari and Firefox is ok.

Reproduction steps:

  1. Run Kibble:
  • pip install -e ."[devel]"
  • docker-compose -f docker-compose-dev.yaml up setup
  • docker-compose -f docker-compose-dev.yaml up ui
  2. Go to http://127.0.0.1:8000/ in Chrome
  3. Go to http://127.0.0.1:8000/ in Opera

Actual result:
UI is broken and practically unusable.

OS:
macOS Catalina

Logs:

Other:
Screenshot from Chrome attached (on Opera the result is the same).

Introduce pylint to increase code quality

Description
Introduce pylint for code quality analysis:
https://github.com/PyCQA/pylint

This can be easily done by adding the appropriate hook to our pre-commit configuration. However, this may require a lot of changes to the code, so it may be better to split it into a few commits in a single PR.

Use case
Increase code quality, follow Python community best practices, and avoid regressions in the future.

Related Issues
N/A

Problems installing Kibble using Docker compose

Description:

I am having problems installing Kibble using
docker-compose -f docker-compose-dev.yaml up setup

Initially reported as part of #50; agreed to continue looking into it afterwards.

As requested I have now run
docker-compose -f docker-compose-dev.yaml up elasticsearch
and it is exiting with an error code. I have copied the complete log in the section below.

Reproduction steps:

  1. Change into the Kibble code directory (cd /opt/.../kibble)
  2. Run docker-compose -f docker-compose-dev.yaml up elasticsearch
  3. Exits with message kibble_elasticsearch_1 exited with code 78

Actual result:
Command runs but then exits with the message kibble_elasticsearch_1 exited with code 78

OS:
Ubuntu 18.04.5 LTS

Logs:
Log details copied below:

$ docker-compose -f docker-compose-dev.yaml up elasticsearch
Starting kibble_elasticsearch_1 ... 
Starting kibble_elasticsearch_1 ... done
Attaching to kibble_elasticsearch_1
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:02,320Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "version[7.9.2], pid[6], build[default/docker/d34da0ea4a966c4e49417f2da2f244e3e97b4e6e/2020-09-23T00:45:33.626720Z], OS[Linux/5.4.0-48-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15/15+36]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:02,323Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:02,323Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-15979048660884328998, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms256m, -Xmx256m, -XX:MaxDirectMemorySize=134217728, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,941Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [aggs-matrix-stats]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,941Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [analysis-common]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,941Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [constant-keyword]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,941Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [flattened]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,942Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [frozen-indices]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,942Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [ingest-common]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,942Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [ingest-geoip]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,942Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [ingest-user-agent]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,942Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [kibana]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,943Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [lang-expression]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,943Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [lang-mustache]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,943Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [lang-painless]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,943Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [mapper-extras]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,943Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [parent-join]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,943Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [percolator]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,944Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [rank-eval]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,944Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [reindex]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,944Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [repository-url]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,944Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [search-business-rules]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,944Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [searchable-snapshots]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,945Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [spatial]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,945Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [tasks]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,945Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [transform]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,945Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [transport-netty4]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,945Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [vectors]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,946Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [wildcard]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,946Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-analytics]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,946Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-async]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,946Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-async-search]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,946Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-autoscaling]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,946Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-ccr]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,946Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-core]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-data-streams]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-deprecation]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-enrich]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-eql]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-graph]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-identity-provider]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-ilm]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,947Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-logstash]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,948Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-ml]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,948Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-monitoring]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,948Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-ql]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,948Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-rollup]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,948Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-security]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,948Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-sql]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,948Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-stack]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,949Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-voting-only-node]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,949Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "loaded module [x-pack-watcher]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,949Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "no plugins loaded" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,975Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/nvme0n1p4)]], net usable_space [648gb], net total_space [751.8gb], types [ext4]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:03,975Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "heap size [256mb], compressed ordinary object pointers [true]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:04,026Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "node name [es01], node ID [dDdqQsSFTeKH--g5LV5OoQ], cluster name [traefik-tutorial-cluster]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:06,521Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "[controller/229] [Main.cc@114] controller (64 bit): Version 7.9.2 (Build 6a60f0cf2dd5a5) Copyright (c) 2020 Elasticsearch BV" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:06,965Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:07,681Z", "level": "INFO", "component": "o.e.t.NettyAllocator", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=256mb}]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:07,729Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "using discovery type [zen] and seed hosts providers [settings]" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,034Z", "level": "WARN", "component": "o.e.g.DanglingIndicesState", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,329Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "initialized" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,329Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "starting ..." }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,439Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,592Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
elasticsearch_1  | ERROR: [1] bootstrap checks failed
elasticsearch_1  | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch_1  | ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/traefik-tutorial-cluster.log
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,602Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "stopping ..." }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,618Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "stopped" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,618Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "closing ..." }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,625Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "closed" }
elasticsearch_1  | {"type": "server", "timestamp": "2020-10-25T10:59:08,626Z", "level": "INFO", "component": "o.e.x.m.p.NativeController", "cluster.name": "traefik-tutorial-cluster", "node.name": "es01", "message": "Native controller process has stopped - no new native processes can be started" }
kibble_elasticsearch_1 exited with code 78

Other:
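The last lines of the log point at the likely fix: the Elasticsearch bootstrap check requires vm.max_map_count of at least 262144 on the Docker host, which can be raised with sysctl -w vm.max_map_count=262144 (and persisted in /etc/sysctl.conf).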

Implement kibble account creation with click

Description
Add an option to the kibble cli to create an account.

Use case
Instead of having to call ../kibble/setup/makeaccount.py, the user should just be able to call kibble create-account for the same functionality. All the same arguments and options will be available.

Related Issues
#77
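
A sketch of what the command could look like (the option names are illustrative guesses, not necessarily the actual makeaccount.py arguments):

import click


@click.command(name="create-account")
@click.option("--email", required=True, help="Email address for the new account.")
@click.option("--password", prompt=True, hide_input=True, help="Account password.")
@click.option("--admin", is_flag=True, help="Grant administrator privileges.")
def create_account(email, password, admin):
    """Create a Kibble account, replacing kibble/setup/makeaccount.py."""
    # Hypothetical: reuse the account-creation logic from makeaccount.py here.
    click.echo(f"Creating account for {email} (admin={admin})")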

Write basic usage documentation

We need someone to get started on writing documentation on how to use Kibble:

  • How do you set up the base
  • How do you set up an organisation
  • How to add sources, what to keep in mind
  • How to browse projects (what's the sub-filter for etc)
  • How to grab data, exports etc
  • What to look for when assessing a project/community's health.

Refactor KibbleDatabase

Currently we implement the same class, KibbleDatabase, twice:

https://github.com/apache/kibble/blob/2abfcc871dd35ddc727317267a4595f8230b53eb/kibble/setup/makeaccount.py#L27

https://github.com/apache/kibble/blob/2abfcc871dd35ddc727317267a4595f8230b53eb/kibble/api/plugins/database.py#L121

What should be done:

  1. We should consolidate the whole logic into a single class and create kibble/database.py that will keep the definition of this object (a sketch follows after this list).
  2. Refactor this class to use values from KibbleConfigParser from kibble/configuration.py
  3. Drop support for es < 7 as per: #85
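
A rough sketch of the consolidated class, assuming the elasticsearch-py client and the configuration accessor from point 2 (the module path and attribute names are illustrative):

from elasticsearch import Elasticsearch


class KibbleDatabase:
    """Single shared database wrapper, to live in kibble/database.py."""

    def __init__(self, config):
        self.dbname = config.get("elasticsearch", "dbname")
        self.ES = Elasticsearch(
            [{
                "host": config.get("elasticsearch", "host"),
                "port": int(config.get("elasticsearch", "port")),
                "use_ssl": config.getboolean("elasticsearch", "ssl"),
            }],
            max_retries=5,
            retry_on_timeout=True,
        )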

Custom Date Range Picker Limits Date Range to a Maximum of 12 Months

Hi

There is something strange happening with the custom date picker. The default seems to only allow you to select a 12-month date range as a maximum.

I wanted to select the date range 1st July 1995 to 31st December 1996 and it wouldn’t let me. Once I select the start date, the end date is automatically set to 12 months later. So in my case the default end date was automatically set as 30th April 1996 and I wanted it to be 31st December 1996.

If I select the end date first (e.g. 31st December 1996) then it automatically defaults the start date to 1st January 1996, and I wanted it to be 1st July 1995.

The steps to replicate it are as follows:

  1. Go to Data Points
  2. Enter ‘httpd’ in the sub filter and click the green subfilter button
  3. In the date picker box in the top left – change the date range to Custom
  4. Enter a start date of 1st July 1995 and an end date of 31st December 1996

Current behaviour:

  • When you enter the start date, the end date is defaulted as one year from the start date.
  • If you enter the end date then the start date is defaulted to be one year before the end date.

Expected behaviour:

  • Any date range should be accepted and it should not default to only use a maximum 12 month duration.

I've attached a screenshot.

Source URLs should be defined as anchor links.

Source URLs should be defined as anchor elements.

Currently, none of the links on the sources page are anchor elements; all of them are just text.

It would be good if a source URL opened in a new tab when a user clicks it.

Create docker-compose for Apache Kibble

To have Apache Kibble up and running, users need more than the Python code. Additional requirements are:

  • Elasticsearch instance
  • Webserver

It seems feasible to create a docker-compose file that will allow users to start Kibble just by running docker-compose up. This should also help to unify the development environment.

Refactor Travis Scanner method scan.py

The URL checker could be split into smaller methods; it's a little hard to read right now.
Splitting it would also make it easier to write tests for the URL checker.
Code: here is the code

Description
Split the scan.py methods into smaller chunks.

Use case

  • refactor code
  • simplify the logic
  • add tests

Related Issues
N/A

Support only Elasticsearch 7+

Description
Support only Elasticsearch 7+.

Use case
We have many places where we check the ES version and decide what to use depending on whether it's ES 7 or 6, failing silently when it's something else.

This also reduces the test matrix.

Related Issues
#84
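
With ES 6 support dropped, the scattered version checks could collapse into a single startup guard, e.g.:

def assert_es7(es):
    # es is an elasticsearch-py client; info() returns cluster metadata
    # including the server version string, e.g. "7.9.2".
    major = int(es.info()["version"]["number"].split(".")[0])
    if major < 7:
        raise RuntimeError(f"Kibble requires Elasticsearch 7+, found {major}.x")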

Improvement for daterangepicker when using custom ranges

Hi. I think the date range picker needs a UI improvement when using custom ranges.

Screenshot attached.

Possible Solution

I think we should define a style rule for the

.daterangepicker.dropdown-menu.opensleft.show-calendar

classes. There is only one thing needed: if the right property is set to auto, it looks good.

.daterangepicker.dropdown-menu.opensleft.show-calendar {
    display: block;
    top: 184px;
    right: auto; /* this property should be changed */
    left: auto;
}

Introduce black formatting

Description
Introduce black formatting tool:
https://github.com/psf/black

This can be easily done by adding the appropriate hook to our pre-commit configuration.

Use case
Make the code format consistent and follow Python community best practices.

Related Issues
N/A

Template in reported issue

I think it would be a good idea to have a template for every newly reported bug.
It would help us diagnose bugs faster and force the reporter to think about the issue, i.e. how to write good reproduction steps and include other necessary information.

Tests Needed

Hi.

I know there are not many contributors here.

But even so, I think we should write tests.

Before that, I think some modules should be rewritten. I know this isn’t easy.

Every developer in this project has their own work, like me.

Maybe we should consider a new structure with Flask or an async framework like Bocadillo.

Maybe we can talk about the documentation in a different issue.

Kibble API griefing

Theory:

Apache Kibble only supports cookie authorization for its REST API.

Attempt(s) to falsify the above:

GET http://localhost:9200/kibble_useraccount/_search?pretty HTTP/1.1
Content-Type: application/json

{
    "query": {
        "match_all": {}
    }
}
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-encoding: gzip
content-length: 458

{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 1.0,
    "hits": [
      {
        "_index": "kibble_useraccount",
        "_type": "_doc",
        "_id": "[email protected]",
        "_score": 1.0,
        "_source": {
          "email": "[email protected]",
          "password": "blah-blah-blah",
          "displayName": "Administrator",
          "organisations": [],
          "ownerships": [],
          "defaultOrganisation": "Blah",
          "verified": true,
          "userlevel": "admin",
          "token": "abdb02d7-6450-4af2-9ef3-add42907e09c"
        }
      }
    ]
  }
}

Great, we have a "token". Let's see what that does:

POST http://kibble.localhost/api/org/list HTTP/1.1
Authorization: token abdb02d7-6450-4af2-9ef3-add42907e09c
Content-Type: application/json
HTTP/1.1 403 Authentication failed
Date: Wed, 27 May 2020 17:29:05 GMT
Server: gunicorn/20.0.4
Content-Type: application/json
Connection: close
Transfer-Encoding: chunked

{
  "code": 403,
  "reason": "You must be logged in to use this API endpoint!"
}

Hmm, nothing. Let's attempt traditional authorization schemes. Note that my REST client automatically Base64 encodes 'user pass'.

POST http://kibble.localhost/api/org/list HTTP/1.1
Authorization: Basic [email protected] blah-blah-blah
Content-Type: application/json

and

POST http://kibble.localhost/api/org/list HTTP/1.1
Authorization: Digest [email protected] blah-blah-blah
Content-Type: application/json

both result in:

HTTP/1.1 403 Authentication failed
Date: Wed, 27 May 2020 17:32:52 GMT
Server: gunicorn/20.0.4
Content-Type: application/json
Connection: close
Transfer-Encoding: chunked

{
  "code": 403,
  "reason": "You must be logged in to use this API endpoint!"
}

Hmm. Let's fiddle with kibble_uisession and see where that leads.

GET http://localhost:9200/kibble_uisession/_search?pretty HTTP/1.1
Content-Type: application/json

{
    "query": {
        "match_all": {}
    }
}
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-encoding: gzip
content-length: 554

{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 8,
      "relation": "eq"
    },
    "max_score": 1.0,
    "hits": [
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "1a50a63a-c30b-4eae-9d5d-ee3a60d5774a",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": "1a50a63a-c30b-4eae-9d5d-ee3a60d5774a",
          "timestamp": 1588490569
        }
      },
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "d111112e-0a56-4345-a826-628e82474d6b",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": "d111112e-0a56-4345-a826-628e82474d6b",
          "timestamp": 1590598956
        }
      },
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "GLhx2XEBNsvtLM3qCx7P",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": null,
          "timestamp": 1588490996
        }
      },
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "pPfjUnIBipkYliznO657",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": null,
          "timestamp": 1590528523
        }
      },
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "4322a9ac-6654-44c2-9b29-1859b2c457e4",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": "4322a9ac-6654-44c2-9b29-1859b2c457e4",
          "timestamp": 1590599702
        }
      },
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "kgDJVnIBXVmZaGAJjxn6",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": null,
          "timestamp": 1590593949
        }
      },
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "114ed171-9fe1-42fd-9b3d-b7622c438ae4",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": "114ed171-9fe1-42fd-9b3d-b7622c438ae4",
          "timestamp": 1590594038
        }
      },
      {
        "_index": "kibble_uisession",
        "_type": "_doc",
        "_id": "kQDQVnIBXVmZaGAJsipG",
        "_score": 1.0,
        "_source": {
          "cid": "[email protected]",
          "id": null,
          "timestamp": 1590594417
        }
      }
    ]
  }
}

Oh, hello! Let's see what gives.

POST http://kibble.localhost/api/org/list HTTP/1.1
Cookie: kibble_session=d111112e-0a56-4345-a826-628e82474d6b
Content-Type: application/json
HTTP/1.1 200 Okay
Date: Wed, 27 May 2020 17:46:40 GMT
Server: gunicorn/20.0.4
Content-Type: application/json; charset=utf-8
Connection: close
Transfer-Encoding: chunked

{
  "organisations": [
    {
      "id": "2e1c2450",
      "name": "blah1",
      "description": "blah description",
      "admins": [],
      "sourceCount": 0,
      "docCount": 16709
    },
    {
      "id": "blah2",
      "name": "blah2",
      "description": "blah description",
      "admins": [],
      "sourceCount": 0,
      "docCount": 22881
    }
  ],
  "okay": true,
  "responseTime": 0.17593073844909668
}

Conclusion: I have failed to refute the initial theory. Can anyone refute it, and if not, comment on any plans to provide a proper authorization scheme for REST clients?

Document use cases for Apache Kibble

It would be awesome if we could have some documented use cases for Kibble.
Who needs it, what can they gain from using Kibble, all that stuff.

This would be a good issue for people wanting to start contributing to the project.

Changes to python file not displaying

Hi @Humbedooh,

I'm working on adding a community activity metric and I'm having problems getting changes in the Python files to display. I've created a new file and have been basing it on the pony factor widget because I want it to have a similar layout. I was able to get the widget to display with the code used in the pony.py file, but when I try to edit the code the changes aren't displayed. My code is currently the same as the pony factor's, except I've changed the titles that are passed to the JSON. Any suggestions on what I need to do to get these changes to display?

Widget and code screenshots attached.

Thanks,
Jacie

Refactor api/ to make it a package

Currently the /api directory is not a package, and it should be, as it encapsulates some logic. This will require adding __init__.py files and refactoring the imports.

Refactor Apache Kibble project

Refactor Apache Kibble project

This is an umbrella issue for collecting ideas and managing the work that has to be done.
Dev list thread: https://lists.apache.org/thread.html/r3f373be93397fc55ad6aff11030d2f3eb93f0f34782ff58fba9325b1%40%3Cdev.kibble.apache.org%3E

1. General refactor

  • Move scanners to main kibble repo
  • Make kibble a package
  • Add command-line interface to configure kibble and start all components (including scanners)
  • Rethink configuration of kibble #52

2. Rewrite API server

  • Add tests for API endpoints to help us preserve backward compatibility and to limit the scope of work to backend stuff. The frontend can be done in the future. Having the OpenAPI spec, we should be able to generate some part of the required code.
  • Decide which framework we would like to use. The main requirement is that it should work with the existing OpenAPI spec and should do automagically as much as possible.
  • Rewrite all endpoints to use the new framework

Additional considerations

  • Consider implementing GraphQL server
  • Schedule kibble scanners using celery beat

Installation Question: Transport Error

Hi all,

I'm running through the install and running into this:

Creating index kibble
Index creation failed: TransportError(400, 'mapper_parsing_exception', 'No handler for type [string] declared on field [sourceID]')

Is this stemming from Elasticsearch? I can provide more detail if needed. Any thoughts would be great.
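
For context, the string field type was removed in Elasticsearch 5 and split into text and keyword, so index mappings that still declare string fail on modern clusters. A sketch of the modern equivalent for that field (using the elasticsearch-py client; the rest of the mapping is omitted):

from elasticsearch import Elasticsearch

es = Elasticsearch()
es.indices.create(
    index="kibble",
    body={
        "mappings": {
            "properties": {
                # "string" no longer exists; exact-match IDs map to "keyword".
                "sourceID": {"type": "keyword"},
            }
        }
    },
)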
