
File system crawler, disk space usage, file search engine and file system analytics powered by Elasticsearch

Home Page: https://shirosaidev.github.io/diskover

License: Apache License 2.0


diskover - File system crawler, disk space usage, file search engine and storage analytics powered by Elasticsearch


diskover is an open source file system crawler and disk space usage tool that uses Elasticsearch to index and manage data across heterogeneous storage systems. With diskover, users can search and organize files more effectively, and system administrators can manage storage infrastructure, provision storage efficiently, monitor and report on storage use, and make informed decisions about new infrastructure purchases.

As the amount of file data generated by businesses continues to expand, the stress on expensive storage infrastructure, users and system administrators, and IT budgets continues to grow.

Using diskover, users can identify old and unused files and gain better insight into data change, file duplication, and wasted space. diskover supports crawling local file systems, NFS/SMB shares, cloud storage, and more. Plugins can be used to add additional metadata.

diskover is written and maintained by Shirosaidev and runs on Linux, macOS and Windows 10 using Python 3.

diskover requires an auth token to run; learn more on the wiki. Sign up at https://diskoverspace.com/diskover/ to download diskover and receive your auth token.

News / Updates

diskover v2 will be released soon (Q1 2021). Please sign up at https://diskoverspace.com/diskover/ for updates and join the diskover Slack. v1 will soon be discontinued and no longer supported.


"This is the first tool I've found that can index 7m files/2m directories in under 20 min"

-- linuxserver.io community member

Screenshots (v1)

  • diskover crawler and worker bots running in a terminal
  • diskover-web (diskover's web file manager, analytics app, file system search engine, rest-api): dashboard, file tree, advanced search, tags
  • Kibana dashboards/saved searches/visualizations and support for Gource

diskover v1 Gource videos

[links to Gource visualization videos]

Become a Patron & support shedding light on data darkness

If you are a fan of the project or you are using diskover and it's helping you save storage space, please consider supporting the project on Patreon or PayPal. Thank you so much to all the fans and supporters!

Enterprise vs. community versions

If you are a business and would like to inquire about diskover enterprise, please visit https://diskoverspace.com to learn more and to contact us.

Installation Guide

For a detailed install guide for Linux and Docker, please see the Install Guide wiki page.

Requirements (v1)

  • Linux, macOS, Windows 10 (WSL); native Windows support in v2
  • Python 2.7+ or 3.5+; Python 2 will not be supported in v2
  • Elasticsearch 5.6.x (local or cloud); ES 7.x in v1 enterprise and v2
  • Redis 4.x; Redis is no longer used in v2

Optional Installs

  • diskover-web (diskover's web file manager and analytics app)
  • saisoku (data sync/mover between on-prem and cloud storage, etc.)
  • sharesniffer (for scanning your network for file shares and auto-mounting for crawls)
  • Redis RQ Dashboard (for monitoring redis queue)
  • Kibana (for visualizing Elasticsearch data, tested on Kibana 5.6.9)
  • X-Pack (Kibana plugin for graphs, reports, monitoring and http auth)
  • netdata (for real-time monitoring of cpu/disk/mem/network/elasticsearch/redis/etc. metrics; a plugin for rq-dashboard is in the netdata directory)
  • Grafana ES dashboard (Grafana dashboard for Elasticsearch)
  • crontab-ui (web ui for managing cron jobs - for scheduling crawls)
  • cronkeep (alternative web ui for managing cron jobs)
  • Gource (for Gource visualizations of diskover Elasticsearch data, see videos above)

Download

To download diskover, please sign up for an account at https://diskoverspace.com/diskover/.

Getting Started (v1)

In order to run diskover, you first need to create an account at https://diskoverspace.com/diskover/. Once you have created and verified your account, log in to receive your auth token. You can learn more about where to set your auth token on the wiki.

Check that Elasticsearch and Redis are running and are the required versions (see requirements above).

$ curl -X GET http://localhost:9200/
$ redis-cli info
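
Both commands should report versions matching the requirements above; trimmed example output (your version values will vary):

$ curl -s http://localhost:9200/ | grep number
    "number" : "5.6.16",
$ redis-cli info | grep redis_version
redis_version:4.0.14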

Install Python dependencies using pip.

$ pip install -r requirements.txt

Copy the sample config diskover.cfg.sample to diskover.cfg and edit it for your environment.
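
For example (the exact settings to edit are documented in the sample config and on the wiki):

$ cd /path/with/diskover
$ cp diskover.cfg.sample diskover.cfg
$ vi diskover.cfg    # point the Elasticsearch and Redis settings at your environment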

Start diskover worker bots (a good number might be cores x 2) with:

$ cd /path/with/diskover
$ python diskover_worker_bot.py

Worker bots can be added during a crawl to help work the queue. To run a worker bot in burst mode (quit after all jobs are done), use the -b flag. Burst-mode bots die once the queue is empty, so use rq info or rq-dashboard to check whether they are still running.
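
For example, to start a single bot in burst mode:

$ cd /path/with/diskover
$ python diskover_worker_bot.py -b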

To start up multiple bots, run:

$ cd /path/with/diskover
$ ./diskover-bot-launcher.sh

By default, this will start up 8 bots. See -h for cli options, including changing the number of bots to start. Bots can run on the same host as the diskover.py crawler or on multiple hosts across the network, as long as they mount the same nfs/cifs rootdir (-d path) and can connect to ES and Redis (see the wiki for more info). Edit this file and check that the paths at the top are set correctly and point to the same version of Python that you will run diskover.py with; they need to match or you could run into issues.
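
A quick way to sanity-check the interpreter paths (the variable names inside the script may differ):

$ head diskover-bot-launcher.sh    # inspect the Python path set near the top of the script
$ which python3                    # compare against the interpreter you run diskover.py with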

Usage examples (v1)

See all cli options in the wiki.

Start the diskover main job dispatcher and file tree crawler (using the adaptive batch size and optimize index cli flags) with:

$ python /path/to/diskover.py -d /rootpath/you/want/to/crawl -i diskover-indexname -a -O

The defaults for a crawl with no flags are to index from . (the current directory), files >0 bytes, and a modified time of 0 days (no age restriction). Empty files and directories are skipped (unless you use the -s 0 and -e flags). Symlinks are not followed and are skipped. Use -h to see cli options.

Don't prompt before overwriting an existing index:

$ python /path/to/diskover.py -d /rootpath/you/want/to/crawl -i diskover-indexname -a -O -F

Use 32 tree walk threads (default is cpu cores x 2):

$ python /path/to/diskover.py -d /rootpath/you/want/to/crawl -i diskover-indexname -a -T 32

Crawl down to maximum tree depth of 3:

$ python diskover.py -i diskover-indexname -a -d /rootpath/to/crawl -M 3

Only index files modified more than 90 days ago and larger than 1 KB:

$ python diskover.py -i diskover-indexname -a -d /rootpath/to/crawl -m +90 -s 1024

Only index files modified within the last 7 days, including empty files and directories:

$ python diskover.py -i diskover-indexname -a -d /rootpath/to/crawl -m -7 -s 0 -e

Distribute file metadata collection among bots and split/chunk file lists for directories containing many files (helps keep all bots busy when your file tree has directories with many files):

$ python diskover.py -i diskover-index -a -d /rootpath/to/crawl --splitfiles --splitfilesnum 5000 --chunkfiles --chunkfilesnum 500

Find duplicate files in an index (after crawl finishes):

$ python diskover.py -i diskover-indexname -a --finddupes

Find "hot dirs" and change % between two indices (after crawls are complete):

$ python diskover.py -i diskover-latestindex -a -H diskover-previndex

Store cost per GB (Enterprise version only) in the ES index using diskover.cfg settings, and use size on disk (disk usage) instead of file size:

$ python diskover.py -i diskover-index -a -d /rootpath/to/crawl -G -S

Tree walk and enqueue all jobs into RQ with no bots running (don't wait for bots). This lets you tree walk during the day, building up a large queue of crawl jobs without stat calls hitting the storage, and then start up the bots in the evening to run the crawl jobs and do the heavy stat-ing on the storage:

$ python diskover.py -i diskover-index -a -d /rootpath/to/crawl --nowait
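
Later, when you are ready to work the queue, start the bots as described above, for example:

$ cd /path/with/diskover
$ ./diskover-bot-launcher.sh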

Create an index with just level-1 directories and files, then run background crawls in parallel for each directory in rootdir, merging the data into the same index. After all crawls are finished, calculate the rootdir doc's size/item counts. This can be used when you want a very high queue fill rate on a very large directory tree and a regular diskover crawl is not filling the queue fast enough, leaving bots starved for jobs:

See the parallel crawl script for an example of scripting this; a minimal shell sketch also follows the commands below.

$ python diskover.py -i diskover-indexname -a -d /rootpath/to/crawl --maxdepth 1
$ python diskover.py -i diskover-indexname -a -d /rootpath/to/crawl/dir1 --reindexrecurs -q &
$ python diskover.py -i diskover-indexname -a -d /rootpath/to/crawl/dir2 --reindexrecurs -q &
...
$ python diskover.py -i diskover-indexname -a -d /rootpath/to/crawl --dircalcsonly --maxdcdepth 0
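
A minimal shell sketch of the parallel middle step, assuming every level-1 entry under the rootdir is a directory (adapt the glob to your tree):

$ for d in /rootpath/to/crawl/*/; do
>   python diskover.py -i diskover-indexname -a -d "${d%/}" --reindexrecurs -q &
> done
$ wait    # block until all background crawls have finished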

User Guide

Read the wiki for more documentation on how to use diskover.

Discussions/Support

For discussions or support, join the diskover Slack workspace; my username is @shirosai.

Bugs

For bugs about diskover, please use the issues page.

License

See the license file.

