A tool suite for use during system assessments.

License: MIT License

CMake 7.65% C++ 88.07% Shell 1.23% PLpgSQL 2.24% Python 0.81%
snl-cyber-sec snl-comp-science-libs snl-performance-workflow

netmeld's Introduction

NAME

Netmeld - A tool suite for use during system assessments.

DESCRIPTION

System assessments typically yield large quantities of data from disparate sources for an analyst to scrutinize for issues. Netmeld is used to parse input from different file formats, store the data in a common format, allow users to easily query it, and enable analysts to tie different analysis tools together using a common back-end.

INSTALLATION

We primarily target Kali Rolling and Debian Testing, so we package deb releases on the GitHub page. To compile from source, see the INSTALL.md for instructions.

DO ONE THING AND DO IT WELL

The Netmeld tools follow a slightly modified version of the UNIX philosophy:

Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

However, instead of text streams and pipes for inter-process communication, Netmeld tools primarily use a data store as a central communication hub and store of accumulated data. Where it makes sense, Netmeld tools support text streams and command chaining on either their input or output.

Following this, the Netmeld tool suite is divided into several modules which focus on a specific area with regard to data collection and processing. Furthermore, the tools in these modules are focused on performing one specific task within the purview of the module.

A generalized work and data flow for the Netmeld tool suite is depicted in the following diagram.

In general:

  • The Core module is a library to supply the functionality common to all modules within this tool suite.
  • The Datalake module provides a repository for raw data collection and the tools to import, export, or otherwise query the data stored.
  • The Datastore module provides a repository for the processed data and the tools to import, export, or otherwise query the data stored.
  • The Fetchers module provides tools to automate the collection of data from hosts within the targeted system.
  • The Playbook module provides tools to automate the collection of data from a network perspective within the targeted system.
  • The Tool-* modules are targeted tools which resolve a specific need across multiple modules (potentially even external to Netmeld). Generally, these are kept as loosely coupled to other Netmeld tools as possible.

See the individual module documentation for more detailed information on each module and its tooling. Note that in the module documentation, for simplicity, the term End User is used instead of identifying all possible data sources; an End User may be a person or another tool.
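
As a rough, hedged illustration of that flow (the tool options, file paths, and scan target below are placeholders, not authoritative invocations), an analyst might collect data with a clw-wrapped command, import it with an nmdb-import-* tool, and then query the Datastore:

# collect: wrap the scan so raw output is captured for later import
clw nmap -sS 192.0.2.0/24
# import: parse the saved output into the Datastore (options vary per tool)
nmdb-import-nmap path/to/scan-results.xml
# query: inspect the processed data
psql site -c "select * from ip_addrs where is_responding"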

AUTHOR

Written by Michael Berg (2013-2015, pre v1.0). Currently maintained (2016-present) by the Netmeld development team at Sandia National Laboratories.

REPORTING BUGS

Report bugs to [email protected] or on the issue tracker of the project's GitHub page.

netmeld's People

Contributors

ben-anthony-snl, cctechwiz, cmwill, iburres, ljvalen, marshall-sg, mire-all-rashly, raulcruise

netmeld's Issues

Standardize save location formats

Is your enhancement request related to a problem? Please describe.
The clw tool and the fetchers save to different location formats. For the fetchers, the Ansible-related one saves to host/dts-uuid while the SSH/RPC one saves to host_dts_uuid. The clw tool saves to tool_dts_uuid. This inconsistent format can complicate automation and even user navigation.

Describe the solution you'd like
At the least, standardize on a dash or underscore as the separator and have the fetchers standardize on a host folder or identifier for the prefix. It is understood we probably cannot get full consistency across the fetchers and the clw tool since they are different in nature/perspective, but they should be as consistent/similar as possible.
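
Purely for illustration (the final convention is to be decided), the current layouts and one possible standardized layout might look like:

host/dts-uuid    (current: Ansible fetcher)
host_dts_uuid    (current: SSH/RPC fetchers)
tool_dts_uuid    (current: clw)

host_dts_uuid    (hypothetical: all fetchers)
tool_dts_uuid    (hypothetical: clw, unchanged)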

Describe alternatives you've considered
None.

Additional context
None.

Flag Native VLAN Usage

Is your enhancement request related to a problem? Please describe.
The native VLAN is not currently captured. This means it is a manual process to determine the native VLAN and then cross-reference usage/traffic to determine whether some host is leveraging the native VLAN or not.

Describe the solution you'd like
A column/warning/flag/whatever (e.g., ToolObservation) should be produced when the native VLAN is being used. Behavior would probably differ depending on whether a VLAN is merely observed (e.g., packet capture parsing) or VLAN definitions are being parsed (e.g., a network device config). Something along the lines of:

  • If a network device config, changes to and/or assignments of the native VLAN should probably be captured and/or flagged (see the config snippet after this list).
  • If not a network device config, the logic should assume VLAN 1 is the native VLAN and flag on that.
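
For example, in a Cisco IOS-style config the relevant lines look roughly like the following (interface and VLAN number are illustrative); the last line reassigns the native VLAN and is the sort of thing the parser could capture/flag:

interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk native vlan 42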

Describe alternatives you've considered
Manual (cross-) examination.

Additional context
None.

Fetcher leveraging of Datalake module

Is your enhancement request related to a problem? Please describe.
Not really a problem, but there could be better integration of the modules if the Fetcher module attempted to leverage the Datalake module when it is present on the system.

Describe the solution you'd like
If the Datalake module is available, leverage it for storage of data obtained by the Fetcher module.

Describe alternatives you've considered
None, outside of the current behavior where the Fetcher module just places data in a specific location on the filesystem.

Additional context
Closure of this is contingent on closure of #7 .

Add test for all NmapXmlParser functionality

Is your enhancement request related to a problem? Please describe.
The NmapXmlParser does not currently have tests for all its functionality.

Describe the solution you'd like
Add tests for all NmapXmlParser functionality.

Describe alternatives you've considered
N/A

Additional context
N/A

Add a folder option for the `nmdb-graph-network` tool

Is your enhancement request related to a problem? Please describe.
It would be good if we could specify the icon folder for nmdb-graph-network. This would allow the "default" location to be documented as well as allow alternate icons to be located elsewhere instead of having to install everything to that default location.

Describe the solution you'd like
The tool could have an option like --icon-folder.

Describe alternatives you've considered
None.

Additional context
None.

ACL Processing for nmdb-import-cisco

Is your enhancement request related to a problem? Please describe.
The nmdb-import-cisco tool should be able to appropriately parse ACLs. Currently only the ASA one does this.

Describe the solution you'd like
The nmdb-import-cisco tool should process ACL lists similarly to how the ASA one does.

Describe alternatives you've considered
Attempted to process with the ASA parser, but it appears not to work appropriately. Manual insertion via the nmdb-insert-ac tool was the only other option.

Traceroute support with IP linkage graphing

Is your feature request related to a problem? Please describe.
Traceroute-like tools can show the hops along the way, but Netmeld does not support parsing of that data, nor do the graphing tools support displaying linkage unless subnets are explicitly known.

Describe the solution you'd like
Create a tool called nmdb-import-traceroute which:

  • processes a traceroute or traceroute -n type command (equivalent: tracert on Windows); see the sample output after this list
  • IPs would be imported as at least responding hosts and probably as router IPs
  • adds the next-hop type information into the data store

Update the nmdb-graph-network to:

  • Allow showing linkage between nodes (i.e. next hop) despite a common subnet not being known.
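
As an illustrative sample of the input such a tool would need to handle (addresses and timings are placeholders), typical traceroute -n output looks like:

traceroute to 198.51.100.10 (198.51.100.10), 30 hops max, 60 byte packets
 1  192.0.2.1      0.512 ms  0.498 ms  0.487 ms
 2  203.0.113.9    1.923 ms  1.871 ms  1.850 ms
 3  198.51.100.10  4.310 ms  4.275 ms  4.244 ms

Each intermediate hop IP would be recorded as at least a responding host (and likely a router IP), and each line contributes next-hop linkage information.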

Describe alternatives you've considered
Manually adding the data and manually adding connections after the generation by the graphing tool.

Additional context
None.

More Testing Workflows

Though infrequent, the libraries Netmeld depends on (e.g., Boost) get updated over time and the old versions get removed from the repos. A workflow is needed to test installation of the Netmeld deb packages to ensure the install works. Likewise, a workflow may be needed to exercise basic functionality testing to ensure any runtime library changes are captured. While it may be possible to do this all in one container, new independent containers may need to be created for testing purposes (it will have to be further examined).

Cisco ACL Targeted, Small-ish Updates

Targeted update requests:

  • Enable adding rule options which are impactful (e.g., established) to the rule. Probably should be represented in the service_set column as that's what other tools do currently. E.g., permit tcp any any established.
  • It appears IPv6 rules follow a different format than IPv4 (and than what was expected). Add IPv6 support (see the sample after this list); e.g., https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3650/software/release/3se/ipv6/configuration_guide/b_ipv6_3se_3650_cg/b_ipv6_3se_3650_cg_chapter_0111.html
  • Unapplied rules (i.e., those not assigned to an interface) should probably still be entered into the datastore. Set empty but required values to a known not-possibly-valid default (e.g., '-'). This may require changes to the associated graphing tool so as not to confuse defined but not-applied rules with being applied/enforced.
  • Try modifying github ci workflow to be pull_request synchronize only and see if that is what we want.
  • Cisco allows for referencing ACL lists that are not actually defined. We should record it as a notable ToolObservation.
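
As an illustrative sample of the IPv6 format difference (name, addresses, and ports are placeholders, not from this codebase), Cisco IOS IPv6 ACLs are named and defined in a block, then applied with ipv6 traffic-filter rather than ip access-group:

ipv6 access-list V6-EXAMPLE
 permit tcp any any eq 22
 deny ipv6 any any log
!
interface GigabitEthernet0/1
 ipv6 traffic-filter V6-EXAMPLE in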

Consider pulling the decoder tools to top level

Is your enhancement request related to a problem? Please describe.
The decoder tools, cisco and junos, do not seem to actually require anything in the Playbook module. However, to use them you have to install the Playbook module and everything it depends on. This also includes the Datastore module (at least core); however, these tools don't seem to insert anything into it.

Describe the solution you'd like
It would be nice if they were pulled out to their own tools. If the Datastore module is available, the tools should also probably be updated to insert the processed data into the datastore. However, that's probably more a stretch goal/request than anything as there is a lot of "missing" context then.

Describe alternatives you've considered
Manually build/install from source and port those binaries around.

Additional context
None.

TracerouteHop Should Update raw_router_ip_addrs

Is your enhancement request related to a problem? Please describe.
Mainly it is an incomplete coverage/accounting type issue.

Describe the solution you'd like
The TracerouteHop object should insert the hops into the raw_router_ip_addrs table as well, since each hop listed is explicitly a routing hop.

Describe alternatives you've considered
None.

Additional context
None.

Enable Tab Completion

Update ping import tool to not require `-n` option

Is your enhancement request related to a problem? Please describe.
The nmdb-import-ping tool states it requires the -n option; however, with or without it the ping command appears to provide enough information to extract the same key data the tool already does. Basically, if someone does not run the command with the specified option, this tool won't work and the data has to be entered manually.
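
As an illustration (hostname, address, and timings are placeholders; exact output varies by platform), the key data is present either way; -n only suppresses reverse DNS resolution in the reply lines:

$ ping -c 1 -n host.example.net
PING host.example.net (192.0.2.7) 56(84) bytes of data.
64 bytes from 192.0.2.7: icmp_seq=1 ttl=64 time=0.40 ms

$ ping -c 1 host.example.net
PING host.example.net (192.0.2.7) 56(84) bytes of data.
64 bytes from host.example.net (192.0.2.7): icmp_seq=1 ttl=64 time=0.41 ms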

Describe the solution you'd like
The tool should work regardless of whether -n was issued or not.

Describe alternatives you've considered
Manually entering the data.

Additional context
None

The `nm-fetch-rpcclient` tool appears broken

Describe the bug
The tool displays command not found messages while it runs, despite appearing to execute successfully.

To Reproduce

nm-fetch-rpcclient user@machine enumprivs
Enter WORKGROUP\user's password: 
command not found: command_name_enumprivs
found 35 privileges
...
rpcclient: missing argument
command not found: netmeld-fetcher-delimiter
nm-fetch-rpcclient results stored in: ...

Expected behavior
Would not expect the command not found messages or the rpcclient: missing argument message.

Screenshots
None.

Additional context
None.

Fix README.md files

In general a few of these appear to be outdated or slightly wrong.

Run man (MODULE|TOOL)_NAME to see what they look like after building.

Many of these changes revolve around whether --device-id is required or not.

DockerBuild Workflow Updates

Describe the bug
The Docker workflow needs to be updated to leverage newer actions.

To Reproduce
Just examine workflow.

Expected behavior
No deprecation warnings

Screenshots
None.

Additional context
None.

Netmeld PSQL Queries Leverage YAML File

Is your enhancement request related to a problem? Please describe.
The Playbook module was updated to leverage a YAML file for its SQL queries. This allows both inspection (e.g., printing) of the queries as well as usage in other (non-Netmeld code/library) tooling.

Describe the solution you'd like
Examine the appropriateness of migrating the other Netmeld PSQL queries in the same way. If reasonable, implement it.

Describe alternatives you've considered
None.

Additional context
See playbook/common/utils/queries.yaml and the associated PlaybookQueries.[c|h]pp files.

Deb Package, Fetcher Module, Removal Issues

Describe the bug
The .deb package created for the Fetcher Module does not correctly remove/uninstall files.

To Reproduce
Steps to reproduce the behavior:

  1. Starting without the Fetcher module installed (e.g., nm-fetch-ssh should not exist in /usr/local/bin)
  2. Install Fetcher module via apt (i.e., download deb zip, unzip, and install sudo apt install ./netmeld*fetchers*.deb)
  3. Check to ensure it installed (e.g., nm-fetch-ssh should exist in /usr/local/bin)
  4. Uninstall the module (e.g., sudo apt autoremove --purge netmeld-fetchers)
  5. Check what was examined in step 3; nothing was removed (e.g., nm-fetch-ssh will still exist in /usr/local/bin)

Expected behavior
All files installed with the Fetcher Module (unless otherwise modified by the end user) should be removed.

Additional context
The Fetcher tools are slightly different from the other modules in that they are primarily scripts. This may be the root of the problem, and tools like cmake (builds tools; primes cpack) and cpack (builds debs) may be behaving differently because of it. There may be another way, but if it is not obvious/simple then we will probably have to create a postrm script to handle this (the Core module has an example, albeit for other reasons).
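
A minimal sketch of such a postrm, assuming the Fetcher scripts are installed under /usr/local/bin (the file list below is illustrative, not exhaustive):

#!/bin/sh
# Hypothetical postrm for netmeld-fetchers: remove installed scripts on remove/purge.
set -e
case "$1" in
  remove|purge)
    rm -f /usr/local/bin/nm-fetch-ssh
    rm -f /usr/local/bin/nm-fetch-rpcclient
    ;;
esac
exit 0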

Add support for bare command to `nmdb-import-ipconfig`

Is your enhancement request related to a problem? Please describe.
The nmdb-import-ipconfig tool should be able to process at least some information from a bare ipconfig command. It currently supports output generated with the /all or /allcompartments /all flags, but not output when no arguments are given.
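
For reference, bare ipconfig output looks roughly like the following (values are placeholders); it lacks the MAC/DHCP/DNS server details that /all provides, but still carries the IP, mask, and gateway data worth importing:

Windows IP Configuration

Ethernet adapter Ethernet0:

   Connection-specific DNS Suffix  . : example.local
   IPv4 Address. . . . . . . . . . . : 192.0.2.10
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.0.2.1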

Describe the solution you'd like
The tool should support bare ipconfig output as well as the other formats.

Describe alternatives you've considered
None.

Additional context
None.

Add Command Execution Functionality to Core

Is your feature request related to a problem? Please describe.
Not exactly. Multiple tools have implemented system-level command execution with slightly different logic, seemingly because the functionality was not available elsewhere.

Describe the solution you'd like
Add a system-level command execution capability to the Core module so that current modules can leverage common functionality.

Describe alternatives you've considered
Currently modules are left to implement their own.

Additional context
At least present in the Playbook and Datalake modules, but may be elsewhere.
See also: Originally posted by @cctechwiz in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjU2MjczMDM5OnYy/pull_request_review_threads/discussion

Playbook Logic Migration (More)

Is your enhancement request related to a problem? Please describe.
Much of the nmdb-playbook tool logic was migrated out of the tooling and into a YAML file. Not all of it was, as the rest was not related to the original issue. However, more of it could be migrated to further loosen the coupling between the tooling and the actual play logic.

Describe the solution you'd like
Examine what can further be extracted and migrate that logic from the tool to the YAML file. Ensure other tooling in the Netmeld modules/tools (e.g., Playbook, Importers, clw, etc.) are considered for impact/side-effects as well.

Describe alternatives you've considered
None.

Additional context
None.

Playbook Confirm Link State During Run

Is your enhancement request related to a problem? Please describe.
Certain changes to the link state during a run cause problematic datastore state. For example, should a link go down between phase one and two, the datastore will be populated with a list of supposedly up hosts that have no open ports, etc. It is currently not obvious that something like this occurred; detecting it requires careful examination of the data, which in large playbooks/networks can be tedious or impossible.

Describe the solution you'd like
The playbook tool will detect certain state changes (e.g., loss of scan IP, down link, etc.) during a run, log the state change, and abort that current play/run/phase. It is understood that it may not be possible to abort the currently running scans in a simple manner and as such aborting the "future" scans which are under the same "broken" config are acceptable.

Describe alternatives you've considered
None.

Additional context
The error checking probably should be allowed to be optionally ignored (for whatever reason I can't think of at this point in time).

The `nmdb-import-tshark` incorrectly states behavior for `--pipe`

Describe the bug
The nmdb-import-tshark tool states that when the --pipe option is issued it saves the data to a file, whereas the man page and README.md state the opposite.

To Reproduce
Read the docs and compare to --help output.

Expected behavior
Either it saves or it doesn't, and the documentation should state the behavior correctly.

Screenshots
None.

Additional context
None.

The `nmdb-remove-tool-run` shouldn't require `--tool-run-id` option

Is your enhancement request related to a problem? Please describe.
Not really, just odd behavior. The nmdb-remove-tool-run tool requires passing the --tool-run-id option despite its only real function being to remove the tool run ID passed on the command line.

Describe the solution you'd like
It would be nice if the tool just took the passed argument and worked regardless of whether the option flag was given.

Describe alternatives you've considered
None.

Additional context
None.

The `nmdb-insert-device` tool's `--responding` option does not work/behave as documented

Describe the bug
The tool will set the device as responding regardless of what is provided to the option.

To Reproduce
nmdb-insert-device --device-id test001 --ip-addr 1.2.3.0/24 --responding true
nmdb-insert-device --device-id test001 --ip-addr 1.2.3.1/24 --responding false
psql site -c "select * from ip_addrs where ip_addr = '1.2.3.0' or ip_addr = '1.2.3.1'"

 ip_addr | is_responding 
---------+---------------
 1.2.3.0 | t
 1.2.3.1 | t
(2 rows)

Expected behavior

 ip_addr | is_responding 
---------+---------------
 1.2.3.0 | t
 1.2.3.1 | f
(2 rows)

Screenshots
None.

Additional context
None.

Data Lake Option/Consistency

Is your feature request related to a problem? Please describe.
An issue arises around how to store collected and processed data. Current measures only work in simple cases and are generally not helpful when large amounts of data are collected or when data is collected over time. Basically, source data management is a hybrid of end-user and tool (e.g., fetchers) decision points. It would be helpful to have tools which allow for a more unified source data management option.

Describe the solution you'd like
Initial thoughts:

  • Tools to migrate/store raw data to a consistent location with some ordering.
    • Hopefully the consistent location enables end user search and discovery.
    • Thought might want to be put into tools (i.e., import-*) leveraging the layout to allow auto-loading of data.
    • What is best is debatable, but useful alternatives may leverage linking to allow easier retrieval for tools or end users. The location will probably consist of things like host, import_tool, collection_method, etc. (see the illustration after this list).
  • The storing location is some sort of version controlled repository.
    • This hopefully loosens the need to DTS_UUID everything as that may be handled by the versioning within the repo.
    • Ideally you would be able to load data from selected points in time so you could compare state/knowledge of the targeted environment over the course of time.
  • Ideally, the ability to permanently purge or alter data.
    • Basically, if a mistake is made it needs to be erasable. That is, suppose I call a host by the wrong name; I need to be able to correct it for the life (backwards) of the repo, as the data is still valid but stored under an incorrect name. Similarly, if data is stashed for a target that is later identified to be out-of-scope, that data would need to be purgeable as if it never existed.
  • Tooling should be generic enough to allow different back ends.
    • While one solution may be the default/supported, allow for end users to create the logic to support different backends.
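
A purely hypothetical layout along those lines (directory and file names are invented for illustration only), organized by host, import tool, and collection method:

datalake/
  host-alpha/
    nmdb-import-nmap/
      nm-fetch-ssh/
        scan-results.xml
    nmdb-import-ipconfig/
      manual/
        ipconfig-all.txt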

Additional context
This should probably be another module as an initial prototype is wanted to facilitate discussion points about what works and what doesn't.

Resolve cppcheck Issues

Describe the bug
Numerous potential issues were introduced with the new route logic support. While cppcheck is not perfect (it doesn't support code after C++11) and not all of the findings may be actual issues, run cppcheck to see the issues and mitigate as many as possible.

To Reproduce
Effectively, build the code via

cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON && cmake --build build -j4 --target Test.netmeld && (cd build && ctest Test.netmeld)

then run the first check via

cppcheck --template=gcc --enable=all --force --project=build/compile_commands.json --inline-suppr -Dfalse=0 --suppressions-list=cppcheck-supressions.txt . > /dev/null

and then (to ensure the suppressed checks/list still aren't masking a real issue)

cppcheck --template=gcc --enable=all --force --project=build/compile_commands.json --inline-suppr -Dfalse=0 . > /dev/null

Expected behavior
Output should only be those that can't be mitigated.

Screenshots
None.

Additional context
None.

Cisco's access-list standard support

Is your enhancement request related to a problem? Please describe.
None of the existing Cisco parsers support access-list standard type access control rules; they only support access-list extended rules. This issue revolves around adding support for standard ones.

Describe the solution you'd like
Support could behave in a similar manner to what happens with the extended rules.

For example, assuming the format:
access-list access-list-number {permit|deny} {host|source source-wildcard|any}

  • If the rule is applied in then:
    • source is in the rule
    • destination is any
    • dst_iface is the interface it is applied on
    • src_iface is any
  • If the rule is applied out then:
    • source is any
    • destination is in the rule
    • dst_iface is any
    • src_iface is the interface it is applied on
  • Service is always any and action is in the rule

I think the above is logically right; regardless, it shows the point.
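
As a purely illustrative worked example (ACL number, network, and interface are invented), given a standard rule applied inbound:

access-list 10 permit 192.0.2.0 0.0.0.255
!
interface GigabitEthernet0/0
 ip access-group 10 in

Per the "applied in" bullets above, this would map to roughly: action = permit, source = 192.0.2.0 0.0.0.255, destination = any, service = any, dst_iface = GigabitEthernet0/0, src_iface = any.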

Describe alternatives you've considered
Manual examination and insertion into the datastore.

Additional context
See here for a sample and more information regarding Cisco standard access lists.

More Nmap Hostname Parsing

Is your enhancement request related to a problem? Please describe.
Enhance the nmdb-import-nmap tool to add processing support of the NSE port service microsoft-ds and port scripts rdp-ntlm-info and ms-sql-ntlm-info.

Describe the solution you'd like
Specifically, extracting details regarding the hostname as associated with the IP.

Describe alternatives you've considered
None.

Additional context
None.

Collapse Cisco parsers

Question
Should we collapse the Cisco-type parsers into one tool with options instead?

Is your enhancement request related to a problem? Please describe.
The configs between them are generally similar, though there are outliers (e.g., the wireless device configs). This means much of the logic is the same between the parsers. When issues arise, we have to fix multiple parsers (unless one is forgotten), and usually this results in copy-paste code. We may be able to do better.

Describe the solution you'd like
Explore collapsing the logic into a single parsing tool. I advise against a single monolithic parser, in favor of a multi-branch parser. This would allow the flexibility to give the end-user an option to explicitly specify which parser to leverage (e.g., --ios, --nxos, etc.). Also, this hopefully will allow logic re-use between the parsers in a cleaner manner.

Describe alternatives you've considered
Current solution leverages multiple tools.

Additional context
None.

ACL Processing for nmdb-import-cisco-nxos

Is your enhancement request related to a problem? Please describe.
The nmdb-import-cisco-nxos tool should be able to appropriately parse ACLs. Currently only the ASA one does this.

Describe the solution you'd like
The nmdb-import-cisco-nxos tool should process ACL lists similarly to how the ASA one does.

Describe alternatives you've considered
Attempted to process with the ASA parser, but it appears not to work appropriately. Manual insertion via the nmdb-insert-ac tool was the only other option.

Additional context
Similar to issue #4, but for a different tool.

Docker Builds for Tooling

Is your enhancement request related to a problem? Please describe.
Add more Docker builds to support easier usage, dev, etc. of tooling and code via docker images.

Describe the solution you'd like
Docker image build(s) (integrated into the docker build pipeline) for:

  • Entire Netmeld tool suite in one image
  • Netmeld modules in images (one image per module)
    • This might not actually make sense for all modules/sub-modules/tools and will need to be examined more
  • Some rudimentary testing for each to ensure they can be used (i.e., runtime testing)
    • As best as possible
    • This is a replacement for #77 concepts
  • (stretch) Statically linked builds
    • May not be possible for all, but was able to get the clw tool to work with 3 lines of code change to the cmake files.

Describe alternatives you've considered
Manually creating docker images each time.

Additional context
None.

Arista EOS Importer

Is your feature request related to a problem? Please describe.
There is no import tool for an Arista EOS configuration file.

Describe the solution you'd like
An import tool to handle Arista EOS configuration as produced by show running-config or show session-config

Describe alternatives you've considered
It is similar to Cisco IOS style syntax, so one might be able to leverage a lot of the same logic but there are a few differences.

Post-processing Tool Chain Logic

Is your feature request related to a problem? Please describe.
It would be nice to have a post-processing/importing tool(s) which could pull out items of interest/concern. Similar to how the graphing tools export graphic views of the network connectivity, something which exports textual/graphic views of data that may be of interest.

Describe the solution you'd like
Effectively, a tool(s) which can take an input file(s) of things to gather from the data-store/lake and spit out a report(s). While it could be built into the tool, it probably would be more useful to allow the end-user to define the queries.

The input file (user generated) probably should contain:

  • pre-data intro (e.g., this is what/significance of what follows)
  • query to run
  • (stretch goal'ish) possible sub area/queries

The output file (tool generated) probably should contain:

  • sectioning based on the queries ran
  • query output
  • (stretch goal'ish) be in ConTeXt(?) and standalone build-able

Describe alternatives you've considered
Manually running similar query/logic chains.

Additional context
None.

CI/CD Pipeline and Packaging

Description
Migration to GitHub left out some previously existing CI/CD and packaging hooks. We need to get some form of those back and try to be as repo host agnostic as possible. This includes at least:

  • Creation of CI/CD pipelines

    • Build, test, and deploy are on a per-module basis so the failure of one doesn't kill everything, only those that are dependent on it.
      • This doesn't need to be perfect and can be refined over time.
    • For a standalone branch this is debatable, but if it is associated with a pull request it should go through build and test
    • For master this would be build, test, and deploy
  • Deploy needs to create packages.

    • This part should leverage our deb creation scripts so that it can be done independent of the repo.

Additional information
We may do the majority of this via scripts (e.g., the deb creation scripts already build, test, and install to a targeted location). Furthermore, we may create a script to automate all of this as it would probably be simple in that the steps are: get/clone repo, install dependencies, and execute deb creation scripts.

All Import tools should allow for passing a device-id.

To facilitate more automated processes, Import module tools should all allow passing a device-id. Examples of tools that currently don't: nessus, nmap, tshark, and pcap. The tools do not have to actually use it, but should call that out. Thus, it is OK to make it optional/ignored and not used where appropriate.

Addrs only one entry per tool run id but tools may insert multiple times

Describe the bug
Some tools appear to work by capturing as much data as possible and doing operations on a per-line basis. While in general this is probably fine, the tables raw_mac_addrs and raw_ip_addrs will not allow this. That is, once a MAC or IP has been added for that tool run, it cannot be added again. This means that if the MAC or IP is originally inserted as not responding (or with no value) and is later found to be responding (e.g., a long-running ping finds the device alive after several not-alive responses), the target will be incorrectly recorded as not alive.

To Reproduce
None.

Expected behavior
Targets are added with correct status.

Screenshots
None.

Additional context
Either the tools need to be examined and re-worked to ensure they merge/prune duplicate targets before inserting, or the schemas and associated objects need to be updated to account for this potential overlap.

Explicitly Unreachable Routes

Routing features include the capability to mark a network as unreachable, regardless of the actual ability to reach the target network. How should this be handled? From an unprivileged user standpoint this is important; however, from a privileged user standpoint one can technically undo this feature.

Add config file path option to `nmdb-export-port-list`

Is your enhancement request related to a problem? Please describe.
It would be good if we could specify the config file path for the nmdb-export-port-list tool. This would allow the "default" location to be documented as well as allow an alternate config to be located elsewhere instead of having to alter the default installed one.

Describe the solution you'd like
Add the option --config-path to the tool.

Describe alternatives you've considered
None.

Additional context
None.

Playbook DNS Scan

Is your enhancement request related to a problem? Please describe.
I want to be able to use the playbook to perform a DNS scan, so that I can quickly find known hostnames of hosts within the targeted networks.

Describe the solution you'd like
The playbook module should contain a tool or option to perform a DNS scan of in-scope networks by leveraging known and/or provided DNS servers and then saving the results to the data store.

Describe alternatives you've considered
This can be done with something along the lines of clw nmap --dns-servers $all_dns_servers -sL $target_networks and then manual import of the data via nmdb-import-nmap.

Stage Updatable Playbook

Is your enhancement request related to a problem? Please describe.
Once started, the playbook tool is static in its behavior. While this is desirable in many cases, it is also desirable to allow for dynamic modification, specifically when the currently running command is large (won't finish quickly) or costly (would lose valuable data) to terminate just to update subsequent execution steps.

Describe the solution you'd like
The nmdb-playbook tool should support a dynamic mode that allows for checking the next stage to ensure modifications have not been made in the DB before executing. If modifications have been made it should use the updated configuration.

Describe alternatives you've considered
Generally, this can be accomplished by scripting. That is, continually query the data store for the next unfinished stage.

Export port list wrong prefix for SCTP

Describe the bug
When running nmdb-export-port-list --sctp (or variants) it prefixes the port list with Y: instead of S: (what nmap takes currently).

To Reproduce

$ nmdb-export-port-list --sctp-all
Y:0-65535

Expected behavior

$ nmdb-export-port-list --sctp-all
S:0-65535

Screenshots
n/a

Additional context
See man nmap in the -p port ranges section for which prefix values are used per scan type.
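
For reference (ports below are illustrative), nmap's -p option uses single-letter protocol prefixes, with S: selecting SCTP ports:

nmap -p T:21-25,80,U:53,111,S:0-65535 targethost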

Headless Playbook Execution

Is your enhancement request related to a problem? Please describe.
Having the playbook spawn GUI xterm shells prevents the team from fully automating playbook runs in their environment.

Describe the solution you'd like
Provide a command line option for running playbook in headless mode that does not spawn GUI xterm shells while executing. The initial suggestion would be to use tmux in place of xterm.
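
A rough sketch of the idea (the scan command, flags, and session name are illustrative, not the actual playbook internals):

# current, GUI-dependent spawn
xterm -e "clw nmap -sS 192.0.2.0/24"

# headless alternative using a detached tmux session
tmux new-session -d -s playbook-scan "clw nmap -sS 192.0.2.0/24"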

JunOS application-set description parsing

Describe the bug
The JunOS parser appears to error out and skip application-set blocks which contain a description field.

To Reproduce
Attempt to parse something like:

applications {
  application test-app {
    protocol tcp;
    destination-port 1;
  }
  application-set test-set {
    description "some description";
    application test-app;
  }
}

Expected behavior
The JunOS parser parses or skips the description element in an application-set but otherwise parses and adds the data in the application set as it does with other sets.

Import nmap hostname discovery via targeted host scripts

Is your enhancement request related to a problem? Please describe.
Enhance the nmdb-import-nmap tool to add processing support of the NSE host scripts nbstat and smb-os-discovery.

Describe the solution you'd like
Specifically, extracting details regarding the hostname as associated with the IP.
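
For reference, the relevant NSE host script output looks roughly like the following (values are placeholders); the hostname/FQDN fields are what would be tied to the IP:

Host script results:
| nbstat: NetBIOS name: HOST01, NetBIOS user: <unknown>, NetBIOS MAC: 00:50:56:01:23:45
| smb-os-discovery:
|   OS: Windows Server 2016 Standard
|   Computer name: host01
|   FQDN: host01.example.local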

Describe alternatives you've considered
None.

Additional context
None.

Fetchers should capture user context

Is your enhancement request related to a problem? Please describe.
Because different users may have different context information about them on different hosts, the fetchers should capture/record the user leveraged to get the data along with the results to help understand who/what/how the data was captured.

Describe the solution you'd like
At the least, the data should be captured to the fetcher capture location. It would be nice if it was consumed throughout the other tools as well (e.g., the Datastore).

Describe alternatives you've considered
Manually keeping the fetcher configurations associated with the generated UUIDs and/or ensuring a whoami type command is issued with the other commands. Also, the fetcher UUID doesn't necessarily propagate throughout the tool chains, and in multi-data-collect sessions this can become more and more complex.

Additional context
None.

Create ip_traceroutes View(s)

Is your enhancement request related to a problem? Please describe.
There is no view to show processed data from the raw_ip_traceroutes table.

Describe the solution you'd like
Create a view, or multiple views, to display the processed results from the raw_ip_traceroutes table. This may require more thought on how to represent the data; some initial thoughts:

  • Without context (e.g., start device, prior hops, etc.), hop_count means little. Context is present in the raw table (i.e., via tool_run_id) and upon graphing (because of tree edge flow).
  • Tools that use the raw table and could be migrated to the view (i.e., any non-import tools/objects) will have to be examined to ensure they (can) function the same with the new view(s).

Describe alternatives you've considered
None.

Additional context
None.

Playbook export tool behavior inconsistent

Is your enhancement request related to a problem? Please describe.
Depending on the choice, the nmdb-playbook-export-scans tool may or may not generate a blank/skeleton document if the data store does not contain any information.

Describe the solution you'd like
The tool should either always generate a blank/skeleton document (e.g., sample outline/format) or not. If it does, it should always be able to be compiled as either a standalone ConTeXt document or as an include in a larger document.

Describe alternatives you've considered
None.

Additional context
None.

Import ipconfig Tests

I made some fixes/changes based on your suggestions, though I haven't made any test cases yet as I would need to make the parser rules protected rather than private, and I would rather save that for last. I think this does cover all but the tests.

As for tests, I was thinking of covering these sorts of cases. Do you have any ideas for others that should be covered?

  1. Bare
  2. /allcompartments /all
  3. /all
  4. An adapter interface type with multiple words (the Wireless LAN interface in previous test cases caused trouble, and adapters with multiword types didn't seem to be parsed correctly)
  5. The last adapter has 0 IPs in its Default Gateway
  6. The last adapter has 1 IP in its Default Gateway
  7. The last adapter has 2+ IPs in its Default Gateway

I'll prefix this: us adding test cases is "new" and we've been doing it as we modify code, which is why I made mention of it and why you see there aren't a lot existing.

Generally speaking, what we've been doing is having tests which cover the parts of the parser (this covers probably 4-7 of your list), so they are more unit-test oriented. Definitely have "good cases" (they pass) and possibly "bad cases" (they fail). The good cases are basically what the rule is codified to parse. In some cases the rules have an attached predicate, and you can also examine/test the state of the object after a particular parse path. If possible, have a few bad cases which cover instances we are explicitly aware of and want to prevent a refactor from allowing. The parser is pretty good at failing on "bad cases" on its own because of formatting.

If reasonable, a few interesting whole-parser test cases (this covers probably 1-3 of your list) could be added to help a developer understand the general overall structure of the data and some interesting corner cases. However, in many cases it is not reasonable to codify (e.g., a router/switch config), so it's a judgement call. We do run changes through separate, independent regression testing, so it is covered somewhat there as well.

That said, for parts, probably test:

  • compartmentHeader
    • Good case(s): an arbitrary line of text preceded and followed by a line with only one, or more, equal signs
    • Bad case(s): multiple lines in the center, doesn't start/end with an equal sign line, any other character besides an equal sign or ending newline in the pre/post-line
  • hostData
    • Good case(s): tests of host name and dns suffix
    • Bad case(s): ?
  • adapter
    • Good case(s): your 5-7, but also all the other parts of the or block (except servers, since it has its own test)
    • Bad case(s): ?
  • ifaceTypeName
    • Good case(s): single and multi-word name
    • Bad case(s): ?
  • dots
    • Good case(s): just a colon, a dot-space colon, and a couple dot-space instances then a colon
    • Bad case(s): ?
  • servers
    • Good case(s): all the parts of the or block
    • Bad case(s): ?
  • ipLine
    • Good case(s): three variations on address prefix, test with and without Subnet Mask line afterwards
    • Bad case(s): ?
  • getIp
    • Good case(s): an IP with zero or one tokens after
      • note: the IP address parser already tests IP formats, so don't worry about that
    • Bad case(s): an IP with multiple tokens after
  • ifaceType
    • Good case(s): (your 4) single and multi-word type, one or more spaces before adapter
    • Bad case(s): something without adapter after

Don't worry about testing token or ignoredLine as they are fairly simple, and we will ultimately (probably) refactor those into a common Parser logic include for consistency across parsers.

Originally posted by @marshall-sg in #78 (comment)
