netmeld / netmeld
A tool suite for use during system assessments.
License: MIT License
Is your enhancement request related to a problem? Please describe.
Having the playbook spawn GUI xterm shells prevents the team from fully automating playbook runs in their environment.
Describe the solution you'd like
Provide a command-line option for running the playbook in headless mode that does not spawn GUI xterm shells while executing. The initial suggestion would be to use tmux in place of xterm.
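One possible shape for the headless dispatch, sketched under the assumption that tmux is acceptable as the xterm replacement; the session name and command below are placeholders, not Netmeld's actual invocation:

```shell
# Sketch: run a play in a detached tmux window instead of an xterm.
# SESSION and CMD are illustrative placeholders.
SESSION="playbook"
CMD="echo scan-placeholder"
if command -v tmux >/dev/null 2>&1; then
  tmux new-session -d -s "$SESSION" 2>/dev/null || true
  # Where the tool currently does: xterm -e "$CMD"
  tmux new-window -t "$SESSION" "$CMD" 2>/dev/null || true
  tmux kill-session -t "$SESSION" 2>/dev/null || true
fi
echo "headless dispatch sketch complete"
```

Since the windows are detached, the run needs no display at all, and an operator can still `tmux attach` to inspect in-flight commands.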
Describe the bug
The JunOS parser appears to error out and skip application-set blocks which contain a description field.
To Reproduce
Attempt to parse something like:
applications {
    application test-app {
        protocol tcp;
        destination-port 1;
    }
    application-set test-set {
        description "some description";
        application test-app;
    }
}
Expected behavior
The JunOS parser parses (or skips) the description element in an application-set, but otherwise parses and adds the data in the application-set as it does with other sets.
Is your feature request related to a problem? Please describe.
Not exactly. Multiple tools have implemented system-level command execution with slightly different logic, seemingly because the functionality was not available elsewhere.
Describe the solution you'd like
Add a system-level command execution capability to the Core module so current modules can leverage common functionality.
Describe alternatives you've considered
Currently modules are left to implement their own.
Additional context
At least present in the Playbook and Datalake modules, but may be elsewhere.
See also: Originally posted by @cctechwiz in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjU2MjczMDM5OnYy/pull_request_review_threads/discussion
Is your enhancement request related to a problem? Please describe.
Much of the nmdb-playbook tool logic was migrated out of the tooling and into a YAML file. Not all of it was, as some of it was not related to the original issue. However, more of it could be, to further create a loose coupling between the tooling and the actual play logic.
Describe the solution you'd like
Examine what can further be extracted and migrate that logic from the tool to the YAML file. Ensure other tooling in the Netmeld modules/tools (e.g., Playbook, Importers, clw, etc.) are considered for impact/side-effects as well.
Describe alternatives you've considered
None.
Additional context
None.
Describe the bug
The .deb package created for the Fetcher Module does not correctly remove/uninstall files.
To Reproduce
Steps to reproduce the behavior:
1. Verify nm-fetch-ssh does not exist in /usr/local/bin
2. Install via apt (i.e., download the deb zip, unzip, and sudo apt install ./netmeld*fetchers*.deb)
3. Verify nm-fetch-ssh now exists in /usr/local/bin
4. Run sudo apt autoremove --purge netmeld-fetchers
5. Observe nm-fetch-ssh still exists in /usr/local/bin
Expected behavior
All files installed with the Fetcher Module (unless otherwise modified by the end user) should be removed.
Additional context
The Fetcher tools are slightly different than the other modules in that they are primarily scripts. This may be the root of the problem, and tools like cmake (builds tools; primes cpack) and cpack (builds debs) are behaving differently because of this. There may be another way, but if it is not obvious/simple, then we will probably have to create a postrm script to handle this (the Core module has an example, but for other reasons).
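If the postrm route is taken, a minimal sketch might look like the following; the file name is illustrative, and a temp directory stands in for /usr/local/bin so the sketch is safely runnable:

```shell
#!/bin/sh
# Hedged postrm sketch: on remove/purge, delete files the package
# installed but dpkg did not track. A temp dir simulates /usr/local/bin.
set -e
action="${1:-remove}"          # dpkg passes remove|purge|upgrade|...
bindir="$(mktemp -d)"          # would be /usr/local/bin in the real script
touch "$bindir/nm-fetch-ssh"   # simulate the leftover file
case "$action" in
  remove|purge)
    rm -f "$bindir/nm-fetch-ssh"
    ;;
esac
[ ! -e "$bindir/nm-fetch-ssh" ] && echo "leftover removed"
rmdir "$bindir"
```

In the real package the script would ship as DEBIAN/postrm (mode 0755) so dpkg invokes it with the appropriate action argument.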
Is your enhancement request related to a problem? Please describe.
Once started, the playbook tool is static in its behavior. While this is desirable in many cases, it is also desirable to allow for dynamic modification, specifically when the command currently running is large (won't finish quickly) or costly (valuable data would be lost) to terminate in order to update subsequent execution steps.
Describe the solution you'd like
The nmdb-playbook tool should support a dynamic mode that checks the next stage to ensure modifications have not been made in the DB before executing. If modifications have been made, it should use the updated configuration.
Describe alternatives you've considered
Generally, this can be accomplished by scripting. That is, continually query the data store for the next unfinished stage.
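That scripted alternative could be sketched as below; the playbook table and its stage/completed columns are assumptions for illustration, not Netmeld's actual schema, and the loop only runs when the relevant commands are present:

```shell
# Hedged sketch: poll the data store for the next unfinished stage.
# Table/column names and the nmdb-playbook invocation are illustrative.
next_stage() {
  psql -At site -c \
    "SELECT min(stage) FROM playbook WHERE NOT completed" 2>/dev/null
}
if command -v psql >/dev/null 2>&1 && command -v nmdb-playbook >/dev/null 2>&1
then
  while stage=$(next_stage) && [ -n "$stage" ]; do
    nmdb-playbook --intra-network --stage "$stage" --execute
  done
fi
done_msg="polling loop finished"
echo "$done_msg"
```

Because each iteration re-queries the store, edits made between stages are picked up automatically, which is the behavior the dynamic mode would build in.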
Targeted update requests:
- Add the connection state (e.g., established) to the rule. This probably should be represented in the service_set column, as that's what other tools currently do. E.g., permit tcp any any established.
Is your enhancement request related to a problem? Please describe.
The nmdb-import-ping command states it requires the -n option; however, with or without it the command appears to provide enough information to extract the same key data the tool already does. Basically, if someone doesn't use this command with the specified option, the tool won't work and the data has to be entered manually.
Describe the solution you'd like
The tool works regardless of whether -n was issued or not.
Describe alternatives you've considered
Manually entering the data.
Additional context
None
Is your enhancement request related to a problem? Please describe.
Enhance the nmdb-import-nmap tool to add processing support for the NSE host scripts nbstat and smb-os-discovery.
Describe the solution you'd like
Specifically, extracting details regarding the hostname as associated with the IP.
Describe alternatives you've considered
None.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
None of the existing Cisco parsers support access-list standard type access control rules; they only support access-list extended rules. This issue revolves around adding support for the standard ones.
Describe the solution you'd like
Support could behave in a similar manner to what happens with the extended rules.
For example, assuming the format:
access-list access-list-number {permit|deny} {host|source source-wildcard|any}
If the ACL is applied in, then:
- destination is any
- dst_iface is the interface it is applied on
- src_iface is any
If the ACL is applied out, then source is any and:
- dst_iface is any
- src_iface is the interface it is applied on
In both cases the service is any and the action is in the rule.
I think the above is logically right, however it shows the point regardless.
Describe alternatives you've considered
Manual examination and insertion into the datastore.
Additional context
See here for a sample and more information regarding Cisco standard access lists.
Is your enhancement request related to a problem? Please describe.
The nmdb-import-cisco tool should be able to appropriately parse ACLs. Currently only the ASA parser does this.
Describe the solution you'd like
The nmdb-import-cisco tool should process ACLs similarly to how the ASA one does.
Describe alternatives you've considered
Attempted to process with the ASA parser, but it appears not to work appropriately. Manual insertion via the nmdb-insert-ac tool was the only other option.
Is your enhancement request related to a problem? Please describe.
Not really a problem, but there could be better integration of the modules if the Fetcher module attempted to leverage the Datalake module when it is present on the system.
Describe the solution you'd like
If the Datalake module is available, leverage it for storage of data obtained by the Fetcher module.
Describe alternatives you've considered
None, outside of the current behavior where the Fetcher module just places data in a specific location on the filesystem.
Additional context
Closure of this is contingent on closure of #7.
Is your enhancement request related to a problem? Please describe.
Not really, just odd behavior. The nmdb-remove-tool-run tool requires passing the --tool-run-id option despite its only real function being to remove the tool run ID passed on the command line.
Describe the solution you'd like
It would be nice if the tool just took the passed argument and worked regardless of passing the option or not.
Describe alternatives you've considered
None.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
I want to be able to use the playbook to perform a DNS scan, so that I can quickly find known hostnames of hosts within the targeted networks.
Describe the solution you'd like
The playbook module should contain a tool or option to perform a DNS scan of in-scope networks by leveraging known and/or provided DNS servers and then saving the results to the data store.
Describe alternatives you've considered
This can be done with something along the lines of clw nmap --dns-servers $all_dns_servers -sL $target_networks and then manual import of the data via nmdb-import-nmap.
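Spelled out with placeholder values (the server and network values are illustrative only), the manual alternative looks like:

```shell
# Placeholder values; real runs would use the in-scope servers/networks.
all_dns_servers="10.0.0.53,10.0.0.54"
target_networks="10.0.0.0/24"
cmd="clw nmap --dns-servers $all_dns_servers -sL $target_networks"
echo "$cmd"   # followed by nmdb-import-nmap on the resulting XML
```

The -sL list scan performs only reverse DNS lookups against the given servers, so no probe traffic reaches the targets.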
Is your enhancement request related to a problem? Please describe.
Certain changes to the link state during a run cause problematic datastore state. For example, should a link go down between phase one and two the datastore will be populated with a list of supposed up hosts that have no open ports, etc. It currently is not obvious that something like this occurred and requires careful examination of the data, which in large playbooks/networks can be tedious or impossible.
Describe the solution you'd like
The playbook tool will detect certain state changes (e.g., loss of scan IP, down link, etc.) during a run, log the state change, and abort that current play/run/phase. It is understood that it may not be possible to abort the currently running scans in a simple manner and as such aborting the "future" scans which are under the same "broken" config are acceptable.
Describe alternatives you've considered
None.
Additional context
It probably should be possible to optionally ignore the error checking (for whatever reason; I can't think of one at this point in time).
Is your feature request related to a problem? Please describe.
Traceroute-like tools can show the hops along the way, but Netmeld does not support parsing that data, nor do graphing tools support displaying linkage unless subnets are explicitly known.
Describe the solution you'd like
Create a tool called nmdb-import-traceroute which:
- processes the output of a traceroute or traceroute -n type command (the equivalent tracert for Windows)
Update the nmdb-graph-network tool to:
- display the hop linkage imported by the new tool
Describe alternatives you've considered
Manually adding the data and manually adding connections after the generation by the graphing tool.
Additional context
None.
Some Cisco devices appear to flip the column order. Current tooling expects "vlan, mac, type, port" where sometimes it can be "vlan, mac, port, type" instead.
Is your enhancement request related to a problem? Please describe.
Mainly it is an incomplete coverage/accounting type issue.
Describe the solution you'd like
The TracerouteHop object should insert the hops into the raw_router_ip_addrs table as well, since each hop listed is explicitly a routing hop.
Describe alternatives you've considered
None.
Additional context
None.
Is support for bash tab completion desired? Further on that, is it desired to go all the way towards something along the lines of how ip works?
For more info and reference, see:
https://www.tldp.org/LDP/abs/html/tabexpansion.html
https://www.gnu.org/software/bash/manual/bash.html#Programmable-Completion
https://debian-administration.org/article/316/An_introduction_to_bash_completion_part_1
https://www.cyberciti.biz/faq/add-bash-auto-completion-in-ubuntu-linux/
https://www.reddit.com/r/cpp/comments/6i6zqt/bash_completion_script_for_boost_program_options/
https://iridakos.com/programming/2018/03/01/bash-programmable-completion-tutorial
An example is:
complete -F _longopt nmdb-initialize
nmdb-initialize --[TAB]
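Going further than the generic _longopt hook, a dedicated completion function could be sketched as below; the option list is illustrative, not nmdb-initialize's actual option set, which a real version would generate from --help output:

```shell
# Hedged sketch of a per-tool bash completion function; the word list
# here is illustrative, not the tool's real options.
_nmdb_initialize() {
  local cur="${COMP_WORDS[COMP_CWORD]}"
  COMPREPLY=( $(compgen -W "--db-name --help --version" -- "$cur") )
}
if [ -n "$BASH_VERSION" ]; then
  complete -F _nmdb_initialize nmdb-initialize
fi
```

Such a file would typically be dropped into /etc/bash_completion.d/ (or the bash-completion user directory) so it is sourced automatically.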
Describe the bug
The tool displays command not found messages while it runs, despite looking like it successfully executed.
To Reproduce
nm-fetch-rpcclient user@machine enumprivs
Enter WORKGROUP\user's password:
command not found: command_name_enumprivs
found 35 privileges
...
rpcclient: missing argument
command not found: netmeld-fetcher-delimiter
nm-fetch-rpcclient results stored in: ...
Expected behavior
Would not expect the command not found messages or the rpcclient: missing argument message.
Screenshots
None.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
The Playbook module was updated to leverage a YAML file for the SQL queries. This allows both inspection (e.g., printing) of a query as well as usage in other (non-Netmeld code/library) tooling.
Describe the solution you'd like
Examine appropriateness of migrating queries. If reasonable, implement it.
Describe alternatives you've considered
None.
Additional context
See playbook/common/utils/queries.yaml and the associated PlaybookQueries.[c|h]pp files.
Description
Migration to GitHub left out some previously existing CI/CD and packaging hooks. We need to get some form of those back and try to be as repo host agnostic as possible. This includes at least:
- Creation of CI/CD pipelines
- Deploy needs to create packages
Additional information
We may do the majority of this via scripts (e.g., the deb creation scripts already build, test, and install to a targeted location). Furthermore, we may create a script to automate all of this as it would probably be simple in that the steps are: get/clone repo, install dependencies, and execute deb creation scripts.
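As a sketch, the automation described might reduce to a function like this; the repository URL matches the project, but the dependency list and the wrapper script name are assumptions:

```shell
# Hedged outline of the pipeline; defined but intentionally not run here.
build_netmeld_debs() {
  git clone https://github.com/netmeld/netmeld.git
  cd netmeld || return 1
  sudo apt-get install -y build-essential cmake   # dependency list is illustrative
  ./build-debs.sh   # hypothetical name for the existing deb creation scripts
}
steps="clone repo -> install dependencies -> run deb scripts"
echo "$steps"
```

Keeping the logic in a plain script keeps it repo-host agnostic: a GitHub Actions (or any other CI) workflow would merely call the script.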
Describe the bug
The tool will set the device as responding regardless of what is provided to the option.
To Reproduce
nmdb-insert-device --device-id test001 --ip-addr 1.2.3.0/24 --responding true
nmdb-insert-device --device-id test001 --ip-addr 1.2.3.1/24 --responding false
psql site -c "select * from ip_addrs where ip_addr = '1.2.3.0' or ip_addr = '1.2.3.1'"
ip_addr | is_responding
---------+---------------
1.2.3.0 | t
1.2.3.1 | t
(2 rows)
Expected behavior
ip_addr | is_responding
---------+---------------
1.2.3.0 | t
1.2.3.1 | f
(2 rows)
Screenshots
None.
Additional context
None.
Routing features include the capability to mark a network as unreachable, regardless of the actual ability to reach the target network. How should this be handled? From an unprivileged user standpoint this is important; however, from a privileged user standpoint one can technically undo this feature.
Is your enhancement request related to a problem? Please describe.
The clw tool and the fetchers save to different location formats. For the fetchers, the Ansible-related one saves to host/dts-uuid while the SSH/RPC one saves to host_dts_uuid. The clw tool saves to tool_dts_uuid. This can make automation, or even user navigation, complicated due to the inconsistent formats.
Describe the solution you'd like
At the least, standardize on dash or underscore as the separator, and have the fetchers standardize on a host folder or identifier for the prefix. It is understood we probably can't get consistency across the fetchers and the clw tool since they are different in nature/perspective, but at least ensure it is as consistent/similar as possible.
Describe alternatives you've considered
None.
Additional context
None.
In general, a few of these appear to be outdated or slightly wrong.
Run man (MODULE|TOOL)_NAME to see what they look like after building.
Many of these changes revolve around requiring --device-id or not.
Is your enhancement request related to a problem? Please describe.
The nmdb-import-cisco-nxos tool should be able to appropriately parse ACLs. Currently only the ASA parser does this.
Describe the solution you'd like
The nmdb-import-cisco-nxos tool should process ACLs similarly to how the ASA one does.
Describe alternatives you've considered
Attempted to process with the ASA parser, but it appears not to work appropriately. Manual insertion via the nmdb-insert-ac tool was the only other option.
Additional context
Similar to issue #4, but for a different tool.
Is your enhancement request related to a problem? Please describe.
Add more Docker builds to support easier usage, dev, etc. of tooling and code via docker images.
Describe the solution you'd like
Docker image build(s) (integrated into the Docker build pipeline) for:
- the clw tool; it appears to work with 3 lines of code change to the cmake files.
Describe alternatives you've considered
Manually creating docker images each time.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
It would be good if we could specify the icon folder for nmdb-graph-network. This would allow the "default" location to be documented, as well as allow alternate icons to be located elsewhere instead of having to install them all to that default location.
Describe the solution you'd like
The tool could have an option like --icon-folder.
Describe alternatives you've considered
None.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
The NmapXmlParser does not currently have tests for all its functionality.
Describe the solution you'd like
Add tests for all NmapXmlParser functionality.
Describe alternatives you've considered
N/A
Additional context
N/A
Is your enhancement request related to a problem? Please describe.
Enhance the nmdb-import-nmap tool to add processing support for the NSE port service microsoft-ds and the port scripts rdp-ntlm-info and ms-sql-ntlm-info.
Describe the solution you'd like
Specifically, extracting details regarding the hostname as associated with the IP.
Describe alternatives you've considered
None.
Additional context
None.
Is your feature request related to a problem? Please describe.
An issue arises around how to store collected and processed data. Current measures only work in simple cases and are generally not helpful where large amounts of data are collected, or data is collected over time. Basically, source data management is a hybrid of end-user and tool (e.g., fetchers) decision points. It would be helpful to have tools which allow for a more unified source data management option.
Describe the solution you'd like
Initial thoughts:
- A defined data layout (e.g., for the import-* tools), leveraging the layout to allow auto-loading of data.
- Captured metadata such as host, import_tool, collection_method, etc.
- Probably no need to DTS_UUID everything, as that may be handled by the versioning within the repo.
Additional context
This should probably be another module as an initial prototype is wanted to facilitate discussion points about what works and what doesn't.
Is your enhancement request related to a problem? Please describe.
It would be good if we could specify the config file path for the nmdb-export-port-list tool. This would allow the "default" location to be documented, as well as allow an alternate config to be located elsewhere instead of having to alter the default installed one.
Describe the solution you'd like
Add the option --config-path to the tool.
Describe alternatives you've considered
None.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
Depending on the choice, the nmdb-playbook-export-scans
tool may or may not generate a blank/skeleton document if the data store does not contain any information.
Describe the solution you'd like
The tool should either always generate a blank/skeleton document (e.g., sample outline/format) or not. If it does, it should always be able to be compiled as either a standalone ConTeXt document or as an include in a larger document.
Describe alternatives you've considered
None.
Additional context
None.
Is your feature request related to a problem? Please describe.
There is no import tool for an Arista EOS configuration file.
Describe the solution you'd like
An import tool to handle Arista EOS configuration as produced by show running-config or show session-config.
Describe alternatives you've considered
It is similar to Cisco IOS style syntax, so one might be able to leverage a lot of the same logic but there are a few differences.
Though infrequent, the libraries Netmeld depends on (e.g., Boost) get updated over time and the old versions get removed from the repos. A workflow is needed to test installation of the Netmeld deb packages to ensure the install works. Likewise, a workflow may be needed to exercise basic functionality testing to ensure any runtime library changes are captured. While it may be possible to do this all in one container, new independent containers may need to be created for testing purposes (this will have to be further examined).
Is your enhancement request related to a problem? Please describe.
The decoder tools, cisco and junos, do not seem to actually require anything in the Playbook module. However, to use them you have to install the Playbook module and everything it depends on. This also includes the Datastore module (at least core), however these tools don't seem to insert anything into it.
Describe the solution you'd like
It would be nice if they were pulled out to their own tools. If the Datastore module is available, the tools should also probably be updated to insert the processed data into the datastore. However, that's probably more a stretch goal/request than anything as there is a lot of "missing" context then.
Describe alternatives you've considered
Manually build/install from source and port those binaries around.
Additional context
None.
To facilitate more automated processes, Import module tools should all allow for passing a device-id. Examples that currently don't accept it include: nessus, nmap, tshark, and pcap. The tools do not have to actually use it, but should call that out; thus, it is OK to make it optional/ignored and not used where appropriate.
Describe the bug
When running nmdb-export-port-list --sctp (or variants) it prefixes the port list with Y: instead of S: (what nmap currently takes).
To Reproduce
$ nmdb-export-port-list --sctp-all
Y:0-65535
Expected behavior
$ nmdb-export-port-list --sctp-all
S:0-65535
Screenshots
n/a
Additional context
See man nmap in the -p port ranges section for which values are used per scan type.
Describe the bug
Numerous potential issues were introduced with the new route logic support. While cppcheck is not perfect (it doesn't support code newer than C++11) and all of them may not be actual issues, run cppcheck to see the issues and mitigate as many as possible.
To Reproduce
Effectively, build the code via
cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON && cmake --build build -j4 --target Test.netmeld && (cd build && ctest Test.netmeld)
and then run the first checks via
cppcheck --template=gcc --enable=all --force --project=build/compile_commands.json --inline-suppr -Dfalse=0 --suppressions-list=cppcheck-supressions.txt . > /dev/null
and then (to ensure the suppressed checks/list still aren't a real issue):
cppcheck --template=gcc --enable=all --force --project=build/compile_commands.json --inline-suppr -Dfalse=0 . > /dev/null
Expected behavior
The output should only include issues that can't be mitigated.
Screenshots
None.
Additional context
None.
Describe the bug
The nmdb-import-tshark tool states that when the --pipe option is issued it saves the data to a file, whereas the man page and README.md state the opposite.
To Reproduce
Read the docs and compare them to the --help output.
Expected behavior
Either it saves or it doesn't, and the documentation states it correctly.
Screenshots
None.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
There is no view to show processed data from the raw_ip_traceroutes table.
Describe the solution you'd like
Create a view, or multiple views, to display the processed results from the raw_ip_traceroutes table. This may require more thought on how to represent the data; some initial thoughts:
- Aggregated across tool runs, hop_count means little. Context is present in the raw table (i.e., via tool_run_id) and upon graphing (because of tree edge flow).
Describe alternatives you've considered
None.
Additional context
None.
Is your enhancement request related to a problem? Please describe.
Capturing of the native VLAN is not occurring. This means it's a manual process to see what the native VLAN is and then cross-reference usage/traffic to determine whether some host is leveraging the native VLAN or not.
Describe the solution you'd like
A column/warning/flag/whatever (e.g., ToolObservation) should be made when the native VLAN is being used. Behavior would probably differ depending on whether a VLAN is just seen (e.g., packet capture parsing) vs. VLAN definitions being parsed (e.g., network device config). Something along these lines:
Describe alternatives you've considered
Manual (cross-) examination.
Additional context
None.
Is your feature request related to a problem? Please describe.
It would be nice to have post-processing/importing tool(s) which could pull out items of interest/concern. Similar to how the graphing tools export graphic views of the network connectivity, this would be something which exports textual/graphic views of data that may be of interest.
Describe the solution you'd like
Effectively, a tool(s) which can take an input file(s) of things to gather from the data-store/lake and spit out a report(s). While it could be built into the tool, it probably would be more useful to allow the end-user to define the queries.
The input file (user generated) probably should contain:
The output file (tool generated) probably should contain:
Describe alternatives you've considered
Manually running similar query/logic chains.
Additional context
None.
I made some fixes/changes based on your suggestions, though I haven't made any test cases yet as I would need to make the parser rules protected rather than private, and I would rather save that for last. I think this does cover all but the tests.
As for tests, I was thinking of covering these sorts of cases. Do you have any ideas for others that should be covered?
- Bare
- /allcompartments /all
- /all
- An adapter interface type with multiple words (the Wireless LAN interface in previous test cases caused trouble, and adapters with multiword types didn't seem to be parsed correctly)
- The last adapter has 0 IPs in its Default Gateway
- The last adapter has 1 IP in its Default Gateway
- The last adapter has 2+ IPs in its Default Gateway
I'll prefix this: us adding test cases is "new" and we've been doing it as we modify code, which is why I made mention of it and why you see there aren't a lot existing.
Generally speaking, what we've been doing is having tests which cover the parts of the parser (this covers probably 4-7 of your list), so they are more unit-test oriented. Definitely have "good cases" (they pass) and possibly "bad cases" (they fail). The good cases are basically what the rule is codified to parse. In some cases, the rules have an attached predicate and you can also examine/test the state of the object after a particular parse path. If possible, have a few bad cases which cover instances we are explicitly aware of and want to prevent a refactor from allowing. The parser is pretty good at failing on "bad cases" on its own because of formatting.
If reasonable, a few interesting whole-parser test cases (this covers probably 1-3 of your list) could be added to help a developer understand the general overall structure of the data and some interesting corner cases. However, in many cases it is not reasonable to codify (e.g., a router/switch config) so it's a judgement call. We do run changes through separate, independent regression testing, so it is covered somewhat there as well.
That said, for parts, probably test:
- compartmentHeader
- hostData
- adapter
- servers (since it has its own test of the or block)
- ifaceTypeName
- dots
- servers
- ipLine
- getIp
- ifaceType
Don't worry about testing token or ignoredLine as they are fairly simple and we will ultimately, probably, refactor those into a common Parser logic include for consistency across parsers.
Originally posted by @marshall-sg in #78 (comment)
Question
Should we collapse the Cisco type routers to one tool with options instead?
Is your enhancement request related to a problem? Please describe.
The configs between them are generally similar; there are outliers though (e.g., the wireless device configs). This means much of the logic is the same between the parsers. When issues arise, we have to fix multiple parsers (unless one is forgotten), and usually this results in copy-paste code. We may be able to do better.
Describe the solution you'd like
Explore collapsing the logic into a single parsing tool. I advise against a single monolithic parser, in favor of a multi-branch parser. This would allow the flexibility to give the end-user an option to explicitly specify which parser to leverage (e.g., --ios, --nxos, etc.). Also, this hopefully will allow logic re-use between the parsers in a cleaner manner.
Describe alternatives you've considered
Current solution leverages multiple tools.
Additional context
None.
Describe the bug
The Docker workflow needs to be updated to leverage newer actions.
To Reproduce
Just examine workflow.
Expected behavior
No deprecation warnings
Screenshots
None.
Additional context
None.
Describe the bug
Some tools appear to work by capturing as much data as possible and doing operations on a per-line basis. While in general this is probably fine, the tables raw_mac_addrs and raw_ip_addrs will not allow this. That is, once a MAC or IP has been added for that tool run, it cannot be added again. This means if the MAC or IP is originally inserted as not responding (or with no value), and it is later found to be responding (e.g., a long-running ping finds the device alive after several not-alive responses), then the target will be incorrectly recorded as not alive.
To Reproduce
None.
Expected behavior
Targets are added with correct status.
Screenshots
None.
Additional context
Either the tools need to be examined and re-worked to ensure they merge/prune duplicate targets before inserting, or the schemas and associated objects need to be updated to account for this potential overlap.
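As a sketch of the merge-before-insert idea, duplicates could be collapsed so that a "responding" observation wins over earlier "not responding" ones (the target/status input format is illustrative):

```shell
# Hedged sketch: collapse duplicate targets, letting "true" (responding)
# win over "false" for the same address, before any DB insert happens.
dedupe() {
  sort -k1,1 -k2,2r | awk '!seen[$1]++'
}
printf '%s\n' '1.2.3.4 false' '1.2.3.4 true' '1.2.3.5 false' | dedupe
```

Sorting the status column in reverse puts true before false per address, and the awk filter keeps only the first line per address, so the merged result carries the responding status forward.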
Is your enhancement request related to a problem? Please describe.
Because different users may have different context information about them on different hosts, the fetchers should capture/record the user leveraged to get the data along with the results to help understand who/what/how the data was captured.
Describe the solution you'd like
At the least, the data should be captured to the fetcher capture location. It would be nice if it was consumed throughout the other tools as well (e.g., the Datastore).
Describe alternatives you've considered
Manually keeping the fetcher configurations associated with the generated UUIDs and/or ensuring a whoami type command is issued with the other commands. Also, the fetcher UUID doesn't necessarily propagate throughout the tool chains, and in multi-data-collect sessions this can become more and more complex.
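A minimal sketch of capturing the user alongside the results; the metadata file name and layout are illustrative, not the fetchers' actual format:

```shell
# Hedged sketch: write collector metadata next to fetcher output so the
# capturing user travels with the data. Layout is illustrative.
outdir="$(mktemp -d)"   # would be the fetcher's capture location
meta="$outdir/collection-metadata"
{
  printf 'user=%s\n' "$(whoami)"
  printf 'host=%s\n' "$(hostname)"
  printf 'utc=%s\n'  "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
} > "$meta"
cat "$meta"
```

If a Datastore consumer later wanted this, the key=value form is trivially parseable without committing to a schema up front.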
Additional context
None.
Is your enhancement request related to a problem? Please describe.
The nmdb-import-ipconfig tool should be able to process at least some information from a bare ipconfig command. It currently supports output when the /all or /allcompartments /all flags are given, but not when no arguments are given.
Describe the solution you'd like
The tool supports the bare ipconfig output as well as the other formats.
Describe alternatives you've considered
None.
Additional context
None.