
open-nti's Introduction


OpenNTI

OpenNTI is a container packaged with all tools needed to collect and visualize time series data from network devices. Data can be collected from different sources:

  • Data Collection Agent: collects data from devices using CLI/shell or Netconf
  • Data Streaming Collector: takes all data streamed by Juniper devices as input (JTI, analyticsd, soon OpenConfig with gRPC)
  • Statsd interface: accepts any Statsd packets

It comes pre-configured with all tools and a default dashboard: send it data, and it will graph it.

Thanks to Docker, it can run pretty much anywhere: on a server, on a laptop, or even on the device itself.

A more detailed description of the project (including a series of videos on how to use it) can be found here:

Requirements

The only requirements are docker and docker-compose installed on your Linux server or machine. Please check the Install Guide.

Documentation

The complete documentation is available here

Ask a question or report an issue?

Please open an issue on GitHub; this is the fastest way to get an answer.

Want to contribute?

Contributions are more than welcome, small or big. We love to receive contributions for Parsers or Dashboards that you might have created.
If you are planning a big change, please start a discussion first to make sure we'll be able to merge it.

Contributors

Current

Former

Tools used

  • fluentd
  • influxdb
  • telegraf
  • grafana
  • nginx
  • pyez

open-nti's People

Contributors

3fr61n, ashwinak, brahmastra2016, dgarros, itomic-jnpr, jjess, jmizquierdo, jorgebonilla, karlnewell, mpergament, mstecher, psagrera


open-nti's Issues

Unsupported sensor : jnpr_firewall_ext

when I added Firewall sensor I started seeing

2016-11-30 06:44:09 -0500 fluent.warn: {"message":"Unsupported sensor : jnpr_firewall_ext"}
2016-11-30 06:44:10 -0500 [warn]: Unsupported sensor : jnpr_firewall_ext
2016-11-30 06:44:10 -0500 fluent.warn: {"message":"Unsupported sensor : jnpr_firewall_ext"}
2016-11-30 06:44:10 -0500 [warn]: Unsupported sensor : jnpr_firewall_ext
2016-11-30 06:44:10 -0500 [warn]: Unsupported sensor : jnpr_firewall_ext
2016-11-30 06:44:10 -0500 fluent.warn: {"message":"Unsupported sensor : jnpr_firewall_ext"}
2016-11-30 06:44:10 -0500 fluent.warn: {"message":"Unsupported sensor : jnpr_firewall_ext"}
2016-11-30 06:44:11 -0500 [warn]: Unsupported sensor : jnpr_firewall_ext
2016-11-30 06:44:11 -0500 fluent.warn: {"message":"Unsupported sensor : jnpr_firewall_ext"}
2016-11-30 06:44:11 -0500 [warn]: Unsupported sensor : jnpr_firewall_ext
2016-11-30 06:44:11 -0500 [warn]: Unsupported sensor : jnpr_firewall_ext
2016-11-30 06:44:11 -0500 fluent.warn: {"message":"Unsupported sensor : jnpr_firewall_ext"}
2016-11-30 06:44:11 -0500 fluent.warn: {"message":"Unsupported sensor : jnpr_firewall_ext"}

https://github.com/JNPRAutomate/fluent-plugin-juniper-telemetry

fluent-plugin-juniper-telemetry does decode it correctly, but I am unable to configure it to send data to InfluxDB.

Any suggestion on how to fix this?

"show-subscriber-summary/Port" parsers do not have correct RegEx

File names affected:
open-nti > data > junos_parsers > show-subscriber-summary.parser.yaml
open-nti > data > junos_parsers > show-subscriber-summary-port.parser.yaml

The correct syntax is "subscribers" (plural), while the current command files use "subscriber" (singular). Please change all references of "subscriber" to "subscribers".

Example:
Incorrect Regex: regex-command: show\s+subscriber\s+summary\s+|\s+display\s+xml
Incorrect: variable-name: $host.bng.subscriber.summary.session-state-active
(Missing last "s" in subscribers)

Correct regex: regex-command: show\s+subscribers\s+summary\s+|\s+display\s+xml
Correct: variable-name: $host.bng.subscribers.summary.session-state-active
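The corrected pattern can be checked quickly with Python's `re` module. This is a sketch that assumes the parser matches `regex-command` against the command string with `re.search()` and that the pipe in the pattern is meant literally (so it is escaped here):

```python
import re

# Corrected pattern from the issue, with the pipe escaped so it is
# matched literally rather than treated as an alternation.
pattern = r"show\s+subscribers\s+summary\s+\|\s+display\s+xml"

# The plural form matches; the old singular form does not.
assert re.search(pattern, "show subscribers summary | display xml")
assert re.search(pattern, "show subscriber summary | display xml") is None
```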

Happy coding!

GUI/Dashboards automation and improvements

Lately, it has become more and more obvious to me that we need to improve the graphical user interface of OpenNTI. I'm opening this thread to share some ideas I've been thinking about and hopefully start a discussion on this topic. Also, a major version of Grafana has just been released, and it could change a lot of things.

From my point of view the main objectives are :

  • Make it easier to add a new graph to an existing dashboard (needed for new sensors or parsers)
  • Make it easier to create a personalized dashboard
  • Eventually support different types of databases

My initial thought was to leverage Jinja2 templates and Ansible to create a library of graphs that can easily be grouped into a dashboard.
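The library-of-graphs idea can be sketched with templated panel definitions. The sketch below uses only the stdlib `string.Template` as a stand-in for Jinja2, and the panel fields are illustrative, not a complete Grafana panel definition:

```python
import json
from string import Template

# Hypothetical panel template; a real library would hold one such
# template per sensor/parser and group rendered panels into rows.
PANEL_TEMPLATE = Template(json.dumps({
    "title": "$title",
    "type": "graph",
    "targets": [{"measurement": "$measurement", "tags": {"device": "$device"}}],
}))

def render_panel(title, measurement, device):
    """Fill in the template and parse the result back into a dict."""
    return json.loads(PANEL_TEMPLATE.substitute(
        title=title, measurement=measurement, device=device))

panel = render_panel("CPU usage", "jnpr.jvision", "router1")
assert panel["targets"][0]["tags"]["device"] == "router1"
```

A dashboard would then just be a list of such rendered panels, which Ansible (or any script) could assemble and push to Grafana.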

Meanwhile, Grafana 3.0 has been released and it includes a new plugin architecture composed of "types, Apps, and Panels".

Ideas/comments are welcome on this topic

installation warning: warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.

Hi team,
when issuing "sudo ./docker.build.sh", a warning appears.
warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.

Installing on Ubuntu 14.04 Desktop vm
ubuntu@ubuntu-virtual-machine:/usr/share/gnome-connection-manager$ uname -a
Linux ubuntu-virtual-machine 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@ubuntu-virtual-machine:/usr/share/gnome-connection-manager$ docker -v
Docker version 1.10.2, build c3959b1

Searching for pycrypto>=2.4.1
Reading https://pypi.python.org/simple/pycrypto/
Best match: pycrypto 2.6.1
Downloading https://pypi.python.org/packages/source/p/pycrypto/pycrypto-2.6.1.tar.gz#md5=55a61a054aa66812daf5161a0d5d7eda
Processing pycrypto-2.6.1.tar.gz
Writing /tmp/easy_install-AYsA9B/pycrypto-2.6.1/setup.cfg
Running pycrypto-2.6.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-AYsA9B/pycrypto-2.6.1/egg-dist-tmp-sOhpcq
warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
zip_safe flag not set; analyzing archive contents...
Adding pycrypto 2.6.1 to easy-install.pth file

Installed /usr/local/lib/python2.7/dist-packages/pycrypto-2.6.1-py2.7-linux-x86_64.egg

Regards,
Lorenzo

Merge all scripts into a Makefile

Hi

The root directory of the project is becoming crowded with all the docker.*.sh and the open-nti.*.sh scripts.

I would suggest centralizing all scripts in a single script/tool that takes parameters.
I think make would do a good job, or any other alternative; we could also develop our own shell scripts.

Using Make, we could have something like this :

./docker.start.sh                >> make start
./docker.stop.sh                 >> make stop
./docker.update.sh               >> make update
./open-nti-show-cron.sh          >> make show-cron
./open-nti-start-cron.sh         >> make start-cron TAG="tag1"

This would be a very quick project; once we agree on something I can do it.
Damien

Refactor user input files to support more input formats in the configuration files

Hi
As we are looking to support more input formats like SNMP or OpenConfig, we need to extend the user configuration files (hosts.yaml, credentials.yaml, commands.yaml) to support new types of data.

Currently the logic to extract this information is embedded in the Python script open-nti.py along with the Netconf collector.
I think we should separate these two functions into:

  • One script to read all user input files and determine what needs to be done on which device, based on tags
  • One script to collect information over Netconf

As a middleman between these scripts, I would recommend using a configuration database like etcd, which provides a very powerful key-value store.
Moving forward, it will be easier to add other consumers of the configuration data, like the telegraf SNMP plugin.

Comments, ideas, concerns?
Damien
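The first script's job can be sketched in a few lines: read the user input files and decide which devices a given tag selects. The hosts structure below is illustrative, not the exact open-nti hosts.yaml schema:

```python
# Hypothetical tag index parsed from hosts.yaml: device name -> list of tags.
hosts = {
    "router1": ["core", "lab"],
    "router2": ["edge", "lab"],
    "router3": ["core"],
}

def devices_for_tag(hosts, tag):
    """Return the devices whose tag list contains the requested tag."""
    return sorted(d for d, tags in hosts.items() if tag in tags)

assert devices_for_tag(hosts, "core") == ["router1", "router3"]
assert devices_for_tag(hosts, "lab") == ["router1", "router2"]
```

With etcd in the middle, this script would write the resolved device/command pairs into the key-value store, and each collector (Netconf, SNMP via telegraf, ...) would read only the keys relevant to it.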

No entries in influxdb

I have installed open-nti on linux:

root@gvision:~/open-nti# cat /etc/issue
Ubuntu 14.04.5 LTS \n \l

  • I believe my install is healthy
  • time sync is verified on router and server
  • UDP data is streaming
  • I see UDP data in my docker

root@a3ffef65766a:/# tcpdump -i eth0 -n dst port 50000 | more
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
09:59:59.907177 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 3034
09:59:59.908863 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 3048
09:59:59.910632 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 3042

  • Grafana GUI is accessible but with no data
  • influxdb "show measurements" with juniper db shows no data

Any clues where I can look next?

Create graph & row for JTI firewall filter

We need to create a graph and row under data_streaming_collector for the JTI firewall filter.
Structure below as a reminder:

2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"foo_filter_output-et-19/0/3.1-o","filter_timestamp":1477327298,"filter_counter_name":"c300-et-19/0/3.1-o","type":"filter_counter.packets","value":121340}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"foo_filter_output-et-19/0/3.1-o","filter_timestamp":1477327298,"filter_counter_name":"c300-et-19/0/3.1-o","type":"filter_counter.bytes","value":6795040}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"match-dscp","filter_timestamp":1477327298,"type":"memory_usage.HEAP","value":3464}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"BACKBONE-SAMPLE-V6","filter_timestamp":1477327298,"type":"memory_usage.HEAP","value":1496}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"LOOPBACK-V6","filter_timestamp":1477327316,"type":"memory_usage.HEAP","value":54280}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"LOOPBACK-V6","filter_timestamp":1477327316,"type":"filter_counter.packets","value":86392,"filter_counter_name":"DENIALS-LO-V4"}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"LOOPBACK-V6","filter_timestamp":1477327316,"type":"filter_counter.bytes","value":6220224,"filter_counter_name":"DENIALS-LO-V4"}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"__default_bpdu_filter__","filter_timestamp":1477402764,"type":"memory_usage.HEAP","value":2424}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"chash-test","filter_timestamp":1477402935,"type":"memory_usage.HEAP","value":9856}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"__default_arp_policer__","filter_timestamp":1477327298,"type":"memory_usage.HEAP","value":1584}
2016-11-03 00:43:10 +0000 jnpr.test: {"device":"BAR-re0:10.17.0.10","filter_name":"__flowspec_default_inet__","filter_timestamp":1478106192,"type":"memory_usage.HEAP","value":707600}
2016-11-03 13:25:20 +0000 fluent.debug: {"message":"Extract sensor data from BAR-re0:10.17.0.10 with output structured"}
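As a sketch of what each record carries, the "type" field above could be split into an InfluxDB measurement and field name, with the remaining keys as tags. This mapping is an assumption for illustration, not the fluentd plugin's actual logic:

```python
import json

# One of the fluentd records from the structure above.
record = json.loads(
    '{"device":"BAR-re0:10.17.0.10","filter_name":"LOOPBACK-V6",'
    '"filter_timestamp":1477327316,"type":"filter_counter.packets",'
    '"value":86392,"filter_counter_name":"DENIALS-LO-V4"}'
)

# Split "filter_counter.packets" into a measurement and a field name;
# everything except "type" and "value" becomes a tag.
measurement, field = record["type"].split(".", 1)
tags = {k: v for k, v in record.items() if k not in ("type", "value")}

assert measurement == "filter_counter"
assert field == "packets"
assert tags["filter_counter_name"] == "DENIALS-LO-V4"
```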

'make stop' not stopping containers

From the documentation (https://github.com/Juniper/open-nti/blob/master/docs/troubleshoot.rst), containers can be stopped with 'make stop'.

Here is the output of it on my system:

root@xxxx:/root/open-nti# make stop
echo "Use docker compose file : docker-compose.yml"
Use docker compose file : docker-compose.yml
docker-compose -f docker-compose.yml down
No such command: down

Commands:
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pulls service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
unpause Unpause services
up Create and start containers
migrate-to-labels Recreate containers to add labels
version Show the Docker-Compose version information
Makefile:35: recipe for target 'stop' failed
make: *** [stop] Error 1

installation warning: no previously-included files matching '*.pyc' found anywhere in distribution

Hi team,
when issuing "sudo ./docker.build.sh", a warning appears.
warning: no previously-included files matching '*.pyc' found anywhere in distribution
zip_safe flag not set; analyzing archive contents...
ply.lex: module references file
ply.lex: module MAY be using inspect.getsourcefile
ply.yacc: module references file
ply.yacc: module MAY be using inspect.getsourcefile
ply.yacc: module MAY be using inspect.stack
ply.ygen: module references file

Installing on Ubuntu 14.04 Desktop vm
ubuntu@ubuntu-virtual-machine:/usr/share/gnome-connection-manager$ uname -a
Linux ubuntu-virtual-machine 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@ubuntu-virtual-machine:/usr/share/gnome-connection-manager$ docker -v
Docker version 1.10.2, build c3959b1

Searching for ply
Reading https://pypi.python.org/simple/ply/
Best match: ply 3.8
Downloading https://pypi.python.org/packages/source/p/ply/ply-3.8.tar.gz#md5=94726411496c52c87c2b9429b12d5c50
Processing ply-3.8.tar.gz
Writing /tmp/easy_install-czFJXK/ply-3.8/setup.cfg
Running ply-3.8/setup.py -q bdist_egg --dist-dir /tmp/easy_install-czFJXK/ply-3.8/egg-dist-tmp-n4TOD8
warning: no previously-included files matching '*.pyc' found anywhere in distribution
zip_safe flag not set; analyzing archive contents...
ply.lex: module references file
ply.lex: module MAY be using inspect.getsourcefile
ply.yacc: module references file
ply.yacc: module MAY be using inspect.getsourcefile
ply.yacc: module MAY be using inspect.stack
ply.ygen: module references file
Adding ply 3.8 to easy-install.pth file

Installed /usr/local/lib/python2.7/dist-packages/ply-3.8-py2.7.egg

Not able to parse a syslog, block the queue

Apparently there is a problem when a syslog message cannot be parsed correctly or doesn't have the right keys.
As a result, the output buffer gets stuck:

2016-02-06 02:26:20 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2016-02-06 02:41:53 +0000 error_class="InfluxDB::Error" error="{\"error\":\"partial write:\\nunable to parse 'events,pid=1808,message=task_reconfigure\\\\ reinitializing\\\\ done,device=QFX5100,daemon=analyticsd,event= priority=\\\"info\\\",facility=\\\"daemon\\\" 1454724520': missing tag value\\nunable to parse 'events,message=major\\\\ =\\\\ 34\\\\,\\\\ minor\\\\ =\\\\ 2\\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': invalid tag format\\nunable to parse 'events,message=setsocketopts:\\\\ setting\\\\ SO_RTBL_INDEX\\\\ to\\\\ 0,device=QFX5100,daemon=/kernel,event= priority=\\\"debug\\\",facility=\\\"kern\\\" 1454724521': missing tag value\\nunable to parse 'events,pid=1773,message=task_reconfigure\\\\ reinitializing\\\\ done,device=QFX5100,daemon=l2cpd,event= priority=\\\"info\\\",facility=\\\"daemon\\\" 1454724520': missing tag value\\nunable to parse 'events,message=clksync_phy_driver_enable_tc\\\\ \\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': missing tag value\\nunable to parse 'events,pid=7844,message=auto-snapshot\\\\ is\\\\ not\\\\ configured,device=QFX5100,daemon=file,event= priority=\\\"debug\\\",facility=\\\"daemon\\\" 1454724520': missing tag value\\nunable to parse 'events,message=pic\\\\ 0x214bb8f8\\\\ enable\\\\ No:\\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': missing tag value\\nunable to parse 'events,message=total\\\\ ports\\\\ 72\\\\,\\\\ we\\\\ are\\\\ DISABLING\\\\ transparent\\\\ clock\\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': missing tag value\"}\n" plugin_id="object:c17410"
  2016-02-06 02:26:20 +0000 [warn]: suppressed same stacktrace
2016-02-06 02:26:20 +0000 fluent.warn: {"next_retry":"2016-02-06 02:41:53 +0000","error_class":"InfluxDB::Error","error":"{\"error\":\"partial write:\\nunable to parse 'events,pid=1808,message=task_reconfigure\\\\ reinitializing\\\\ done,device=QFX5100,daemon=analyticsd,event= priority=\\\"info\\\",facility=\\\"daemon\\\" 1454724520': missing tag value\\nunable to parse 'events,message=major\\\\ =\\\\ 34\\\\,\\\\ minor\\\\ =\\\\ 2\\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': invalid tag format\\nunable to parse 'events,message=setsocketopts:\\\\ setting\\\\ SO_RTBL_INDEX\\\\ to\\\\ 0,device=QFX5100,daemon=/kernel,event= priority=\\\"debug\\\",facility=\\\"kern\\\" 1454724521': missing tag value\\nunable to parse 'events,pid=1773,message=task_reconfigure\\\\ reinitializing\\\\ done,device=QFX5100,daemon=l2cpd,event= priority=\\\"info\\\",facility=\\\"daemon\\\" 1454724520': missing tag value\\nunable to parse 'events,message=clksync_phy_driver_enable_tc\\\\ \\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': missing tag value\\nunable to parse 'events,pid=7844,message=auto-snapshot\\\\ is\\\\ not\\\\ configured,device=QFX5100,daemon=file,event= priority=\\\"debug\\\",facility=\\\"daemon\\\" 1454724520': missing tag value\\nunable to parse 'events,message=pic\\\\ 0x214bb8f8\\\\ enable\\\\ No:\\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': missing tag value\\nunable to parse 'events,message=total\\\\ ports\\\\ 72\\\\,\\\\ we\\\\ are\\\\ DISABLING\\\\ transparent\\\\ clock\\\\ \\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\"debug\\\",facility=\\\"local4\\\" 1454724521': missing tag value\"}\n","plugin_id":"object:c17410","message":"temporarily failed to flush the buffer. 
next_retry=2016-02-06 02:41:53 +0000 error_class=\"InfluxDB::Error\" error=\"{\\\"error\\\":\\\"partial write:\\\\nunable to parse 'events,pid=1808,message=task_reconfigure\\\\\\\\ reinitializing\\\\\\\\ done,device=QFX5100,daemon=analyticsd,event= priority=\\\\\\\"info\\\\\\\",facility=\\\\\\\"daemon\\\\\\\" 1454724520': missing tag value\\\\nunable to parse 'events,message=major\\\\\\\\ =\\\\\\\\ 34\\\\\\\\,\\\\\\\\ minor\\\\\\\\ =\\\\\\\\ 2\\\\\\\\ \\\\\\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\\\\\"debug\\\\\\\",facility=\\\\\\\"local4\\\\\\\" 1454724521': invalid tag format\\\\nunable to parse 'events,message=setsocketopts:\\\\\\\\ setting\\\\\\\\ SO_RTBL_INDEX\\\\\\\\ to\\\\\\\\ 0,device=QFX5100,daemon=/kernel,event= priority=\\\\\\\"debug\\\\\\\",facility=\\\\\\\"kern\\\\\\\" 1454724521': missing tag value\\\\nunable to parse 'events,pid=1773,message=task_reconfigure\\\\\\\\ reinitializing\\\\\\\\ done,device=QFX5100,daemon=l2cpd,event= priority=\\\\\\\"info\\\\\\\",facility=\\\\\\\"daemon\\\\\\\" 1454724520': missing tag value\\\\nunable to parse 'events,message=clksync_phy_driver_enable_tc\\\\\\\\ \\\\\\\\ \\\\\\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\\\\\"debug\\\\\\\",facility=\\\\\\\"local4\\\\\\\" 1454724521': missing tag value\\\\nunable to parse 'events,pid=7844,message=auto-snapshot\\\\\\\\ is\\\\\\\\ not\\\\\\\\ configured,device=QFX5100,daemon=file,event= priority=\\\\\\\"debug\\\\\\\",facility=\\\\\\\"daemon\\\\\\\" 1454724520': missing tag value\\\\nunable to parse 'events,message=pic\\\\\\\\ 0x214bb8f8\\\\\\\\ enable\\\\\\\\ No:\\\\\\\\ \\\\\\\\ ,device=QFX5100,daemon=fpc0,event= priority=\\\\\\\"debug\\\\\\\",facility=\\\\\\\"local4\\\\\\\" 1454724521': missing tag value\\\\nunable to parse 'events,message=total\\\\\\\\ ports\\\\\\\\ 72\\\\\\\\,\\\\\\\\ we\\\\\\\\ are\\\\\\\\ DISABLING\\\\\\\\ transparent\\\\\\\\ clock\\\\\\\\ \\\\\\\\ ,device=QFX5100,daemon=fpc0,event= 
priority=\\\\\\\"debug\\\\\\\",facility=\\\\\\\"local4\\\\\\\" 1454724521': missing tag value\\\"}\\n\" plugin_id=\"object:c17410\""}
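The "missing tag value" errors above come from empty tag values such as `event=` in the generated line protocol. A defensive sketch (my own illustration, not the actual fluentd plugin code) would drop empty tags and escape the characters that are special in InfluxDB tag sets (comma, space, equals) before writing:

```python
def tag_set(tags):
    """Build an InfluxDB line-protocol tag set, skipping empty values."""
    def esc(s):
        # Escape the characters that terminate a tag key/value.
        return s.replace(",", r"\,").replace(" ", r"\ ").replace("=", r"\=")
    return ",".join(
        f"{esc(k)}={esc(v)}" for k, v in sorted(tags.items()) if v
    )

# "event" is empty, so it is dropped instead of producing "event=".
tags = {"device": "QFX5100", "daemon": "analyticsd", "event": ""}
assert tag_set(tags) == "daemon=analyticsd,device=QFX5100"
```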

grafana "Data Streaming Collector Dashboard"

Hello,
With the "default" "Data Streaming Collector Dashboard" (the one that comes with this container), can we graph data from two QFX?
I am unable to get this working with two QFX.
It is OK with one QFX, but not with two, even though I am receiving data from both (confirmed with tcpdump).
I get "no datapoints" in my Grafana graphs for the second QFX.
When I stop the working QFX, I am then able to graph the other one (after some time).
So I guess this is expected? Do we need to generate other dashboards (one per device)?
Many thanks!
Khelil

Decouple open-nti.py from the database

Hi
We've been talking about it for a while;
I think it would be great to decouple the netconf collector (open-nti.py) from InfluxDB.
The idea would be to send data to Fluentd or Telegraf (format to be defined) to leverage their plugin architecture. Both have the ability to execute an external script periodically, which allows plugins in any language.

Anil has started to look into this, and we realized that with the current implementation it's going to be challenging because of how the delta computation is done today.
Is this part still mandatory?
Do we know if InfluxDB has improved on this point? It would be great to get rid of this part if possible.

Would love to get everyone's feedback on that.
Damien

Basic install problem

Why would this be happening?

cbarth-mbp:open-nti cbarth$ ./docker.start.sh
Use docker compose file : docker-compose.yml
WARNING: The LOCAL_PORT_EVENT variable is not set. Defaulting to a blank string.
WARNING: The IMAGE_NAME variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_PORT_STATSD variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_PORT_NGINX variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_PORT_GRAFANA variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_PORT_INFLUXDB variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_PORT_INFLUXDB_API variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_DIR_DASHBOARD variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_DIR_DATA variable is not set. Defaulting to a blank string.
WARNING: The CONTAINER_NAME variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_PORT_JTI variable is not set. Defaulting to a blank string.
WARNING: The LOCAL_PORT_ANALYTICSD variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
opennti.ports is invalid: Invalid port ":3000", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
input-jti.ports is invalid: Invalid port ":50000/udp", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
input-jti.ports is invalid: Invalid port ":50020/udp", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
input-syslog.ports is invalid: Invalid port ":6000/udp", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
opennti.ports is invalid: Invalid port ":80", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
opennti.ports is invalid: Invalid port ":8083", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
opennti.ports is invalid: Invalid port ":8086", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
opennti.ports is invalid: Invalid port ":8125/udp", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
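The WARNING lines mean docker-compose could not resolve its environment variables, so the port mappings in docker-compose.yml expanded to invalid values like ":3000"; the usual fix is to launch through the provided start script or Makefile, which export them. A small pre-flight check (the helper itself is hypothetical; the variable names are copied from the warnings above) makes the failure explicit:

```python
import os

# Subset of the variables docker-compose.yml expects, taken from the warnings.
REQUIRED = [
    "IMAGE_NAME", "CONTAINER_NAME", "LOCAL_PORT_GRAFANA",
    "LOCAL_PORT_INFLUXDB", "LOCAL_PORT_JTI",
]

def missing_vars(env, required):
    """Return the required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

# Run against os.environ before invoking docker-compose.
missing = missing_vars(os.environ, REQUIRED)
if missing:
    print("Set these before running docker-compose:", ", ".join(missing))
```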

Grafana Interface Error Stats graph not pulling correct statistic

The Grafana graph for Interface Error Statistics seems not to be pulling the correct data; it looks to be querying egress_queue_info.packets.

Looking at data gathered from a particular interface, I'm not sure which field would be best to pull. There is "ingress_errors_if_in_discards", but no egress equivalent.

Getting error when running 'docker.start.sh'

Not able to start open-nti containers using 'docker.start.sh'

root@ubuntu:~/open-nti# ./docker.start.sh
Use docker compose file : docker-compose.yml
WARNING: The IMAGE_TAG variable is not set. Defaulting to a blank string.
Pulling opennti (juniper/open-nti:latest)...
latest: Pulling from juniper/open-nti
Digest: sha256:1bc00e223477fce1bdb47c4206ea6bf8c5ea01ee4adca2191043ca5b8891acfb
Status: Image is up to date for juniper/open-nti:latest
Pulling input-syslog (juniper/open-nti-input-syslog:latest)...
latest: Pulling from juniper/open-nti-input-syslog
Digest: sha256:69603d75bf645267dbe48e6f17fe6c5010cf5ae53c5ad9ada0d3b0a4c1c15519
Status: Image is up to date for juniper/open-nti-input-syslog:latest
Pulling input-jti (juniper/open-nti-input-jti:latest)...
latest: Pulling from juniper/open-nti-input-jti
Digest: sha256:9eb80f329e25a6cdc3767c50329e0df05edcabc3c3eae8288a11fb3b518eb3c4
Status: Image is up to date for juniper/open-nti-input-jti:latest
Pulling opennti (juniper/open-nti:latest)...
latest: Pulling from juniper/open-nti
Digest: sha256:1bc00e223477fce1bdb47c4206ea6bf8c5ea01ee4adca2191043ca5b8891acfb
Status: Image is up to date for juniper/open-nti:latest

ERROR: for opennti Image 'juniper/open-nti:' not found
Traceback (most recent call last):
File "", line 3, in
File "compose/cli/main.py", line 61, in main
File "compose/cli/main.py", line 113, in perform_command
File "compose/cli/main.py", line 835, in up
File "compose/project.py", line 400, in up
File "compose/parallel.py", line 64, in parallel_execute
compose.service.NoSuchImageError: Image 'juniper/open-nti:' not found

Monitor BGP FlowSpec with OpenNTI

Someone suggested using OpenNTI to monitor BGP FlowSpec on Junos devices.
It looks like a great idea, and thanks to Gonzalo, all instructions are already available:

From my point of view, there are 2 options to add support for BGP Flow Spec

  • Create new parsers
  • Add support for Table/View and reuse Gonzalo examples (I created an issue for that #49)

Ideas ? Comments ? Suggestions ?

Build docker container [Error]

Hi

Step 13 fails:

Downloading/unpacking git+https://github.com/Juniper/py-junos-eznc.git
Cloning https://github.com/Juniper/py-junos-eznc.git to /tmp/pip-QYpGsX-build
Running setup.py (path:/tmp/pip-QYpGsX-build/setup.py) egg_info for package from git+https://github.com/Juniper/py-junos-eznc.git
error in junos-eznc setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers
Complete output from command python setup.py egg_info:
error in junos-eznc setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers


Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-QYpGsX-build
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c pip install influxdb && pip install xmltodict && pip install pexpect && easy_install pysnmp && pip install lxml && pip install python-crontab && pip install git+https://github.com/Juniper/py-junos-eznc.git' returned a non-zero code: 1

root@template:~/open-nti# ^C

Logical interface sensor doesn't work any more

I upgraded open-nti to the latest version, but the logical interface sensor doesn't work any more. It used to show data in the GUI, but with the latest open-nti no data shows in the Grafana GUI. tcpdump shows that the packets from the routers reach the open-nti server. If I change the sensor to the physical interface level, the Grafana GUI does show data.

I tried the methods from issue #106 (No entries in influxdb), but they don't seem to work, as there is no devel tag (only master and consul); changing it to consul in the Makefile didn't make the logical interfaces work either.

Software version:
16.1R2.11 (logical interfaces usage used to work with older version of open-nti)
16.2R1.6 (latest version of Junos software, but no luck)

Please help me out.

Many thanks

Fluentd is not sending JTI packets to InfluxDB

Hello team,

I have configured open-NTI without using Docker and I am experiencing the following issue with the JTI packets.

The JTI packets are arriving at my server but fluentd is not sending them to the InfluxDB.

I have to note that I did change the default port for JTI to 2000.

I have uncommented lines 100-103 of the fluent.conf and restarted it. But I still cannot see any additional information being printed in the /var/log/fluentd.log.

Information from /var/log/fluentd.log

root@automation-srv:~# tail /var/log/fluentd.log
  </match>
</ROOT>
2016-03-30 05:51:39 +1100 [info]: plugin/in_forward.rb:81:listen: listening fluent socket on 0.0.0.0:24224
2016-03-30 05:51:39 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:2000
2016-03-30 05:51:39 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:50010
2016-03-30 05:51:39 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:50020
2016-03-30 05:51:39 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:50021
2016-03-30 05:51:39 +1100 [info]: plugin/in_syslog.rb:176:listen: listening syslog socket on 0.0.0.0:6000 with udp
2016-03-30 05:51:39 +1100 [debug]: plugin/in_monitor_agent.rb:235:start: listening monitoring http server on http://0.0.0.0:24220/api/plugins
2016-03-30 05:51:39 +1100 [info]: plugin/in_debug_agent.rb:49:start: listening dRuby uri="druby://127.0.0.1:24230" object="Engine"
root@automation-srv:~#

startup file for fluentd

root@automation-srv:~# cat /etc/service/fluentd/run

#!/bin/sh
# `/sbin/setuser memcache` runs the given command as the user `memcache`.
# If you omit that part, the command will be run as root.
# fluentd -c /fluent/fluent.conf -vv
fluentd -c /fluent/fluent.conf -vv >>/var/log/fluentd.log 2>&1
root@automation-srv:~#

processes related to fluentd

root@automation-srv:# ps ax | grep fluent
25546 pts/4 S 0:00 /bin/sh /etc/service/fluentd/run
25547 pts/4 Sl 0:00 /usr/bin/ruby2.2 /usr/local/bin/fluentd -c /fluent/fluent.conf -vv
25549 pts/4 Sl 0:00 /usr/bin/ruby2.2 /usr/local/bin/fluentd -c /fluent/fluent.conf -vv
25727 pts/4 S+ 0:00 grep fluent
root@automation-srv:
#

services port name register

root@automation-srv:~# cat /etc/services | grep j-vision
j-vision    2000/tcp            # j-Vision
j-vision    2000/udp

netstat

root@automation-srv:~# netstat -a | grep j-vision
udp        0      0 0.0.0.0:j-vision        0.0.0.0:*

lsof

root@automation-srv:~# lsof -i | grep j-vision
fluentd   25549        root   12u  IPv4 3503058      0t0  UDP *:j-vision

tcpdump on eth2 (where the JTI packets are arriving)

root@automation-srv:~# tcpdump -n -i eth2 port 2000 -s 1500 -xx -XX
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 1500 bytes
06:07:37.956842 IP 10.254.254.100.1000 > 10.1.1.100.2000: UDP, length 104
    0x0000:  0050 5685 3ce3 0050 5685 6656 0800 45c0  .PV.<..PV.fV..E.
    0x0010:  0084 0504 0000 fe11 a1dd 0afe fe64 0a01  .............d..
    0x0020:  0164 03e8 07d0 0070 0000 0a0e 3130 2e32  .d.....p....10.2
    0x0030:  3534 2e32 3534 2e31 3030 1000 2219 726f  54.254.100..".ro
    0x0040:  7574 696e 672d 656e 6769 6e65 2d63 7075  uting-engine-cpu
    0x0050:  2d6d 656d 6f72 7928 e38b 0b30 f6a4 ebb7  -memory(...0....
    0x0060:  05aa 062e e2a4 012a 0a28 0a15 0a06 4b65  .......*.(....Ke
    0x0070:  726e 656c 1080 8080 8008 18d8 e094 4720  rnel..........G.
    0x0080:  060a 0f0a 0344 4d41 1080 8080 8002 1800  .....DMA........
    0x0090:  2000                                     ..
06:07:37.970028 IP 10.254.254.100.1000 > 10.1.1.100.2000: UDP, length 116
    0x0000:  0050 5685 3ce3 0050 5685 6656 0800 45c0  .PV.<..PV.fV..E.
    0x0010:  0090 0584 0000 fe11 a151 0afe fe64 0a01  .........Q...d..
    0x0020:  0164 03e8 07d0 007c 0000 0a0e 3130 2e32  .d.....|....10.2
    0x0030:  3534 2e32 3534 2e31 3030 1000 2217 6c6f  54.254.100..".lo
    0x0040:  6769 6361 6c2d 696e 7465 7266 6163 652d  gical-interface-
    0x0050:  7374 6174 7328 e38b 0b30 f6a4 ebb7 05aa  stats(...0......
    0x0060:  063c e2a4 0138 3a36 0a34 0a0a 6765 2d30  .<...8:6.4..ge-0
    0x0070:  2f30 2f30 2e30 1085 91fb b505 188f 042a  /0/0.0.........*
    0x0080:  0c08 b493 3b10 fea0 c421 20ce 0332 0908  ....;....!...2..
    0x0090:  edfe 1210 81bd b814 3a04 0a02 7570       ........:...up
06:07:58.383167 IP 10.254.254.100.1000 > 10.1.1.100.2000: UDP, length 104
    0x0000:  0050 5685 3ce3 0050 5685 6656 0800 45c0  .PV.<..PV.fV..E.
    0x0010:  0084 0505 0000 fe11 a1dc 0afe fe64 0a01  .............d..
    0x0020:  0164 03e8 07d0 0070 0000 0a0e 3130 2e32  .d.....p....10.2
    0x0030:  3534 2e32 3534 2e31 3030 1000 2219 726f  54.254.100..".ro
    0x0040:  7574 696e 672d 656e 6769 6e65 2d63 7075  uting-engine-cpu
    0x0050:  2d6d 656d 6f72 7928 e48b 0b30 8ba5 ebb7  -memory(...0....
    0x0060:  05aa 062e e2a4 012a 0a28 0a15 0a06 4b65  .......*.(....Ke
    0x0070:  726e 656c 1080 8080 8008 18d8 e094 4720  rnel..........G.
    0x0080:  060a 0f0a 0344 4d41 1080 8080 8002 1800  .....DMA........
    0x0090:  2000                                     ..
06:07:58.393869 IP 10.254.254.100.1000 > 10.1.1.100.2000: UDP, length 116
    0x0000:  0050 5685 3ce3 0050 5685 6656 0800 45c0  .PV.<..PV.fV..E.
    0x0010:  0090 0585 0000 fe11 a150 0afe fe64 0a01  .........P...d..
    0x0020:  0164 03e8 07d0 007c 0000 0a0e 3130 2e32  .d.....|....10.2
    0x0030:  3534 2e32 3534 2e31 3030 1000 2217 6c6f  54.254.100..".lo
    0x0040:  6769 6361 6c2d 696e 7465 7266 6163 652d  gical-interface-
    0x0050:  7374 6174 7328 e48b 0b30 8ba5 ebb7 05aa  stats(...0......
    0x0060:  063c e2a4 0138 3a36 0a34 0a0a 6765 2d30  .<...8:6.4..ge-0
    0x0070:  2f30 2f30 2e30 1085 91fb b505 188f 042a  /0/0.0.........*
    0x0080:  0c08 b493 3b10 fea0 c421 20ce 0332 0908  ....;....!...2..
    0x0090:  effe 1210 95bf b814 3a04 0a02 7570       ........:...up
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
root@automation-srv:~#

on InfluxDB, the measurements do exist, but the result of the query SELECT * FROM "" is empty

checking the fluentd documentation, I have changed the stdout/stderr redirection to the -o directive

root@automation-srv:~# ps ax | grep fluent
25949 pts/4    S      0:00 /bin/sh /etc/service/fluentd/run
25950 pts/4    Sl     0:00 /usr/bin/ruby2.2 /usr/local/bin/fluentd -c /fluent/fluent.conf -vv -o /var/log/fluentd.log
25952 pts/4    Sl     0:00 /usr/bin/ruby2.2 /usr/local/bin/fluentd -c /fluent/fluent.conf -vv -o /var/log/fluentd.log
25969 pts/4    S+     0:00 grep fluent
root@automation-srv:~#

but I still don't see any additional output in the fluentd logs

root@automation-srv:~# tail /var/log/fluentd.log
  </match>
</ROOT>
2016-03-30 06:12:36 +1100 [info]: plugin/in_forward.rb:81:listen: listening fluent socket on 0.0.0.0:24224
2016-03-30 06:12:36 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:2000
2016-03-30 06:12:36 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:50010
2016-03-30 06:12:36 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:50020
2016-03-30 06:12:36 +1100 [info]: plugin/in_udp.rb:27:listen: listening udp socket on 0.0.0.0:50021
2016-03-30 06:12:36 +1100 [info]: plugin/in_syslog.rb:176:listen: listening syslog socket on 0.0.0.0:6000 with udp
2016-03-30 06:12:36 +1100 [debug]: plugin/in_monitor_agent.rb:235:start: listening monitoring http server on http://0.0.0.0:24220/api/plugins
2016-03-30 06:12:36 +1100 [info]: plugin/in_debug_agent.rb:49:start: listening dRuby uri="druby://127.0.0.1:24230" object="Engine"
root@automation-srv:~#

looking at the InfluxDB logs, I can't see any INSERT query

if required, I can provide remote access to my server for further troubleshooting

fluent.conf.txt
fluentd.log.txt
influxdb.log.zip

I have attached the fluent.conf, fluentd.log and influxdb.log of my setup.

Create Graph and Row for CPU Memory Utilization

here is the data structure for reference

2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.size","name":"Kernel","value":1878212988}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.bytes_allocated","name":"Kernel","value":1715854768}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.utilization","name":"Kernel","value":91}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.size","name":"LAN_buffer","value":67108860}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.bytes_allocated","name":"LAN_buffer","value":10721208}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.utilization","name":"LAN_buffer","value":15}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.size","name":"Blob","value":52428784}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.bytes_allocated","name":"Blob","value":0}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.utilization","name":"Blob","value":0}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.size","name":"ISSU_scratch","value":62914556}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.bytes_allocated","name":"ISSU_scratch","value":0}
2016-11-03 00:43:04 +0000 jnpr.test: {"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.utilization","name":"ISSU_scratch","value":0}
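For reference when building the Grafana panel, the utilization values above are consistent with 100 * bytes_allocated / size. A minimal sketch checking that against the sample lines (the helper function is mine for illustration, not actual open-nti code):

```python
import json

def utilization(size, bytes_allocated):
    """Integer utilization percentage, matching the streamed cpu_mem.utilization value."""
    return 100 * bytes_allocated // size

# Sample datapoints copied from the log above (the "Kernel" region).
lines = [
    '{"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.size","name":"Kernel","value":1878212988}',
    '{"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.bytes_allocated","name":"Kernel","value":1715854768}',
    '{"device":"Krypton-re0:10.17.0.10","type":"cpu_mem.utilization","name":"Kernel","value":91}',
]
by_type = {json.loads(l)["type"]: json.loads(l)["value"] for l in lines}
print(utilization(by_type["cpu_mem.size"], by_type["cpu_mem.bytes_allocated"]))  # → 91
```

So a panel graphing cpu_mem.utilization grouped by device and name should reproduce these percentages directly.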

InfluxDB Tags

Is there a way to add InfluxDB Tags to datapoints from the parser yaml?
Besides the existing device, key, ... tags I'd like to add more tags to datapoints.

Found this: (show-route-summary.parser.yaml)

    loop:
        key: ./table-name
        sub-matches:
        -
            xpath: ./destination-count
            variable-name:  $host.route-table.summary.$key.destinations
            tags:
            -
                table: $key
                test: delete_me
        -

But unfortunately these tags don't appear in the InfluxDB.
Is there a way to add InfluxDB tags?
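For context on what the requested feature would produce: each datapoint written to InfluxDB carries its tags in the line protocol. A minimal sketch of building such a point, with hypothetical measurement and tag names (this is not the open-nti parser code, and real line protocol additionally requires escaping of spaces and commas):

```python
def line_protocol(measurement, tags, fields, timestamp=None):
    """Build a simplified InfluxDB line-protocol string: measurement,tag=v field=v [ts]."""
    tag_part = ",".join("{}={}".format(k, v) for k, v in sorted(tags.items()))
    field_part = ",".join("{}={}".format(k, v) for k, v in sorted(fields.items()))
    point = "{},{} {}".format(measurement, tag_part, field_part)
    if timestamp is not None:
        point += " {}".format(timestamp)
    return point

# The extra "table" tag from the parser yaml would end up alongside "device":
print(line_protocol("jnpr.collector",
                    {"device": "host1", "table": "inet.0"},
                    {"value": 42}))
# → jnpr.collector,device=host1,table=inet.0 value=42
```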

EX4300 gauge appears to be off by a factor of 10.

Hi,

In the lab I have an MX104---EX4600---EX4300 in a string and am collecting analytics from all 3 devices (they should give me roughly the same values). I have noticed that the bps gauge for the EX4300 is out by a factor of 10. The attached file has screen caps of what I'm seeing, which hopefully is attached.

EX4300bpsguageproblem.docx

Respective show version and configuration below.

root@ex4300-01> show version

fpc0:

Hostname: ex4300-01
Model: ex4300-24p
Junos: 14.1X53-D30.3
JUNOS EX Software Suite [14.1X53-D30.3]
JUNOS FIPS mode utilities [14.1X53-D30.3]
JUNOS Crypto Software Suite [14.1X53-D30.3]
JUNOS Online Documentation [14.1X53-D30.3]
JUNOS EX 4300 Software Suite [14.1X53-D30.3]
JUNOS Web Management Platform Package [14.1X53-D30.3]
JUNOS py-base-powerpc [14.1X53-D30.3]

---------------snip

root@ex4300-01> show configuration services
analytics {
    export-profiles {
        exp-prof {
            stream-format json;
            interface {
                statistics {
                    traffic;
                    queue;
                }
            }
        }
    }
    resource-profiles {
        res-prof {
            queue-monitoring;
            traffic-monitoring;
        }
    }
    resource {
        interfaces {
            ge-0/0/0 {
                resource-profile res-prof;
            }
        }
    }
    collector {
        address 2.2.2.2 {
            port 50020 {
                transport udp {
                    export-profile exp-prof;
                }
            }
        }
    }
}

OpenNTI - Deployment fine tuning - Data Streaming Collector failing

Hi all,
we are finalizing the OpenNTI deployment & configuration in our project.
OpenNTI - Data Collection part is working properly
However, OpenNTI - Data Streaming Collector is only partially working. I have attached how it currently looks in Grafana; only some of the graphs are showing results.
We have several QFX5100 switches in a Virtual Chassis configuration. We are using Junos: 14.1X53-D35.3
I have also attached a file with our current Analytics configuration.
Could you please let us know what's incorrect in this configuration?
In addition to that, the graph titled "Interface Queue Stats" is showing an error: "InfluxDB Error Response: error parsing query: derivative aggregate requires a GROUP BY interval"

Thanks in advance, Jose

Juniper_Analytics_Config.txt

Grafana - Data Steaming Collector Dashboard.pdf
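Regarding the derivative error: in InfluxQL, derivative() over an aggregate such as mean() requires a GROUP BY time() clause. A possible shape for the panel query, where the 30s interval is illustrative rather than taken from the actual dashboard:

```sql
SELECT derivative(mean("value"), 1s)
FROM "jnpr.jvision"
WHERE "device" =~ /$host_regex$/ AND $timeFilter
GROUP BY time(30s), "device"
```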

Syslog stop working after some days

Hi

After running the server for a few days I noticed that I cannot see the annotations anymore.
I just made some commits and I cannot see them any longer. I have 21 devices and I'm sending all syslog messages to the open-nti server.

BR
Al

Chassis-routing-engine.parser key: ./slot

The existing parser assumes the device has multiple REs. With a vMX or other devices that do not support dual REs, this will not produce any data.

Is it possible to define the same command in different tags with different parsers?

no data seemingly in influxdb or displayed by Grafana

Pulled the repository this morning and ran it on an Ubuntu 14.04 server. The switch sending analytics data was a QFX5100 running 14.1X53-D15 -- I was able to verify that the QFX5100 is sending the data on UDP port 50020 and that the server is receiving it. After that I'm not sure why it fails, but below is what I was able to discern:

  • Inside the InfluxDB web GUI, looking at Measurements, the only item is "juniper.analyticsd" -- no switch interfaces or anything. From the demo videos I believe I should see the various interfaces of the switch listed.
  • Inside the container, if I run tcpdump on the InfluxDB port, I see HTTP POSTs with 204 codes, which per the InfluxDB documentation means data was successfully written.

At this point I don't know if I screwed something up (not sure how I could...cloned the repo, then ran it per documentation), or if there is a versioning issue, or what.

My best guess is either issue with Fluentd or the juniper plugin for it sending the data to Influxdb, but I really don't know anything about Fluentd or Influxdb. Couldn't see how to get any info out of Fluentd as to what it was actually doing.

Thanks,

Will
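For anyone in a similar spot, one quick check is to query InfluxDB's HTTP API directly (assuming an InfluxDB version that serves the /query endpoint on port 8086; the juniper database name is the one mentioned elsewhere on this page). A minimal sketch of building such a request URL:

```python
import urllib.parse

def influx_query_url(host, db, query):
    """Build an InfluxDB /query URL for a given database and InfluxQL query."""
    params = urllib.parse.urlencode({"db": db, "q": query})
    return "http://{}:8086/query?{}".format(host, params)

# Fetch the URL with urllib.request.urlopen() or curl and check that
# SHOW MEASUREMENTS / SELECT return actual series, not empty results.
print(influx_query_url("localhost", "juniper", "SHOW MEASUREMENTS"))
```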

Data Collection Agent - SSH Key

Hi All,

I'm trying to get the Data Collection Agent to work with SSH key authentication, but unfortunately without success. When SSHing to the devices using the key from the CLI, everything works fine.

SSH with key from CLI works

ssh -i data/user.key user@host
--- JUNOS 14.1X53-D35.3 built 2016-02-29 23:35:46 UTC
{master:0}
user@host>

credentials.yaml

juniper-qfx:
    username: 'user'
    password:
    method: key
    key_file: ./data/user.key
    tags: juniper-qfx

Open NTi host

make cron-debug TAG=juniper-qfx
[...]
[host]: Connection failed 1 time(s), retrying....
[host]: Connection failed 1 time(s), retrying....
[host]: Connection failed 1 time(s), retrying....
[host]: Connection failed 1 time(s), retrying....
[host]: Connection failed 1 time(s), retrying....
[...]

Juniper device

monitor start messages
Feb 2 12:36:20 host sshd[75999]: Did not receive identification string from
Feb 2 12:36:20 host inetd[1702]: /usr/sbin/sshd[75999]: exited, status 255
Feb 2 12:36:20 host sshd[76000]: Connection closed by 10.113.0.225 [preauth]
Feb 2 12:36:20 host inetd[1702]: /usr/sbin/sshd[76000]: exited, status 255
Feb 2 12:36:21 host sshd[76002]: Did not receive identification string from
Feb 2 12:36:21 host inetd[1702]: /usr/sbin/sshd[76002]: exited, status 255
Feb 2 12:36:22 host sshd[76003]: Connection closed by 10.113.0.225 [preauth]
Feb 2 12:36:22 host inetd[1702]: /usr/sbin/sshd[76003]: exited, status 255
Feb 2 12:36:23 host sshd[76005]: Did not receive identification string from
Feb 2 12:36:23 host inetd[1702]: /usr/sbin/sshd[76005]: exited, status 255

Am I missing something? Any help is appreciated.

Thanks, Martin

show-interfaces-media parser not working.

I think the show interfaces media parser is not working. I have tried this on two different systems to verify.

This is how the commands.yaml looks:
nti_commands:
    commands: |
        show route summary | display xml
        show interfaces media | display xml
        show pfe statistics traffic | display xml
        show snmp statistics | display xml
    tags: ex mxcontrail

I have enabled traceoptions for netconf on the device to check the incoming RPC.
Oct 26 15:44:09 [34087] Incoming: <nc:rpc xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:a6
283cd2-1913-4aa5-899b-e0d67a4952bc">show interfaces media /nc:rpc]]>]]>

In InfluxDB, SHOW MEASUREMENTS does not list any data for interfaces; it's listing other things.
I have tried creating a custom parser for interfaces but did not manage yet; a second pair of eyes may help.

Thanks!

Influx data source not being pre-configured in Grafana upon container startup

In my usage of the latest master branch, when I run either the ./docker.start.sh or the ./docker.start.persistent.sh scripts, once the container comes up there are no data sources pre-configured in Grafana. Is this to be expected?

Getting 502 errors during the CURL calls from the "grafana.init.sh" file:

messages from Nginx_error.log:
2016/06/13 13:15:59 [error] 30#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: _, request: "GET /api/datasources HTTP/1.1", upstream: "http://127.0.0.1:3000/api/datasources", host: "localhost"
2016/06/13 13:15:59 [error] 30#0: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: _, request: "POST /api/datasources HTTP/1.1", upstream: "http://127.0.0.1:3000/api/datasources", host: "localhost"

Graph not working on port 80 with interface all

In some cases, we found that graphs in Grafana are not working when selecting "interface all", but only when we access the interface on port 80.
In that case, everything was working fine on port 3000.

The issue is probably with NGINX

Migration to a modular architecture

As we are adding support for more features and input formats in OpenNTI, the size of the container keeps growing; also, it's not always ideal to rebuild everything if only one part needs to be upgraded.

Considering that all components are communicating over IP inside the container, it's very easy to break down the architecture into multiple smaller containers.

For example

  • 1 container for the database + GUI
  • 1 container for the Data Collection Engine
  • 1 container for the Data Streaming Collector
  • 1 container for Syslog
  • 1 container for Kapacitor
    As containers add little resource overhead, the amount of resources consumed globally won't increase significantly.
    With this model it will become very simple to add new types of inputs (snmp, netflow, etc.)

To keep it as simple to deploy and hide the number of containers, the idea is to use docker compose by default. It's a very easy way to create a multi-container architecture without adding complexity.
Also, if needed, it becomes possible to run multiple containers of the same type to distribute the load, for Syslog or JTI for example.

The transition should be transparent for most people as docker compose is installed by default with docker in most cases.

I started to create a standalone container for the Data Streaming Collector; it's available here:
https://github.com/Juniper/open-nti-input-jti
I'll create a branch on this project to integrate it soon.

Ideas, comments, and help are welcome.
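To illustrate the idea, a hypothetical docker-compose file for such a split could look like the sketch below. The service names and the input-syslog image are illustrative only (only open-nti and open-nti-input-jti are mentioned above); this is not the project's actual compose file:

```yaml
# Hypothetical sketch -- not the actual open-nti compose file.
version: '2'
services:
  opennti:                    # database + GUI (InfluxDB, Grafana, nginx)
    image: juniper/open-nti
    ports:
      - "80:80"
  input-jti:                  # Data Streaming Collector
    image: juniper/open-nti-input-jti
    ports:
      - "50020:50020/udp"
  input-syslog:               # Syslog input (illustrative name and image)
    image: juniper/open-nti-input-syslog
    ports:
      - "6000:6000/udp"
```

With this layout, scaling one input type becomes a matter of running additional instances of only that service.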

JET integration enhancement question

Has anyone considered adding support for integrating telemetry data learned from JET? e.g. RIB API subscription information instead of periodic CLI execution of 'show route ...'?

thanks,

-Colby

Installation errors

Hi,

I am facing issues installing open-nti on Ubuntu 16.04.1 and get the error below during installation.

root@poc-docker1:~/open-nti# ./docker.start.sh
Use docker compose file : docker-compose.yml
Pulling opennti (juniper/open-nti:latest)...
Pulling repository docker.io/juniper/open-nti
ERROR: Network timed out while trying to connect to https://index.docker.io/v1/repositories/juniper/open-nti/images. You may want to check your internet connection or if you are behind a proxy.

The site being accessed is empty and has no files under images. Is there an issue with this link - https://index.docker.io/v1/repositories/juniper/open-nti/images ?

MPLS LSP stats @ grafana

Hi

Can MPLS LSP stats be plotted by Grafana?
I have a vMX with 16.1R3 and appropriate sensor is already configured.
I can also capture the packets sent by the vMX with netcat and decode them successfully.

The problem I am facing is, Grafana with its predefined "Data Streaming Collector Dashboard" shows nothing.
Row: MPLS LSP
Graph: LSP Traffic Rate (PPS)
Metrics: SELECT "value" FROM "jnpr.jvision" WHERE "type" = 'lspstata.packet_rate' AND "device" =~ /$host_regex$/ AND $timeFilter GROUP BY "device", "lspname"

By the way, LSP traffic Rate BPS also shows nothing.

Regards

Support of sexagesimal variable type

A 'show command' may report a counter value in the form of a sexagesimal value.
E.g : mm:ss , hh:mm:ss

This should be converted to an integer (hh*3600 + mm*60 + ss) in the database, so graphing can be performed.

At present, in the open-nti.py file,

def eval_variable_value(value,**kwargs):

only processes integer or string types.
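A minimal sketch of such a conversion (the function name is hypothetical, not actual open-nti code):

```python
def sexagesimal_to_seconds(value):
    """Convert 'mm:ss' or 'hh:mm:ss' counter strings to an integer number of seconds."""
    seconds = 0
    for part in value.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

print(sexagesimal_to_seconds("01:02:03"))  # 1*3600 + 2*60 + 3 = 3723
print(sexagesimal_to_seconds("02:30"))     # 2*60 + 30 = 150
```

eval_variable_value() could detect values matching a colon-separated digits pattern and route them through such a helper before writing to the database.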

Netconf throws ConnectError

Workaround until we commit a change:
Inside container downgrade cryptography module to 1.2.1 (pip install cryptography==1.2.1)

Once downgraded you should not see errors below.

2016-05-19 06:07:07 root ERROR Thread-2 : ConnectError(host: 138.187.21.15, msg: 'EntryPoint' object has no attribute 'resolve')
Traceback (most recent call last):
File "/opt/open-nti/open-nti.py", line 515, in collector
jdev.open()
File "/usr/local/lib/python2.7/dist-packages/jnpr/junos/device.py", line 483, in open
raise cnx_err
ConnectError: ConnectError(host: 138.187.21.15, msg: 'EntryPoint' object has no attribute 'resolve')
2016-05-19 06:07:07 open-nti ERROR Thread-2 : [138.187.21.15]: Skipping host due connectivity issue
2016-05-19 06:07:07 root ERROR Thread-1 : ConnectError(host: 138.187.21.16, msg: 'EntryPoint' object has no attribute 'resolve')
Traceback (most recent call last):
File "/opt/open-nti/open-nti.py", line 515, in collector
jdev.open()
File "/usr/local/lib/python2.7/dist-packages/jnpr/junos/device.py", line 483, in open
raise cnx_err
ConnectError: ConnectError(host: 138.187.21.16, msg: 'EntryPoint' object has no attribute 'resolve')
2016-05-19 06:07:07 open-nti ERROR Thread-1 : [138.187.21.16]: Skipping host due connectivity issue
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : Unknown exception: 'EntryPoint' object has no attribute 'resolve'
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : Traceback (most recent call last):
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : File "/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 1757, in run
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : self.kex_engine.parse_next(ptype, m)
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : File "/usr/local/lib/python2.7/dist-packages/paramiko/kex_group1.py", line 75, in parse_next
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : return self._parse_kexdh_reply(m)
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : File "/usr/local/lib/python2.7/dist-packages/paramiko/kex_group1.py", line 111, in _parse_kexdh_reply
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : self.transport._verify_key(host_key, sig)
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : File "/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 1602, in _verify_key
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : key = self._key_info[self.host_key_type]
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : File "/usr/local/lib/python2.7/dist-packages/paramiko/rsakey.py", line 58, in __init__
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : ).public_key(default_backend())
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 35, in default_backend
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : _default_backend = MultiBackend(_available_backends())
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 22, in _available_backends
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 : "cryptography.backends"
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 :
2016-05-19 06:07:09 ncclient.transport.ssh ERROR Thread-47 :
2016-05-19 06:07:09 root ERROR Thread-3 : ConnectError(host: 138.187.93.37, msg: 'EntryPoint' object has no attribute 'resolve')
Traceback (most recent call last):
File "/opt/open-nti/open-nti.py", line 515, in collector
jdev.open()
File "/usr/local/lib/python2.7/dist-packages/jnpr/junos/device.py", line 483, in open
raise cnx_err
ConnectError: ConnectError(host: 138.187.93.37, msg: 'EntryPoint' object has no attribute 'resolve')

OpenNTI - FluentD parser issue in Data Streaming Collector

Hello all,
we are just installing and configuring OpenNTI to monitor some QFX5100 switches. They are configured as a virtual chassis and are running JUNOS 14.1X53-D35.3 built 2016-02-29 23:35:46 UTC.
We have installed OpenNTI virtualized on a monitoring server.

OpenNTI DataCollectionAgent is working properly and we are seeing information in Grafana.
However, the OpenNTI DataStreamingCollector is not working.
The Docker proxy is exposing the 172.17.0.1 IP address, and the opennti_input_jti container is located at the 172.17.0.4 IP address.
Our switches are sending the UDP streaming data to port 50020.
A parser error is then registered in the docker logs (see details below):
<JSON::ParserError: 757: unexpected token at '�\u0003'

Could you let us know what we can do to solve this issue?

Thanks in advance, Jose

2016-10-19T15:18:27.361858000Z 2016-10-19 17:18:27 +0200 [error]: /usr/lib/ruby/gems/2.3.0/gems/fluentd-0.12.26/lib/fluent/plugin/socket_util.rb:46:in call' 2016-10-19T15:18:27.362022000Z 2016-10-19 17:18:27 +0200 [error]: /usr/lib/ruby/gems/2.3.0/gems/fluentd-0.12.26/lib/fluent/plugin/socket_util.rb:46:in on_readable'
2016-10-19T15:18:27.362189000Z 2016-10-19 17:18:27 +0200 [error]: /usr/lib/ruby/gems/2.3.0/gems/cool.io-1.4.4/lib/cool.io/io.rb:186:in on_readable' 2016-10-19T15:18:27.362376000Z 2016-10-19 17:18:27 +0200 [error]: /usr/lib/ruby/gems/2.3.0/gems/cool.io-1.4.4/lib/cool.io/loop.rb:88:in run_once'
2016-10-19T15:18:27.362566000Z 2016-10-19 17:18:27 +0200 [error]: /usr/lib/ruby/gems/2.3.0/gems/cool.io-1.4.4/lib/cool.io/loop.rb:88:in run' 2016-10-19T15:18:27.362739000Z 2016-10-19 17:18:27 +0200 [error]: /usr/lib/ruby/gems/2.3.0/gems/fluentd-0.12.26/lib/fluent/plugin/socket_util.rb:133:in run'
2016-10-19T15:18:27.400248000Z 2016-10-19 17:18:27 +0200 fluent.error: {"error":"#<JSON::ParserError: 757: unexpected token at '9\u0005'>","error_class":"JSON::ParserError","host":"172.17.0.1","message":""9\x05\x00\x00\x01\x00\x00\x00\b\xE1\x9B\xD1\xE9\x8B\xE7\xCF\x02\x1AD\n\x11MXMETSW1:xe-2/0/0\"/\n-\x10\xD0\x9B\xD3\x03\x18\xFC\x9A\xD3\x03(T8\xFD\xDC\xA6\x9F\x17@\x0FX\x81\x9D\xCD\x03\\xEC\\xE4\\xAF\\x03h\\x95\\xFC\\x1Ap\\x80\\xBC\\x02\\x80\\x01\\x88\\x91\\xC4\\xAE\\x1A\\x1AD\\n\\x11MXMETSW1:xe-2/0/2\\\"/\\n-\\x10\\x9E\\xCD7\\x18\\xAB\\xCC7 <(78\\xD7\\x9B\\x9B\\x83\\[email protected]\\xEC\\xDAZ\x99\xA3=h\xC2\xFB\x1Ap\x91\xBC\x02\x80\x01\xA0\xB0\x8B[\x88\x01.\x1AJ\n\x11MXMETSW1:xe-2/0/3\"5\n3\x10\xF8\x8A\x9A\x04\x18\xD1\x84\x9A\x04\xC6\x01(\xE1\x048\xC1\xA4\x96\xAF\x05X\xFC\x96\xB7\b\\x8A\\xEA\\x99\\bh\\x99\\xF5\\x1Ap\\xD9\\xB7\\x02\\x80\\x01\\xFC\\xC9\\xF5\\xF4T\\x88\\x01\\xB7\\b\\x1AG\\n\\x11MXMETSW1:xe-2/0/5\\\"2\\n0\\x10\\xBE\\x8C\\x11\\xBE\\x8C\\x118\\xF8\\xA2\\xC6\\bX\\x96\\xA3\\xCF\\xEE@#h\xDE\x9D\xCF\xEE@p\x95\x05x\xF7/\x80\x01\xCA\xE3\xBA\x82\x97\xB2\x05\x88\x01\x8E\x85\xFE\x03\x1AR\n\x12MXMETSW1:xe-2/0/20\"<\n:\x10\xB4\xE3\x8F\x16\x18\x9F\xF0\x8B\x16(\x95\xF3\x030\b8\xCE\xEF\xD5\xDBK@\xE8\aX\x99\xF4\xB0g\\xAB\\xA2\\xA1gh\\xE9\\xF2\\x06p\\x85\\xDF\\bxi\\x80\\x01\\xBE\\xBD\\xDB\\xCE\\x87\\x01\\x88\\x01\\xB3\\x92\\x01\\x1AJ\\n\\x12MXMETSW1:xe-2/0/21\\\"4\\n2\\x10\\xEF\\x87\\xB5\\x16\\x18\\xC7\\xD3\\xB3\\x16 
)(\\xFF\\xB3\\x018\\x9C\\x9A\\xEA\\xD10X\\xCA\\xBB\\xD7\\x02\xF4\xAA\xC5\x02h\xD3\xF2\x06p\x83\x9E\v\x80\x01\xBE\xF6\xE5\xD2\x05\x88\x01W\x1AR\n\x12MXMETSW1:xe-2/0/22\"<\n:\x10\x97\xA8\xF2\x13\x18\xA6\xF4\xF0\x13\x15(\xDC\xB3\x01088\xF8\xCC\xD5\xDD-@\x8E}X\xDF\xEC\xB9\x02\\xBD\\xDB\\xA7\\x02h\\xF4\\xF2\\x06p\\xAE\\x9E\\vx\\x02\\x80\\x01\\x91\\x82\\xD5\\xB2\\x03\\x88\\x01\\xBF\\x01\\x1AQ\\n\\x12MXMETSW1:xe-2/0/28\\\";\\n9\\x10\\xED\\xB3\\x81\\b\\x18\\xB0\\xDF\\xBB\\x06\\xBA\\xD4\\xC5\\x01(\\x030\\x018\\xCE\\xE5\\xDC\\xCB+@\\xBC\\x01X\\xEC\\xDD\\xB0\\a\xC4\x9E\xEC\x05h\xFD\xAA\xC4\x01p\xAB\x14\x80\x01\xF2\xD0\xE9\xF6-\x88\x01\xAF\x01\x1AX\n\x12MXMETSW1:xe-2/0/30\"B\n@\x10\x99\xD7\xB9\xDA\x01\x18\xC5\xC8\xB9\xDA\x01(\xD4\x0E0\x8B\x018\xCE\xED\xAC\x86\xC0\x03@\xF3\x9E\x02X\xA7\xD0\xE0\xB5\x02\\xC0\\xE6\\xD8\\xB5\\x02h\\xE5\\xF3\\x06p\\x82vx\\xC6\\x01\\x80\\x01\\xF7\\x97\\xE6\\xD9\\xF0\\x04\\x88\\x01\\x8D\\x93\\x03\\x1AQ\\n\\x12MXMETSW1:xe-2/0/31\\\";\\n9\\x10\\xC3\\xA2\\x9E.\\x18\\xA3\\x9F\\x9E. 
+(\\xF5\\x020\\x1D8\\xB5\\xE1\\xAB\\xBCi@\\xD4FX\\x89\\xA3\\xD7\\e\xDF\xAD\xCF\eh\xCB\xF3\x06p\xDF\x81\x01x\x12\x80\x01\xAA\x87\x82\xF1E\x88\x01\xD0/\x1AQ\n\x12MXMETSW1:xe-2/0/32\";\n9\x10\xFD\xE8\xC3-\x18\x8B\xE6\xC3-\x14(\xDE\x020\x1D8\xE6\xB2\x8B\x9Bh@\x87EX\x88\xF2\xA7\e\\xB7\\xFC\\x9F\\eh\\xE8\\xF3\\x06p\\xE9\\x81\\x01x\\x11\\x80\\x01\\xC8\\xD8\\xB0\\xE1D\\x88\\x01\\xF8.\\x1AQ\\n\\x12MXMETSW1:xe-2/0/33\\\";\\n9\\x10\\xA9\\x98\\xD2-\\x18\\xB2\\x95\\xD2-\\x17(\\xE0\\x020\\x1E8\\xEC\\xB8\\xA8\\xDAh@\\xC8FX\\x83\\x9C\\xB6\\e\xBE\xA6\xAE\eh\xD9\xF3\x06p\xEC\x81\x01x\x12\x80\x01\xAF\xFE\xFA\xFAD\x88\x01\xB7/\x1AQ\n\x12MXMETSW1:xe-2/0/34\";\n9\x10\x81\xBD\xF2-\x18\xD0\xB9\xF2- >(\xF3\x020\x1D8\x91\xD3\xF6\xB3h@\x94CX\xF9\xD0\xB9\e\\xEC\\xDB\\xB1\\eh\\xC7\\xF3\\x06p\\xC6\\x81\\x01x\\x11\\x80\\x01\\xA5\\xB2\\xD9\\x80E\\x88\\x01\\x87.\\x1AQ\\n\\x12MXMETSW1:xe-2/0/35\\\";\\n9\\x10\\xE3\\xD5\\xF5-\\x18\\xD0\\xD2\\xF5- ,(\\xE7\\x020\\x1D8\\xBB\\xAA\\x96\\xE2h@\\xD8DX\\x93\\xB7\\xBE\\e\xF9\xC1\xB6\eh\xC2\xF3\x06p\xD8\x81\x01x\x11\x80\x01\xCD\xB7\xFC\x98E\x88\x01\xF4-\x1AD\n\x12MXMETSW1:xe-2/0/38\".\n,\x10\xA9\xE9\a\x18\xC6u\x85\x97\x03(\xDE\xDC\x038\xDE\xFC\x8F\aX\xC6\xBBo\\xCA\\ah\\xCD\\x92np\\xAF\\xA1\\x01\\x80\\x01\\xDF\\xF3\\x96H\\x88\\x01\\x1F\\x1AN\\n\\x12MXMETSW1:xe-2/0/46\\\"8\\n6\\x10\\xC2\\xAC\\xE7\\x01\\x18\\xA9\\xAC\\xE7\\x01(\\x190\\x028\\xCC\\xF1\\x98\\x82\\x02@\\xC0\\x02X\\x98\\xD7\\xB4\\x02\xBA\xB9\xF1\x01h\xD3\xD9\x1Cp\x8B\xC4&x\x02\x80\x01\xD3\xDA\x82\xE6\x01\x88\x01\xF8\x01\x1A7\n\x12MXMETSW1:xe-2/0/47\"!\n\x1F\x10\x01\x18\x018@X\xEB\xFFG\\xE8\\xE1\\x04h\\xDF\\xD9\\x1Cp\\xA4\\xC4&\\x80\\x01\\xA3\\xD9\\xE38\\x88\\x01\\x17\" error=#<JSON::ParserError: 757: unexpected token at '9\u0005'> error_class=JSON::ParserError host=\"172.17.0.1\""} 2016-10-19T15:18:27.673318000Z 2016-10-19 17:18:27 +0200 [error]: "\xAA\x03\x00\x00\x01\x00\x00\x00\b\xE0\xF2\xED\xE9\x8B\xE7\xCF\x02\x1AG\n\x11MXMETSW1:xe-1/0/0\"2\n0\x10\xC6\x02 
\xBC\x02(\n8\xE6\xB0\x01X\xE6\xD6\x94\xDE;\xBC\x01h\xA8\xD0\x94\xDE;p\x82\x05x\xF7/\x80\x01\xEE\xE6\xC7\xC0\xCC\xFB\x04\x88\x01\x8E\x82\xFE\x03\x1AG\n\x11MXMETSW1:xe-1/0/1"2\n0\x10\xFE\x01 \xF7\x01(\a8\xFC\x86\x01X\xAD\x92\x9E\xDE;\xBC\x01h\xEC\x8B\x9E\xDE;p\x85\x05x\xF6/\x80\x01\xBC\xEA\xA2\xA5\xCD\xFB\x04\x88\x01\xEC\xFE\xFD\x03\x1AG\n\x11MXMETSW1:xe-1/0/2\"2\n0\x10\xA0\x02 \x99\x02(\a8\xF2\x97\x01X\xA6\xB9\x9E\xDE;\xBC\x01h\xE4\xB2\x9E\xDE;p\x86\x05x\xF7/\x80\x01\xAE\x94\xC0\xA8\xCD\xFB\x04\x88\x01\xB1\x85\xFE\x03\x1AG\n\x11MXMETSW1:xe-1/0/3"2\n0\x10\x9C\x02 \x87\x02(\x158\xBC\xC1\x01X\xBB\xB9\xFA\xDD;\xBC\x01h\x88\xB3\xFA\xDD;p\xF7\x04x\xF7/\x80\x01\xEE\xC3\xBF\xA9\xCA\xFB\x04\x88\x01\x8E\x82\xFE\x03\x1AG\n\x11MXMETSW1:xe-1/0/4\"2\n0\x10\x88\x1E \xE5\x19(\xA3\x048\x94\x80\x10X\xA1\xF7\x90\xDE;\xBC\x01h\xFB\xF4\x90\xDE;pjx\xF5/\x80\x01\xFA\xC9\xAF\x99\xCC\xFB\x04\x88\x01\xCC\xF4\xFD\x03\x1AH\n\x12MXMETSW1:xe-1/0/18"2\n0\x10\xB3\x91\x01\x18\x86\x91\x01 #(\n8\xAD\xAF\xE8\tX\xE7\xE9\xD7\x03\x8F\xA9\x02h\xB0\xD8\x1Cp\xA8\xE8\xB8\x03x\x02\x80\x01\xA2\xE5\xD8\x80\x02\x88\x01\xBA\x01\x1AI\n\x12MXMETSW1:xe-1/0/19\"3\n1\x10\xAE\x83\x15\x18\x94\x83\x15 \x13(\a8\xC8\x86\xDC\xF5\x01X\xED\xF5\xD8\x03\xDA\xB3\x03h\xC3\xD8\x1Cp\xD0\xE9\xB8\x03x\x02\x80\x01\x89\x99\xA5\x81\x02\x88\x01\xA7\x02\x1AE\n\x12MXMETSW1:xe-1/0/20"/\n-\x10\x94\x02\x18\xF9\x01 \x14(\a8\xC8\x91\x01X\xAB\xA3\xD7\x03\xD5\xE0\x01h\xBD\xD8\x1Cp\x99\xEA\xB8\x03x\x02\x80\x01\x94\xF9\xA8\x80\x02\x88\x01\xBA\x01\x1AE\n\x12MXMETSW1:xe-1/0/21\"/\n-\x10\x86\x02\x18\xC5\x01 4(\r8\x88\x98\x01X\xB8\xA1\xD7\x03\xCA\xE1\x01h\x96\xD8\x1Cp\xD8\xE7\xB8\x03x\x02\x80\x01\x97\xEF\xA7\x80\x02\x88\x01\xBA\x01\x1AE\n\x12MXMETSW1:xe-1/0/22"/\n-\x10\xB0\x11\x18\x85\x11 !(\n8\xE0\xE2\x0FX\xDE\xB3\xD7\x03\xC9\xF3\x01h\xB0\xD8\x1Cp\xE5\xE7\xB8\x03x\x02\x80\x01\xC3\xE4\xB4\x80\x02\x88\x01\xBA\x01\x1AD\n\x12MXMETSW1:xe-1/0/40\".\n,\x10\xCE\xD1\xCD\e\x18\xA3\xD1\xCD\e 
[binary JTI/GPB payload from MXMETSW1 (interfaces xe-1/0/0 through xe-1/0/56) omitted] error=#<JSON::ParserError: 757: unexpected token at '\u0003'> error_class=JSON::ParserError host="172.17.0.1"
2016-10-19T15:18:27.674054000Z 2016-10-19 17:18:27 +0200 [error]: suppressed same stacktrace
2016-10-19T15:18:27.700570000Z 2016-10-19 17:18:27 +0200 fluent.error: {"error":"#<JSON::ParserError: 757: unexpected token at '\u0003'>","error_class":"JSON::ParserError","host":"172.17.0.1","message":"[binary payload omitted]"}

InfluxDB is not showing records

Hi,

Installation of open-nti was successful and I am streaming IFL sensor data from one of my MX devices.
I can see the Jvision packets inside the container via tcpdump, but when accessing InfluxDB via the web UI there are no records in the juniper database.

How can we check if fluentd is decoding received UDP GPB packets?

root@poc-docker1:# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9d663ec83e4 juniper/open-nti-input-jti "/bin/sh -c /home/flu" 5 hours ago Up 5 hours 5140/tcp, 24224/tcp, 0.0.0.0:50000->50000/udp, 24284/tcp, 0.0.0.0:50020->50020/udp opennti_input_jti
a2214c73048a juniper/open-nti-input-syslog "/bin/sh -c /home/flu" 5 hours ago Up 5 hours 5140/tcp, 24220/tcp, 24224/tcp, 0.0.0.0:6000->6000/udp opennti_input_syslog
a3d228b7d26c juniper/open-nti "/sbin/my_init" 5 hours ago Up 5 hours 0.0.0.0:80->80/tcp, 0.0.0.0:3000->3000/tcp, 0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp, 0.0.0.0:8125->8125/udp opennti_con
root@poc-docker1:~#
root@poc-docker1:~# docker exec -it e9d663ec83e4 /bin/bash
bash-4.3$
bash-4.3$ sudo -i
e9d663ec83e4:~#
e9d663ec83e4:~# tcpdump -XXX -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes

17:06:19.026296 IP e9d663ec83e4.36976 > opennti.8086: Flags [P.], seq 257:854, ack 1, win 229, options [nop,nop,TS val 214175094 ecr 214175094], length 597
0x0000: 0242 ac11 0002 0242 ac11 0004 0800 4500 .B.....B......E.
0x0010: 0289 63d4 4000 4006 7c72 ac11 0004 ac11 ..c.@.@.|r......
0x0020: 0002 9070 1f96 3e8b 6fa8 6ad1 633d 8018 ...p..>.o.j.c=..
0x0030: 00e5 5aa4 0000 0101 080a 0cc4 0d76 0cc4 ..Z..........v..
0x0040: 0d76 6a6e 7072 2e6a 7669 7369 6f6e 2c64 .vjnpr.jvision,d
0x0050: 6576 6963 653d 6361 6c69 6775 6c61 3a31 evice=caligula:1
0x0060: 3732 2e31 372e 302e 3635 2c69 6e74 6572 72.17.0.65,inter
0x0070: 6661 6365 3d78 652d 372f 332f 302e 302c face=xe-7/3/0.0,
0x0080: 7479 7065 3d69 6e67 7265 7373 5f73 7461 type=ingress_sta
0x0090: 7473 2e69 665f 7061 636b 6574 7320 7661 ts.if_packets.va
0x00a0: 6c75 653d 3737 3534 3539 3869 2031 3437 lue=7754598i.147
0x00b0: 3934 3639 3036 300a 6a6e 7072 2e6a 7669 9469060.jnpr.jvi
0x00c0: 7369 6f6e 2c64 6576 6963 653d 6361 6c69 sion,device=cali
0x00d0: 6775 6c61 3a31 3732 2e31 372e 302e 3635 gula:172.17.0.65
0x00e0: 2c69 6e74 6572 6661 6365 3d78 652d 372f ,interface=xe-7/
0x00f0: 332f 302e 302c 7479 7065 3d69 6e67 7265 3/0.0,type=ingre
0x0100: 7373 5f73 7461 7473 2e69 665f 6f63 7465 ss_stats.if_octe
0x0110: 7473 2076 616c 7565 3d34 3334 3437 3334 ts.value=4344734
0x0120: 3433 3769 2031 3437 3934 3639 3036 300a 437i.1479469060.
0x0130: 6a6e 7072 2e6a 7669 7369 6f6e 2c64 6576 jnpr.jvision,dev
0x0140: 6963 653d 6361 6c69 6775 6c61 3a31 3732 ice=caligula:172
0x0150: 2e31 372e 302e 3635 2c69 6e74 6572 6661 .17.0.65,interfa
0x0160: 6365 3d78 652d 372f 332f 302e 302c 7479 ce=xe-7/3/0.0,ty
0x0170: 7065 3d69 6e67 7265 7373 5f73 7461 7473 pe=ingress_stats
0x0180: 2e69 665f 6d63 6173 745f 7061 636b 6574 .if_mcast_packet
0x0190: 7320 7661 6c75 653d 3135 3338 3769 2031 s.value=15387i.1
0x01a0: 3437 3934 3639 3036 300a 6a6e 7072 2e6a 479469060.jnpr.j
0x01b0: 7669 7369 6f6e 2c64 6576 6963 653d 6361 vision,device=ca
0x01c0: 6c69 6775 6c61 3a31 3732 2e31 372e 302e ligula:172.17.0.
0x01d0: 3635 2c69 6e74 6572 6661 6365 3d78 652d 65,interface=xe-
0x01e0: 372f 332f 302e 302c 7479 7065 3d65 6772 7/3/0.0,type=egr
0x01f0: 6573 735f 7374 6174 732e 6966 5f70 6163 ess_stats.if_pac
0x0200: 6b65 7473 2076 616c 7565 3d38 3036 3736 kets.value=80676
0x0210: 3731 3169 2031 3437 3934 3639 3036 300a 711i.1479469060.
0x0220: 6a6e 7072 2e6a 7669 7369 6f6e 2c64 6576 jnpr.jvision,dev
0x0230: 6963 653d 6361 6c69 6775 6c61 3a31 3732 ice=caligula:172
0x0240: 2e31 372e 302e 3635 2c69 6e74 6572 6661 .17.0.65,interfa
0x0250: 6365 3d78 652d 372f 332f 302e 302c 7479 ce=xe-7/3/0.0,ty
0x0260: 7065 3d65 6772 6573 735f 7374 6174 732e pe=egress_stats.
0x0270: 6966 5f6f 6374 6574 7320 7661 6c75 653d if_octets.value=
0x0280: 3939 3934 3838 3538 3539 3569 2031 3437 99948858595i.147
0x0290: 3934 3639 3036 30 9469060

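The ASCII column of the capture above shows the payload going to port 8086 is already InfluxDB line protocol (measurement jnpr.jvision with device/interface/type tags), so the jti container is decoding and forwarding something. A small Python sketch can decode one such line to confirm what is being written (the parser is a simplification that ignores line-protocol escaping, which holds for the lines in this capture; the sample line is reassembled from the hex dump):

```python
# Minimal InfluxDB line-protocol decoder (simplified: assumes no escaped
# commas or spaces inside tag values, which is true for the capture above).
def parse_line(line):
    head, fields, timestamp = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    tags = dict(p.split("=", 1) for p in tag_pairs)
    values = {}
    for pair in fields.split(","):
        key, raw = pair.split("=", 1)
        # Integer fields carry an 'i' suffix in line protocol
        values[key] = int(raw[:-1]) if raw.endswith("i") else float(raw)
    return measurement, tags, values, int(timestamp)

# One line reassembled from the tcpdump hex dump above
line = ("jnpr.jvision,device=caligula:172.17.0.65,interface=xe-7/3/0.0,"
        "type=ingress_stats.if_packets value=7754598i 1479469060")
m, tags, vals, ts = parse_line(line)
print(m, tags["interface"], vals["value"], ts)
# → jnpr.jvision xe-7/3/0.0 7754598 1479469060
```

If lines like this are leaving the jti container but the juniper database stays empty, the problem is more likely on the InfluxDB write path (database name, credentials) than in the GPB decoding.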
root@poc-docker1:~/open-nti# docker logs opennti_input_jti
2016-11-18 11:39:18 +0530 [info]: reading config file path="/tmp/fluent.conf"
2016-11-18 11:39:18 +0530 [info]: starting fluentd-0.12.26
2016-11-18 11:39:18 +0530 [info]: gem 'fluent-plugin-juniper-telemetry' version '0.2.11'
2016-11-18 11:39:18 +0530 [info]: gem 'fluentd' version '0.12.26'
2016-11-18 11:39:18 +0530 [info]: adding match pattern="jnpr.**" type="copy"
2016-11-18 11:39:18 +0530 [info]: adding match pattern="debug.**" type="stdout"
2016-11-18 11:39:18 +0530 [info]: adding match pattern="fluent.**" type="stdout"
2016-11-18 11:39:18 +0530 [info]: adding source type="forward"
2016-11-18 11:39:18 +0530 [info]: adding source type="udp"
2016-11-18 11:39:18 +0530 [info]: adding source type="udp"
2016-11-18 11:39:18 +0530 [info]: adding source type="monitor_agent"
2016-11-18 11:39:18 +0530 [info]: adding source type="debug_agent"
2016-11-18 11:39:18 +0530 [info]: using configuration file:

<source>
  @type forward
  @id forward_input
</source>
<source>
  @type udp
  tag jnpr.jvision
  format juniper_jti
  port 50000
  bind 0.0.0.0
</source>
<source>
  @type udp
  tag jnpr.analyticsd
  format juniper_analyticsd
  port 50020
  bind 0.0.0.0
</source>
<match jnpr.**>
  type copy
  <store>
    type influxdb
    host opennti
    port 8086
    dbname juniper
    user juniper
    password xxxxxx
    value_keys ["value"]
    buffer_type memory
    flush_interval 2
  </store>
</match>
<source>
  @type monitor_agent
  @id monitor_agent_input
  port 24220
</source>
<source>
  @type debug_agent
  @id debug_agent_input
  bind 127.0.0.1
  port 24230
</source>
<match debug.**>
  @type stdout
  @id stdout_output
</match>
<match fluent.**>
  @type stdout
</match>

2016-11-18 11:39:18 +0530 [info]: listening fluent socket on 0.0.0.0:24224
2016-11-18 11:39:18 +0530 [info]: listening udp socket on 0.0.0.0:50000
2016-11-18 11:39:18 +0530 [info]: listening udp socket on 0.0.0.0:50020
2016-11-18 11:39:18 +0530 [info]: listening dRuby uri="druby://127.0.0.1:24230" object="Engine"
root@poc-docker1:~/open-nti#
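Before digging into fluentd's decoder, one basic sanity check is whether datagrams actually arrive on the listening socket at all. The idea can be sketched in a few lines of Python (a loopback demo on an arbitrary OS-chosen port, not fluentd's real 50000 listener):

```python
import socket

# Bind a throwaway UDP socket and send one datagram to it, mimicking the
# "does anything arrive on the port?" check (hypothetical port, not 50000).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"\xaa\x03test-payload", ("127.0.0.1", port))

rx.settimeout(2)
data, addr = rx.recvfrom(4096)
print(len(data), "bytes from", addr[0])
tx.close()
rx.close()
```

In practice the same check against the real listener is the tcpdump shown above; if packets are visible there but nothing is emitted, the next suspect is the juniper_jti parser (for example a sensor format it does not recognize).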

Am I missing something here? Kindly suggest.

Thanks,
Swetha.

CG Nat Parser Problems

Hello guys, my name is Andrés Medina, I'm a network engineer at Auben Networks. I'm using this tool called Open-NTI and I think it is just amazing, but I'm experiencing some problems with a parser that I created myself.

I was trying to create a parser that allows me to collect some variables of a CG NAT (Carrier Grade NAT), such as port blocks in use and port block allocation errors.
The problem is that the parser is not collecting any variable.

I already know that you can create the parser with a type ("multi-value" or "single-value"); both ways do the same thing, they just differ in how they collect the variable: multi-value uses a loop iterating over the XML, while single-value uses the entire XPath following the XML structure.

I've tested many parsers, such as show ospf statistics, or picking other variables from the templates, but the CG NAT parser is not working for me. I would like you guys to help me out with this; I've been trying for about a month and I still haven't managed it.

I posted a thread on the Juniper forums but got no real answer; they just told me to post it here, where maybe you guys can help me out.

I'll just attach the two parsers that I made (single-value and multi-value) and the XML output of the Juniper device.

Thanks in advance.

multi-value-cgnat.txt
single-value-cgnat.txt
XML OSPF.txt

Also, some support guys on the Juniper forum wrote this:

parser:
    regex-command: show\s+services\s+nat\s+pool\s+detail\s+|\s+display\s+xml
    matches:
        -
            type: multi-value
            method: xpath
            xpath: //sfw-per-service-set-nat-pool[starts-with(interface-name, 'sp-')]
            loop:
                key: ./interface-name
                sub-matches:
                    -
                        xpath: ./service-nat-pool/port-blocks-in-use
                        variable-name: $host.service-nat-pool.$key.port-blocks-in-use
                    -
                        xpath: ./service-nat-pool/port-block-allocation-errors
                        variable-name: $host.service-nat-pool.$key.port-block-allocation-errors

and it didn't work anyway.
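For what it's worth, the traversal that the multi-value parser describes can be reproduced with Python's stdlib ElementTree to check the XPath logic outside of Open-NTI. The sample document below is invented for illustration (it only mimics the element names the parser references), and since ElementTree does not support the starts-with() XPath function, the prefix test is done in Python:

```python
import xml.etree.ElementTree as ET

# Invented sample mimicking the structure the parser expects
SAMPLE = """
<service-nat-statistics-information>
  <sfw-per-service-set-nat-pool>
    <interface-name>sp-0/0/0</interface-name>
    <service-nat-pool>
      <port-blocks-in-use>42</port-blocks-in-use>
      <port-block-allocation-errors>3</port-block-allocation-errors>
    </service-nat-pool>
  </sfw-per-service-set-nat-pool>
  <sfw-per-service-set-nat-pool>
    <interface-name>ms-1/0/0</interface-name>
    <service-nat-pool>
      <port-blocks-in-use>7</port-blocks-in-use>
    </service-nat-pool>
  </sfw-per-service-set-nat-pool>
</service-nat-statistics-information>
"""

root = ET.fromstring(SAMPLE)
results = {}
# Equivalent of //sfw-per-service-set-nat-pool[starts-with(interface-name, 'sp-')]
for pool in root.iter("sfw-per-service-set-nat-pool"):
    key = pool.findtext("interface-name", "")
    if not key.startswith("sp-"):
        continue  # ElementTree has no starts-with(); filter in Python instead
    for metric in ("port-blocks-in-use", "port-block-allocation-errors"):
        value = pool.findtext(f"./service-nat-pool/{metric}")
        if value is not None:
            results[f"service-nat-pool.{key}.{metric}"] = int(value)
print(results)
```

If a standalone check like this finds the nodes but Open-NTI does not, a common culprit is an XML namespace on the real device output: namespaced tags will not match the plain element names used in the xpath expressions, and the parser then silently collects nothing.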
