netenglabs / suzieq

Using network observability to operate and design healthier networks

Home Page: https://www.stardustsystems.net/suzieq/

License: Apache License 2.0


suzieq's Introduction


Suzieq -- Healthier Networks Through Network Observability

Would you like to be able to easily answer trivial questions such as how many unique prefixes are there in your routing table, or how many MAC addresses are there in the MAC tables across the network? How about more difficult questions, such as what changes did your routing table see between 10 pm and midnight last night, or which of your nodes have been up the longest, or which BGP sessions have had the most routing updates? How about being able to answer if your OSPF (or BGP) sessions are working correctly, or is all well with your EVPN? How about a quick way to determine the amount of ECMP at every hop between two endpoints? Do you wish you could easily validate the configuration you deployed across your network?

Do you log in to every network node you have to figure out the answers to questions like these? Do you then struggle to piece the information together into a consistent whole across the various formats provided by the various vendors? Do you wish you had an open source, multi-vendor tool that could help you answer questions like these and more?

If you answered yes to one or more of these questions, then Suzieq is a tool that we think will be interesting to you. Suzieq helps you find things in your network.

Suzieq is both a framework and an application built on that framework, focused on improving the observability of your network. We define observability as the ability of a system to answer either trivial or complex questions that you pose as you go about operating your network. How easily you can answer your questions is a measure of how good the system's observability is. A system with good observability goes well beyond monitoring and alerting. Suzieq is primarily meant for use by network engineers and designers.

Suzieq does multiple things. It collects data from devices and systems across your network. It normalizes the data and then stores it in a vendor-independent way. Then it allows analysis of that data. With the applications that we build on top of the framework, we want to demonstrate a different and more systematic approach to thinking about networks. We want to show how useful it is to think of your network holistically.

Quick Start

Using Docker Container

We want to make it as easy as possible for you to start engaging with Suzieq so we have a demo that has data included in the image. To get started:

  • docker run -it -p 8501:8501 --name suzieq netenglabs/suzieq-demo
  • suzieq-cli for the CLI OR
  • suzieq-gui for the GUI. Connect to http://localhost:8501 via the browser to access the GUI

When you're within the suzieq-cli, you can run device unique columns=namespace to see the list of the different scenarios we've gathered data for.

Additional information about running the analyzer (suzieq-cli) is available via the official documentation page.

To start collecting data for your network, create an inventory file to gather the data from by following the instructions here. Decide on the directory where the data will be stored (ensure you have sufficient space available if you're going to be running the poller; say 100 MB at least). Let's call this dbdir. Now launch the Suzieq docker container as follows:

  • docker run -it -v <parquet-out-local-dir>:/home/suzieq/parquet -v <inventory-file>:/home/suzieq/inventory.yml --name sq-poller netenglabs/suzieq:latest
  • Launch the poller with the appropriate options. For example, sq-poller -D inventory.yml -n mydatacenter where mydatacenter is the name of the namespace where the data associated with the inventory is stored and inventory.yml is the inventory file in Suzieq poller native format (Use -a instead of -D if you're using Ansible inventory file format).

Using Python Packaging

If you don't want to use a Docker container, or cannot use one, an alternative approach is to install Suzieq as a Python package. It is strongly recommended to install Suzieq inside a virtual environment. If you already use a tool to create and manage virtual environments, you can skip the step of creating a virtual environment below.

Suzieq requires Python version 3.7.1 at least, and has been tested with Python versions 3.7 and 3.8. It has not been tested on Windows; use Linux (recommended) or macOS. To create a virtual environment, in case you haven't got a tool to create one, type:

python -m venv suzieq

This creates a directory called suzieq where all suzieq-related files are stored. Switch to that directory and activate the virtual environment with:

source bin/activate

Now that the virtual environment is active, you can install Suzieq. To install it, execute:

pip install suzieq

Once the command completes, you have the main programs of suzieq available for use:

  • sq-poller: For polling the devices and gathering the data
  • suzieq-gui: For launching the GUI
  • suzieq-cli: For running the CLI
  • sq-rest-server: For running the REST API server

The official documentation is at suzieq.readthedocs.io, and you can watch the screencasts about Suzieq on Youtube.

Analysis

Suzieq supports analysis via the CLI, GUI, REST API, and Python objects. For the most part these are equivalent, though the GUI combines the output of multiple CLI commands into one page.

The GUI has a status page to show you the status of the entities in your network. Suzieq GUI status

The Xplore page lets you dive into what is in your network. Explore device

The CLI supports the same kind of analysis as the Xplore page. CLI device

More examples of the CLI can be seen in the docs and blog posts we've created.

Path

Suzieq has the ability to show the path between two IP addresses, including the ability to show the path through an EVPN overlay. You can use this to see each of the paths from a source to a destination and to see if you have anything asymmetrical in your paths. GUI PATH

Asserts

One of Suzieq's most powerful capabilities is asserts, which are statements that should be true in the network. We've only just started on asserts; what Suzieq has now only demonstrates their power, and there's a lot more to be added in this space. interfaces assert

Suzieq Data

Suzieq supports gathering data from Cumulus, EOS, IOS, IOSXE, IOSXR, JunOS (QFX, EX, MX and SRX platforms, and Evolved OS), Palo Alto's PAN-OS (version 8.0 or higher), NXOS and SONiC routers, and Linux servers. Suzieq gathers:

  • Basic device info including serial number, model, version, platform etc.
  • Interfaces
  • LLDP
  • MAC address table (VPLS MAC table for Junos MX)
  • MLAG
  • Routing table
  • ARP/ND table
  • OSPFv2
  • BGP
  • EVPN VNI info

We're adding support for more platforms and features with every release. See the documentation for details on specific tables and their NOS support.

We're also looking for collaborators to help us make Suzieq a truly useful multi-vendor, open source platform for observing all aspects of networking. Please read the collaboration document for ideas on how you can help.

Release Notes

The official release notes are here.

Engage

You can join the conversation via slack. Send email to suzieq AT stardustsystems.net with the email address to send the Slack invitation to.

Additional Documentation & Screencasts

We've done some blogging about Suzieq:

We've also been adding screencasts on Youtube.

Suzieq Priorities

We don't have a roadmap, but we do have a list of our priorities. We mix this with the issues reported.

suzieq's People

Contributors

aegiacometti, andrynick98, anubisg1, beufanet, claudiolor, claudious96, crutcha, ddutt, dependabot[bot], donaldsharp, ewlumpkin, hooligan-sa, jimmelville, jopietsch, kircheneer, lucanicosia, nward, rickstardust, ryanmerolle, sepehr-a, skg-net, tgupta3, vivekvashist, zxiiro


suzieq's Issues

cli filtering by time isn't allowed

I don't know what the right argument should be, but none seem to work
jpiet> system show start-time=foop
Unknown argument(s) ['start-time'] were passed
jpiet> system show end-time='foop'
Error: 2
Unknown argument(s) ['end-time'] were passed
jpiet> system show end-time=007373730
Error: 2
Unknown argument(s) ['end-time'] were passed
jpiet>

in cli, for the system cmd, the wrong filter dumps a stack trace rather than just not returning anything

piet> system show hostname='foop'
Error running command: "['bootupTimestamp'] not found in axis"

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/systemCmd.py", line 68, in show
df = df.drop(columns=['bootupTimestamp'])
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 4117, in drop
errors=errors,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 3914, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 3946, in _drop_axis
new_axis = axis.drop(labels, errors=errors)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 5340, in drop
raise KeyError("{} not found in axis".format(labels[mask]))
KeyError: "['bootupTimestamp'] not found in axis"

no address argument to routesCmd.lpm() prints an error, but then runs the query and spits out a stack trace

jpiet> routes lpm
address is mandatory parameter
Error running command: '' does not appear to be an IPv4 or IPv6 network

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/routesCmd.py", line 117, in lpm
columns=self.columns,
File "/home/jpiet/suzieq/suzieq/sqobjects/routes.py", line 31, in lpm
return self.engine_obj.lpm(**kwargs)
File "/home/jpiet/suzieq/suzieq/engines/pandas/routes.py", line 79, in lpm
.query("prefix.ipnet.supernet_of('{}')".format(ipaddr))
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3199, in query
res = self.eval(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3315, in eval
return _eval(expr, inplace=inplace, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/eval.py", line 322, in eval
parsed_expr = Expr(expr, engine=engine, parser=parser, env=env, truediv=truediv)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 830, in __init__
self.terms = self.parse()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 847, in parse
return self._visitor.visit(self.expr)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 447, in visit_Module
return self.visit(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 450, in visit_Expr
return self.visit(node.value, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 734, in visit_Call
return self.const_type(res(*new_args, **kwargs), self.env)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/cyberpandas/ipnetwork_array.py", line 507, in supernet_of
self._name, other)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/cyberpandas/_accessor.py", line 5, in delegated_method
return pd.Series(method(*args, **kwargs), index, name=name)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/cyberpandas/ipnetwork_array.py", line 347, in supernet_of
match = ip_network(addr, strict=False)
File "/usr/lib/python3.7/ipaddress.py", line 84, in ip_network
address)
ValueError: '' does not appear to be an IPv4 or IPv6 network

engines has data in arpnd that we are dropping based on only macaddr difference

The data I'm using is in tests/data/basic_dual_bgp, looking at the arpnd table.
The data saved in the CSV was produced with view='', and it has 75 lines. When I run the query with view='latest', I get only 74 lines. The duplicate lines that are merged are:

(Pdb) df_two[54:56]
   datacenter   hostname   ipAddress    oif            macaddr  state  offload               timestamp
54   dual-bgp  server102  172.16.2.1  bond0  52:54:00:9e:4c:92  stale    False 2020-01-24 22:28:45.696
55   dual-bgp  server102  172.16.2.1  bond0  52:54:00:c5:40:6d  stale    False 2020-01-24 22:28:45.696

The only difference is in the macaddr, but with view='latest', duplicates are dropped by key_fields, and macaddr is not a key field, so this row gets dropped. They all have the same timestamp.
Is this what we want?
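The merge can be reproduced with a small pandas sketch (the key_fields list here is an assumption for illustration, not the schema's actual key list):

```python
import pandas as pd

# Two ARP/ND rows that differ only in macaddr, as in the issue above
df = pd.DataFrame([
    {"datacenter": "dual-bgp", "hostname": "server102", "ipAddress": "172.16.2.1",
     "oif": "bond0", "macaddr": "52:54:00:9e:4c:92", "timestamp": 1},
    {"datacenter": "dual-bgp", "hostname": "server102", "ipAddress": "172.16.2.1",
     "oif": "bond0", "macaddr": "52:54:00:c5:40:6d", "timestamp": 1},
])

# view='latest' dedups on the key fields; macaddr is not among them,
# so one of the two rows silently disappears
key_fields = ["datacenter", "hostname", "ipAddress", "oif"]
latest = df.sort_values("timestamp").drop_duplicates(subset=key_fields, keep="last")
print(len(df), "->", len(latest))  # 2 -> 1
```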

routes lpm fails with "Error running command: 'hostname'

jpiet> routes lpm address='10.0.0.1'
Error running command: 'hostname'

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/RoutesCmd.py", line 127, in lpm
datacenter=self.datacenter,
File "/home/jpiet/suzieq/suzieq/sqobjects/routes.py", line 31, in lpm
return self.engine_obj.lpm(**kwargs)
File "/home/jpiet/suzieq/suzieq/engines/pandas/routes.py", line 81, in lpm
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/home/jpiet/suzieq/suzieq/engines/pandas/engineobj.py", line 126, in get_valid_df
**kwargs
File "/home/jpiet/suzieq/suzieq/engines/pandas/engine.py", line 185, in get_table_df
return final_df[fields].sort_values(by=sort_fields)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 5001, in sort_values
keys = [self._get_label_or_level_values(x, axis=axis) for x in by]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 5001, in
keys = [self._get_label_or_level_values(x, axis=axis) for x in by]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 1774, in _get_label_or_level_values
raise KeyError(key)
KeyError: 'hostname'

jpiet>

some of the data in basic_dual_bgp has metrics of 2^32

14   dual-bgp   exit01  internet-vrf         IPv4Network('0.0.0.0/0')                          []            []     [1]                            4278198272 2020-01-24 22:28:45.696
(suzieq) jpiet@t14:/tmp/pycharm_project_304/suzieq$ time python suzieq/cli/suzieq-cli route unique --columns=metric  --view=all
Logging to /tmp/suzieq-otal3boh
       metric  count
0          20    236
1  4278198272     10
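A quick check (just arithmetic, not a fix) shows the metric isn't literally 2**32; it's a value with 0xff in the high byte, which suggests an encoded or sentinel value rather than a real route cost:

```python
# The suspicious metric from the rows above, compared against 2**32
metric = 4278198272
print(hex(metric))      # 0xff002000 -- note the 0xff high byte
print(metric == 2**32)  # False: 2**32 is 4294967296
print(2**32 - metric)   # 16769024
```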

sometimes sqpoller creates hosts that are ipaddresses and are dead

I don't know when this happens, but I've seen it several times

Suzieq  Verbose   OFF  Datacenter     Hostname     StartTime     EndTime     Engine   pandas  Query Time   1.6024s
jpiet> system show datacenter=single
    datacenter         hostname model version   vendor architecture status          address                     uptime               timestamp
45      single           exit01    vm  3.7.11  Cumulus       x86-64  alive    192.168.121.5     0 days 00:03:34.984000 2020-01-16 23:28:41.984
48      single           exit02    vm  3.7.11  Cumulus       x86-64  alive  192.168.121.209     0 days 00:03:32.984000 2020-01-16 23:28:41.984
52      single         internet    vm  3.7.11  Cumulus       x86-64  alive  192.168.121.201     1 days 03:41:44.984000 2020-01-16 23:28:41.984
55      single           leaf01    vm  3.7.11  Cumulus       x86-64  alive  192.168.121.206     0 days 00:03:34.984000 2020-01-16 23:28:41.984
58      single           leaf02    vm  3.7.11  Cumulus       x86-64  alive  192.168.121.140     0 days 00:03:32.984000 2020-01-16 23:28:41.984
61      single           leaf03    vm  3.7.11  Cumulus       x86-64  alive   192.168.121.28     0 days 00:03:39.984000 2020-01-16 23:28:41.984
64      single           leaf04    vm  3.7.11  Cumulus       x86-64  alive  192.168.121.160     0 days 00:03:36.984000 2020-01-16 23:28:41.984
116     single          spine01    vm  3.7.11  Cumulus       x86-64  alive    192.168.121.7     0 days 00:03:36.984000 2020-01-16 23:28:41.984
119     single          spine02    vm  3.7.11  Cumulus       x86-64  alive  192.168.121.222     0 days 00:03:31.984000 2020-01-16 23:28:41.984
1       single  192.168.121.138                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
3       single  192.168.121.140                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
5       single  192.168.121.160                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
7       single  192.168.121.201                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
9       single  192.168.121.206                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
11      single  192.168.121.209                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
13      single  192.168.121.221                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
15      single  192.168.121.222                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
17      single   192.168.121.27                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
19      single   192.168.121.28                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
21      single    192.168.121.5                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
23      single    192.168.121.7                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
25      single   192.168.121.79                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
27      single   192.168.121.96                                       dead                  18278 days 22:11:50.912000 2020-01-17 22:11:50.912
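The 18278-day uptimes are themselves a clue: subtracting one from its row's timestamp lands exactly on the Unix epoch, which suggests the poller computed uptime from a bootupTimestamp of 0 for these unreachable entries (microseconds dropped for readability):

```python
from datetime import datetime, timedelta

# Timestamp and uptime taken from one of the "dead" rows above
ts = datetime(2020, 1, 17, 22, 11, 50)
uptime = timedelta(days=18278, hours=22, minutes=11, seconds=50)
boot = ts - uptime
print(boot)  # 1970-01-01 00:00:00 -- the Unix epoch
```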

only getting one entry for topcpu when there should be more

(suzieq) jpiet@t12:/tmp/pycharm_project_1000/suzieq/suzieq/cli$ python3 suzieq-cli
Logging to /tmp/suzieq-97go9iho
jpiet> topcpu show
datacenter hostname timestamp
11 single leaf03 2020-01-28 21:43:30.048
jpiet>

Suzieq Verbose OFF Datacenter Hostname StartTime EndTime Engine pandas Query Time 4.79

but when I look at the parquet-out there are lots of parquet files. I don't know why there is only one host; there is no filtering
(suzieq) jpiet@t12:~/parquet-out$ ls topcpu/datacenter=dual/*/*.parquet | wc
543 543 43712

I'm not sure where to put this data to investigate

CLI, IPv4Network doesn't convert to json

(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli routes show --format=json
Logging to /tmp/suzieq-or6_d9ro
Error running command: Unsupported UTF-8 sequence length when encoding string
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 448, in run_cli
    return fn(**kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/RoutesCmd.py", line 68, in show
    return self._gen_output(df)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 108, in _gen_output
    print(df.to_json(orient="records"))
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 2424, in to_json
    index=index,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 78, in to_json
    index=index,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 135, in write
    self.default_handler,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 234, in _write
    default_handler,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/io/json/_json.py", line 155, in _write
    default_handler=default_handler,
OverflowError: Unsupported UTF-8 sequence length when encoding string
------------------------------------------------------------

sqcmds addrCmd, macsCmd, vlanCmd show with invalid hostname filter fail with ArrowInvalid exception

what's extra weird is that on the cli, the address show works just fine and returns an empty dataframe, but the others fail

In [27]: addrCmd.addrCmd(hostname='l').show()

ArrowInvalid Traceback (most recent call last)
in
----> 1 addrCmd.addrCmd(hostname='l').show()

~/suzieq/suzieq/cli/sqcmds/addrCmd.py in show(self, address)
56 columns=self.columns,
57 address=address,
---> 58 datacenter=self.datacenter,
59 )
60 self.ctxt.exec_time = "{:5.4f}s".format(time.time() - now)

~/suzieq/suzieq/sqobjects/basicobj.py in get(self, **kwargs)
123 return(pd.DataFrame(columns=['datacenter', 'hostname']))
124
--> 125 return self.engine_obj.get(**kwargs)
126
127 def summarize(self, **kwargs) -> pd.DataFrame:

~/suzieq/suzieq/engines/pandas/addr.py in get(self, **kwargs)
57
58 df = self.get_valid_df("interfaces", sort_fields, columns=columns,
---> 59 **kwargs)
60
61 # Works with pandas 0.25.0 onwards

~/suzieq/suzieq/engines/pandas/engineobj.py in get_valid_df(self, table, sort_fields, **kwargs)
124 end_time=self.iobj.end_time,
125 sort_fields=sort_fields,
--> 126 **kwargs
127 )
128

~/suzieq/suzieq/engines/pandas/engine.py in get_table_df(self, cfg, schemas, **kwargs)
135 folder, filters=filters or None, validate_schema=False
136 )
--> 137 .read(columns=fields)
138 .to_pandas()
139 .query(query_str)

~/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pyarrow/parquet.py in read(self, columns, use_threads, use_pandas_metadata)
1138 tables.append(table)
1139
-> 1140 all_data = lib.concat_tables(tables)
1141
1142 if use_pandas_metadata:

~/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.concat_tables()

~/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowInvalid: Must pass at least one table

jpiet> macs show datacenter=le
Error: 1
Error running command: Must pass at least one table

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/macsCmd.py", line 62, in show
datacenter=self.datacenter,
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/macs.py", line 30, in get
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 126, in get_valid_df
**kwargs
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engine.py", line 137, in get_table_df
.read(columns=fields)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pyarrow/parquet.py", line 1140, in read
all_data = lib.concat_tables(tables)
File "pyarrow/table.pxi", line 1625, in pyarrow.lib.concat_tables
File "pyarrow/error.pxi", line 78, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Must pass at least one table
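The underlying failure is concatenating an empty list of tables when no parquet file matches the filter. A sketch of the kind of guard the engine could add (hypothetical helper name, shown with pandas rather than pyarrow):

```python
import pandas as pd

def concat_or_empty(frames, columns=("datacenter", "hostname")):
    """Hypothetical guard: return an empty frame when a filter matches
    no files, instead of letting a bare concat raise."""
    frames = list(frames)
    if not frames:
        return pd.DataFrame(columns=list(columns))
    return pd.concat(frames, ignore_index=True)

# A hostname filter that matches nothing -> empty result, not an exception
print(concat_or_empty([]).empty)  # True
```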

we need to remove the ability to filter by engine since we don't allow it

Right now, from the command line, if you try to filter by something other than pandas, it complains because it's not a valid argument; but on the cli, adding engine=foo just uses pandas and returns the pandas data. Our other filters return nothing if you filter by an invalid value.

in routes with view='latest', one of the default routes gets deleted.

with routes view='latest'

   datacenter hostname   vrf                           prefix nexthopIps    oifs weights protocol           source      metric               timestamp
50   dual-bgp   exit01  mgmt         IPv4Network('0.0.0.0/0')         []      []     [1]                            4278198272 2020-01-24 22:28:45.696
51   dual-bgp   exit01  mgmt       IPv4Network('127.0.0.0/8')         []  [mgmt]     [1]   kernel        127.0.0.1          20 2020-01-24 22:28:45.696
52   dual-bgp   exit01  mgmt  IPv4Network('192.168.121.0/24')         []  [eth0]     [1]   kernel  192.168.121.145          20 2020-01-24 22:28:45.696

with views='all'

   datacenter hostname   vrf                           prefix       nexthopIps    oifs weights protocol           source      metric               timestamp
51   dual-bgp   exit01  mgmt         IPv4Network('0.0.0.0/0')  [192.168.121.1]  [eth0]     [1]                                    20 2020-01-24 22:28:45.696
52   dual-bgp   exit01  mgmt         IPv4Network('0.0.0.0/0')               []      []     [1]                            4278198272 2020-01-24 22:28:45.696
53   dual-bgp   exit01  mgmt       IPv4Network('127.0.0.0/8')               []  [mgmt]     [1]   kernel        127.0.0.1          20 2020-01-24 22:28:45.696
54   dual-bgp   exit01  mgmt  IPv4Network('192.168.121.0/24')               []  [eth0]     [1]   kernel  192.168.121.145          20 2020-01-24 22:28:45.696

So we don't see the default route with a metric of 20 when view='latest'. The problem is that, going by the keys, the two rows are duplicates, and one gets dropped. So maybe add 'metric' as a key?
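The suggestion of treating metric as a key can be checked with a tiny pandas sketch (column subset abbreviated for illustration):

```python
import pandas as pd

# Two default routes that differ only in metric, as in the view='all' output above
df = pd.DataFrame([
    {"hostname": "exit01", "vrf": "mgmt", "prefix": "0.0.0.0/0", "metric": 20},
    {"hostname": "exit01", "vrf": "mgmt", "prefix": "0.0.0.0/0", "metric": 4278198272},
])

keys = ["hostname", "vrf", "prefix"]
# With the current keys, the rows look like duplicates and one is dropped
latest = df.drop_duplicates(subset=keys, keep="last")
# Adding 'metric' to the dedup keys keeps both routes
with_metric = df.drop_duplicates(subset=keys + ["metric"])
print(len(latest), len(with_metric))  # 1 2
```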

CLI, columns='*' doesn't work and produces a stack trace

jpiet> arpnd show columns='*'
Error running command: "None of [Index(['*'], dtype='object')] are in the [columns]"
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
    ret = fn(**args_dict)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/ArpndCmd.py", line 69, in show
    return self._gen_output(df)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 113, in _gen_output
    df = df[self.columns]
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3001, in __getitem__
    indexer = self.loc._convert_to_indexer(key, axis=1, raise_missing=True)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexing.py", line 1285, in _convert_to_indexer
    return self._get_listlike_indexer(obj, axis, **kwargs)[1]
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexing.py", line 1092, in _get_listlike_indexer
    keyarr, indexer, o._get_axis_number(axis), raise_missing=raise_missing
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexing.py", line 1177, in _validate_read_indexer
    key=key, axis=self.obj._get_axis_name(axis)
KeyError: "None of [Index(['*'], dtype='object')] are in the [columns]"
------------------------------------------------------------

I think this might be because of _gen_output, but I'm not sure exactly what broke this since we don't have testing around it.
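One possible guard (hypothetical, not the project's actual fix) is to special-case the wildcard before indexing the dataframe:

```python
import pandas as pd

df = pd.DataFrame({"hostname": ["leaf01"], "ipAddress": ["172.16.1.1"]})
columns = ["*"]

# df[["*"]] raises the KeyError seen above; treat '*' as "all columns" instead
out = df if columns == ["*"] else df[columns]
print(list(out.columns))  # ['hostname', 'ipAddress']
```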

in cli, sending a list to column filter fails the whole cli

jpiet> system show columns=[hostname, uptime]
Error: 1
Traceback (most recent call last):
File "suzieq-cli", line 18, in <module>
sys.exit(shell.run())
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 311, in run
return self.start_interactive(args)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 211, in start_interactive
io_loop.run()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 145, in run
self.parse_and_evaluate(text)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 93, in parse_and_evaluate
return self.evaluate_command(cmd, args, input)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 124, in evaluate_command
result = cmd_instance.run_interactive(cmd, args, raw)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 227, in run_interactive
instance, remaining_args = self._create_subcommand_obj(args_dict)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 196, in _create_subcommand_obj
return self._fn(**kwargs), remaining
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/systemCmd.py", line 40, in init
columns=columns,
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 71, in init
self.columns = columns.split()
AttributeError: 'list' object has no attribute 'spl
(suzieq) jpiet@t12:/tmp/pycharm_project_1000/suzieq/suzieq/cli$
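The crash comes from calling `.split()` on a value that is already a list. A minimal defensive sketch (the helper name is illustrative, not the actual suzieq API) that accepts either form:

```python
def normalize_columns(columns):
    """Accept either a space-separated string or a list of column names
    and always return a list, so downstream code never sees a bare str."""
    if isinstance(columns, str):
        return columns.split()
    return list(columns)
```

With this, both `columns="hostname uptime"` and `columns=["hostname", "uptime"]` produce the same list instead of raising AttributeError.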

bad start_time gives a stack trace

jpiet> system show start_time="909090"
Error: 1
Error running command: month must be in 1..12: 909090
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
    ret = self._build_naive(res, default)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
    naive = default.replace(**repl)
ValueError: month must be in 1..12

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "pandas/_libs/tslib.pyx", line 610, in pandas._libs.tslib.array_to_datetime
  File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
    six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
  File "<string>", line 3, in raise_from
dateutil.parser._parser.ParserError: month must be in 1..12: 909090

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pandas/_libs/tslib.pyx", line 617, in pandas._libs.tslib.array_to_datetime
TypeError: invalid string coercion to datetime

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
    ret = self._build_naive(res, default)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
    naive = default.replace(**repl)
ValueError: month must be in 1..12

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
    ret = fn(**args_dict)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/SystemCmd.py", line 63, in show
    datacenter=self.datacenter,
  File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 128, in get
    return self.engine_obj.get(**kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 173, in get
    df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 98, in get_valid_df
    **kwargs
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engine.py", line 73, in get_table_df
    files = get_latest_files(folder, start, end, view)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/utils.py", line 183, in get_latest_files
    ssecs = pd.to_datetime(start, infer_datetime_format=True).timestamp() * 1000
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/util/_decorators.py", line 208, in wrapper
    return func(*args, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 796, in to_datetime
    result = convert_listlike(np.array([arg]), box, format)[0]
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 463, in _convert_listlike_datetimes
    allow_object=True,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1984, in objects_to_datetime64ns
    raise e
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1975, in objects_to_datetime64ns
    require_iso8601=require_iso8601,
  File "pandas/_libs/tslib.pyx", line 465, in pandas._libs.tslib.array_to_datetime
  File "pandas/_libs/tslib.pyx", line 688, in pandas._libs.tslib.array_to_datetime
  File "pandas/_libs/tslib.pyx", line 822, in pandas._libs.tslib.array_to_datetime_object
  File "pandas/_libs/tslib.pyx", line 813, in pandas._libs.tslib.array_to_datetime_object
  File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
    six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
  File "<string>", line 3, in raise_from
dateutil.parser._parser.ParserError: month must be in 1..12: 909090
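One way to keep the parser exception from escaping to the user is to validate the input at the CLI boundary. A sketch, assuming we accept whatever `pd.to_datetime` can parse (the function name is illustrative):

```python
import pandas as pd

def parse_start_time(value):
    """Try to parse a user-supplied start_time; on failure return a
    friendly error message instead of a full stack trace."""
    try:
        return pd.to_datetime(value), None
    except (ValueError, TypeError, OverflowError) as e:
        # dateutil's ParserError subclasses ValueError, so it is caught here
        return None, f"invalid start_time {value!r}: {e}"
```

The CLI could then print the returned message and keep running, rather than dumping a chained traceback.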

this works

jpiet> system show start_time='2020-01-28 20:53:15.392'
Error: 2
  datacenter   hostname model      version  ... status          address          uptime               timestamp
0   dual-bgp     edge01    vm  16.04.6 LTS  ...  alive   192.168.121.28 00:22:34.696000 2020-01-24 22:28:45.696
0   dual-bgp     exit01    vm       3.7.11  ...  alive  192.168.121.145 00:22:26.696000 2020-01-24 22:28:45.696
0   dual-bgp     exit02    vm       3.7.11  ...  alive  192.168.121.199 00:22:24.696000 2020-01-24 22:28:45.696
0   dual-bgp   internet    vm       3.7.11  ...  alive  192.168.121.213 00:22:29.696000 2020-01-24 22:28:45.696
0   dual-bgp     leaf01    vm       3.7.11  ...  alive  192.168.121.247 00:22:27.696000 2020-01-24 22:28:45.696
0   dual-bgp     leaf02    vm       3.7.11  ...  alive  192.168.121.127 00:22:27.696000 2020-01-24 22:28:45.696
0   dual-bgp     leaf03    vm       3.7.11  ...  alive    192.168.121.7 00:22:26.696000 2020-01-24 22:28:45.696
0   dual-bgp     leaf04    vm       3.7.11  ...  alive  192.168.121.184 00:22:29.696000 2020-01-24 22:28:45.696
0   dual-bgp  server101    vm  16.04.6 LTS  ...  alive  192.168.121.225 00:29:01.696000 2020-01-24 22:28:45.696
0   dual-bgp  server102    vm  16.04.6 LTS  ...  alive   192.168.121.90 00:29:10.696000 2020-01-24 22:28:45.696
0   dual-bgp  server103    vm  16.04.6 LTS  ...  alive  192.168.121.240 00:29:15.696000 2020-01-24 22:28:45.696
0   dual-bgp  server104    vm  16.04.6 LTS  ...  alive  192.168.121.243 00:29:23.696000 2020-01-24 22:28:45.696
0   dual-bgp    spine01    vm       3.7.11  ...  alive   192.168.121.47 00:22:26.696000 2020-01-24 22:28:45.696
0   dual-bgp    spine02    vm       3.7.11  ...  alive  192.168.121.155 00:22:22.696000 2020-01-24 22:28:45.696

[14 rows x 10 columns]

changing the cli view filter to 'all' doesn't change the output; it always returns 'latest'

this is true for all commands

[40 rows x 10 columns]
jpiet> system show view=latest datacenter=single
     datacenter         hostname  ...                     uptime               timestamp
71       single           edge01  ...     0 days 00:54:33.048000 2020-01-28 21:43:30.048
164      single           exit01  ...     0 days 00:09:50.048000 2020-01-28 21:43:30.048
255      single           exit02  ...     0 days 00:09:49.048000 2020-01-28 21:43:30.048
360      single         internet  ...     0 days 00:54:30.048000 2020-01-28 21:43:30.048
458      single           leaf01  ...     0 days 00:09:50.048000 2020-01-28 21:43:30.048
563      single           leaf02  ...     0 days 00:09:48.048000 2020-01-28 21:43:30.048
664      single           leaf03  ...     0 days 00:09:49.048000 2020-01-28 21:43:30.048
763      single           leaf04  ...     0 days 00:09:51.048000 2020-01-28 21:43:30.048
826      single        server101  ...     0 days 00:05:38.976000 2020-01-28 21:41:18.976
892      single        server102  ...     0 days 00:00:44.048000 2020-01-28 21:43:30.048
956      single        server103  ...     0 days 00:00:44.048000 2020-01-28 21:43:30.048
1019     single        server104  ...     0 days 00:00:44.048000 2020-01-28 21:43:30.048
1116     single          spine01  ...     0 days 00:09:46.048000 2020-01-28 21:43:30.048
1219     single          spine02  ...     0 days 00:09:48.048000 2020-01-28 21:43:30.048
1        single  192.168.121.105  ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
4        single  192.168.121.118  ... 18289 days 21:10:43.968000 2020-01-28 21:10:43.968
5        single  192.168.121.123  ... 18289 days 20:05:11.808000 2020-01-28 20:05:11.808
7        single  192.168.121.159  ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
8        single  192.168.121.172  ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880
9        single  192.168.121.198  ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
10       single   192.168.121.55  ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880

[21 rows x 10 columns]
jpiet> system show view=all datacenter=single
     datacenter         hostname  ...                     uptime               timestamp
71       single           edge01  ...     0 days 00:54:33.048000 2020-01-28 21:43:30.048
164      single           exit01  ...     0 days 00:09:50.048000 2020-01-28 21:43:30.048
255      single           exit02  ...     0 days 00:09:49.048000 2020-01-28 21:43:30.048
360      single         internet  ...     0 days 00:54:30.048000 2020-01-28 21:43:30.048
458      single           leaf01  ...     0 days 00:09:50.048000 2020-01-28 21:43:30.048
563      single           leaf02  ...     0 days 00:09:48.048000 2020-01-28 21:43:30.048
664      single           leaf03  ...     0 days 00:09:49.048000 2020-01-28 21:43:30.048
763      single           leaf04  ...     0 days 00:09:51.048000 2020-01-28 21:43:30.048
826      single        server101  ...     0 days 00:05:38.976000 2020-01-28 21:41:18.976
892      single        server102  ...     0 days 00:00:44.048000 2020-01-28 21:43:30.048
956      single        server103  ...     0 days 00:00:44.048000 2020-01-28 21:43:30.048
1019     single        server104  ...     0 days 00:00:44.048000 2020-01-28 21:43:30.048
1116     single          spine01  ...     0 days 00:09:46.048000 2020-01-28 21:43:30.048
1219     single          spine02  ...     0 days 00:09:48.048000 2020-01-28 21:43:30.048
1        single  192.168.121.105  ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
4        single  192.168.121.118  ... 18289 days 21:10:43.968000 2020-01-28 21:10:43.968
5        single  192.168.121.123  ... 18289 days 20:05:11.808000 2020-01-28 20:05:11.808
7        single  192.168.121.159  ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
8        single  192.168.121.172  ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880
9        single  192.168.121.198  ... 18289 days 20:53:15.392000 2020-01-28 20:53:15.392
10       single   192.168.121.55  ... 18289 days 20:07:22.880000 2020-01-28 20:07:22.880

[21 rows x 10 columns]
jpiet> 
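For reference, a sketch of what the two views ought to do, assuming the intended semantics are "latest record per device" vs. "every record" (function and column names here follow the output above but are otherwise illustrative):

```python
import pandas as pd

def apply_view(df, view):
    """'latest' keeps only the newest record per (datacenter, hostname);
    'all' returns every record unchanged."""
    if view == "latest":
        return (df.sort_values("timestamp")
                  .groupby(["datacenter", "hostname"], as_index=False)
                  .last())
    return df
```

In the transcript above, both `view=latest` and `view=all` return the same 21 rows, which suggests this distinction is never applied.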

when starting sq-poller there are a lot of errors

some of these are valid, but many are confusing, and I think they will confuse anybody else trying to understand what's going on. The first ones, which are logged under the root logger, are especially confusing.

In this case, I was using cloud-native-datacenter dual topology with bgp un-numbered. I successfully ran the ping.yml ansible-playbook before starting suzieq.

jpiet@a1:~/cloud-native-data-center-networking/topologies/dual-attach/bgp$ more /tmp/suzieq.log
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node internet
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf01
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf01
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf01
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf04
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,638 - root - ERROR - Unable to connect to node server104
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node server101
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node leaf02
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node leaf03
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node server101
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node internet
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node exit02
2020-01-28 22:26:28,639 - root - ERROR - Unable to connect to node exit02
2020-01-28 22:26:28,717 - root - ERROR - Unable to connect to node server103
2020-01-28 22:26:28,717 - root - ERROR - Unable to connect to node server103
2020-01-28 22:26:28,761 - root - ERROR - Unable to connect to node edge01
2020-01-28 22:26:28,773 - root - ERROR - Unable to connect to node edge01
2020-01-28 22:26:28,784 - root - ERROR - Unable to connect to node edge01
2020-01-28 22:26:34,367 - suzieq - ERROR - evpnVni: failed for node leaf02 with 408/
2020-01-28 22:26:34,367 - suzieq - ERROR - evpnVni: failed for node internet with 408/
2020-01-28 22:26:34,367 - suzieq - ERROR - evpnVni: failed for node exit02 with 408/
2020-01-28 22:26:35,743 - suzieq - ERROR - routes: failed for node leaf01 with 408/
2020-01-28 22:26:35,743 - suzieq - ERROR - routes: failed for node leaf02 with 408/
2020-01-28 22:26:35,776 - suzieq - ERROR - routes: failed for node server104 with 408/
2020-01-28 22:26:35,776 - suzieq - ERROR - routes: failed for node server101 with 408/
2020-01-28 22:26:35,809 - suzieq - ERROR - lldp: failed for node server103 with 408/
2020-01-28 22:26:35,815 - suzieq - ERROR - lldp: failed for node leaf02 with 408/
2020-01-28 22:26:35,815 - suzieq - ERROR - lldp: failed for node leaf01 with 408/
2020-01-28 22:26:35,824 - suzieq - ERROR - lldp: failed for node edge01 with 408/
2020-01-28 22:26:36,088 - suzieq - ERROR - topcpu: failed for node leaf04 with 408/
2020-01-28 22:26:36,092 - suzieq - ERROR - topcpu: failed for node internet with 408/
2020-01-28 22:26:36,100 - suzieq - ERROR - topcpu: failed for node leaf01 with 408/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server103 with 1/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server101 with 1/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server104 with 1/
2020-01-28 22:26:36,669 - suzieq - ERROR - ospfNbr: failed for node server102 with 1/
2020-01-28 22:26:36,670 - suzieq - ERROR - system: failed for node edge01 with 408/
2020-01-28 22:26:36,671 - suzieq - ERROR - system: failed for node leaf02 with 408/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:26:36,674 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:26:36,696 - suzieq - ERROR - topmem: failed for node leaf02 with 408/
2020-01-28 22:26:36,696 - suzieq - ERROR - topmem: failed for node server101 with 408/
2020-01-28 22:26:36,704 - suzieq - ERROR - topmem: failed for node leaf03 with 408/
2020-01-28 22:26:36,706 - suzieq - ERROR - topmem: failed for node edge01 with 408/
2020-01-28 22:26:36,706 - suzieq - ERROR - topmem: failed for node server103 with 408/
2020-01-28 22:26:36,706 - suzieq - ERROR - topmem: failed for node exit02 with 408/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server102 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server101 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server104 with 1/
2020-01-28 22:26:56,535 - suzieq - ERROR - ospfNbr: failed for node server103 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server101 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server104 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server103 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfNbr: failed for node server102 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
2020-01-28 22:27:16,146 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:27:16,147 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:27:16,147 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server101 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server102 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server103 with 1/
2020-01-28 22:27:35,474 - suzieq - ERROR - ospfIf: failed for node server104 with 1/
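One possible way to cut the noise would be a logging filter that suppresses exact repeats, so each "Unable to connect to node X" appears once. This is only a sketch of the idea, not the poller's actual logging setup:

```python
import logging

class DedupFilter(logging.Filter):
    """Drop log records whose rendered message has already been seen,
    so repeated connection errors for the same node are reported once."""
    def __init__(self):
        super().__init__()
        self.seen = set()

    def filter(self, record):
        msg = record.getMessage()
        if msg in self.seen:
            return False  # suppress the repeat
        self.seen.add(msg)
        return True
```

Attached via `logger.addFilter(DedupFilter())`, this would collapse the repeated leaf01/leaf02 lines above into one each, though a production version would likely want to reset periodically so recurring failures still surface.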

cli, mlag show fails when columns=hostname

if there is no mlag data, it's just fine, but when there is data it fails:

jpiet> mlag show
  datacenter hostname           systemId  ... mlagSinglePortsCnt mlagErrorPortsCnt               timestamp
0   dual-bgp   leaf01  44:39:39:ff:40:94  ...                  0                 0 2020-01-24 22:28:45.696
1   dual-bgp   leaf02  44:39:39:ff:40:94  ...                  0                 0 2020-01-24 22:28:45.696
2   dual-bgp   leaf03  44:39:39:ff:40:95  ...                  0                 0 2020-01-24 22:28:45.696
3   dual-bgp   leaf04  44:39:39:ff:40:95  ...                  0                 0 2020-01-24 22:28:45.696

[4 rows x 11 columns]
jpiet> mlag show columns=hostname
Error running command: name 'state' is not defined
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 188, in resolve
    return self.resolvers[key]
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/__init__.py", line 916, in __getitem__
    return self.__missing__(key)  # support subclasses that define __missing__
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/__init__.py", line 908, in __missing__
    raise KeyError(key)
KeyError: 'state'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 199, in resolve
    return self.temps[key]
KeyError: 'state'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
    ret = fn(**args_dict)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/MlagCmd.py", line 51, in show
    print(df.query('state != "disabled"'))
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3199, in query
    res = self.eval(expr, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3315, in eval
    return _eval(expr, inplace=inplace, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/eval.py", line 322, in eval
    parsed_expr = Expr(expr, engine=engine, parser=parser, env=env, truediv=truediv)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 830, in __init__
    self.terms = self.parse()
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 847, in parse
    return self._visitor.visit(self.expr)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
    return visitor(node, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 447, in visit_Module
    return self.visit(expr, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
    return visitor(node, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 450, in visit_Expr
    return self.visit(node.value, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
    return visitor(node, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 747, in visit_Compare
    return self.visit(binop)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
    return visitor(node, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 563, in visit_BinOp
    op, op_class, left, right = self._maybe_transform_eq_ne(node)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 482, in _maybe_transform_eq_ne
    left = self.visit(node.left, side="left")
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
    return visitor(node, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 577, in visit_Name
    return self.term_type(node.id, self.env, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 78, in __init__
    self._value = self._resolve_name()
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 95, in _resolve_name
    res = self.env.resolve(self.local_name, is_local=self.is_local)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 201, in resolve
    raise compu.ops.UndefinedVariableError(key, is_local)
pandas.core.computation.ops.UndefinedVariableError: name 'state' is not defined
------------------------------------------------------------
jpiet>                                    

after system timestamp changes, need to make sure that we write an entry for all tables.

We only want to serve data that is valid, so we don't want to show data that was polled before the last polled system time.

(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli tables show
Logging to /tmp/suzieq-4oyoeyqw
         table              first_time             latest_time  intervals  latest rows  all rows  datacenters  devices
0        arpnd 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          9           74       286            1       14
1          bgp 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          6           32       318            1        9
2           fs 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          9          229      1504            1       14
3   ifCounters 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          9          138      5758            1       14
4   interfaces 2020-03-09 16:54:20.416 2020-03-09 17:18:22.208          7          138       496            1       14
5         lldp 2020-03-09 16:54:20.416 2020-03-09 17:18:22.208          8           44       142            1       10
6         macs 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          8           39       141            1        5
7         mlag 2020-03-09 16:54:20.416 2020-03-09 17:07:26.848          3            4        10            1        4
8       routes 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          8           24      1376            1       14
9       system 2020-03-09 16:54:20.416 2020-03-09 17:07:26.848          2           14        28            1       14
10        time 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          9           14        69            1       14
11      topcpu 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          9           14      2034            1       14
12      topmem 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          9            9      1477            1        9
13        vlan 2020-03-09 16:54:20.416 2020-03-09 17:07:26.848          3           16        56            1        4
14       TOTAL 2020-03-09 16:54:20.416 2020-03-09 17:20:33.280          9          789     13695            1       14

look at topmem: tables show says there are 9 latest rows,
but the topmem show command says 6:

(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli topmem show
Logging to /tmp/suzieq-luzfqijc
  datacenter  hostname               timestamp
1   dual-bgp    exit02 2020-03-09 17:14:00.064
2   dual-bgp  internet 2020-03-09 17:16:11.136
4   dual-bgp    leaf02 2020-03-09 17:14:00.064
5   dual-bgp    leaf03 2020-03-09 17:09:37.920
6   dual-bgp    leaf04 2020-03-09 17:11:48.992
7   dual-bgp   spine01 2020-03-09 17:09:37.920

the problem is that in table show, I'm directly calling get_table_df, not get_valid_df, and so it's not doing the merge with the system table.
here is the system table

(suzieq) jpiet@t14:/tmp/pycharm_project_1000/suzieq$ python3 /tmp/pycharm_project_1000/suzieq/suzieq/cli/suzieq-cli system show
Logging to /tmp/suzieq-uyzotg20
   datacenter   hostname model      version   vendor architecture status          address          uptime               timestamp
1    dual-bgp     edge01    vm  16.04.6 LTS   Ubuntu       x86-64  alive  192.168.121.165 00:17:30.848000 2020-03-09 17:07:26.848
3    dual-bgp     exit01    vm       3.7.12  Cumulus       x86-64  alive  192.168.121.108 00:17:25.848000 2020-03-09 17:07:26.848
5    dual-bgp     exit02    vm       3.7.12  Cumulus       x86-64  alive  192.168.121.236 00:17:28.848000 2020-03-09 17:07:26.848
7    dual-bgp   internet    vm       3.7.12  Cumulus       x86-64  alive  192.168.121.237 00:17:30.848000 2020-03-09 17:07:26.848
9    dual-bgp     leaf01    vm       3.7.12  Cumulus       x86-64  alive   192.168.121.86 00:17:28.848000 2020-03-09 17:07:26.848
11   dual-bgp     leaf02    vm       3.7.12  Cumulus       x86-64  alive   192.168.121.22 00:17:25.848000 2020-03-09 17:07:26.848
13   dual-bgp     leaf03    vm       3.7.12  Cumulus       x86-64  alive  192.168.121.132 00:17:29.848000 2020-03-09 17:07:26.848
15   dual-bgp     leaf04    vm       3.7.12  Cumulus       x86-64  alive   192.168.121.53 00:17:31.848000 2020-03-09 17:07:26.848
17   dual-bgp  server101    vm  16.04.6 LTS   Ubuntu       x86-64  alive  192.168.121.235 00:22:13.848000 2020-03-09 17:07:26.848
19   dual-bgp  server102    vm  16.04.6 LTS   Ubuntu       x86-64  alive  192.168.121.239 00:22:54.848000 2020-03-09 17:07:26.848
21   dual-bgp  server103    vm  16.04.6 LTS   Ubuntu       x86-64  alive   192.168.121.83 00:22:23.848000 2020-03-09 17:07:26.848
23   dual-bgp  server104    vm  16.04.6 LTS   Ubuntu       x86-64  alive   192.168.121.54 00:22:14.848000 2020-03-09 17:07:26.848
25   dual-bgp    spine01    vm       3.7.12  Cumulus       x86-64  alive  192.168.121.242 00:17:30.848000 2020-03-09 17:07:26.848
27   dual-bgp    spine02    vm       3.7.12  Cumulus       x86-64  alive  192.168.121.105 00:17:29.848000 2020-03-09 17:07:26.848

so there are three entries in the latest topmem that don't get shown because their timestamp is older than the latest system table timestamp, which is enforced by this line in get_valid_df:
.query("timestamp_x >= timestamp_y")
if I comment out that one line, then I get

Logging to /tmp/suzieq-gtrmtk3f
  datacenter  hostname               timestamp
0   dual-bgp    exit01 2020-03-09 16:54:20.416
1   dual-bgp    exit02 2020-03-09 17:14:00.064
2   dual-bgp  internet 2020-03-09 17:16:11.136
3   dual-bgp    leaf01 2020-03-09 16:54:20.416
4   dual-bgp    leaf02 2020-03-09 17:14:00.064
5   dual-bgp    leaf03 2020-03-09 17:09:37.920
6   dual-bgp    leaf04 2020-03-09 17:11:48.992
7   dual-bgp   spine01 2020-03-09 17:09:37.920
8   dual-bgp   spine02 2020-03-09 16:54:20.416

in other words, if you look at exit01, it didn't show up in the first topmem show I sent. That is because exit01 has a timestamp of 16:54 in topmem but has a timestamp of 17:07 in system.

I looked more, and the reason that the system table has an updated entry again is because I restarted the poller, not that the data had changed. I don't know why topmem wasn't updated after I restarted the poller.
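The merge-and-filter that get_valid_df does can be sketched like this, assuming it joins each table against the system table on (datacenter, hostname) and keeps only rows at least as new as the system record (a simplified illustration, not the actual implementation):

```python
import pandas as pd

def merge_with_system(df, system_df):
    """Join a table against the system table and drop rows whose poll
    time is older than the system table's timestamp for that device."""
    merged = df.merge(
        system_df[["datacenter", "hostname", "timestamp"]],
        on=["datacenter", "hostname"],
        suffixes=("_x", "_y"),  # table timestamp -> _x, system -> _y
    )
    return merged.query("timestamp_x >= timestamp_y")
```

This reproduces the exit01 case: a topmem row at 16:54 against a system row at 17:07 fails `timestamp_x >= timestamp_y` and is silently dropped.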

cli, entering start-time in context produces error

I don't understand the time filtering; I don't know what I'm supposed to put in.

jpiet> set start-time=1570006401
Error: 1
jpiet> topcpu show
Error running command: year 1570006401 is out of range: 1570006401

Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
    ret = self._build_naive(res, default)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
    naive = default.replace(**repl)
ValueError: year 1570006401 is out of range

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "pandas/_libs/tslib.pyx", line 610, in pandas._libs.tslib.array_to_datetime
  File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
    six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
  File "<string>", line 3, in raise_from
dateutil.parser._parser.ParserError: year 1570006401 is out of range: 1570006401

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pandas/_libs/tslib.pyx", line 617, in pandas._libs.tslib.array_to_datetime
TypeError: invalid string coercion to datetime

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse
    ret = self._build_naive(res, default)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive
    naive = default.replace(**repl)
ValueError: year 1570006401 is out of range

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
    ret = fn(**args_dict)
  File "/home/jpiet/suzieq/suzieq/cli/sqcmds/topcpuCmd.py", line 54, in show
    hostname=self.hostname, columns=self.columns, datacenter=self.datacenter
  File "/home/jpiet/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
    return self.engine_obj.get(**kwargs)
  File "/home/jpiet/suzieq/suzieq/engines/pandas/engineobj.py", line 201, in get
    df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
  File "/home/jpiet/suzieq/suzieq/engines/pandas/engineobj.py", line 126, in get_valid_df
    **kwargs
  File "/home/jpiet/suzieq/suzieq/engines/pandas/engine.py", line 74, in get_table_df
    files = get_latest_files(folder, start, end)
  File "/home/jpiet/suzieq/suzieq/utils.py", line 131, in get_latest_files
    ssecs = pd.to_datetime(start, infer_datetime_format=True).timestamp() * 1000
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/util/_decorators.py", line 208, in wrapper
    return func(*args, **kwargs)
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 796, in to_datetime
result = convert_listlike(np.array([arg]), box, format)[0]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 463, in _convert_listlike_datetimes
allow_object=True,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1984, in objects_to_datetime64ns
raise e
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1975, in objects_to_datetime64ns
require_iso8601=require_iso8601,
File "pandas/_libs/tslib.pyx", line 465, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslib.pyx", line 688, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslib.pyx", line 822, in pandas._libs.tslib.array_to_datetime_object
File "pandas/_libs/tslib.pyx", line 813, in pandas._libs.tslib.array_to_datetime_object
File "pandas/_libs/tslibs/parsing.pyx", line 225, in pandas._libs.tslibs.parsing.parse_datetime_string
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 657, in parse
six.raise_from(ParserError(e.args[0] + ": %s", timestr), e)
File "", line 3, in raise_from
dateutil.parser._parser.ParserError: year 1570006401 is out of range: 1570006401
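The value being set is a Unix epoch in seconds, but dateutil tries to parse "1570006401" as a free-form date and treats it as a year. pandas only handles epoch values when told the unit explicitly. A minimal sketch of how the CLI could accept both forms (parse_start_time is a hypothetical helper, not Suzieq code):

```python
import pandas as pd

def parse_start_time(value: str) -> pd.Timestamp:
    """Accept either epoch seconds or a normal datetime string."""
    # An all-digit string is almost certainly epoch seconds, not a year,
    # so convert it with an explicit unit instead of letting dateutil guess.
    if value.isdigit():
        return pd.to_datetime(int(value), unit="s")
    return pd.to_datetime(value)

print(parse_start_time("1570006401"))  # 2019-10-02 08:53:21
```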

cli, AddrCmd fails when filtering with columns=hostname

jpiet> address show columns=hostname
Error running command: 'datacenter'
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
    ret = fn(**args_dict)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/AddrCmd.py", line 58, in show
    datacenter=self.datacenter,
  File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
    return self.engine_obj.get(**kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/addr.py", line 53, in get
    **kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 173, in get_valid_df
    table_df.merge(sys_df, on=["datacenter", "hostname"])
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 7349, in merge
    validate=validate,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line
81, in merge
    validate=validate,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line
626, in __init__
    ) = self._get_merge_keys()
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line
988, in _get_merge_keys
    left_keys.append(left._get_label_or_level_values(lk))
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 1774,
in _get_label_or_level_values
    raise KeyError(key)
KeyError: 'datacenter'
------------------------------------------------------------
jpiet>           

We need to capture OSPF convergence times

I'm not sure how best to do this, but the last convergence is in sh ip ospf on FRR/Quagga
if there are multiple events inside our polling interval we'll lose them.

When running test, get a warning for ExtenionArray

I assume this happens at other times too.
sts/integration/test_sqcmds.py::test_commands[routesCmd-commands7-size7]
/tmp/pycharm_project_1000/suzieq/tests/integration/test_sqcmds.py:72: FutureWarning: 'ExtensionArray._formatting_values' is deprecated. Specify 'ExtensionArray._formatter' instead.
return getattr(instance, cmd)()

I believe it's just the tables that have IP addresses, which must use ExtensionArray.

sqcommand filtering engine using context produces attribute error

This is not what happens in the cli, so I'm not sure if this is a bug or if I just don't understand how to change the engine. I think part of the problem is that engine is sometimes used for the string name of the engine and sometimes for the engine object itself, so we should probably go back and rename all the string uses to engine_name.

____________________ test_context_engine_filtering[addrCmd] ____________________

setup_nubia = None, svc = 'addrCmd'

@pytest.mark.fast
#@pytest.mark.xfail(reason='bug # ', raises=AttributeError)
@pytest.mark.parametrize('svc', good_svcs)
def test_context_engine_filtering(setup_nubia, svc):
  _test_context_filtering(svc, {'engine': 'pandas'})

tests/integration/test_sqcmds.py:182:


tests/integration/test_sqcmds.py:199: in _test_context_filtering
s2 = _test_command(svc, 'show', None, None)
tests/integration/test_sqcmds.py:62: in _test_command
s = execute_cmd(svc, cmd, arg, filter)
tests/integration/test_sqcmds.py:211: in execute_cmd
instance = instance()
suzieq/cli/sqcmds/addrCmd.py:39: in init
self.addrobj = addrObj(context=self.ctxt)
suzieq/sqobjects/addr.py:23: in init
datacenter, columns, context=context, table='addr')


self = <suzieq.sqobjects.addr.addrObj object at 0x7fba50196950>
engine_name = '', hostname = [], start_time = '', end_time = '', view = 'latest'
datacenter = [], columns = ['default']
context = <suzieq.cli.sq_nubia_context.NubiaSuzieqContext object at 0x7fba501f9bd0>
table = 'addr'

def __init__(self, engine_name: str = '', hostname: typing.List[str] = [],
             start_time: str = '', end_time: str = '',
             view: str = 'latest', datacenter: typing.List[str] = [],
             columns: typing.List[str] = ['default'],
             context=None, table: str = '') -> None:

    if context is None:
        self.ctxt = SQContext(engine_name)
    else:
        self.ctxt = context
        if not self.ctxt:
            self.ctxt = SQContext(engine_name)

    self._cfg = self.ctxt.cfg
    self._schemas = self.ctxt.schemas
    self._table = table
    self._sort_fields = []
    self._cat_fields = []

    if not datacenter and self.ctxt.datacenter:
        self.datacenter = self.ctxt.datacenter
    else:
        self.datacenter = datacenter
    if not hostname and self.ctxt.hostname:
        self.hostname = self.ctxt.hostname
    else:
        self.hostname = hostname

    if not start_time and self.ctxt.start_time:
        self.start_time = self.ctxt.start_time
    else:
        self.start_time = start_time

    if not end_time and self.ctxt.end_time:
        self.end_time = self.ctxt.end_time
    else:
        self.end_time = end_time

    self.view = view
    self.columns = columns

    if engine_name:
        self.engine = get_sqengine(engine_name)
    else:
        self.engine = self.ctxt.engine

    if table:
      self.engine_obj = self.engine.get_object(self._table, self)

E AttributeError: 'str' object has no attribute 'get_object'
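The AttributeError suggests self.ctxt.engine still holds the string name, so .get_object() is called on a str. A self-contained sketch of the renaming fix suggested above (PandasEngine and the registry are stubs, and the names are hypothetical): engine_name always holds the string, engine always holds a resolved object.

```python
class PandasEngine:
    """Stub standing in for the real pandas engine."""
    def get_object(self, table, parent):
        return f"pandas object for {table}"

_ENGINES = {"pandas": PandasEngine}

def get_sqengine(engine_name: str):
    # Resolve the string name to an engine instance exactly once.
    return _ENGINES[engine_name]()

class SQContext:
    def __init__(self, engine_name: str = "pandas"):
        self.engine_name = engine_name            # always a str
        self.engine = get_sqengine(engine_name)   # always an engine object

ctxt = SQContext()
print(ctxt.engine.get_object("addr", None))
```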

cli, every service's show with a columns filter says it requires datacenter

piet> interface show columns=hostname
Error running command: 'datacenter'

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/interfaceCmd.py", line 64, in show
type=type.split(),
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 201, in get
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 173, in get_valid_df
table_df.merge(sys_df, on=["datacenter", "hostname"])
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 7349, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 81, in merge
validate=validate,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 626, in init
) = self._get_merge_keys()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 988, in _get_merge_keys
left_keys.append(left._get_label_or_level_values(lk))
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 1774, in _get_label_or_level_values
raise KeyError(key)
KeyError: 'datacenter'
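This and the AddrCmd failure are the same crash: once the table is narrowed to only the user's columns, the datacenter/hostname merge keys are gone, so the merge with the system table raises KeyError. A minimal sketch of a guard (merge_with_sys and the column names are illustrative, taken from the traceback): keep the keys internally and trim only at the end.

```python
import pandas as pd

KEYS = ["datacenter", "hostname"]

def merge_with_sys(table_df, sys_df, user_columns):
    # Keep the merge keys even if the user didn't ask for them...
    read_cols = list(dict.fromkeys(KEYS + user_columns))
    merged = table_df[read_cols].merge(sys_df, on=KEYS)
    # ...and trim to the requested columns only after the merge.
    return merged[user_columns]

table_df = pd.DataFrame({"datacenter": ["dual-bgp"], "hostname": ["leaf01"],
                         "ifname": ["swp1"]})
sys_df = pd.DataFrame({"datacenter": ["dual-bgp"], "hostname": ["leaf01"]})
print(list(merge_with_sys(table_df, sys_df, ["hostname"]).columns))
```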

most sqcmds when context filtering on valid datacenter return zero data

systemCmd works.
addrCmd, macCmd, routesCmd, and vlanCmd have the same errors as when given a bad hostname:

FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[addrCmd] - pyarrow.lib.ArrowInvalid: Must pass at least one table
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[arpndCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[bgpCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[interfaceCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[lldpCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[macsCmd] - pyarrow.lib.ArrowInvalid: Must pass at least one table
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[mlagCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[routesCmd] - pandas.core.computation.ops.UndefinedVariableError: name 'prefix' is not defined
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[topcpuCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[topmemCmd] - assert 0 > 0
FAILED tests/integration/test_sqcmds.py::test_context_datacenter_filtering[vlanCmd] - pyarrow.lib.ArrowInvalid: Must pass at least one table

CLI, address show with columns='*' doesn't work correctly

Rather than showing all the columns, it acts as if it was given bad column names.

piet> address show columns=*
                        ipAddressList               timestamp         ifname   hostname datacenter
0                 [192.168.121.28/24] 2020-01-24 22:28:45.696           eth0     edge01   dual-bgp
1                  [169.254.254.2/30] 2020-01-24 22:28:45.696         eth1.2     edge01   dual-bgp
2                 [169.254.254.10/30] 2020-01-24 22:28:45.696         eth1.4     edge01   dual-bgp
4                  [169.254.253.2/30] 2020-01-24 22:28:45.696         eth2.2     edge01   dual-bgp
5                 [169.254.253.10/30] 2020-01-24 22:28:45.696         eth2.4     edge01   dual-bgp
7                     [10.0.0.100/32] 2020-01-24 22:28:45.696             lo     edge01   dual-bgp
8                [192.168.121.145/24] 2020-01-24 22:28:45.696           eth0     exit01   dual-bgp
9                     [10.0.0.101/32] 2020-01-24 22:28:45.696   internet-vrf     exit01   dual-bgp
10                    [10.0.0.101/32] 2020-01-24 22:28:45.696             lo     exit01   dual-bgp
16                 [169.254.254.1/30] 2020-01-24 22:28:45.696         swp5.2     exit01   dual-bgp
17                 [169.254.254.9/30] 2020-01-24 22:28:45.696         swp5.4     exit01   dual-bgp
19                 [169.254.127.1/31] 2020-01-24 22:28:45.696           swp6     exit01   dual-bgp
20               [192.168.121.199/24] 2020-01-24 22:28:45.696           eth0     exit02   dual-bgp
21                    [10.0.0.102/32] 2020-01-24 22:28:45.696   internet-vrf     exit02   dual-bgp
22                    [10.0.0.102/32] 2020-01-24 22:28:45.696             lo     exit02   dual-bgp
28                 [169.254.253.1/30] 2020-01-24 22:28:45.696         swp5.2     exit02   dual-bgp
29                 [169.254.253.9/30] 2020-01-24 22:28:45.696         swp5.4     exit02   dual-bgp
31                 [169.254.127.3/31] 2020-01-24 22:28:45.696           swp6     exit02   dual-bgp
32               [192.168.121.213/24] 2020-01-24 22:28:45.696           eth0   internet   dual-bgp
33   [10.0.0.253/32, 172.16.253.1/32] 2020-01-24 22:28:45.696             lo   internet   dual-bgp
34                 [169.254.127.0/31] 2020-01-24 22:28:45.696           swp1   internet   dual-bgp
35                 [169.254.127.2/31] 2020-01-24 22:28:45.696           swp2   internet   dual-bgp
39               [192.168.121.247/24] 2020-01-24 22:28:45.696           eth0     leaf01   dual-bgp
40                     [10.0.0.11/32] 2020-01-24 22:28:45.696             lo     leaf01   dual-bgp
42                   [169.254.1.1/30] 2020-01-24 22:28:45.696  peerlink.4094     leaf01   dual-bgp
50                    [172.16.1.1/24] 2020-01-24 22:28:45.696         vlan13     leaf01   dual-bgp
51                    [172.16.2.1/24] 2020-01-24 22:28:45.696         vlan24     leaf01   dual-bgp
55               [192.168.121.127/24] 2020-01-24 22:28:45.696           eth0     leaf02   dual-bgp
56                     [10.0.0.12/32] 2020-01-24 22:28:45.696             lo     leaf02   dual-bgp
58                   [169.254.1.2/30] 2020-01-24 22:28:45.696  peerlink.4094     leaf02   dual-bgp
66                    [172.16.1.1/24] 2020-01-24 22:28:45.696         vlan13     leaf02   dual-bgp
67                    [172.16.2.1/24] 2020-01-24 22:28:45.696         vlan24     leaf02   dual-bgp
71                 [192.168.121.7/24] 2020-01-24 22:28:45.696           eth0     leaf03   dual-bgp
72                     [10.0.0.13/32] 2020-01-24 22:28:45.696             lo     leaf03   dual-bgp
74                   [169.254.1.1/30] 2020-01-24 22:28:45.696  peerlink.4094     leaf03   dual-bgp
82                    [172.16.3.1/24] 2020-01-24 22:28:45.696         vlan13     leaf03   dual-bgp
83                    [172.16.4.1/24] 2020-01-24 22:28:45.696         vlan24     leaf03   dual-bgp
87               [192.168.121.184/24] 2020-01-24 22:28:45.696           eth0     leaf04   dual-bgp
88                     [10.0.0.14/32] 2020-01-24 22:28:45.696             lo     leaf04   dual-bgp
90                   [169.254.1.2/30] 2020-01-24 22:28:45.696  peerlink.4094     leaf04   dual-bgp
98                    [172.16.3.1/24] 2020-01-24 22:28:45.696         vlan13     leaf04   dual-bgp
99                    [172.16.4.1/24] 2020-01-24 22:28:45.696         vlan24     leaf04   dual-bgp
100                 [172.16.1.101/24] 2020-01-24 22:28:45.696          bond0  server101   dual-bgp
101              [192.168.121.225/24] 2020-01-24 22:28:45.696           eth0  server101   dual-bgp
105                 [172.16.2.102/24] 2020-01-24 22:28:45.696          bond0  server102   dual-bgp
106               [192.168.121.90/24] 2020-01-24 22:28:45.696           eth0  server102   dual-bgp
110                 [172.16.3.103/24] 2020-01-24 22:28:45.696          bond0  server103   dual-bgp
111              [192.168.121.240/24] 2020-01-24 22:28:45.696           eth0  server103   dual-bgp
115                 [172.16.4.104/24] 2020-01-24 22:28:45.696          bond0  server104   dual-bgp
116              [192.168.121.243/24] 2020-01-24 22:28:45.696           eth0  server104   dual-bgp
120               [192.168.121.47/24] 2020-01-24 22:28:45.696           eth0    spine01   dual-bgp
121                    [10.0.0.21/32] 2020-01-24 22:28:45.696             lo    spine01   dual-bgp
129              [192.168.121.155/24] 2020-01-24 22:28:45.696           eth0    spine02   dual-bgp
130                    [10.0.0.22/32] 2020-01-24 22:28:45.696             lo    spine02   dual-bgp

The issue is that engines/pandas/addr.py doesn't use get_display_field, which is the code that correctly deals with '*'; instead it has its own code, starting at line 39:

        columns = kwargs.get("columns", [])
        if columns:
            del kwargs["columns"]
        else:
            columns = ['default']
        if columns != ["default"]:
            if addrcol not in columns:
                columns.insert(-1, addrcol)
        else:
            columns = ["datacenter", "hostname", "ifname", "state", addrcol,
                       "timestamp"]

cli, selecting another engine just uses pandas anyway

we should remove the option to filter by engine

jpiet> system show engine=foop
datacenter hostname model version vendor architecture status address uptime timestamp
5 dual 192.168.121.113 dead 18289 days 20:31:24.672000 2020-01-28 20:31:24.672
6 dual 192.168.121.119 dead 18289 days 21:08:32.896000 2020-01-28 21:08:32.896
10 dual 192.168.121.146 dead 18289 days 20:31:24.672000 2020-01-28 20:31:24.672
13 dual 192.168.121.171 dead 18289 days 21:08:32.896000 2020-01-28 21:08:32.896
15 dual 192.168.121.175 dead 18289 days 21:26:01.472000 2020-01-28 21:26:01.472
17 dual 192.168.121.199 dead 18289 days 21:26:01.472000 2020-01-28 21:26:01.472
20 dual 192.168.121.250 dead

tests require valid suzieq file

We need to make it so that tests don't require a specific suzieq config file, so that we can run tests on arbitrary hosts and a local suzieq config file doesn't break the tests.

some cli commands with unique and view=all have a KeyError: 'datacenter'

At least route and ospf fail; system works fine.

jpiet> route unique columns=timestamp view=all
Error running command: 'datacenter'
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
    ret = fn(**args_dict)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/command.py", line 148, in unique
    df = self.show(**kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/RouteCmd.py", line 55, in show
    datacenter=self.datacenter,
  File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 100, in get
    return self.engine_obj.get(**kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/routes.py", line 10, in get
    df = super().get(**kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 165, in get
    df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
  File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 137, in get_valid_df
    table_df.merge(sys_df, on=["datacenter", "hostname"])
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 7349, in merge
    validate=validate,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line
81, in merge
    validate=validate,
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line
626, in __init__
    ) = self._get_merge_keys()
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line
988, in _get_merge_keys
    left_keys.append(left._get_label_or_level_values(lk))
  File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/generic.py", line 1774,
in _get_label_or_level_values
    raise KeyError(key)
KeyError: 'datacenter'

cli/sqcmds evpnVni show fails with AttributeError: module 'suzieq.engines.pandas.evpnVni' has no attribute 'EvpnvniObj'

jpiet> evpnVni show
Traceback (most recent call last):
File "suzieq-cli", line 18, in
sys.exit(shell.run())
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 311, in run
return self.start_interactive(args)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/nubia.py", line 211, in start_interactive
io_loop.run()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 145, in run
self.parse_and_evaluate(text)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 93, in parse_and_evaluate
return self.evaluate_command(cmd, args, input)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/interactive.py", line 124, in evaluate_command
result = cmd_instance.run_interactive(cmd, args, raw)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 227, in run_interactive
instance, remaining_args = self._create_subcommand_obj(args_dict)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 196, in _create_subcommand_obj
return self._fn(**kwargs), remaining
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/EvpnVniCmd.py", line 40, in init
self.evpnVniobj = evpnVniObj(context=self.ctxt)
File "/home/jpiet/suzieq/suzieq/sqobjects/evpnVni.py", line 23, in init
datacenter, columns, context=context, table='evpnVni')
File "/home/jpiet/suzieq/suzieq/sqobjects/basicobj.py", line 87, in init
self.engine_obj = self.engine.get_object(self._table, self)
File "/home/jpiet/suzieq/suzieq/engines/pandas/engine.py", line 191, in get_object
eobj = getattr(module, "{}Obj".format(objname.title()))
AttributeError: module 'suzieq.engines.pandas.evpnVni' has no attribute 'EvpnvniObj'
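The crash looks like a str.title() artifact: title() lowercases every non-leading letter in each word, so the lookup builds "EvpnvniObj" while the class is presumably named EvpnVniObj. A small demonstration of the difference:

```python
objname = "evpnVni"

# str.title() flattens the camelCase tail:
print(objname.title())                   # Evpnvni

# Capitalizing only the first character keeps the tail intact:
print(objname[0].upper() + objname[1:])  # EvpnVni
```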

SqPandasEngine.get_table_df() has broken code

I don't know how to get the code to execute, but this, starting at line 114, is clearly broken: it tries to use files before it is assigned, and there's no colon after files.

                jobs = [
                    exe.submit(self.read_pq_file, f, fields, query_str)
                    for f in files
                ]

It's in a section guarded by if use_get_files, so we must never be using use_get_files. I don't know what that flag does or how to test it.
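A self-contained sketch of the pattern the snippet appears to intend (read_file stands in for self.read_pq_file; the file names and fields are made up): files must be assigned before the submit loop runs over it.

```python
from concurrent.futures import ThreadPoolExecutor

def read_file(path, fields, query_str):
    # Stand-in for read_pq_file: just echo its arguments back.
    return (path, fields, query_str)

files = ["a.parquet", "b.parquet"]          # assign first...
with ThreadPoolExecutor(max_workers=2) as exe:
    jobs = [exe.submit(read_file, f, ["hostname"], "") for f in files]
    results = [j.result() for j in jobs]    # ...then collect the results

print([r[0] for r in results])
```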

ospf show has a column lastChangeTime that has raw integers rather than times, so it's not useful

jpiet> ospf show
   datacenter hostname      vrf ifname state      peerIP  lastChangeTime  numChanges               timestamp
0   dual-ospf   exit01  default   swp1  full   10.0.0.21   1584469000622           5 2020-03-17 18:29:30.240
1   dual-ospf   exit01  default   swp2  full   10.0.0.22   1584469000622           5 2020-03-17 18:29:30.240
2   dual-ospf   exit02  default   swp1  full   10.0.0.21   1584469006599           5 2020-03-17 18:29:30.240
3   dual-ospf   exit02  default   swp2  full   10.0.0.22   1584469001599           4 2020-03-17 18:29:30.240
4   dual-ospf   leaf01  default   swp1  full   10.0.0.21   1584469005600           5 2020-03-17 18:29:30.240
5   dual-ospf   leaf01  default   swp2  full   10.0.0.22   1584469001600           5 2020-03-17 18:29:30.240
6   dual-ospf   leaf02  default   swp1  full   10.0.0.21   1584469003631           5 2020-03-17 18:31:41.312
7   dual-ospf   leaf02  default   swp2  full   10.0.0.22   1584468998631           5 2020-03-17 18:31:41.312
8   dual-ospf   leaf03  default   swp1  full   10.0.0.21   1584469003634           5 2020-03-17 18:31:41.312
9   dual-ospf   leaf03  default   swp2  full   10.0.0.22   1584468997634           5 2020-03-17 18:31:41.312
10  dual-ospf   leaf04  default   swp1  full   10.0.0.21   1584469005623           5 2020-03-17 18:29:30.240
11  dual-ospf   leaf04  default   swp2  full   10.0.0.22   1584469000623           5 2020-03-17 18:29:30.240
12  dual-ospf  spine01  default   swp1  full   10.0.0.11   1584469005259           5 2020-03-17 18:27:19.168
13  dual-ospf  spine01  default   swp2  full   10.0.0.12   1584469006259           5 2020-03-17 18:27:19.168
14  dual-ospf  spine01  default   swp3  full   10.0.0.13   1584469006259           5 2020-03-17 18:27:19.168
15  dual-ospf  spine01  default   swp4  full   10.0.0.14   1584469006259           4 2020-03-17 18:27:19.168
16  dual-ospf  spine01  default   swp5  full  10.0.0.102   1584469006259           5 2020-03-17 18:27:19.168
17  dual-ospf  spine01  default   swp6  full  10.0.0.101   1584469005259           5 2020-03-17 18:27:19.168
18  dual-ospf  spine02  default   swp1  full   10.0.0.11   1584469001620           5 2020-03-17 18:29:30.240
19  dual-ospf  spine02  default   swp2  full   10.0.0.12   1584469001620           5 2020-03-17 18:29:30.240
20  dual-ospf  spine02  default   swp3  full   10.0.0.13   1584469001620           5 2020-03-17 18:29:30.240
21  dual-ospf  spine02  default   swp4  full   10.0.0.14   1584469001620           5 2020-03-17 18:29:30.240
22  dual-ospf  spine02  default   swp5  full  10.0.0.102   1584469001620           5 2020-03-17 18:29:30.240
23  dual-ospf  spine02  default   swp6  full  10.0.0.101   1584469001620           5 2020-03-17 18:29:30.240
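Judging by their magnitude, the lastChangeTime values look like epoch milliseconds (an assumption, not confirmed from the poller code), in which case pandas can render them as timestamps directly:

```python
import pandas as pd

# Two sample values from the output above, treated as epoch milliseconds:
df = pd.DataFrame({"lastChangeTime": [1584469000622, 1584469006599]})
df["lastChangeTime"] = pd.to_datetime(df["lastChangeTime"], unit="ms")
print(df["lastChangeTime"].iloc[0])  # 2020-03-17 18:16:40.622000
```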

in cli, routes show with an invalid hostname fails with an UndefinedVariableError exception

jpiet> routes show hostname=jk
Error running command: name 'prefix' is not defined

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 188, in resolve
return self.resolvers[key]
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/init.py", line 916, in getitem
return self.missing(key) # support subclasses that define missing
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/collections/init.py", line 908, in missing
raise KeyError(key)
KeyError: 'prefix'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 199, in resolve
return self.temps[key]
KeyError: 'prefix'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/routesCmd.py", line 60, in show
datacenter=self.datacenter,
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/routes.py", line 19, in get
df = super().get(**kwargs).query('prefix != ""')
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3199, in query
res = self.eval(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 3315, in eval
return _eval(expr, inplace=inplace, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/eval.py", line 322, in eval
parsed_expr = Expr(expr, engine=engine, parser=parser, env=env, truediv=truediv)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 830, in init
self.terms = self.parse()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 847, in parse
return self._visitor.visit(self.expr)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 447, in visit_Module
return self.visit(expr, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 450, in visit_Expr
return self.visit(node.value, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 747, in visit_Compare
return self.visit(binop)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 563, in visit_BinOp
op, op_class, left, right = self._maybe_transform_eq_ne(node)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 482, in _maybe_transform_eq_ne
left = self.visit(node.left, side="left")
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 441, in visit
return visitor(node, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 577, in visit_Name
return self.term_type(node.id, self.env, **kwargs)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 78, in init
self._value = self._resolve_name()
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/ops.py", line 95, in _resolve_name
res = self.env.resolve(self.local_name, is_local=self.is_local)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/computation/scope.py", line 201, in resolve
raise compu.ops.UndefinedVariableError(key, is_local)
pandas.core.computation.ops.UndefinedVariableError: name 'prefix' is not defined
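The failure mode above can be reproduced and guarded against in a few lines: `DataFrame.query()` raises `UndefinedVariableError` whenever the expression references a column the frame does not have. A minimal sketch of a defensive wrapper (the column and expression here are illustrative, not Suzieq's actual filtering code):

```python
# pandas.query() raises UndefinedVariableError if the expression names a
# column that is absent from the DataFrame. Guarding on df.columns first
# avoids the crash; names here are illustrative only.
import pandas as pd


def safe_query(df: pd.DataFrame, expr: str, column: str) -> pd.DataFrame:
    """Run df.query(expr) only if `column` exists; else return an empty frame."""
    if column not in df.columns:
        return df.iloc[0:0]  # empty frame, same columns
    return df.query(expr)


df = pd.DataFrame({"hostname": ["edge01"], "vrf": ["default"]})
# df.query("prefix == '0.0.0.0/0'") would raise UndefinedVariableError here;
# the wrapper instead returns an empty result.
result = safe_query(df, "prefix == '0.0.0.0/0'", "prefix")
```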

sqcmds, routesCmd set context start_time gives very different column output

In [68]: ctx.start_time=''

In [69]: s1 = routesCmd.routesCmd().show()
/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/bin/ipython:1: FutureWarning: 'ExtensionArray._formatting_values' is deprecated. Specify 'ExtensionArray._formatter' instead.
#!/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/bin/python3
    datacenter hostname      vrf                           prefix                       nexthopIps  ... weights    protocol           source metric               timestamp
0     dual-bgp   edge01  default        IPv4Network('0.0.0.0/0')                  [192.168.121.1]  ...     [1]                                  20 2020-01-24 22:28:45.696
1     dual-bgp   edge01  default    IPv4Network('10.0.0.101/32')   [169.254.254.1, 169.254.254.9]  ...  [1, 1]         186       10.0.0.100     20 2020-01-24 22:28:45.696
2     dual-bgp   edge01  default    IPv4Network('10.0.0.102/32')   [169.254.253.1, 169.254.253.9]  ...  [1, 1]         186       10.0.0.100     20 2020-01-24 22:28:45.696
3     dual-bgp   edge01  default     IPv4Network('10.0.0.11/32')   [169.254.253.1, 169.254.254.1]  ...  [1, 1]         186       10.0.0.100     20 2020-01-24 22:28:45.696
4     dual-bgp   edge01  default     IPv4Network('10.0.0.12/32')   [169.254.253.1, 169.254.254.1]  ...  [1, 1]         186       10.0.0.100     20 2020-01-24 22:28:45.696
..         ...      ...      ...                             ...                              ...  ...     ...         ...              ...    ...                     ...
231   dual-bgp  spine02  default    IPv4Network('172.16.3.0/24')       [169.254.0.1, 169.254.0.1]  ...  [1, 1]         bgp                      20 2020-01-24 22:28:45.696
232   dual-bgp  spine02  default    IPv4Network('172.16.4.0/24')       [169.254.0.1, 169.254.0.1]  ...  [1, 1]         bgp                      20 2020-01-24 22:28:45.696
233   dual-bgp  spine02     mgmt        IPv4Network('0.0.0.0/0')                               []  ...     [1]  4278198272                         2020-01-24 22:28:45.696
234   dual-bgp  spine02     mgmt      IPv4Network('127.0.0.0/8')                               []  ...     [1]      kernel        127.0.0.1     20 2020-01-24 22:28:45.696
235   dual-bgp  spine02     mgmt  IPv4Network('192.168.121.0/24')                               []  ...     [1]      kernel  192.168.121.155     20 2020-01-24 22:28:45.696
[236 rows x 11 columns]

In [70]: ctx.start_time=1570006401

In [71]: s2 = topcpuCmd.topcpuCmd().show()
datacenter hostname timestamp
0 dual-bgp edge01 2020-01-24 22:28:45.696
1 dual-bgp edge01 2020-01-24 22:28:45.696
2 dual-bgp edge01 2020-01-24 22:28:45.696
3 dual-bgp edge01 2020-01-24 22:28:45.696
4 dual-bgp edge01 2020-01-24 22:28:45.696
.. ... ... ...
122 dual-bgp spine02 2020-01-24 22:30:56.768
123 dual-bgp spine02 2020-01-24 22:30:56.768
124 dual-bgp spine02 2020-01-24 22:30:56.768
125 dual-bgp spine02 2020-01-24 22:30:56.768
126 dual-bgp spine02 2020-01-24 22:30:56.768

[127 rows x 3 columns]

the counts in the various commands don't seem to be counts of the same thing

are we getting different data from table show than from other commands?
routes as an example:

         table              first_time             latest_time  intervals  latest rows  all rows  datacenters  devices
0        arpnd 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         84           74      2010            1       14
1          bgp 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         66           32      3102            1        9
2           fs 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         85          229      7459            1       14
3   ifCounters 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         87          138     60895            1       14
4   interfaces 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         59          138      5331            1       14
5         lldp 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         54           44      1242            1       10
6         macs 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992         50           39       636            1        5
7         mlag 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         31            4       154            1        4
8       routes 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         78           24      9062            1       14
9       system 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992         25           14       225            1       14
10        time 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         66           14       523            1       14
11      topcpu 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992         85           14     20831            1       14
12      topmem 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992         86            9     15373            1        9
13        vlan 2020-03-04 20:19:27.872 2020-03-04 23:25:08.992         16           16       220            1        4
14       TOTAL 2020-03-04 20:19:27.872 2020-03-04 23:27:20.064         87          789    127063            1       14
jpiet> routes summarize
Error: 2
       datacenter hostname      vrf     prefix nexthopIps      oifs weights protocol source metric                   timestamp
count         229      229      229        229        229       229     229      229    229    229                         229
unique          1       12        3         27        108       114      96        4     35      2                           3
top      dual-bgp   exit02  default  0.0.0.0/0         []  [swp5.4]     [1]      bgp            20  2020-03-04 23:25:08.992000
freq          229       36      171         22         56        26     134      151    169    219                         206
first           -        -        -          -          -         -       -        -      -      -  2020-03-04 23:22:57.920000
last            -        -        -          -          -         -       -        -      -      -  2020-03-04 23:27:20.064000
jpiet> routes summarize view=all
       datacenter hostname      vrf     prefix nexthopIps    oifs weights protocol source metric                   timestamp
count       56251    56251    56251      56251      56251   56251   56251    56251  56251  56251                       56251
unique          1       14        3         27       4031    4035    4017        4     39      2                          78
top      dual-bgp   edge01  default  0.0.0.0/0         []  [eth0]     [1]      bgp            20  2020-03-04 23:09:51.488000
freq        56251    17200    51709       6439      13441    8728   32465    20642  31479  55542                        4120
first           -        -        -          -          -       -       -        -      -      -  2020-03-04 20:19:27.872000
last            -        -        -          -          -       -       -        -      -      -  2020-03-04 23:27:20.064000

are some of these using filtering by active and some not?
also, does "latest" mean the same thing in each case?
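One hedged sketch of how the counts could diverge, assuming records carry an `active` flag and a `timestamp` column (illustrative names, not a confirmed Suzieq schema): a "latest" view would keep only the newest record per key and then filter to active rows, while `view=all` would count every stored record.

```python
# Illustrative only: if "latest" means newest-record-per-key filtered to
# active rows, its count is naturally far smaller than a count over all
# stored records. Column names are assumptions for this sketch.
import pandas as pd


def latest_active(df: pd.DataFrame, keys: list) -> pd.DataFrame:
    """Keep the newest record per key, then drop inactive ones."""
    newest = df.sort_values("timestamp").drop_duplicates(keys, keep="last")
    return newest[newest["active"]]


df = pd.DataFrame({
    "hostname": ["edge01", "edge01", "edge01"],
    "prefix": ["10.0.0.0/24", "10.0.0.0/24", "10.1.0.0/24"],
    "active": [True, True, False],
    "timestamp": [1, 2, 2],
})
# A view=all count would report all 3 records; the "latest" view keeps
# only the single active, most recent row per (hostname, prefix).
```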

when missing ~/.suzieq/suzieq-cfg.yml get confusing error message

There should be a better message telling you that you are missing the config file.

you get
(suzieq) jpiet@t5:~/suzieq$ python3 sq-poller.py --f -H dual
Traceback (most recent call last):
File "sq-poller.py", line 168, in <module>
logger.setLevel(cfg.get("logging-level", "WARNING").upper())
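One way to get the friendlier failure mode, as a sketch: check for the config file up front and exit with a clear message before any code dereferences `cfg`. The path and wording are illustrative; the real poller's config handling may differ.

```python
# Sketch of a clearer failure when the config file is absent: exit with
# an explicit message instead of crashing later on cfg.get(). The
# default path mirrors the issue; the message text is illustrative.
import os
import sys


def load_cfg_or_exit(path: str = "~/.suzieq/suzieq-cfg.yml") -> str:
    """Return the expanded config path, or exit with a clear error."""
    expanded = os.path.expanduser(path)
    if not os.path.exists(expanded):
        sys.exit(f"ERROR: config file {path} not found; "
                 "create it before running the poller")
    return expanded
```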

cli, if there is no data in a table and do a table show, get a FileNotFoundError

In [33]: ospfCmd.ospfCmd().show()

FileNotFoundError Traceback (most recent call last)
in
----> 1 ospfCmd.ospfCmd().show()

~/suzieq/suzieq/cli/sqcmds/ospfCmd.py in show(self, ifname, vrf, state, type)
70 columns=self.columns,
71 datacenter=self.datacenter,
---> 72 type=type,
73 )
74 self.ctxt.exec_time = "{:5.4f}s".format(time.time() - now)

~/suzieq/suzieq/sqobjects/ospf.py in get(self, **kwargs)
32 raise AttributeError('No analysis engine specified')
33
---> 34 return self.engine_obj.get(**kwargs)
35
36 def summarize(self, **kwargs):

~/suzieq/suzieq/engines/pandas/ospf.py in get(self, **kwargs)
30 del kwargs["type"]
31
---> 32 df = self.get_valid_df(table, sort_fields, **kwargs)
33 return df
34

~/suzieq/suzieq/engines/pandas/engineobj.py in get_valid_df(self, table, sort_fields, **kwargs)
124 end_time=self.iobj.end_time,
125 sort_fields=sort_fields,
--> 126 **kwargs
127 )
128

~/suzieq/suzieq/engines/pandas/engine.py in get_table_df(self, cfg, schemas, **kwargs)
59 folder += "/datacenter={}/".format(v)
60
---> 61 fcnt = self.get_filecnt(folder)
62
63 use_get_files = (

~/suzieq/suzieq/engines/pandas/engine.py in get_filecnt(self, path)
176 def get_filecnt(self, path="."):
177 total = 0
--> 178 for entry in os.scandir(path):
179 if entry.is_file():
180 total += 1

FileNotFoundError: [Errno 2] No such file or directory: './tests/data/basic_dual_bgp/parquet-out/ospfNbr'
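The traceback bottoms out in `get_filecnt()`, which calls `os.scandir()` on a directory that doesn't exist when a table has no data yet. A minimal defensive rewrite, as a sketch (the real function may also need to recurse into the partitioned parquet subdirectories):

```python
# Sketch: return 0 for a missing directory instead of letting
# os.scandir() raise FileNotFoundError. Mirrors the file-counting shown
# in the traceback, but is an illustrative rewrite, not the actual fix.
import os


def get_filecnt(path: str = ".") -> int:
    """Count files directly under path; 0 if the directory is absent."""
    if not os.path.isdir(path):
        return 0
    total = 0
    for entry in os.scandir(path):
        if entry.is_file():
            total += 1
    return total
```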

cli arpnd, address, and interface fail

jpiet> interface show
Error running command: local variable 'add_flds' referenced before assignment

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/InterfaceCmd.py", line 67, in show
type=type.split(),
File "/tmp/pycharm_project_1000/suzieq/suzieq/sqobjects/basicobj.py", line 125, in get
return self.engine_obj.get(**kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 201, in get
df = self.get_valid_df(self.iobj._table, sort_fields, **kwargs)
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engineobj.py", line 126, in get_valid_df
**kwargs
File "/tmp/pycharm_project_1000/suzieq/suzieq/engines/pandas/engine.py", line 173, in get_table_df
add_flds.add('active')
UnboundLocalError: local variable 'add_flds' referenced before assignment

I kind of understand why this is failing and might even be able to fix it.

However, I don't understand how the tests are passing, or why any other commands work.
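The crash pattern is the classic `UnboundLocalError`: a name assigned only inside a conditional branch is later used unconditionally, so any code path that skips the branch blows up. A hypothetical reduction of the `add_flds` logic (not the actual `engine.py` fix):

```python
# Sketch: initialize the set before any branch, so the later
# unconditional add_flds.add('active') is always safe. The branch
# condition here is invented for illustration.
def build_fields(columns):
    add_flds = set()          # initialize before any conditional branch
    if columns == ["default"]:
        add_flds.update({"datacenter", "hostname"})
    add_flds.add("active")    # safe even when no branch ran
    return add_flds
```

This would also explain why only some commands fail: commands whose arguments happen to take the assigning branch never hit the unbound name.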

in cli, system show columns=hostname fails with KeyError: 'bootupTimestamp'

jpiet> system show columns=hostname
Error running command: 'bootupTimestamp'

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'bootupTimestamp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/nubia/internal/cmdbase.py", line 374, in run_interactive
ret = fn(**args_dict)
File "/tmp/pycharm_project_1000/suzieq/suzieq/cli/sqcmds/systemCmd.py", line 63, in show
pd.to_datetime(df['bootupTimestamp']*1000,
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/frame.py", line 2995, in __getitem__
indexer = self.columns.get_loc(key)
File "/home/jpiet/.local/share/virtualenvs/suzieq-zI29E9ll/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'bootupTimestamp'

jpiet>
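The traceback points at an unconditional `df['bootupTimestamp']` access in `systemCmd.show()`: when the user restricts the output with `columns=hostname`, that column is absent. A minimal guard, as a sketch (the real command does more than this):

```python
# Sketch: only convert bootupTimestamp when the column survived the
# user's columns= filter. Illustrative of the guard, not the real fix.
import pandas as pd


def add_uptime(df: pd.DataFrame) -> pd.DataFrame:
    """Convert epoch-seconds bootupTimestamp to datetime, if present."""
    if "bootupTimestamp" in df.columns:
        df = df.assign(
            bootupTimestamp=pd.to_datetime(df["bootupTimestamp"] * 1000,
                                           unit="ms"))
    return df
```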
