brimdata / brimcap

Convert pcap files into richly-typed ZNG summary logs (Zeek, Suricata, and more)

License: BSD 3-Clause "New" or "Revised" License

Languages: Go 98.23%, Makefile 1.77%
Topics: brim-desktop, pcap, suricata, zeek

brimcap's Introduction


Image of brimcap analyze

A command-line utility for converting pcaps into the flexible, searchable Zed data formats as seen in the Zui desktop app and Zed commands.

Quickstart

  1. Install brimcap
  2. Have a pcap handy (or download a sample pcap)
  3. Run brimcap analyze
    brimcap analyze sample.pcap > sample.zng
    
  4. Explore with zq
    zq -z 'zeek:=count(has(_path)), alerts:=count(has(event_type=="alert"))' sample.zng
    
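brimcap analyze can also emit other Zed formats; for example, the -Z flag (used in several issues later on this page to produce ZSON output) gives a minimal variation of step 3:

brimcap analyze -Z sample.pcap > sample.zson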

Usage with Zui desktop app

brimcap is bundled with the Zui desktop app. Whenever a pcap is imported into Zui, the app takes the following steps:

  1. brimcap analyze is invoked to generate logs from the pcap.

  2. The logs are imported into a newly-created pool in Zui's Zed lake.

  3. brimcap index is invoked to populate a local pcap index that allows for quick extraction of flows via Zui's Packets button, which the app performs by invoking brimcap search.

If Zui is running, you can perform these same operations from your shell, which may prove useful for automation or batch import of many pcaps to the same pool. The Custom Brimcap Config article shows example command lines along with other advanced configuration options. When used with Zui, you should typically use the brimcap binary found in Zui's zdeps directory (as described in the article), since this version should be API-compatible with that version of Zui and its Zed backend.
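For example, here is a rough sketch of the "a la carte" equivalent of a Zui pcap import. The analyze/zapi pipeline appears verbatim in an issue later on this page; the brimcap index flags shown are illustrative, so check brimcap index -h for the exact options:

brimcap analyze sample.pcap | zapi load -p sample -
brimcap index -root "$HOME/brimcap-root" -r sample.pcap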

Brimcap Queries

Included in this repo is a queries.json file with some helpful queries for getting started and exploring Zeek and Suricata analyzed data within the Zui app.

To import these queries:

  1. Download the queries.json file to your local system
  2. In Zui, click the + menu in the upper-left corner of the app window and select Import Queries...
  3. Open the downloaded file in the file picker utility

The loaded queries will appear in the "QUERIES" tab of Zui's left sidebar as a new folder named Brimcap.

Standalone Install

If you're working with brimcap separate from the Zui app, prebuilt packages can be found in the releases section of the brimcap GitHub repo.

Unzip the artifact and add the brimcap directory to your $PATH environment variable.

export PATH="$PATH:/Path/To/brimcap"

Included Analyzers

brimcap includes special builds of Zeek and Suricata that were created by the core development team at Brim Data. These builds are preconfigured to provide a good experience out-of-the-box for generating logs from pcaps using brimcap. If you wish to use your own customized Zeek/Suricata or introduce other pcap analysis tools, this is described in the Custom Brimcap Config article.

Build From Source

To build from source, Go 1.21 or later is required.

To build the brimcap package, clone this repo and run make build:

git clone https://github.com/brimdata/brimcap
cd brimcap
make build

make build will download the prebuilt/preconfigured Zeek and Suricata artifacts, compile the brimcap binary, and package everything into build/dist.

The executables will be located here:

./build/dist/brimcap
./build/dist/zeek/zeekrunner
./build/dist/suricata/suricatarunner

Having a problem?

Please browse the wiki to review common problems and helpful tips before opening an issue.

Join the Community

Join our Public Slack workspace for announcements, Q&A, and to trade tips!

brimcap's People

Contributors

brim-bot, github-actions[bot], jameskerr, mason-fish, mattnibs, nwt, philrz


brimcap's Issues

Phase out "brimcap load"

For reasons of supportability, we've reached consensus as a team that brimcap load as a single command should be phased out. Users will instead be guided to use the "a la carte" zapi / brimcap analyze / brimcap index combination as necessary for loading their data into a Pool.

I think the phase-out work has two parts:

  1. Update any Brim/Brimcap docs to replace brimcap load references with their "a la carte" equivalents
  2. Make the actual Brimcap code changes to remove the command

Since we have beta users who have already been starting to use and enjoy brimcap load in its current form, I'll make sure I communicate the change clearly when posting new test builds on Slack.

Change brimcap load space to pool

With the Zed lake, Spaces are now named Pools. Change the brimcap load space flag from -s to -p, and update the help documentation to reference pools instead of spaces.
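Based on command lines that appear elsewhere on this page, the change would look roughly like this (illustrative only):

brimcap load -s wrccdc -root ~/brimcap-root ~/wrccdc.pcap   # current Space flag
brimcap load -p wrccdc -root ~/brimcap-root ~/wrccdc.pcap   # proposed Pool flag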

support compressed pcap ingest

This issue was originally created before Brimcap existed and the specific draft proposal below for implementation is probably no longer relevant. However, since the Brim+Brimcap workflow still includes "dragging a pcap into the app", the question of what could be done if the pcap is compressed still seems relevant.


Add support for compressed pcaps to the pcap ingest API in zqd. Since we need an uncompressed pcap to support slicing, the implementation should create an uncompressed form of the pcap that is stored in the space's data directory, along with the pcap index. The uncompressed pcap should be deleted when the space itself is deleted.

For Brim users with large compressed pcaps, this will increase storage consumption, but I think this is where the slice-to-pcap feature is most useful.
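In the meantime, a user can decompress on the fly and stream the result into brimcap analyze, which accepts a pcap on stdin (as shown in a later issue on this page); a minimal sketch:

$ gunzip -c wrccdc.pcap.gz | brimcap analyze - > wrccdc.zng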

Improve error/workflow for attempts to re-load the same pcap

If I've previously done a brimcap load of a pcap with a particular filename and attempt it again, the error I see as a user is:

$ brimcap load -s wrccdc -root ~/brimcap-root ~/wrccdc.pcap 
error writing brimcap root: symlink /Users/phil/wrccdc.pcap /Users/phil/brimcap-root/wrccdc.pcap: file exists

The fact that the Brimcap root consists of symbolic links is an implementation detail that the user probably shouldn't need to be aware of in order to understand what went wrong here. A few things come to mind:

  1. The error message could be improved to something like "a capture file named wrccdc.pcap already exists in the brimcap root"
  2. Perhaps we need a command (brimcap rm?) that unlinks files from within the Brimcap root
  3. So the user can review what's in the root (e.g. to see if the conflicting wrccdc.pcap is the same one they're trying to load, or just to review what's in there before deleting stuff) perhaps they also need a way to list information about what's already in the root (brimcap ls?)
  4. There's the potential for a user to load different files of the same name from different locations in the filesystem. @mattnibs and I talked about this offline, and he acknowledged that at some point we could make the contents of the Brimcap root unique (e.g. filenames based on hashes) with metadata that points to the original filesystem locations. That said, at the moment I find it mildly convenient to be able to ls -l the contents of the Brimcap root and see what's in there, so this would further make the case for something like the brimcap ls proposed above, so the user can see a processed summary of what's in the root.
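In the meantime, since the root entries are plain symbolic links, a user can inspect and clear a conflict by hand, e.g.:

$ ls -l ~/brimcap-root
$ rm ~/brimcap-root/wrccdc.pcap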

Space migration failure on Windows

Repro is with nightly build https://storage.googleapis.com/brimsec-releases/brim/v0.25.0-prerelease-806860c4.0/windows/Brim-Setup-0.25.0-prerelease-806860c4.0.exe (so, Brim commit 806860c talking to Zed commit dc82704 and using Brimcap tagged v0.0.3).

Having witnessed the Space migration tool working ok on macOS, I happened to try it on Windows and it failed.

C:\> %LOCALAPPDATA%\Programs\Brim\resources\app.asar.unpacked\zdeps\brimcap migrate -zqd=%APPDATA%\Brim\data\spaces -root=%APPDATA%\Brim\data\brimcap-root
{"msg":"migrating 1 spaces"}
{"space":"wrccdc.2018-03-23.010014000000000.pcap","msg":"migration starting"}
{"space":"wrccdc.2018-03-23.010014000000000.pcap","msg":"migrating pcap"}
{"space":"wrccdc.2018-03-23.010014000000000.pcap","msg":"migrating data"}
{"space":"wrccdc.2018-03-23.010014000000000.pcap","msg":"data migration completed"}
{"space":"wrccdc.2018-03-23.010014000000000.pcap","error":"remove C:\\Users\\Phil\\AppData\\Roaming\\Brim\\data\\spaces\\sp_1sAO7VuaVeXTWyqmeQuZz993BNa\\all.zng: The process cannot access the file because it is being used by another process."}
{"type":"error","error":"remove C:\\Users\\Phil\\AppData\\Roaming\\Brim\\data\\spaces\\sp_1sAO7VuaVeXTWyqmeQuZz993BNa\\all.zng: The process cannot access the file because it is being used by another process."}

Allow "brimcap load" of log files in non-auto-detected formats

Right now brimcap load seems dependent on the log outputs of analyzers being auto-detect-able. This works fine for the main cases of Zeek and Suricata since their default outputs are Zeek TSV and NDJSON, respectively. However, while working on #72 I happened to try working with a NetFlow analyzer's CSV output. Using these configs with Brimcap v0.0.3, we can see how it fails because we don't currently auto-detect CSV.

$ cat nfdump-wrapper-csv.sh 
#!/bin/bash
TMPFILE=$(mktemp)
cat - > "$TMPFILE"
nfpcapd -r "$TMPFILE" -l .
rm "$TMPFILE"
for file in nfcapd.*
do
  nfdump -r $file -o csv | ghead -n -3 > ${file}.csv
done

$ cat nfdump-csv.yml
analyzers:
  - cmd: nfdump-wrapper-csv.sh
    globs: ["*.csv"]
    
$ brimcap load -root "$HOME/Library/Application Support/Brim/data/brimcap-root" -config nfdump-csv.yml -p testpool ~/pcap/wrccdc.pcap 
100.0% 500.0MB/500.0MB records=0 
Post "http://localhost:9867/pool/1sArPVhV4gBbiH5B1E8NytXJ34G/log": format detection error
	tzng: line 1: bad format
	zeek: line 1: bad types/fields definition in zeek header
	zjson: line 1: invalid character 's' in literal true (expecting 'r')
	zson: identifier "ts" must be enum and requires decorator
	zng: zng type ID out of range
	parquet: auto-detection not supported
	zst: auto-detection not supported

If we wanted to support this, I can think of two ways to proceed.

  1. Allow the specification of input format in brimcap load (similar to the zq -i options)
  2. Add auto-detection support for all formats at the Zed layer (brimdata/zed#2517)
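To illustrate option 1, a hypothetical invocation might look like the following; the -i flag does not exist in brimcap load today and is shown purely as a sketch:

$ brimcap load -i csv -root "$HOME/Library/Application Support/Brim/data/brimcap-root" -config nfdump-csv.yml -p testpool ~/pcap/wrccdc.pcap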

brimcap load: index pcap

brimcap load should produce a pcap index file for the analyzed pcap. The index name will be the name of the pcap file plus .idx. The brimcap load command requires a -root arg that specifies where the pcap index file should live. The root should also include a symlink to the indexed pcap file.

Update Brimcap-bundled Zeek to new release

As of GA Brimcap tagged v1.2.0, the bundled Zeek is based on Zeek v3.2.1, which is quite old at this point. As of August 2022, the current Zeek release is v5.0.0. To become current, we'd need to update our port at https://github.com/brimdata/zeek to take advantage of the new functionality, get the benefit of current bug/security fixes, etc.

Apply a packet filter upstream of log generation

A community user inquired:

How do I use bpf filter for a lot of ips in the brim?
I have a whitelist ip and want to filter those all ip logs
The feature can reduce the sum of logs of big pcap files.

Indeed, as the user says, a BPF filter is one common way to do this kind of filtering in other tools. In Wireshark, for instance, there's an optional BPF "capture filter" that specifies traffic to be included/excluded when capturing traffic off a live interface.
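Until such an option exists, one interim workaround is to apply the BPF filter with tcpdump and stream the filtered packets into brimcap analyze, which reads a pcap on stdin; a minimal sketch, with an illustrative filter expression:

$ tcpdump -r big.pcap -w - 'net 10.0.0.0/8' | brimcap analyze - > filtered.zng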

brimcap launch command

Add a brimcap launch command that takes as arguments (1) the name of the pcap file to load and (2) the 5-tuple describing the connection the user is searching for. brimcap launch will look in the brimcap root to find the pcap file.

brimcap analyze: unexpected pretty print behavior

The stats output for brimcap has two different modes: 1) status line and 2) JSON.

Currently the logic for choosing between them is unintuitive:

  • If input is a terminal and brimcap analyze output is going to a terminal, brimcap runs in status line mode (it should actually have no stats output)
  • If input is a terminal and brimcap analyze output is not a terminal (e.g. a file), brimcap runs in JSON mode (it should run in status line mode)

Improve "brimcap analyze" error messaging & debug

Since part of what we're trying to achieve with Brimcap is to make it easier for users to bring their own custom Zeek/Suricata, it'll be important that we surface details about problems in such tools when they're invoked via brimcap analyze during pcap processing.

To cite a specific example, here's what a macOS user sees at the moment with brimcap commit 688a7f5 and Homebrew-installed Zeek v4.0.0 and Suricata v6.0.2 when analyzing https://archive.wrccdc.org/pcaps/2018/wrccdc.2018-03-23.010014000000000.pcap.gz:

$ brimcap analyze -z ../wrccdc.pcap > /dev/null
1 warnings occurred while parsing log data:
    /var/folders/yn/jbkxxkpd4vg142pc3_bd_krc0000gn/T/brimcap-2408807547/eve.json: duplicate field subject: x1

There are known issues at play here, specifically that Suricata v6 is generating bad output (#13) and Zed could stand to provide more detail about where the "duplicate field" problem arose in the data (brimdata/zed#2452). But putting those aside, some things that would have improved the user experience if it had arisen in the wild:

  1. Provide a way for the user to see the stdout and stderr of the Zeek/Suricata processes invoked via brimcap analyze, so the user knows whether those pieces ran cleanly. In this case they did, but users might encounter any number of speed bumps if they're iteratively tweaking configs or adding customizations to these systems.
  2. Make it clear whether the failure happened downstream of the invoked processes, as in this case where it happened in the Zed shaping step.
  3. Have the ability to leave behind the logs generated by Zeek/Suricata so the user can examine them. In this case, the user could have used zq or other JSON processing tools on the raw EVE JSON output to isolate which lines had the duplicate field.
  4. Show the command lines that brimcap analyze used when invoking these processes, so the user can easily repeat this in their debug steps.

In summary, I expect our brimcap analyze support statement will be something like "if it doesn't 'just work', repeat the individual steps performed by brimcap analyze and see where it goes wrong". In this regard, having a debug mode that's as transparent as possible will be very helpful.
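For reference, the individual analyzer steps can already be repeated by hand against the bundled runners (the same invocations appear verbatim in a later issue on this page), with the generated logs landing in the current directory:

$ cat sample.pcap | ./build/dist/zeek/zeekrunner
$ cat sample.pcap | ./build/dist/suricata/suricatarunner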

Show/label separate errors & output for each analyzer process & Zed processing

In the verification notes at #16 (comment) there's a step where my brimcap analyze encounters an error:

$ brimcap analyze -Z -config simple.yml ssl.pcapng > ssl-simple.zson
{"type":"error","error":"duplicate field subject"}

A user that's new to Brimcap may not be able to deduce that this particular error happened during the Zed processing of the logs generated by the analyzer processes, as opposed to being an error from the analyzer processes themselves (i.e. Zeek or Suricata). Even if only in some kind of debug mode, it would seem ideal to have a way to observe the output from each of these separately.

I would suggest that this include a way to show the output even for steps that completed without error, since there may be situations where an analyzer process exits cleanly but produces bad log output, and the output of the analyzer process might be the only way to know what went wrong. To cite a specific example, when running Suricata by hand, one may observe this "Flow emergency mode" message:

$ suricata -r ~/pcap/wrccdc.pcap 
19/4/2021 -- 16:39:29 - <Notice> - This is Suricata version 6.0.2 RELEASE running in USER mode
19/4/2021 -- 16:39:37 - <Notice> - all 13 packet processing threads, 4 management threads initialized, engine started.
19/4/2021 -- 16:39:46 - <Notice> - Flow emergency mode entered...
19/4/2021 -- 16:39:49 - <Notice> - Signal Received.  Stopping engine.
19/4/2021 -- 16:39:53 - <Notice> - Pcap-file module read 1 files, 1650919 packets, 473586644 bytes

$ echo $?
0

We know from experience that when Suricata enters this mode during packet processing, it can generate excessive numbers of garbage events. We take steps in the config for the Suricata we bundle with Brimcap to prevent the "Flow emergency mode" from being entered, but a user that's running their own custom Suricata config may need to know this is happening so they can adjust their config.

Hang (deadlock?) without error message during analyzer failure

As part of my work on #64, I've created the following wrapper script to invoke a custom Suricata install, which uses jq to work around brimdata/zed#2523:

$ cat suricata-wrapper.sh 
#!/bin/bash
suricata -r /dev/stdin
cat eve.json | jq -c . > deduped-eve.json

I invoke this via Brimcap config YAML:

$ cat suricata.yml 
analyzers:
  - cmd: suricata-wrapper.sh
    globs: ["deduped*.json"]
    shaper: |
      type alert = {
        timestamp: time,
        event_type: bstring,
        src_ip: ip,
        src_port: port=(uint16),
        dest_ip: ip,
        dest_port: port=(uint16),
        vlan: [uint16],
        proto: bstring,
        app_proto: bstring,
        alert: {
          severity: uint16,
          signature: bstring,
          category: bstring,
          action: bstring,
          signature_id: uint64,
          gid: uint64,
          rev: uint64,
          metadata: {
            signature_severity: [bstring],
            former_category: [bstring],
            attack_target: [bstring],
            deployment: [bstring],
            affected_product: [bstring],
            created_at: [bstring],
            performance_impact: [bstring],
            updated_at: [bstring],
            malware_family: [bstring],
            tag: [bstring]
          }
        },
        flow_id: uint64,
        pcap_cnt: uint64,
        tx_id: uint64,
        icmp_code: uint64,
        icmp_type: uint64,
        tunnel: {
          src_ip: ip,
          src_port: port=(uint16),
          dest_ip: ip,
          dest_port: port=(uint16),
          proto: bstring,
          depth: uint64
        },
        community_id: bstring
      }
      filter event_type=alert | put . = shape(alert) | rename ts=timestamp

As would be expected, this fails if the jq binary isn't installed. I'm running on Linux in this case.

$ /opt/Brim/resources/app.asar.unpacked/zdeps/zed api new testpool
testpool: pool created

$ /opt/Brim/resources/app.asar.unpacked/zdeps/brimcap load -root "$HOME/.config/Brim/data/brimcap-root" -config suricata.yml -p testpool ~/wrccdc.pcap 
100.0% 500.0MB/500.0MB records=0 
Post "http://localhost:9867/pool/1s5nwutR6WQUyCWlV5oQynelwdi/log": suricata-wrapper.sh exited with code 127
stdout:
4/5/2021 -- 16:39:27 - <Notice> - This is Suricata version 6.0.2 RELEASE running in USER mode
4/5/2021 -- 16:39:27 - <Notice> - all 5 packet processing threads, 4 management threads initialized, engine started.
4/5/2021 -- 16:39:36 - <Notice> - Signal Received.  Stopping engine.
4/5/2021 -- 16:39:43 - <Notice> - Pcap-file module read 1 files, 1650919 packets, 473586644 bytes
stderr:
/home/phil/brimcap/examples/suricata-wrapper.sh: line 3: jq: command not found

However, this was initially tough to spot, because my original Brimcap YAML config also called out to a Zeek wrapper:

$ cat zeek-wrapper.sh 
#!/bin/bash
zeek -C -r - --exec "event zeek_init() { Log::disable_stream(PacketFilter::LOG); Log::disable_stream(LoadedScripts::LOG); }" local

$ diff zeek-suricata.yml suricata.yml 
2d1
<   - cmd: zeek-wrapper.sh

When I ran Brimcap using that YAML, it effectively hung without ever finishing, acting perhaps as if deadlocked. After waiting several minutes I tried to stop it via Control-C but it did not respond, so I then successfully stopped it via Control-\. This did indeed produce a bunch of errors that mention lock/futex, which leads me to suspect deadlock.

$ /opt/Brim/resources/app.asar.unpacked/zdeps/zed api new testpool2
testpool2: pool created

$ /opt/Brim/resources/app.asar.unpacked/zdeps/brimcap load -root "$HOME/.config/Brim/data/brimcap-root" -config zeek-suricata.yml -p testpool2 ~/wrccdc.pcap 
100.0% 500.0MB/500.0MB records=392641 
^\SIGQUIT: quit
PC=0x4732e1 m=0 sigcode=128

goroutine 0 [idle]:
runtime.futex(0x14303f0, 0x80, 0x0, 0x0, 0x0, 0x7ffdbdde6982, 0x14302a0, 0x7ffdbdc4f7a8, 0x7ffdbdc4f7b8, 0x40dcbf, ...)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/sys_linux_amd64.s:579 +0x21
runtime.futexsleep(0x14303f0, 0x0, 0xffffffffffffffff)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/os_linux.go:44 +0x46
runtime.notesleep(0x14303f0)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/lock_futex.go:159 +0x9f
runtime.mPark()
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/proc.go:1340 +0x39
runtime.stopm()
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/proc.go:2257 +0x92
runtime.findrunnable(0xc000048800, 0x0)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/proc.go:2916 +0x72e
runtime.schedule()
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/proc.go:3125 +0x2d7
runtime.park_m(0xc002042480)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/proc.go:3274 +0x9d
runtime.mcall(0x0)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/asm_amd64.s:327 +0x5b

goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc0020480f0)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0020480e8)
	/opt/hostedtoolcache/go/1.16.3/x64/src/sync/waitgroup.go:130 +0x65
github.com/brimdata/brimcap/ztail.(*Tailer).Close(0xc002048080, 0x0, 0xcd67a0)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:181 +0x65
github.com/brimdata/brimcap/analyzer.(*analyzer).Close(0xc0000342c0, 0x7fb0a2b4d008, 0xc0000342c0)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/analyzer/analyzer.go:167 +0xd3
github.com/brimdata/zed/zio.CloseReaders(0xc000368000, 0x2, 0x2, 0x0, 0x0)
	/home/runner/go/pkg/mod/github.com/brimdata/[email protected]/zio/zio.go:164 +0xd3
github.com/brimdata/brimcap/analyzer.(*combiner).Close(0xc00035e200, 0x0, 0x0)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/analyzer/combiner.go:82 +0xbd
github.com/brimdata/brimcap/cmd/brimcap/load.(*Command).Exec(0xc0002c81c0, 0xc000032110, 0x1, 0x1, 0xf6a680, 0xc0035e8150)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/cmd/brimcap/load/command.go:129 +0x871
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc0000811a0, 0xc000032110, 0x1, 0x1, 0x0, 0x0)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/cmd/brimcap/root/command.go:72 +0x1af
github.com/brimdata/zed/pkg/charm.path.run(0xc0002abf20, 0x2, 0x2, 0xc000032110, 0x1, 0x1, 0xc0002abf20, 0x2)
	/home/runner/go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f
github.com/brimdata/zed/pkg/charm.(*Spec).ExecRoot(0x1423800, 0xc0000320a0, 0x8, 0x8, 0xffffffff, 0xc00008c058)
	/home/runner/go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:63 +0x1e7
main.main()
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/cmd/brimcap/main.go:18 +0x74

goroutine 50 [syscall, 1 minutes]:
os/signal.signal_recv(0x0)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/sigqueue.go:168 +0xa5
os/signal.loop()
	/opt/hostedtoolcache/go/1.16.3/x64/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
	/opt/hostedtoolcache/go/1.16.3/x64/src/os/signal/signal.go:151 +0x45

goroutine 51 [select, 1 minutes]:
github.com/brimdata/zed/pkg/signalctx.New.func1(0xc000080480, 0xc0002ac4b0, 0xc0002e0240, 0xf79cb8, 0xc00007e080)
	/home/runner/go/pkg/mod/github.com/brimdata/[email protected]/pkg/signalctx/signalctx.go:25 +0xad
created by github.com/brimdata/zed/pkg/signalctx.New
	/home/runner/go/pkg/mod/github.com/brimdata/[email protected]/pkg/signalctx/signalctx.go:24 +0x12f

goroutine 35 [select]:
github.com/brimdata/zed/pkg/display.(*Display).Run(0xc00016e000)
	/home/runner/go/pkg/mod/github.com/brimdata/[email protected]/pkg/display/display.go:50 +0xd4
created by github.com/brimdata/brimcap/cli/analyzecli.(*Display).Run
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/cli/analyzecli/display.go:54 +0x145

goroutine 32 [chan send]:
github.com/brimdata/brimcap/ztail.(*Tailer).tailFile.func1(0xc002048080, 0xc0000440c0, 0xc003742f40, 0x1d)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:140 +0x22d
created by github.com/brimdata/brimcap/ztail.(*Tailer).tailFile
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:120 +0x19b

goroutine 25 [semacquire]:
sync.runtime_Semacquire(0xc0020480f0)
	/opt/hostedtoolcache/go/1.16.3/x64/src/runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0020480e8)
	/opt/hostedtoolcache/go/1.16.3/x64/src/sync/waitgroup.go:130 +0x65
github.com/brimdata/brimcap/ztail.(*Tailer).start(0xc002048080)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:86 +0x1aa
created by github.com/brimdata/brimcap/ztail.New
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:49 +0x165

goroutine 16 [chan send]:
github.com/brimdata/brimcap/ztail.(*Tailer).tailFile.func1(0xc002048080, 0xc000044100, 0xc00003c3c0, 0x1e)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:140 +0x22d
created by github.com/brimdata/brimcap/ztail.(*Tailer).tailFile
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:120 +0x19b

goroutine 60 [chan send]:
github.com/brimdata/brimcap/ztail.(*Tailer).tailFile.func1(0xc002048080, 0xc000368220, 0xc00003d200, 0x1f)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:140 +0x22d
created by github.com/brimdata/brimcap/ztail.(*Tailer).tailFile
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:120 +0x19b

goroutine 30 [chan send]:
github.com/brimdata/brimcap/ztail.(*Tailer).tailFile.func1(0xc002048080, 0xc0020342e0, 0xc003041900, 0x1d)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:140 +0x22d
created by github.com/brimdata/brimcap/ztail.(*Tailer).tailFile
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:120 +0x19b

goroutine 67 [chan send]:
github.com/brimdata/brimcap/ztail.(*Tailer).tailFile.func1(0xc002048080, 0xc0000449a0, 0xc003548500, 0x1d)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:140 +0x22d
created by github.com/brimdata/brimcap/ztail.(*Tailer).tailFile
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:120 +0x19b

goroutine 82 [chan send]:
github.com/brimdata/brimcap/ztail.(*Tailer).tailFile.func1(0xc002048080, 0xc0030f5780, 0xc0020458c0, 0x1e)
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:140 +0x22d
created by github.com/brimdata/brimcap/ztail.(*Tailer).tailFile
	/home/runner/.npm/_cacache/tmp/git-clone-66762646/ztail/ztail.go:120 +0x19b

rax    0xca
rbx    0x14302a0
rcx    0x4732e3
rdx    0x0
rdi    0x14303f0
rsi    0x80
rbp    0x7ffdbdc4f780
rsp    0x7ffdbdc4f738
r8     0x0
r9     0x0
r10    0x0
r11    0x286
r12    0x0
r13    0x26239
r14    0x80c000000000
r15    0x0
rip    0x4732e1
rflags 0x286
cs     0x33
fs     0x0
gs     0x0

802.11 protocol analysis


Describe the bug
Failed to upload a pcap with 802.11n wireless data: https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=Http.cap

To Reproduce
Try to upload https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=Http.cap file to Brim.

Expected behavior
Summary data should be generated from the pcap.

Screenshots
Screenshot 2021-06-24 at 15 39 20

Desktop Info

  • OS: macOS 11.4
  • Brim Version: 0.24.0

Have Brimcap use new endpoint for posting data

As of brimdata/zui#1708, zapi load hits the newer API endpoint that ultimately causes Brim to notify the user that new data has arrived in the Pool and offers them a Refresh button so they can immediately see it. However, brimcap load is currently still hitting the old endpoint. We've discussed having brimcap load start using the new endpoint so users will have a consistent experience regardless of their choice of CLI tooling.

Linux cooked-mode capture (SLL) support


Describe the bug
Loading a *.pcap file yielded this error

To Reproduce
Try to load the *.pcap file again

Expected behavior
The summary data generated by zeek should be produced

Screenshots
(Screenshot attached to the original issue.)

Desktop Info

  • OS: Windows 10
  • Brim Version: 0.24.0

ranger.Envelope.Merge: ensure uniform offset distribution

The solution to brimdata/zed#1039 introduces a curious behavior for generated pcap indexes: for the indexes of large pcap files, the difference between adjacent X values starts out very wide and then narrows as one iterates through the bins. This will result in larger pcap scans (i.e. slow searches) for hits at the beginning of the file and smaller scans (i.e. faster searches) toward the end. Consensus was that the difference in search times probably won't be noticeable enough to warrant introducing a fancier algorithm.

This ticket is to revisit the change to ranger.Envelope and find a solution that generates merged Envelopes with more uniform distance between adjacent offsets.

"brimcap analyze" hangs on a pcap that produces no Suricata logs

Repro is with Brimcap commit 357bd57, but I also confirmed this issue was with us at commit 6406f89, so it seems it's not unique to the recent refactor of brimcap analyze (#110).

The ultimate user problem is that they tried to drag a pcap into Brim which neither Zeek nor Suricata can produce meaningful logs from. The attached test pcap
cap_00001_20210622092340.pcap.gz (after uncompressing) reproduces the issue. Using Brim commit 976d840 with its package.json pointing at Brimcap commit 357bd57, the following video shows the current user-facing result, which is:

  1. The pcap load never seems to finish
  2. If the user gets sick of waiting and quits Brim, the brimcap process is left behind and must be killed manually
Repro.mp4

The inability to parse is ultimately an orthogonal Zeek/Suricata problem and is separately tracked in #107. However, knowing that un-parse-able pcaps are likely to come up in practice, the purpose of this issue is to ensure that we can fail on them gracefully so we don't end up with the kind of hanging and orphaned process just described.

Knowing that our intent is to move to a brimcap analyze approach, I then reproduced it at the CLI, seeing it hang there as well.

Analyze-Hang.mp4

The dump it showed after I gave up and Ctrl-\'ed it:

$ brimcap analyze -z cap_00001_20210622092340.pcap 
  0.1% 64KiB/1MiB37KiB434B records=0 
^C^\SIGQUIT: quit
PC=0x7fff2046acde m=0 sigcode=0

goroutine 0 [idle]:
runtime.pthread_cond_wait(0x2060220, 0x20601e0, 0x0)
	/usr/local/opt/go/libexec/src/runtime/sys_darwin.go:384 +0x39
runtime.semasleep(0xffffffffffffffff, 0xc000046000)
	/usr/local/opt/go/libexec/src/runtime/os_darwin.go:63 +0x8d
runtime.notesleep(0x205ffd0)
	/usr/local/opt/go/libexec/src/runtime/lock_sema.go:181 +0xdb
runtime.mPark()
	/usr/local/opt/go/libexec/src/runtime/proc.go:1340 +0x39
runtime.stoplockedm()
	/usr/local/opt/go/libexec/src/runtime/proc.go:2495 +0x6e
runtime.schedule()
	/usr/local/opt/go/libexec/src/runtime/proc.go:3103 +0x48c
runtime.park_m(0xc000001980)
	/usr/local/opt/go/libexec/src/runtime/proc.go:3318 +0x9d
runtime.mcall(0x106ef36)
	/usr/local/opt/go/libexec/src/runtime/asm_amd64.s:327 +0x5b

goroutine 1 [chan send (nil chan)]:
github.com/gosuri/uilive.(*Writer).Stop(...)
	/Users/phil/.go/pkg/mod/github.com/gosuri/[email protected]/writer.go:119
github.com/brimdata/brimcap/cli/analyzecli.(*statusLineDisplay).End(0xc00039a420)
	/Users/phil/work/brimcap/cli/analyzecli/display.go:75 +0x66
github.com/brimdata/brimcap/cmd/brimcap/analyze.(*Command).Run(0xc0002b2100, 0xc000032070, 0x1, 0x1, 0x1b85ec0, 0xc0002d2000)
	/Users/phil/work/brimcap/cmd/brimcap/analyze/command.go:88 +0x494
github.com/brimdata/zed/pkg/charm.path.run(0xc0002c4450, 0x2, 0x2, 0xc000032070, 0x1, 0x1, 0xc0002c4450, 0x2)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f
github.com/brimdata/zed/pkg/charm.(*Spec).ExecRoot(0x20535e0, 0xc000032050, 0x3, 0x3, 0xffffffff, 0xc00009c058)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:63 +0x1e7
main.main()
	/Users/phil/work/brimcap/cmd/brimcap/main.go:19 +0x74

goroutine 34 [syscall]:
os/signal.signal_recv(0x1b918e0)
	/usr/local/opt/go/libexec/src/runtime/sigqueue.go:165 +0x9d
os/signal.loop()
	/usr/local/opt/go/libexec/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
	/usr/local/opt/go/libexec/src/os/signal/signal.go:151 +0x45

rax    0x104
rbx    0xab53e00
rcx    0x7ffeefbff4a8
rdx    0x500
rdi    0x2060220
rsi    0x80100000900
rbp    0x7ffeefbff550
rsp    0x7ffeefbff4a8
r8     0x0
r9     0xa0
r10    0x0
r11    0x246
r12    0x20601e0
r13    0x16
r14    0x80100000900
r15    0x500
rip    0x7fff2046acde
rflags 0x247
cs     0x7
fs     0x0
gs     0x0

Knowing as I do that brimcap analyze is just calling the built-in zeekrunner and suricatarunner, I also repro'ed at that layer, and I think I see what's going on.

I think Zeek is doing fine. It produces a few useless logs that reflect little beyond its inability to parse, but at minimum the weird.log that gets produced could have been turned into ZNG/ZSON just fine.

$ cat ../cap_00001_20210622092340.pcap | ~/work/brimcap/build/dist/zeek/zeekrunner 
WARNING: No Site::local_nets have been defined.  It's usually a good idea to define your local networks.

$ ls -l
total 24
-rw-r--r--  1 phil  staff  276 Jun 30 14:45 capture_loss.log
-rw-r--r--  1 phil  staff  791 Jun 30 14:45 stats.log
-rw-r--r--  1 phil  staff  333 Jun 30 14:45 weird.log

I suspect Suricata is what's causing the problem. Its output reflects its inability to parse the pcap, but unfortunately it then exits with code 0 and leaves a zero-length eve.json.

$ cat ../cap_00001_20210622092340.pcap | ~/work/brimcap/build/dist/suricata/suricatarunner 
30/6/2021 -- 14:45:51 - <Notice> - This is Suricata version 5.0.3 RELEASE running in USER mode
30/6/2021 -- 14:45:51 - <Info> - CPUs/cores online: 12
30/6/2021 -- 14:45:51 - <Info> - No 'host-mode': suricata is in IDS mode, using default setting 'sniffer-only'
30/6/2021 -- 14:45:51 - <Info> - eve-log output device (regular) initialized: eve.json
30/6/2021 -- 14:45:51 - <Info> - 1 rule files processed. 22608 rules successfully loaded, 0 rules failed
30/6/2021 -- 14:45:51 - <Info> - Threshold config parsed: 0 rule(s) found
30/6/2021 -- 14:45:51 - <Info> - 22611 signatures processed. 1134 are IP-only rules, 3887 are inspecting packet payload, 17392 inspect application layer, 103 are decoder event only
30/6/2021 -- 14:45:57 - <Error> - [ERRCODE: SC_ERR_UNIMPLEMENTED(88)] - datalink type 127 not (yet) supported in module PcapFile.
30/6/2021 -- 14:45:57 - <Warning> - [ERRCODE: SC_ERR_PCAP_DISPATCH(20)] - Failed to init pcap file -, skipping
30/6/2021 -- 14:45:57 - <Notice> - all 1 packet processing threads, 2 management threads initialized, engine started.
30/6/2021 -- 14:45:57 - <Error> - [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - pcap file reader thread failed to initialize
30/6/2021 -- 14:45:57 - <Notice> - Signal Received.  Stopping engine.
30/6/2021 -- 14:45:57 - <Info> - time elapsed 0.018s
30/6/2021 -- 14:45:57 - <Info> - Alerts: 0
30/6/2021 -- 14:45:57 - <Info> - cleaning up signature grouping structure... complete

$ echo $?
0

$ ls -l
total 0
-rw-r--r--  1 phil  staff  0 Jun 30 14:45 eve.json

Is it possible the successful exit of the analyzer process is somehow not playing well with the zero-length log file, such that maybe we're hung waiting on output that's never going to arrive? If so, maybe we need some special handling to catch this corner case.

Inline shapers vs. pointers to files that contain shaper code

While acting as a new user of the brimcap analyze -config YAML, I made the mistake of thinking the shaper parameter was the name of a file containing my Zed shaper code, as opposed to being the shaper code itself. I felt like this was an innocent mistake since there are other contexts where shapers are treated as files, such as the -I option to zed query, or the way that when we mention reference shaper configs in our docs we point at files in our repos.

If my would-be "shaper" had been somehow invalid and had caused a Zed parse error, I might have found my mistake sooner. Unfortunately, it ended up being valid Zed, effectively doing a text search for the string that's the pathname I provided. It matched nothing, effectively sending my logs to /dev/null. This gave me the mistaken impression that maybe the contents of my shaper script file were fundamentally broken and/or that there was a bug in Brimcap, so I burned a fair amount of time before figuring out my mistake.

A few thoughts for consideration:

  1. If we do nothing else, I can just make sure we emphasize this strongly in the user docs.
  2. The fact that degenerate Zed forms a quiet, no-op shaper could be seen as a general hazard of Zed itself. If there's a way we can identify what it means to be a "shaper" as opposed to any ol' Zed that might legitimately contain just a search, perhaps we could parse it and return errors in contexts like this that only expect a shaper.
  3. Perhaps Brimcap itself could be enhanced such that the shaper parameter supports both in-line Zed as well as files, and the way each approach is specified in the YAML could be self-documenting in a way that would make it very difficult to repeat my mistake.
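To illustrate item 3, a hypothetical config might distinguish the two forms explicitly. The shaperfile key below does not exist today and is shown only as a sketch; the cmd, globs, and filter values are copied from configs elsewhere on this page:

analyzers:
  # existing behavior: the value of "shaper" is inline Zed code
  - cmd: suricata-wrapper.sh
    globs: ["eve.json"]
    shaper: |
      filter event_type=alert
  # hypothetical alternative: reference a file containing the Zed shaper code
  - cmd: suricata-wrapper.sh
    globs: ["eve.json"]
    shaperfile: suricata-shaper.zed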

Cancel of Space migration leaves behind an empty pool

Repro is in Brim commit 43b0ed5 via nightly macOS build https://storage.googleapis.com/brimsec-releases/brim/v0.25.0-prerelease-43b0ed51.0/macos/Brim-0.25.0-prerelease-43b0ed51.0.dmg, which uses Brimcap commit 82f0140.

Before starting the first video, I've already been running GA Brim tagged v0.24.0 and have imported data spread across five Spaces that's going to need to be migrated. In this first video I launch the new Brim at commit 43b0ed5 and let the migration start, but then click the vertical "..." and click Cancel when it happens to be in the middle of migrating my Space called "all.pcap". The informative pop-up accurately reflects that the migration was canceled, and clicking the "all.pcap" Space we see no data.

Cancel.mp4

The second video picks up after I've closed and reopened the app. As I've still got Spaces that have not completed migration, I once again accept the prompt to let the migration start. The pop-up gives the impression that the space "all.pcap" for which migration was cancelled mid-way on the first pass has been migrated, but at the end the pop-up says "Some Spaces not migrated". Clicking through them all, we can see that the Pool for "all.pcap" is still empty.

Continue.mp4

Finally, in the third video, I relaunch one last time and once again accept the prompt to migrate, but it just rushes to the "Some Spaces not migrated" and I'm once again left with the empty "all.pcap". At this point I stumble onto the workaround that if I delete the empty Pool for "all.pcap", relaunch, then let the migration complete one last time, now I truly do have all my Spaces migrated, including that "all.pcap" one. Now I can relaunch and am no longer prompted about Space migration. It's not shown in the video, but I've confirmed that the data/spaces/ directory has been deleted at this point, which is the expected end state after Brimcap has successfully completed all the Space migrations.

Delete-Finish.mp4

In conclusion, in a pinch we could document this in the wiki article and just set expectations that these need to be deleted in order for them to complete. I'm hoping that cancel-in-progress will not be a common user operation, and this Space migration is just a one-time thing, after all. OTOH, if it would be easy for Brimcap to just clean up the empty Pool in the event of a cancel, that might be even better.

pcap_path should be made absolute

Repro is with Brimcap commit f9d1309.

While doing some early work with Brimcap and the Brim app, I noticed that if I provide a non-absolute path to the pcap file I want to load, that's what ends up as the pcap_path in the index file. Example:

$ brimcap load -root ~/brimcap-root -s foo hello.pcapng 
100.0% 12.40KB/12.40KB records=12 

$ cat ~/brimcap-root/idx-Qw9A8BEMqN87ZMI8R2hnlYKwfCrBBYupSIr9URJGvJM.json | jq .
...
  ],
  "pcap_path": "hello.pcapng"
}

That effectively makes it impossible for the pcap file to be found again, such as when the app invokes a brimcap search.

I know this problem is unlikely to happen in the "happy path" since I expect the app will always provide absolute paths in its assembled brimcap load command lines. However, we've already had interest from community users in running brimcap load directly at the CLI as part of automated workflows, so we should probably make sure we turn these into absolute paths before committing them to the file.
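Until that change lands, a CLI user can sidestep the problem by passing an absolute path on the command line, e.g. a minor variation of the command above:

$ brimcap load -root ~/brimcap-root -s foo "$(pwd)/hello.pcapng"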

Output progress updates regarding pcap processing

In the era when pcap processing was handled via zqd, progress updates were made available so the Brim app could show the user how long import was taking and show partial results during import. We expect we can provide equivalent functionality in the Brimcap era. @mccanne's suggested approach is to deliver these in the form of updates over stdout from brimcap load.

Upgrade with zero Spaces still triggers the Space migration pop-up

Repro is with Brim commit 207cb69 on macOS via scratch artifact https://storage.googleapis.com/brimsec-releases/brim/v0.25.0-prerelease-207cb691.0/macos/Brim-0.25.0-prerelease-207cb691.0.dmg.

If I start from a GA Brim v0.24.0 with no Spaces in it and then update to a modern Brim, the Space migration pop-up still appears. If clicked, it has the effect of deleting the contents of the Space-data-free spaces/ directory, just like it does after having finished migrating a bunch of actual Spaces.

Repro.mp4

This is one that I'd be content to just document in the wiki article since the behavior seems fairly unsurprising/inoffensive to me. I suspect most users of older releases will have some Spaces still hanging around from prior sessions. Even if they don't, and they raise an eyebrow at what looks like an unnecessary pop-up, it's a one-time thing that's easy to get past: they can either click Migrate and be happy to see it go away, or click through to the article and confirm it's expected behavior. So I'm just opening this issue to document that it happens and see if anyone has a reaction, but I'm otherwise prepared to just mention it in the article.

Article describing the reference Zed shaper for Suricata

To help users understand why we unbundled Brimcap, it'll be helpful to have docs that walk through how the pieces fit together. Since the Zeek logs are Zeek TSV and have all the typing/schema info already, they can be taken for granted... but the Suricata part relies on the Zed shaper and hence would make a great example. I could write an article (maybe start a wiki on the Brimcap repo?) that shows how the shaper is currently capturing only the Alerts & doing it under a single/wide schema and describe the trade-offs of that approach vs. letting through the many schema variations that would otherwise be created. This would give users a starting point if they wanted to try their own variations, such as letting through more of the Suricata event types and therefore confront whether they want to dive into doing their own shaping of those.

brimcap load/analyze progressing to 1.0% and then hanging

Repro is with Brimcap commit 357bd57 and Zed commit 389b120.

While attempting to verify the fix for #71, I stumbled into what looks like a new issue. To repro, I start a zed lake serve and then attempt to load this wrccdc pcap (after uncompressing) via:

$ brimcap load -root ~/brimcap-root -p wrccdc.pcap ~/pcap/wrccdc.pcap

As shown in the attached video, the progress meter seems to proceed very slowly, and shortly after it gets to 1.0%, the record/byte counts stop incrementing and it hangs indefinitely. I eventually kill it by hitting Ctrl-\, and the stack dump is pasted below after the video.

Repro.mp4
$ brimcap load -root ~/brimcap-root -p wrccdc.pcap ~/pcap/wrccdc.pcap 
  1.0% 476MiB858KiB604B/476MiB858KiB604B records=670823 
^\SIGQUIT: quit
PC=0x7fff2046acde m=0 sigcode=0

goroutine 0 [idle]:
runtime.pthread_cond_wait(0x2060220, 0x20601e0, 0x0)
	/usr/local/opt/go/libexec/src/runtime/sys_darwin.go:384 +0x39
runtime.semasleep(0xffffffffffffffff, 0xc00004b000)
	/usr/local/opt/go/libexec/src/runtime/os_darwin.go:63 +0x8d
runtime.notesleep(0x205ffd0)
	/usr/local/opt/go/libexec/src/runtime/lock_sema.go:181 +0xdb
runtime.mPark()
	/usr/local/opt/go/libexec/src/runtime/proc.go:1340 +0x39
runtime.stoplockedm()
	/usr/local/opt/go/libexec/src/runtime/proc.go:2495 +0x6e
runtime.schedule()
	/usr/local/opt/go/libexec/src/runtime/proc.go:3103 +0x48c
runtime.park_m(0xc000001c80)
	/usr/local/opt/go/libexec/src/runtime/proc.go:3318 +0x9d
runtime.mcall(0x106ef36)
	/usr/local/opt/go/libexec/src/runtime/asm_amd64.s:327 +0x5b

goroutine 1 [semacquire, 1 minutes]:
sync.runtime_Semacquire(0xc0002fc640)
	/usr/local/opt/go/libexec/src/runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0002fc638)
	/usr/local/opt/go/libexec/src/sync/waitgroup.go:130 +0x65
golang.org/x/sync/errgroup.(*Group).Wait(0xc0002fc630, 0xc0002fc690, 0x1b98238)
	/Users/phil/.go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:40 +0x31
github.com/brimdata/brimcap/cmd/brimcap/load.(*Command).Run(0xc0002e6000, 0xc0000320d0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/load/command.go:111 +0x6ea
github.com/brimdata/zed/pkg/charm.path.run(0xc0002c6410, 0x2, 0x2, 0xc0000320d0, 0x1, 0x1, 0xc0002c6410, 0x2)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f
github.com/brimdata/zed/pkg/charm.(*Spec).ExecRoot(0x20535e0, 0xc000032080, 0x6, 0x6, 0xffffffff, 0xc00009e058)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:63 +0x1e7
main.main()
	/Users/phil/work/brimcap/cmd/brimcap/main.go:19 +0x74

goroutine 18 [syscall, 1 minutes]:
os/signal.signal_recv(0x0)
	/usr/local/opt/go/libexec/src/runtime/sigqueue.go:165 +0x9d
os/signal.loop()
	/usr/local/opt/go/libexec/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
	/usr/local/opt/go/libexec/src/os/signal/signal.go:151 +0x45

goroutine 7 [select, 1 minutes]:
os/signal.NotifyContext.func1(0xc000093880)
	/usr/local/opt/go/libexec/src/os/signal/signal.go:288 +0x8f
created by os/signal.NotifyContext
	/usr/local/opt/go/libexec/src/os/signal/signal.go:287 +0x1cf

goroutine 19 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0xac6f258, 0x72, 0xffffffffffffffff)
	/usr/local/opt/go/libexec/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc0000cec98, 0x72, 0x1000, 0x1000, 0xffffffffffffffff)
	/usr/local/opt/go/libexec/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/opt/go/libexec/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc0000cec80, 0xc000128000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc0000cec80, 0xc000128000, 0x1000, 0x1000, 0x103bedc, 0xc00007cc38, 0x1068860)
	/usr/local/opt/go/libexec/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000126000, 0xc000128000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/net/net.go:183 +0x91
net/http.(*persistConn).Read(0xc0002ea5a0, 0xc000128000, 0x1000, 0x1000, 0xc00009e960, 0xc00007cd40, 0x1006c55)
	/usr/local/opt/go/libexec/src/net/http/transport.go:1922 +0x77
bufio.(*Reader).fill(0xc000118120)
	/usr/local/opt/go/libexec/src/bufio/bufio.go:101 +0x108
bufio.(*Reader).Peek(0xc000118120, 0x1, 0x0, 0x1, 0x4, 0x1, 0x3)
	/usr/local/opt/go/libexec/src/bufio/bufio.go:139 +0x4f
net/http.(*persistConn).readLoop(0xc0002ea5a0)
	/usr/local/opt/go/libexec/src/net/http/transport.go:2083 +0x1a8
created by net/http.(*Transport).dialConn
	/usr/local/opt/go/libexec/src/net/http/transport.go:1743 +0xc77

goroutine 20 [select]:
io.(*pipe).Read(0xc000118360, 0xc0006d6000, 0x8000, 0x8000, 0xbc, 0x18a0140, 0x1fddb80)
	/usr/local/opt/go/libexec/src/io/pipe.go:57 +0xcb
io.(*PipeReader).Read(0xc000152108, 0xc0006d6000, 0x8000, 0x8000, 0xbc, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/io/pipe.go:134 +0x4c
io.copyBuffer(0xbe310f0, 0xc0002ae200, 0x1b88220, 0xc000152108, 0xc0006d6000, 0x8000, 0x8000, 0xc00007bc38, 0x100cd05, 0x18ee100)
	/usr/local/opt/go/libexec/src/io/io.go:423 +0x12c
io.Copy(...)
	/usr/local/opt/go/libexec/src/io/io.go:382
net/http.(*transferWriter).doBodyCopy(0xc0002c8000, 0xbe310f0, 0xc0002ae200, 0x1b88220, 0xc000152108, 0x12f61f2, 0xc000126000, 0xc000129000)
	/usr/local/opt/go/libexec/src/net/http/transfer.go:409 +0x6a
net/http.(*transferWriter).writeBody(0xc0002c8000, 0x1b885c0, 0xc000126010, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/net/http/transfer.go:356 +0x531
net/http.(*Request).write(0xc000148300, 0x1b85940, 0xc00012a000, 0x0, 0xc00037e030, 0x0, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/net/http/request.go:697 +0x7c6
net/http.(*persistConn).writeLoop(0xc0002ea5a0)
	/usr/local/opt/go/libexec/src/net/http/transport.go:2385 +0x1a7
created by net/http.(*Transport).dialConn
	/usr/local/opt/go/libexec/src/net/http/transport.go:1744 +0xc9c

goroutine 50 [select, 1 minutes]:
net/http.(*persistConn).roundTrip(0xc0002ea5a0, 0xc0006c6000, 0x0, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/net/http/transport.go:2610 +0x765
net/http.(*Transport).roundTrip(0xc0000c9b80, 0xc000148200, 0xc0000f4420, 0x160, 0x150)
	/usr/local/opt/go/libexec/src/net/http/transport.go:592 +0xacb
net/http.(*Transport).RoundTrip(0xc0000c9b80, 0xc000148200, 0xc0000c9b80, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc000148200, 0x1b88420, 0xc0000c9b80, 0x0, 0x0, 0x0, 0xc000152120, 0xf8, 0x1, 0x0)
	/usr/local/opt/go/libexec/src/net/http/client.go:251 +0x454
net/http.(*Client).send(0xc0002b9590, 0xc000148200, 0x0, 0x0, 0x0, 0xc000152120, 0x0, 0x1, 0xc000122247)
	/usr/local/opt/go/libexec/src/net/http/client.go:175 +0xff
net/http.(*Client).do(0xc0002b9590, 0xc000148200, 0x0, 0x0, 0x0)
	/usr/local/opt/go/libexec/src/net/http/client.go:717 +0x45f
net/http.(*Client).Do(...)
	/usr/local/opt/go/libexec/src/net/http/client.go:585
github.com/go-resty/resty/v2.(*Client).execute(0xc0002e8000, 0xc0000f4000, 0x0, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/go-resty/resty/[email protected]/client.go:791 +0x2e7
github.com/go-resty/resty/v2.(*Request).Execute(0xc0000f4000, 0x19e657b, 0x4, 0xc00003c1e0, 0x25, 0x0, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/go-resty/resty/[email protected]/request.go:622 +0x139
github.com/go-resty/resty/v2.(*Request).Send(...)
	/Users/phil/.go/pkg/mod/github.com/go-resty/resty/[email protected]/request.go:597
github.com/brimdata/zed/api/client.(*Connection).stream(0xc00000e630, 0xc0000f4000, 0x0, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/api/client/connection.go:108 +0x96
github.com/brimdata/zed/api/client.(*Connection).Add(0xc00000e630, 0x1b98238, 0xc00012a0c0, 0x363296e1d36f6a0d, 0xfc75938e76a9ad35, 0x11126758, 0x1b88220, 0xc000152108, 0x0, 0x0, ...)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/api/client/connection.go:357 +0x21f
github.com/brimdata/brimcap/cmd/brimcap/load.(*Command).post(0xc0002e6000, 0x1b98238, 0xc00012a0c0, 0xc000152108, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/load/command.go:119 +0xb2
github.com/brimdata/brimcap/cmd/brimcap/load.(*Command).Run.func3(0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/load/command.go:109 +0x45
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc0002fc630, 0xc0002fc690)
	/Users/phil/.go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x59
created by golang.org/x/sync/errgroup.(*Group).Go
	/Users/phil/.go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x66

rax    0x104
rbx    0x11fe7e00
rcx    0x7ffeefbff488
rdx    0x400
rdi    0x2060220
rsi    0x60100000700
rbp    0x7ffeefbff530
rsp    0x7ffeefbff488
r8     0x0
r9     0xa0
r10    0x0
r11    0x246
r12    0x20601e0
r13    0x16
r14    0x60100000700
r15    0x400
rip    0x7fff2046acde
rflags 0x247
cs     0x7
fs     0x0
gs     0x0

This was not happening as of Brimcap commit 6406f89 that came right before. In that case the progress bar incremented to 100.0% and exited, with the byte counts at that point looking approximately the same as when the meter for the newer Brimcap was capped out at 1.0%.

6406f89.mp4

Option to analyze/load local paths, not just stdin

While drafting the "Custom Brimcap Configuration" article in #72, I found myself having to create tiny wrapper scripts to deal with the expectation that a Brimcap analyzer expects its pcap input to be streamed on stdin. So for instance, my config YAML looked like:

analyzers:
  - cmd: /usr/local/bin/zeek-wrapper.sh
  - cmd: /usr/local/bin/suricata-wrapper.sh

And those wrapper scripts looked like:

$ cat zeek-wrapper.sh
#!/bin/bash
exec /opt/zeek/bin/zeek -C -r - --exec "event zeek_init() { Log::disable_stream(PacketFilter::LOG); Log::disable_stream(LoadedScripts::LOG); }" local

$ cat suricata-wrapper.sh 
#!/bin/bash -e
exec /usr/local/bin/suricata -r /dev/stdin

If the user's intent is to just run brimcap load or brimcap analyze on pcap file paths on their local workstation (as I expect will be most common), this extra layer of indirection isn't buying them much. What follows is just a straw-man proposal, but I imagined we could add some kind of option in the YAML so the full analyzer command line could be brought in, with some kind of substitution of the provided file path, e.g.:

analyzers:
  - cmd: /opt/zeek/bin/zeek -C -r %PCAPPATH% --exec "event zeek_init() { Log::disable_stream(PacketFilter::LOG); Log::disable_stream(LoadedScripts::LOG); }" local
    inputmode: filepath
  - cmd: /usr/local/bin/suricata -r %PCAPPATH%
    inputmode: filepath

The possible advantages I see with offering this approach:

  1. It keeps the config consolidated by avoiding the proliferation of wrapper scripts
  2. For analyzers that aren't prepared to accept input on stdin (such as the NetFlow example shown in the same article, or off-the-shelf Suricata on Windows, for which we maintain a separate build exclusively to add the stdin support), the user would avoid needing to create wrapper scripts that push stdin to a tmpfile just to pass it off to the analyzer

I bounced some of this off @mattnibs, and he had some valid rebuttals about why we'd not want to make this our only approach. One of the advantages he pointed out about being stream-focused is that it offers the user the ability to analyze pcaps large enough that they'd be unwieldy to download in full before analysis. For instance, if my Brim app is running locally, this is a way to turn an S3-stored pcap into Zeek+Suricata logs and load those logs directly to a Pool in the Zed Lake behind my app, all without an explicit download of the pcap to a local file:

$ aws s3 cp s3://brim-sampledata/wrccdc.pcap - | brimcap analyze - | zapi load -p wrccdc -
1tgSXaWvlzFDG4dcKfeI2nWo3Ax committed

He also noted the efficiency of a single pcap stream being forked to multiple analyzers, rather than each having to open and analyze a file separately.

All that said, I do still see value in avoiding the proliferation of wrapper scripts if a user is truly working with local pcaps and doesn't need the full efficiency benefits of the streamed approach, so I'm filing this one to possibly reconsider in the future.

Failure to symlink on Windows: "A required privilege is not held by the client"

Currently brimcap load seems to fail on Windows. While testing the v0.0.1 release artifact, I attempted my first load and found:

C:\Users\Phil\Downloads\brimcap-v0.0.1.windows-amd64\brimcap>.\brimcap.exe load -s wrccdc -root c:\brimcap-root \users\phil\Desktop\wrccdc.2018-03-23.010014000000000.pcap
error writing brimcap root: symlink C:\users\phil\Desktop\wrccdc.2018-03-23.010014000000000.pcap c:\brimcap-root\wrccdc.2018-03-23.010014000000000.pcap: A required privilege is not held by the client.

I did some sleuthing and found articles like https://www.wintellect.com/non-admin-users-can-now-create-symlinks-windows-10/ that seem to imply that on current Windows desktops it's not a given that this would work. I then confirmed that if I right-clicked my Command Prompt to "Run as Administrator", the command did indeed work ok. But I'm guessing it would not be reasonable to expect users to only run Brimcap (or Brim itself, if it's the one that will be invoking Brimcap) in admin mode.

Homebrew-installed Zeek v4.0.0 on macOS lacks GeoIP support

For the Zeek artifacts we build ourselves, we've been linking against libmaxminddb so we can include the https://github.com/brimdata/geoip-conn package and hence provide some geolocation data in the Zeek logs generated from pcaps. However, part of what we're trying to achieve with Brimcap is to make it easier for users to bring their own custom Zeek/Suricata, so we're likely to provide some per-platform guidance regarding this (#14).

One problem I've noticed in this area is that the Homebrew-installed Zeek v4.0.0 currently lacks the ability to run the geoip-conn package via zkg install. It installs ok, but when run:

1583774873.399273 error in /usr/local/Cellar/zeek/4.0.0_1/share/zeek/site/packages/./geoip-conn/./geoip-conn.zeek, line 37: Zeek was not configured for GeoIP support (lookup_location(Conn::c$id$orig_h))

I bumped into this same problem a while back with the Zeek installs for Linux and managed to see it addressed via zeek/zeek#1086. I just hadn't thought to check/pursue the macOS angle at the time. I'm actually uncertain who even has influence over those Homebrew installs, so for now I've just revived a thread on the Zeek public Slack with the devs who helped last time to see if they have a recommendation for how to proceed. If it can't be addressed in a timely manner, we can just highlight it in the guidance proposed in #14.

Legacy Space migration

@mattnibs, @jameskerr and I had a discussion regarding the migration proposed in brimdata/zui#1567. The plan we agreed to would include a command in Brimcap (brimcap migrate?) that the Brim app could invoke when it sees a non-empty data/spaces/ directory. Looping through all the legacy Spaces in the data/spaces/ directory, on each it would perform the equivalent of a zed lake load on its all.zng to make it into a new pool containing the same records. For the Spaces that include a pcap, @mattnibs seemed to think he could re-use the existing pcap index from the legacy Space to populate the appropriate entry in the Brimcap root, effectively maintaining the ability to pivot between the logs and the pcap. After each Space migration is complete, the tool would return a message to the app as a coarse progress update.

"brimcap help" causes fatal error: stack overflow

The recent update of the Zed pointer makes brimcap -h start showing the help similar to how it's done in the Zed CLI tooling, and that's great. But for those who still have muscle memory for the old way, now there's a stack dump.

$ brimcap -version
Version: v0.0.2-5-g3b98bfc

$ brimcap help
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0xc0207883d8 stack=[0xc020788000, 0xc040788000]
fatal error: stack overflow

runtime stack:
runtime.throw(0x19dde70, 0xe)
	/usr/local/opt/go/libexec/src/runtime/panic.go:1117 +0x72
runtime.newstack()
	/usr/local/opt/go/libexec/src/runtime/stack.go:1069 +0x7ed
runtime.morestack()
	/usr/local/opt/go/libexec/src/runtime/asm_amd64.s:458 +0x8f

goroutine 1 [running]:
runtime.heapBitsSetType(0xc010a92d00, 0x40, 0x40, 0x1949aa0)
	/usr/local/opt/go/libexec/src/runtime/mbitmap.go:815 +0xc05 fp=0xc0207883e8 sp=0xc0207883e0 pc=0x10189e5
runtime.mallocgc(0x40, 0x1949aa0, 0x203001, 0xc010a92cc0)
	/usr/local/opt/go/libexec/src/runtime/malloc.go:1096 +0x5c5 fp=0xc020788470 sp=0xc0207883e8 pc=0x100f325
runtime.newobject(0x1949aa0, 0x19cb829)
	/usr/local/opt/go/libexec/src/runtime/malloc.go:1177 +0x38 fp=0xc0207884a0 sp=0xc020788470 pc=0x100f818
flag.(*FlagSet).Var(0xc010a941e0, 0x1b72048, 0xc010a94250, 0x19ced8f, 0x7, 0x19e6f4c, 0x16)
	/usr/local/opt/go/libexec/src/flag/flag.go:861 +0x6d fp=0xc020788558 sp=0xc0207884a0 pc=0x10f24ad
flag.(*FlagSet).BoolVar(...)
	/usr/local/opt/go/libexec/src/flag/flag.go:630
github.com/brimdata/brimcap/cli.(*Flags).SetFlags(0xc010a94250, 0xc010a941e0)
	/Users/phil/work/brimcap/cli/cli.go:23 +0x72 fp=0xc0207885a0 sp=0xc020788558 pc=0x17fd9f2
github.com/brimdata/brimcap/cmd/brimcap/root.New(0x1b68be0, 0xc010a94180, 0xc010a941e0, 0x19ccb54, 0x6, 0x19e3c95, 0x13)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:52 +0x52 fp=0xc0207885f8 sp=0xc0207885a0 pc=0x1801e52
github.com/brimdata/zed/pkg/charm.parse(0x20258a0, 0xc012f58b50, 0x1, 0x1, 0x1b68be0, 0xc010a94180, 0x1052045, 0xc00047c000, 0x0, 0xc020788760, ...)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/instance.go:60 +0x2e5 fp=0xc0207886e8 sp=0xc0207885f8 pc=0x168f7e5
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a94180, 0xc012f58b50, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:53 +0x73 fp=0xc020788770 sp=0xc0207886e8 pc=0x168d5f3
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a94180, 0xc012f58b00, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc0207887d8 sp=0xc020788770 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457d8, 0x1, 0x1, 0xc012f58b00, 0x1, 0x1, 0xc0109457d8, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020788878 sp=0xc0207887d8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a940c0, 0xc012f58b00, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020788900 sp=0xc020788878 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a940c0, 0xc012f58ac0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020788968 sp=0xc020788900 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457d0, 0x1, 0x1, 0xc012f58ac0, 0x1, 0x1, 0xc0109457d0, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020788a08 sp=0xc020788968 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a94000, 0xc012f58ac0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020788a90 sp=0xc020788a08 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a94000, 0xc012f58a70, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020788af8 sp=0xc020788a90 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457c8, 0x1, 0x1, 0xc012f58a70, 0x1, 0x1, 0xc0109457c8, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020788b98 sp=0xc020788af8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87f20, 0xc012f58a70, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020788c20 sp=0xc020788b98 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87f20, 0xc012f58a30, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020788c88 sp=0xc020788c20 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457c0, 0x1, 0x1, 0xc012f58a30, 0x1, 0x1, 0xc0109457c0, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020788d28 sp=0xc020788c88 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87e60, 0xc012f58a30, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020788db0 sp=0xc020788d28 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87e60, 0xc012f589e0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020788e18 sp=0xc020788db0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457b8, 0x1, 0x1, 0xc012f589e0, 0x1, 0x1, 0xc0109457b8, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020788eb8 sp=0xc020788e18 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87da0, 0xc012f589e0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020788f40 sp=0xc020788eb8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87da0, 0xc012f589a0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020788fa8 sp=0xc020788f40 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457b0, 0x1, 0x1, 0xc012f589a0, 0x1, 0x1, 0xc0109457b0, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789048 sp=0xc020788fa8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87ce0, 0xc012f589a0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc0207890d0 sp=0xc020789048 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87ce0, 0xc012f58950, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789138 sp=0xc0207890d0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457a8, 0x1, 0x1, 0xc012f58950, 0x1, 0x1, 0xc0109457a8, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc0207891d8 sp=0xc020789138 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87c20, 0xc012f58950, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020789260 sp=0xc0207891d8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87c20, 0xc012f58910, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc0207892c8 sp=0xc020789260 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109457a0, 0x1, 0x1, 0xc012f58910, 0x1, 0x1, 0xc0109457a0, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789368 sp=0xc0207892c8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87b60, 0xc012f58910, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc0207893f0 sp=0xc020789368 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87b60, 0xc012f588c0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789458 sp=0xc0207893f0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945798, 0x1, 0x1, 0xc012f588c0, 0x1, 0x1, 0xc010945798, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc0207894f8 sp=0xc020789458 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87aa0, 0xc012f588c0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020789580 sp=0xc0207894f8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87aa0, 0xc012f58880, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc0207895e8 sp=0xc020789580 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945790, 0x1, 0x1, 0xc012f58880, 0x1, 0x1, 0xc010945790, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789688 sp=0xc0207895e8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a879e0, 0xc012f58880, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020789710 sp=0xc020789688 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a879e0, 0xc012f58830, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789778 sp=0xc020789710 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945788, 0x1, 0x1, 0xc012f58830, 0x1, 0x1, 0xc010945788, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789818 sp=0xc020789778 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87920, 0xc012f58830, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc0207898a0 sp=0xc020789818 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87920, 0xc012f587f0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789908 sp=0xc0207898a0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945780, 0x1, 0x1, 0xc012f587f0, 0x1, 0x1, 0xc010945780, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc0207899a8 sp=0xc020789908 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87860, 0xc012f587f0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020789a30 sp=0xc0207899a8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87860, 0xc012f587a0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789a98 sp=0xc020789a30 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945778, 0x1, 0x1, 0xc012f587a0, 0x1, 0x1, 0xc010945778, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789b38 sp=0xc020789a98 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a877a0, 0xc012f587a0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020789bc0 sp=0xc020789b38 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a877a0, 0xc012f58760, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789c28 sp=0xc020789bc0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945770, 0x1, 0x1, 0xc012f58760, 0x1, 0x1, 0xc010945770, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789cc8 sp=0xc020789c28 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a876e0, 0xc012f58760, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020789d50 sp=0xc020789cc8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a876e0, 0xc012f58710, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789db8 sp=0xc020789d50 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945768, 0x1, 0x1, 0xc012f58710, 0x1, 0x1, 0xc010945768, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789e58 sp=0xc020789db8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87620, 0xc012f58710, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc020789ee0 sp=0xc020789e58 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87620, 0xc012f586d0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc020789f48 sp=0xc020789ee0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945760, 0x1, 0x1, 0xc012f586d0, 0x1, 0x1, 0xc010945760, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc020789fe8 sp=0xc020789f48 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87560, 0xc012f586d0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078a070 sp=0xc020789fe8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87560, 0xc012f58680, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078a0d8 sp=0xc02078a070 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945758, 0x1, 0x1, 0xc012f58680, 0x1, 0x1, 0xc010945758, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078a178 sp=0xc02078a0d8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a874a0, 0xc012f58680, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078a200 sp=0xc02078a178 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a874a0, 0xc012f58640, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078a268 sp=0xc02078a200 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945750, 0x1, 0x1, 0xc012f58640, 0x1, 0x1, 0xc010945750, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078a308 sp=0xc02078a268 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a873e0, 0xc012f58640, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078a390 sp=0xc02078a308 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a873e0, 0xc012f585f0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078a3f8 sp=0xc02078a390 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945748, 0x1, 0x1, 0xc012f585f0, 0x1, 0x1, 0xc010945748, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078a498 sp=0xc02078a3f8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87320, 0xc012f585f0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078a520 sp=0xc02078a498 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87320, 0xc012f585b0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078a588 sp=0xc02078a520 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945740, 0x1, 0x1, 0xc012f585b0, 0x1, 0x1, 0xc010945740, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078a628 sp=0xc02078a588 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87260, 0xc012f585b0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078a6b0 sp=0xc02078a628 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87260, 0xc012f58560, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078a718 sp=0xc02078a6b0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945738, 0x1, 0x1, 0xc012f58560, 0x1, 0x1, 0xc010945738, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078a7b8 sp=0xc02078a718 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a871a0, 0xc012f58560, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078a840 sp=0xc02078a7b8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a871a0, 0xc012f58520, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078a8a8 sp=0xc02078a840 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945730, 0x1, 0x1, 0xc012f58520, 0x1, 0x1, 0xc010945730, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078a948 sp=0xc02078a8a8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a870e0, 0xc012f58520, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078a9d0 sp=0xc02078a948 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a870e0, 0xc012f584d0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078aa38 sp=0xc02078a9d0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945728, 0x1, 0x1, 0xc012f584d0, 0x1, 0x1, 0xc010945728, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078aad8 sp=0xc02078aa38 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a87020, 0xc012f584d0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078ab60 sp=0xc02078aad8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a87020, 0xc012f58490, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078abc8 sp=0xc02078ab60 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945720, 0x1, 0x1, 0xc012f58490, 0x1, 0x1, 0xc010945720, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078ac68 sp=0xc02078abc8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a86f60, 0xc012f58490, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078acf0 sp=0xc02078ac68 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a86f60, 0xc012f58440, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078ad58 sp=0xc02078acf0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945718, 0x1, 0x1, 0xc012f58440, 0x1, 0x1, 0xc010945718, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078adf8 sp=0xc02078ad58 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a86ea0, 0xc012f58440, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078ae80 sp=0xc02078adf8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a86ea0, 0xc012f58400, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078aee8 sp=0xc02078ae80 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945710, 0x1, 0x1, 0xc012f58400, 0x1, 0x1, 0xc010945710, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078af88 sp=0xc02078aee8 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a86de0, 0xc012f58400, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078b010 sp=0xc02078af88 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a86de0, 0xc012f583b0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078b078 sp=0xc02078b010 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945708, 0x1, 0x1, 0xc012f583b0, 0x1, 0x1, 0xc010945708, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078b118 sp=0xc02078b078 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a86d20, 0xc012f583b0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078b1a0 sp=0xc02078b118 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a86d20, 0xc012f58370, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078b208 sp=0xc02078b1a0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc010945700, 0x1, 0x1, 0xc012f58370, 0x1, 0x1, 0xc010945700, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078b2a8 sp=0xc02078b208 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a86c60, 0xc012f58370, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078b330 sp=0xc02078b2a8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a86c60, 0xc012f58320, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078b398 sp=0xc02078b330 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109456f8, 0x1, 0x1, 0xc012f58320, 0x1, 0x1, 0xc0109456f8, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078b438 sp=0xc02078b398 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a86ba0, 0xc012f58320, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078b4c0 sp=0xc02078b438 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a86ba0, 0xc012f582e0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078b528 sp=0xc02078b4c0 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109456f0, 0x1, 0x1, 0xc012f582e0, 0x1, 0x1, 0xc0109456f0, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078b5c8 sp=0xc02078b528 pc=0x16904ef
github.com/brimdata/zed/pkg/charm.(*Spec).Exec(0x20258a0, 0x1b68be0, 0xc010a86ae0, 0xc012f582e0, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/charm.go:57 +0xe9 fp=0xc02078b650 sp=0xc02078b5c8 pc=0x168d669
github.com/brimdata/brimcap/cmd/brimcap/root.(*Command).Run(0xc010a86ae0, 0xc012f58290, 0x1, 0x1, 0x0, 0x0)
	/Users/phil/work/brimcap/cmd/brimcap/root/command.go:78 +0x145 fp=0xc02078b6b8 sp=0xc02078b650 pc=0x18020e5
github.com/brimdata/zed/pkg/charm.path.run(0xc0109456e8, 0x1, 0x1, 0xc012f58290, 0x1, 0x1, 0xc0109456e8, 0x1)
	/Users/phil/.go/pkg/mod/github.com/brimdata/[email protected]/pkg/charm/path.go:11 +0x8f fp=0xc02078b758 sp=0xc02078b6b8 pc=0x16904ef

Excessively detailed units shown during analysis

This is basically just a cosmetic thing and hence not urgent, but I noticed it in passing and figured I'd file. Repro is with Brimcap tagged v0.0.4.

During brimcap analyze, the "bytes" units shown in the progress meter separately break out the mega/kilo/byte portions. Since they're all run together, the value is effectively impossible to parse visually and just becomes eye candy while I watch the percent-based meter. An improvement might be to show the units at one scale chosen from the size of the pcap, i.e. if the pcap is <1 GiB, show just MiB units.

$ brimcap analyze -o out.zng wrccdc.pcap 
100.0% 476MiB858KiB604B/476MiB858KiB604B records=582079 
Repro.mp4
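To illustrate the suggestion, here's a rough Go sketch of rendering the byte counts at a single scale chosen from the total size. This is purely a cosmetic straw man, not how Brimcap currently formats the meter.

package main

import "fmt"

// formatAtScale picks one unit based on the total size and renders both byte
// counts at that same scale, e.g. "238.4/476.8 MiB" instead of mixed units.
func formatAtScale(done, total uint64) string {
    units := []string{"B", "KiB", "MiB", "GiB", "TiB"}
    scale, div := 0, uint64(1)
    for total/div >= 1024 && scale < len(units)-1 {
        div *= 1024
        scale++
    }
    return fmt.Sprintf("%.1f/%.1f %s",
        float64(done)/float64(div), float64(total)/float64(div), units[scale])
}

func main() {
    total := uint64(476)*1024*1024 + 858*1024 + 604 // the size shown above
    fmt.Println(formatAtScale(total/2, total))      // prints "238.4/476.8 MiB"
}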

Add zed auto update

Similarly to what we do with brim, the commit sha for zed should be updated whenever brimdata/zed main gets updated. The full suite of tests should be run and we should be notified if a failure occurs.

Option to disable tail'ed processing of an analyzer's logs

Repro is with Brimcap commit 1fa5fc4 and https://archive.wrccdc.org/pcaps/2018/wrccdc.2018-03-23.010014000000000.pcap.gz (uncompressed) as my test data.

In my verification steps in #16 (comment), I first used this unsuccessful approach to try to work around https://redmine.openinfosecfoundation.org/issues/4106, mistakenly thinking that all I needed to do was leave behind only valid logs to be subject to Zed processing.

$ cat /tmp/mysuricata 
#!/bin/bash
suricata -r /dev/stdin
cat eve.json | jq -c . > deduped-eve.json
shopt -s extglob
rm !("deduped-eve.json")

$ brimcap analyze -Z -config variant.yml ~/pcap/wrccdc.pcap > wrccdc.zson
{"type":"error","error":"duplicate field subject"}

@mattnibs explained to me what went wrong here. The "ztail" functionality in Brimcap starts performing Zed processing on the logs generated by the analyzer processes even before those processes are finished, since this allows users to potentially perform early querying on partial output. Because of this, Brimcap ended up choking on the partially-built eve.json (which contains the duplicate field names) before my wrapper script had a chance to delete it.

This led me to learn about and start using the globs parameter in the Brimcap config YAML such that the ztail would only tail the deduped-eve.json file, so I was all set. However, having gone through the experience, I now recognized it would still be convenient to have a way to disable this ztail behavior entirely when processing an analyzer's generated logs, for two reasons I can think of:

  1. Whereas the post-processing I was doing here with jq lent itself to output you could still "tail", some kinds of post-processing may not (e.g. they might rely on making an entire pass through a generated log once the complete output is present)
  2. Some users may know they don't want to query partial results and therefore don't want to burn CPU cycles on the incremental Zed processing and instead just wait until all logs are finished being output

Offer per-platform guidance for running custom Zeek/Suricata

With the direction things are heading in #11, it's looking like (at least for a while) we'll be publishing per-platform Brimcap artifacts that include embedded Zeek/Suricata binaries that reliably turn pcaps into logs that will import easily into a Space and be well-presented in Brim. However, part of what we're trying to achieve with Brimcap is to make it easier for users to bring their own custom Zeek/Suricata (and potentially additional pcap processing tools). Therefore, it'll be helpful for us to provide some guidance to get users started on that path.

We already have some coverage in the README in the way we point macOS users at brew, and no doubt we can do the same with apt or yum on Linux. However, when simply "playing user" ourselves in attempting to use Brimcap in this manner, we've already bumped into a couple problems that users would be likely to hit if we just hand-wave at brew/apt and call it a day, e.g. #4 which affects macOS users that install via brew, and #13 which affects both macOS and Linux. The question of running zkg via virtualenv vs. sudo pip3 install may also be worth touching on. I'm guessing we'll bump into more topics over time as users try it out and give us feedback. We can probably cover these all through some combination of per-platform README docs and maybe some helper install scripts that execute the necessary commands on out-of-the-box macOS/Linux systems to install the versions in the same way our CI does when we test. For Windows, we'll probably want to set expectations that they'll be limited in how much they can customize, and we can point to the open Zeek issue that's tracking the potential for formal Windows support in case they want to weigh in there with a 👍 or contribute to the effort.

Beyond known bugs and limitations, we'll probably also want to add other guidance regarding customization, e.g. what to expect if they modify the Suricata shaper to let through all records other than event_type of alert, or let through time-free Zeek logs like loaded_scripts. I think this would provide a decent intro to the concept of shaping and could be cross-referenced with whatever we write for #8.

Suricata Query Library entry add/adjust

A community user spoke up on Slack with some feedback on the current Suricata entries in the Query Library.

event_type=alert | count() by alert.signature can be a useful [addition]
...
I want to see which signatures triggered

Also:

Suricata Alerts by Source and Destination is helpful but I personally would change it to event_type=alert | alerts=union(alert.signature) by src_ip, dest_ip or create a new query for it

Tools to maintain the Brimcap root

The entries in the Brimcap root are JSON files with opaque names that contain pointers to the filesystem location of loaded pcaps. I expect users may appreciate having a way to remind themselves of what's been imported, remove old/unneeded ones, and so forth, without having to create their own tooling to parse the JSON files. Following the pattern of what we've done with other CLI tooling, I could imagine this being provided by commands like brimcap ls and brimcap rm.
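As a rough illustration of what a brimcap ls might print, here's a Go sketch that scans a Brimcap root and reports the pcap_path recorded in each index JSON file. The idx-*.json naming and the pcap_path field follow the example root contents shown in another issue below; treat the rest as assumptions rather than the actual index format.

package main

import (
    "encoding/json"
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    root := "."
    if len(os.Args) > 1 {
        root = os.Args[1] // e.g. ~/brimcap-root
    }
    files, err := filepath.Glob(filepath.Join(root, "idx-*.json"))
    if err != nil {
        panic(err)
    }
    for _, name := range files {
        data, err := os.ReadFile(name)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            continue
        }
        // Only the pcap_path pointer is of interest here; the rest of the
        // index structure is ignored.
        var idx struct {
            PcapPath string `json:"pcap_path"`
        }
        if err := json.Unmarshal(data, &idx); err != nil {
            fmt.Fprintln(os.Stderr, err)
            continue
        }
        fmt.Printf("%s\t%s\n", filepath.Base(name), idx.PcapPath)
    }
}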

Add "top queried domains" to Query Library

Inspired by the community inquiry tracked in brimdata/zed#2092, it seems we could benefit from adding something similar to the default Query Library. Right now the only query we have tagged as dns is "Unique DNS Queries" (_path=="dns" | count() by query | sort -r). This other one seems like it would be handy to give a similar summary at the domain level:

_path=="dns" | count() by domain:=join(split(query,".")[-2:],".") | sort -r

An example output with test data https://archive.wrccdc.org/pcaps/2018/wrccdc.2018-03-23.010014000000000.pcap.gz:

(screenshot of the query's example output)

Alas, while a useful table, it reveals another example of where you can't successfully "Pivot to logs" from this table to isolate the intended results, hence making another case for brimdata/zui#1420. But we might want to add it regardless just because the summary is useful by itself.

Community ID Zeek package install fails atop Homebrew-installed Zeek v4.0.0 on macOS

This is basically just a tracking issue on our end to point at corelight/zeek-community-id#15.

The tl;dr is that if we advise Brimcap users to install their own off-the-shelf Zeek (as opposed to one we build that comes with Community ID already installed) and install the Community ID package so they can easily pivot between Suricata alerts and Zeek events in the logs generated from pcaps, they're likely to hit corelight/zeek-community-id#15 if they install Zeek via brew and zkg via pip3 instead of compiling Zeek themselves. Depending on if/when that gets addressed and what direction we go with Brimcap, we may want to highlight this in macOS guidance in a README.

Run Suricata rules update periodically

My understanding (please correct me if I'm wrong) is that pre-Brimcap, the code at https://github.com/brimdata/zed/blob/13557d99e68eec10f36093367c97e948fc5487fa/ppl/cmd/zqd/listen/command.go#L232-L249 made it such that the suricataupdater effectively ran once each time the app was launched, since zqd took care of it. Since zqd is no longer handling pcap processing, we want to make sure the rules update is triggered periodically in some other way.

I'm not sure I have a perfect proposal. We could run it every time Suricata is invoked (i.e. as a side effect of every brimcap analyze or brimcap load), since the updater is smart enough to not spam the update server unnecessarily (if you run it multiple times close together, you see a message like "Last download less than 15 minutes ago. Not downloading https://rules.emergingthreats.net/open/suricata-5.0.3/emerging.rules.tar.gz."). However, it still ends up re-processing the rules, which takes 8 seconds on my laptop and doesn't seem great if users might be running brimcap load programmatically many times (one user has already expressed an intent to load "thousands" of pcaps to the same pool, for instance). In terms of what Suricata users likely do in practice, I'm guessing a cron job is the most mature way to address this, but for a Brim user who sees this as an extension of a desktop app, creating cron jobs and scheduled tasks seems heavyweight. So maybe it makes sense to recreate something similar to what we had before, where the update runs occasionally as a side effect of other user activity.

As long as the plan of record is to bundle Brimcap with the Brim app and hence the expected paths to the Brimcap binaries are well-known, perhaps it would be reasonable to have the app just invoke the suricataupdater directly at launch time to get to the functional equivalent of what was happening before when zqd handled it.

Maybe a fancier Brimcap-side solution we could consider one day might use some logic in the zed run approach, with YAML describing a check of the last modified time of a particular path (i.e. the rules file) that automatically invokes a specified process (the suricataupdater in this case) if it's beyond a certain age. That way they could brimcap load to their heart's content, and once in a while the update would happen quietly without them even having to be aware.
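A minimal Go sketch of that last-modified check follows. Both paths are placeholders for illustration only, not where Brimcap actually keeps the rules file or the updater binary.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

// maybeUpdateRules runs the updater only when the rules file is missing or
// older than maxAge, so frequent brimcap load runs don't pay the update cost.
func maybeUpdateRules(rulesPath, updater string, maxAge time.Duration) error {
    info, err := os.Stat(rulesPath)
    if err == nil && time.Since(info.ModTime()) < maxAge {
        return nil // rules are fresh enough; skip the updater entirely
    }
    cmd := exec.Command(updater)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    // Placeholder paths for illustration only.
    err := maybeUpdateRules("/path/to/suricata.rules", "/path/to/suricataupdater", 24*time.Hour)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}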

Suricata v6 generates JSON objects with duplicate fields

This is basically just a tracking issue on our end to point at https://redmine.openinfosecfoundation.org/issues/4106.

When starting to work on Brimcap by using brew/apt-installed Suricata, we ran into the symptom described in brimdata/zed#2452. I've since found an open Suricata issue https://redmine.openinfosecfoundation.org/issues/4106 describing the same symptom. It had been stuck in the mud for several months because the Suricata developers were waiting on a pcap that could reproduce the problem. Since we could easily repro it using public data, I chimed in with the details, so hopefully they can move it forward. I'll hold this one open so we know to caution users about it if it's not addressed in a timely manner.

Article describing how to recreate custom Zeek/Suricata configs

brimdata/zui#1593 describes our intent to replace the "a la carte" Brim custom Zeek/Suricata Runner prefs with one that will allow the user to point to a Brimcap config YAML. Since Brimcap is new to these users, I should write an article that describes in brief how they can recreate a YAML config equivalent to what they had before. We can link to the article from the notification that users will see in the app upon upgrade (brimdata/zui#1594), and we can also link to it next to the Preference setting (similar to what we did before with the "docs" link next to the Zeek Runner pref).

While I'm at it, I should update the Brim article at https://github.com/brimdata/brim/wiki/Zeek-Customization to emphasize that it's only relevant to releases v0.24.0 and earlier, and similarly reference the Brimcap doc as the place to learn about the new approach.

Since the config YAML needs to include a shaper, this probably also represents a first pass of coverage for #8.

Fill out help commands

We need better help command information for the following subcommands:

  • analyze
  • launch
  • load
  • search
  • the root command

"brimcapd" server

At the moment Brimcap only allows for populating and querying a local "Brimcap root". This means that if a Brim app is connected to a remote lake and accesses a pool that was created by loading a pcap via Brimcap on that remote side, clicking the Packets button will still query the local Brimcap root and the flow will not be found. If the user is savvy enough to run brimcap index locally against the same pcap to populate their personal Brimcap root, that would make the Packets button work as expected. But this is probably asking too much of users.

When contemplating this feature gap, we recognized there's room for something like a "brimcapd server" such that the local Brimcap could do a remote "search" by connecting to the remote brimcapd, which could then extract the relevant flow and return it over the network to be displayed locally in Wireshark.
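For illustration, here's a bare-bones Go sketch of the shape such a server could take: an HTTP handler that accepts a flow query and streams the extracted packets back. extractFlow is just a stub standing in for whatever brimcap search does locally today, and the route and parameters are made up.

package main

import (
    "errors"
    "io"
    "log"
    "net/http"
    "net/url"
)

// extractFlow is a placeholder for the local flow-extraction logic that
// brimcap search performs today; it would write a mini pcap to w.
func extractFlow(w io.Writer, params url.Values) error {
    return errors.New("not implemented: placeholder for local flow extraction")
}

func handleSearch(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/vnd.tcpdump.pcap")
    if err := extractFlow(w, r.URL.Query()); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}

func main() {
    // Hypothetical route; a real interface would also need auth, a root
    // selector, and the same flow parameters brimcap search accepts.
    http.HandleFunc("/search", handleSearch)
    log.Fatal(http.ListenAndServe(":9988", nil))
}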

Exhaustive pcap testing

Issues like the one fixed in brimdata/zed#462 remind us that we're likely to encounter pcap oddities in the wild. Even if we can't anticipate every corner case and have perfectly-crafted error handling for each one, ideally we'd be able to handle them gracefully.

To weed out known problems in advance, one thing we could do is throw as much diverse test data at it as we can. Some known pcap sources:

Here's some of my own testing ideas for consideration:

  • It seems like a no-brainer to make sure brimcap analyze and brimcap index run on them without complaint.
  • Use tshark to extract each udp/tcp flow from the original pcap (tcp.stream eq N etc.) and record each flow's 5-tuple and timestamp/duration information, then use that to construct a brimcap search command line that tries to extract the equivalent flow from the original pcap file, and finally confirm that it succeeds and that the mini pcap we extracted matches the one extracted via tshark

Expose "brimcap info"

brimdata/zed#1354 provided the start of a useful informational/debug tool for working with packet capture files, and this functionality is now available via brimcap. For instance, as of brimcap tag v1.2.0:

$ brimcap -version
Version: v1.2.0

$ brimcap info ifconfig.pcapng 
Pcap type:         pcapng
Pcap Version:      1.0
Number of packets: 11
Interface 0:
    Description:       Wi-Fi
    Link type:         Ethernet
    Time resolution:   10^-6
    Packet size limit: 524288

When this issue was opened, packet processing was being done via the zqd backend process, so the original thought was to expose this info via an endpoint that the app could query. However, the app now invokes the brimcap binary directly for packet processing needs, so this issue can be closed.

Warn/prevent a user from loading the same pcap to the same Pool multiple times

As of GA Brimcap tagged v0.0.2, it's possible to brimcap load the same pcap multiple times to the same Space. Example:

$ zapi new -k archivestore wrccdc
wrccdc: space created

$ brimcap load -s wrccdc -root ~/brimcap-root ~/wrccdc.pcap
100.0% 500.0MB/500.0MB records=550398 

$ brimcap load -s wrccdc -root ~/brimcap-root ~/wrccdc.pcap
100.0% 500.0MB/500.0MB records=670823 

$ brimcap load -s wrccdc -root ~/brimcap-root ~/wrccdc.pcap
100.0% 500.0MB/500.0MB records=451812

At the end of this, there's only one pcap index in the Brimcap root, which seems reasonable.

~/brimcap-root$ ls -l
total 832
-rw-------  1 phil  staff  424848 Apr 20 08:35 idx-aG5tLflou0WVcgKcj8VWAd30gui9d3SitZcqUahrKO4.json

~/brimcap-root$ cat idx-aG5tLflou0WVcgKcj8VWAd30gui9d3SitZcqUahrKO4.json
...,"pcap_path":"/Users/phil/wrccdc.pcap"}

However, there are duplicate events in Brim. This is unsurprising because lakes do not attempt to deduplicate, but it makes the logs messy to work with.

It seems like repeat loading of the same pcap to the same Space would almost always be a mistake, so the user would probably appreciate it if we were able to warn/prevent this, such as by pausing and confirming it's what they truly want to do. Perhaps we could offer some kind of -force option if the user truly wants to do it no matter what.

Data ingestion crashes if system libmagic or /usr/bin/file is too new at Suricata analysis

Describe the bug
/lib/brim/resources/app/zdeps/suricata/suricatarunner currently uses a pre-included libmagic. This looks in /etc/magic and /usr/share/misc/ for magic/mgc, with no way to configure it. This is problematic for two reasons:

  1. Some Linux distributions install magic.mgc in /usr/share/file/misc/magic.mgc instead of /usr/share/misc/magic.mgc, which is specified at compile time for the file binary that builds the system-level libmagic. As a result, when finishing ingesting data and running through suricata to generate alerts, brim will suddenly close the dataset with the following error:
Unable to generate full summary logs from PCAP 

Detail: /usr/lib/brim/resources/app/zdeps/suricata/suricatarunner exited with status 1: /etc/magic, 0: Warning: using regular magic file `/usr/share/misc/magic'

20/3/2021 -- 15:14:25 - <Error> - [ERRCODE: SC_ERR_MAGIC_LOAD(197)] - magic_load failed: could not find any valid magic files!

This is because it is in /usr/share/file/misc/magic.mgc, but there is no way to configure suricata in brim to look there without manually re-compiling the zdep with a different libmagic.
2. A somewhat okay workaround is to add a symlink to the magic file. For example:

/usr/share/misc/magic.mgc -> /usr/share/file/misc/magic.mgc

This can technically work; however, some Linux distributions have newer versions of file and libmagic. In this case, the following error can occur if magic.mgc is too new for the currently included suricata:

Unable to generate full summary logs from PCAP 

Detail: /usr/lib/brim/resources/app/zdeps/suricata/suricatarunner exited with status 1: 20/3/2021 -- 15:20:28 - <Error> - [ERRCODE: SC_ERR_MAGIC_LOAD(197)] - magic_load failed: File 5.32 supports only version 14 magic files. `/usr/share/misc/magic.mgc' is version 16

This happens because magic.mgc is compiled with file 5.39, which generates version 16 magic, but the included suricata uses file 5.32 and only supports version 14 and below.
To Reproduce

  1. Install file configured to store magic in /usr/share/file/misc, for example like this PKGBUILD that Arch Linux uses: https://github.com/archlinux/svntogit-packages/blob/packages/file/trunk/PKGBUILD
  2. Install brim by source or a binary .deb download.
  3. Ingest a pcap in brim.
  4. Wait for the entire pcap progress bar to load, then observe the error. Will be one of the two mentioned above.

Expected behavior
Suricata alerts are generated, and the pcap results are fully visible. Downloading an older version of file on the system in a fakeroot, compiling a new magic.mgc with version 14, and placing it in /etc/share/misc/ is the only way that I've been able to get things to work as expected.

Screenshots
Data ingestion, for example with this pcap (small window because the included data is confidential), works fine all the way until the progress bar gets to the end (fully loaded, then suricatarunner starts):

Then, if magic is not stored in /etc/magic or /etc/share/misc/magic by system config, the following error occurs:

If you generate a magic file there or add a symlink to the correct magic file, you still get an error if the system-wide libmagic or file is too new, since it generates version 16 rather than version 14:

Desktop Info

  • OS: Arch Linux, with file 5.39 installed.
  • Brim Version 0.24.0
