
add breadth first search option (fd, 26 comments, closed)

sharkdp commented on May 3, 2024
add breadth first search option

from fd.

Comments (26)

rain-1 commented on May 3, 2024

Can this be reconsidered? I find the BFS feature very valuable because sometimes I know a file is in one of the folders nearby but find with DFS goes down rabbit holes that take a very long time.

rain-1 commented on May 3, 2024

I made a standalone tool to do BFS https://github.com/rain-1/bfs

tavianator commented on May 3, 2024

> Tricks with caching are possible, but that also increases implementation complexity because you get the joys of needing to tackle cache invalidation. Maybe it's easier than I think. Usually it isn't. :-)

Luckily cache invalidation is only the second-hardest problem in computer science ;)

amosbird commented on May 3, 2024

Here is an example with fzf: https://asciinema.org/a/3q81ow5y0lis5t11fyjir42o3

tmccombs commented on May 3, 2024

Probably not. At least, not unless the implementation added little code or other complexity to fd and was still performant.

snowman commented on May 3, 2024

Caveat: this does NOT work as expected.

Suppose `fd -td` returns:

foo/bar/baz/
fooooooooooooooooo/    # sorts last instead of first under the dirname-length sort below

paths_breadth_first() {
  while IFS= read -r line; do
    dirn=${line%/*}                       ## dirname(line)
    printf '%s,%s\n' "${#dirn}" "$line"   ## len(dirn),line
  done | sort -n | cut -d ',' -f 2-
}
$ fd -td | paths_breadth_first | fzf

Added this to .zshrc:

function paths_breadth_first() {
  while IFS= read -r line; do
    dirn=${line%/*}                       ## dirname(line)
    printf '%s,%s\n' "${#dirn}" "$line"   ## len(dirn),line
  done | sort -n | cut -d ',' -f 2-
}
function d2() {
  dir_name="$(fd --strip-cwd-prefix -td | paths_breadth_first | fzf)"

  if [ -d "$dir_name" ]; then
     cd "$dir_name"
  fi
}
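One way to avoid the caveat above is to sort on path depth (the number of `/` characters) instead of on dirname length, so a long name at a shallow depth still sorts first. A sketch (the `paths_by_depth` name is invented for illustration):

```shell
# Sort by depth (count of '/' in the path) rather than dirname length,
# so "fooooooooooooooooo/" (1 slash) sorts before "foo/bar/baz/" (3 slashes).
paths_by_depth() {
  while IFS= read -r line; do
    slashes=${line//[!\/]/}                 # keep only the '/' characters
    printf '%s,%s\n' "${#slashes}" "$line"  # depth,path
  done | sort -t ',' -k 1,1n | cut -d ',' -f 2-
}
```

Usage would mirror the original: `fd -td | paths_by_depth | fzf`.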

sharkdp commented on May 3, 2024

Hi, thank you for the feedback!

Could you elaborate on what you mean by "interactively navigating directories" and how a breadth-first traversal would help?

BurntSushi commented on May 3, 2024

@amosbird Why do you need breadth first search for that? Seems like a max depth option would do just fine?

amosbird commented on May 3, 2024

@BurntSushi BFS makes the directory output sorted more naturally, with parent directories at the top of the list.
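One way to approximate that ordering without true BFS is iterative deepening: walk the tree once per level, so all depth-1 entries print before any depth-2 entries. A sketch using plain `find` (the `ids_list` helper and the fixed depth limit are invented for illustration; fd has analogous `--min-depth`/`--max-depth` flags, and this is essentially the `-S ids` strategy bfs offers):

```shell
# Iterative deepening: one full pass per depth level. Trades repeated
# tree walks for BFS-like output order (shallow entries first).
ids_list() {
  root=$1
  for depth in 1 2 3; do      # fixed limit, for illustration only
    find "$root" -mindepth "$depth" -maxdepth "$depth" -print | sort
  done
}
```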

sharkdp commented on May 3, 2024

Thank you very much for the suggestion, but I'm going to close this issue (for now).

Here's why:

  1. In my opinion, "interactively navigating directories" is a fairly specific use case for fd. I haven't seen any similar feature in related tools or in find.
  2. I'd like to keep fd simple. Adding a breadth-first option would probably require another flag (unless it became the default).
  3. I'm probably going to switch to a parallel directory traversal (see #32, #41), in which case the output will probably not be sorted anyway.
  4. It's currently really hard to implement, as breadth-first traversal is not supported by the ignore crate (to my knowledge).

I'm still happy to discuss further, if anything should come up.

BurntSushi commented on May 3, 2024

@sharkdp walkdir doesn't support breadth first search either, since it's typically not what you want and ends up using resources proportional to the width of directories instead of depth (depth tends to be much smaller). If you wanted breadth first search, you'd have to roll your own.
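The resource trade-off shows up clearly in a toy explicit-queue traversal (a sketch, not walkdir's or the ignore crate's actual code): the BFS frontier holds every pending entry across the current level, so its peak size scales with directory width, whereas a DFS stack holds one path's worth of ancestors and scales with depth.

```shell
# Toy breadth-first directory walk with an explicit queue (bash arrays).
# The queue holds the whole frontier, so its peak size is proportional
# to the widest level of the tree, not to its depth.
bfs_dirs() {
  queue=("$1")
  while [ "${#queue[@]}" -gt 0 ]; do
    dir=${queue[0]}
    queue=("${queue[@]:1}")      # dequeue from the front
    printf '%s\n' "$dir"
    for child in "$dir"/*/; do   # enqueue each subdirectory
      if [ -d "$child" ]; then
        queue+=("${child%/}")
      fi
    done
  done
}
```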

michaeleisel commented on May 3, 2024

What about a flag where it collects the results before printing anything, then prints them out sorted by depth? Often, I'm most interested in results that are of a smaller depth, but I don't want to have to tweak --max-depth.

sharkdp commented on May 3, 2024

I think that a functionality like this could be easily implemented in a wrapper script. I currently don't see this as part of fd.
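Such a wrapper could be as small as a stable sort keyed on path depth. A sketch (the `sort_by_depth` name is invented; note that, like the flag proposed above, it buffers all output before printing anything):

```shell
# Collect everything, then print shallow results first: prefix each line
# with its '/'-separated field count (depth + 1), sort stably on that
# numeric key, then strip the key back off.
sort_by_depth() {
  awk -F/ '{ printf "%d\t%s\n", NF, $0 }' | sort -s -n -k 1,1 | cut -f 2-
}
```

Usage: `fd | sort_by_depth | fzf`.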

sharkdp commented on May 3, 2024

@rain-1 Please see #599, #734 and bfs by @tavianator

rain-1 commented on May 3, 2024

@sharkdp I've had a look at those two and don't really understand what conclusion to draw from them.

My question is basically, would adding a BFS feature to this tool be acceptable if someone implemented it?

michaeleisel commented on May 3, 2024

Note that https://github.com/tavianator/bfs is a fast fd/find replacement that does breadth-first search by default. In its own benchmarks, it also beats fd for total time to complete a search. So, it may be the best of both worlds.

BurntSushi commented on May 3, 2024

@michaeleisel where are the steps to reproduce those benchmarks?

michaeleisel commented on May 3, 2024

Here's the post I read on it, not sure how deep it goes into test reproduction: https://tavianator.com/2023/bfs_3.0.html

I'll also say that, even if fd turned out to still generally be faster at completing whole searches, as long as bfs stays in the same ballpark I would consider bfs higher-performance, because it would be faster overall at what I actually want: finding files of interest that are typically fairly shallow in the file hierarchy from where I'm searching.

BurntSushi commented on May 3, 2024

Yes I read that post. There are no steps to reproduce the benchmark.

I'm the author of the ignore crate that actually does the directory traversal. It used to do breadth first, but I switched it to depth first because it improved things a ton in at least a few cases. But there's no free lunch. There are undoubtedly some cases where breadth first is better.

My point is, I'm looking for real actionable benchmarks instead of just vague claims that one is faster/better than the other.

See: BurntSushi/ripgrep#1554

michaeleisel commented on May 3, 2024

FWIW, fd's benchmarks at https://github.com/sharkdp/fd-benchmarks appear equally vague, because they don't specify how to create the files for the searches. But it seems reasonable to set up a more reproducible benchmark between these two projects, if only to understand how they compare in terms of time to complete searches. Unfortunately, I don't see how to quantify the benefit of breadth-first search vs. depth-first search even with lots of benchmarks, because it's based on the user's experiences and what they see. It just has to be a judgment call I think, but I do see people in this thread and others with feelings similar to mine on the subjective tradeoff.

@tavianator FYI

BurntSushi commented on May 3, 2024

Yes, user experience is important. For example, breadth-first search leads to much higher memory usage because the ignore crate respects gitignore/ignore files. With breadth-first search in wide directories each containing ignore files, the ignore crate keeps all of those matchers live simultaneously in memory. A depth-first search tends to have a lot less in memory simultaneously.

Of course, you can disable ignore filtering, and yet you're still left with depth-first search even though it no longer has the memory usage benefit with respect to ignore rule matchers. So now you start down paths like 'well, let's "just" implement both and let the user decide, or pick the one that is probably best automatically.' And then you start needing to weigh implementation complexity against user features.

That's all to say, I understand there is a balance to be struck here. One of those things is performance. That can be quantified. So when you start talking about one technique being faster than the other, that's one part of the balancing act. Not all of it. But it is a part that can be reasonably quantified.

Hence why I asked how to reproduce the benchmarks.

tavianator commented on May 3, 2024

The benchmarks from the blog post use my entire home directory to have as many files as possible. I can't really share that of course. But the command lines are shown in the hyperfine output if you want to try with your own directory tree.

I also have a more comprehensive set of benchmarks you can see in some PRs like tavianator/bfs#107. Those use publicly available git repos like the Linux repo and the Android source tree. I haven't published the scripts for those yet, but you can see the command lines in the output as well. They also only compare bfs to itself, but once I publish the scripts I'll include find and fd for reference.

@BurntSushi you're of course right that BFS uses more memory than DFS in most cases. I remember the case that made you switch was the entire cargo package repository right? Sort of a worst case with extremely high fanout and most directories having a .gitignore to keep in memory. At the time I tried BFS on the same corpus and the memory use wasn't excessive, but of course it doesn't support .gitignore so not a fair comparison.

I am currently working on a Rust version of the traversal strategy in bfs, and I'll definitely try the cargo corpus with it. I have some ideas on how to limit memory consumption if necessary, e.g. a limited size cache for .gitignore rules that can be re-parsed if they get evicted.

One last thing: the performance of fd on my system is probably anomalous. For most people it handily beats findutils.

BurntSushi commented on May 3, 2024

> The benchmarks from the blog post use my entire home directory to have as many files as possible. I can't really share that of course. But the command lines are shown in the hyperfine output if you want to try with your own directory tree.

Yes, I understand that. It isn't reproducible because I don't have your home directory. What I'm asking for is something I can run on my own machine that reproduces your results. Otherwise it's too easy to goof and think you're measuring the same thing when you aren't. I've looked at a lot of these kinds of examples with various issues reported against ripgrep, and it's not too rare for those scenarios to involve something "weird." Your home directory, for example, may or may not be representative and may or may not be a good target to optimize for. Without reproductions, this sort of holistic thinking is difficult to do.

> At the time I tried BFS on the same corpus and the memory use wasn't excessive, but of course it doesn't support .gitignore so not a fair comparison.

Right yeah, it was those matchers using up most of the memory. Tricks with caching are possible, but that also increases implementation complexity because you get the joys of needing to tackle cache invalidation. Maybe it's easier than I think. Usually it isn't. :-)

> I also have a more comprehensive set of benchmarks you can see in some PRs like tavianator/bfs#107. Those use publicly available git repos like the Linux repo and the Android source tree. I haven't published the scripts for those yet, but you can see the command lines in the output as well. They also only compare bfs to itself, but once I publish the scripts I'll include find and fd for reference.

Sounds good. If and when a precise set of commands are available to run those benchmarks then I'd be happy to take a quick look and do some analysis within the context of the ignore crate.

tavianator commented on May 3, 2024

> Sounds good. If and when a precise set of commands are available to run those benchmarks then I'd be happy to take a quick look and do some analysis within the context of the ignore crate.

I just pushed a branch for my benchmarks. Not totally done yet but you can try it out like this:

$ git clone "https://github.com/tavianator/tailfin"
$ git clone "https://github.com/tavianator/bfs"
$ cd bfs
$ git checkout benchmarks
$ ../tailfin/tailfin run bench/bench.sh 3.0.1 --linux --find --fd

The first run will clone https://github.com/torvalds/linux so it may take a while. The given command line will build bfs 3.0.1 and compare it to whatever find and fd are on your path. Remove the relevant arguments if you're not interested in those results.

Output will go to the console and also get saved in a directory like results/2023/08/16/17:05:45.

Here's my results/2023/08/16/17:05:45/runs/1/bench.md

Complete traversal

Linux v6.4 source tree

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/linux -false` | 31.5 ± 4.4 | 26.6 | 39.5 | 1.00 |
| `find bench/corpus/linux -false` | 99.6 ± 11.0 | 87.5 | 113.3 | 3.16 ± 0.56 |
| `fd -u '^$' bench/corpus/linux` | 221.6 ± 10.5 | 185.1 | 228.2 | 7.03 ± 1.03 |

Early termination

Linux v6.4 source tree

Depth 5

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/linux -name $(shuf -n1 $FILES) -print -quit` | 24.4 ± 3.3 | 18.1 | 30.4 | 1.00 |
| `find bench/corpus/linux -name $(shuf -n1 $FILES) -print -quit` | 55.0 ± 39.6 | 7.6 | 104.5 | 2.26 ± 1.66 |
| `fd -usg1 $(shuf -n1 $FILES) bench/corpus/linux` | 186.1 ± 36.9 | 128.6 | 242.6 | 7.64 ± 1.84 |

Depth 10

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/linux -name $(shuf -n1 $FILES) -print -quit` | 35.0 ± 3.5 | 30.7 | 40.3 | 1.00 |
| `find bench/corpus/linux -name $(shuf -n1 $FILES) -print -quit` | 75.8 ± 7.2 | 71.5 | 94.8 | 2.16 ± 0.30 |
| `fd -usg1 $(shuf -n1 $FILES) bench/corpus/linux` | 158.1 ± 17.8 | 132.7 | 192.8 | 4.51 ± 0.68 |

BurntSushi commented on May 3, 2024

Very nice, thank you! I'm able to reproduce that. Other than the impressive timings from bfs, I think the other most interesting thing here is that find is faster than fd, despite the former being single-threaded. IIRC, ripgrep used to be able to complete a search of a directory tree (like a checkout of the Linux kernel) faster than find could enumerate the files. But that's just from memory.

From briefly profiling, it looks like fd/ripgrep are spending a lot of time in lock contention.

Probably the next step is doing a deeper dive to see if anything about GNU find has changed to make it faster. And experimenting with different traversal approaches in ignore.

tavianator commented on May 3, 2024

I just pushed an update to that branch with some nice features.

The syntax changed a bit; here are some examples:

$ tailfin run bench/bench.sh --default 3.0.1 --find --fd
    # all default benchmarks against bfs 3.0.1, find, and fd
$ tailfin run bench/bench.sh --complete --fd
    # complete traversal of every corpus with just fd
$ tailfin run bench/bench.sh --complete="linux chromium" --find --fd
    # complete traversal of Linux and Chromium with find and fd

Results of tailfin run bench/bench.sh --default 3.0.1 --find --fd

Complete traversal

rust 1.71.0 (44530 files)

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/rust -false` | 21.8 ± 2.6 | 18.2 | 26.8 | 1.00 |
| `find bench/corpus/rust -false` | 61.8 ± 12.1 | 51.4 | 87.4 | 2.83 ± 0.65 |
| `fd -u '^$' bench/corpus/rust` | 186.3 ± 12.9 | 166.8 | 207.7 | 8.53 ± 1.19 |

linux v6.4 (85538 files)

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/linux -false` | 34.1 ± 3.4 | 28.2 | 40.3 | 1.00 |
| `find bench/corpus/linux -false` | 102.7 ± 15.2 | 92.4 | 135.8 | 3.01 ± 0.54 |
| `fd -u '^$' bench/corpus/linux` | 228.3 ± 5.8 | 218.0 | 238.9 | 6.69 ± 0.70 |

llvm llvmorg-16.0.6 (139235 files)

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/llvm-project -false` | 48.7 ± 3.5 | 43.6 | 54.3 | 1.00 |
| `find bench/corpus/llvm-project -false` | 222.8 ± 12.3 | 208.7 | 236.4 | 4.57 ± 0.41 |
| `fd -u '^$' bench/corpus/llvm-project` | 282.2 ± 5.7 | 272.2 | 288.6 | 5.79 ± 0.43 |

chromium 118.0.5954.1 (465865 files)

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/chromium -false` | 110.8 ± 8.4 | 99.7 | 132.0 | 1.00 |
| `find bench/corpus/chromium -false` | 612.9 ± 9.9 | 601.6 | 624.7 | 5.53 ± 0.43 |
| `fd -u '^$' bench/corpus/chromium` | 641.8 ± 3.2 | 637.4 | 647.6 | 5.79 ± 0.44 |

Early termination

chromium 118.0.5954.1 (depth 16)

Depth 5

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/chromium -name $(shuf -n1 $FILES) -print -quit` | 47.7 ± 7.3 | 28.0 | 57.3 | 1.00 |
| `find bench/corpus/chromium -name $(shuf -n1 $FILES) -print -quit` | 231.9 ± 197.1 | 13.8 | 734.7 | 4.86 ± 4.20 |
| `fd -usg1 $(shuf -n1 $FILES) bench/corpus/chromium` | 488.2 ± 122.9 | 184.0 | 600.3 | 10.24 ± 3.02 |

Depth 10

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/chromium -name $(shuf -n1 $FILES) -print -quit` | 101.2 ± 4.0 | 93.7 | 108.9 | 1.00 |
| `find bench/corpus/chromium -name $(shuf -n1 $FILES) -print -quit` | 479.2 ± 17.6 | 455.3 | 507.7 | 4.74 ± 0.26 |
| `fd -usg1 $(shuf -n1 $FILES) bench/corpus/chromium` | 326.0 ± 27.8 | 283.9 | 387.4 | 3.22 ± 0.30 |

Depth 15

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/chromium -name $(shuf -n1 $FILES) -print -quit` | 111.3 ± 9.8 | 102.1 | 144.7 | 1.08 ± 0.35 |
| `find bench/corpus/chromium -name $(shuf -n1 $FILES) -print -quit` | 102.9 ± 31.8 | 34.6 | 144.4 | 1.00 |
| `fd -usg1 $(shuf -n1 $FILES) bench/corpus/chromium` | 571.2 ± 32.3 | 498.3 | 608.2 | 5.55 ± 1.74 |

Printing paths

Without colors

linux v6.4

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/linux` | 35.7 ± 10.8 | 25.2 | 51.4 | 1.00 |
| `find bench/corpus/linux` | 98.7 ± 0.5 | 98.0 | 99.8 | 2.77 ± 0.84 |
| `fd -u --search-path bench/corpus/linux` | 227.9 ± 3.4 | 223.2 | 234.9 | 6.39 ± 1.94 |

With colors

linux v6.4

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 bench/corpus/linux -color` | 205.2 ± 9.3 | 194.9 | 228.7 | 1.00 |
| `fd -u --search-path bench/corpus/linux --color=always` | 231.1 ± 3.6 | 226.0 | 238.2 | 1.13 ± 0.05 |

Search strategies

dfs

linux v6.4

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 -S dfs bench/corpus/linux` | 32.8 ± 7.4 | 28.6 | 53.2 | 1.00 |

ids

linux v6.4

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 -S ids bench/corpus/linux` | 198.5 ± 9.0 | 185.2 | 216.0 | 1.00 |

eds

linux v6.4

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `bfs-3.0.1 -S eds bench/corpus/linux` | 79.7 ± 7.4 | 69.0 | 93.8 | 1.00 |
