New output API takes over (most of) the buffering responsibilities from
the scanning code, simplifying that code and allowing for a few minor
optimizations, such as not buffering at all when scanning into RAM and
buffering directly into JSON strings when exporting.
This is mostly yak shaving, hopefully allowing for further performance
improvements to be implemented later. The new API could also be extended
to support parallel scanning of disjoint trees, in case the current approach
isn't working out too well.
Also re-added the progress UI and improved the propagation of read errors.
A visible side effect of the new API is that the progress UI now
displays the most recent directory being scanned rather than individual
files. Not a big loss, I hope?
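As a rough illustration of the sink idea, a minimal Zig sketch (the
names and shapes here are assumptions for illustration, not ncdu's
actual API, and it assumes a managed std.ArrayList as in older Zig
releases):

    const std = @import("std");

    // Hypothetical: one entry point, two sinks. Scanning into RAM hands
    // entries straight to the tree builder with no byte buffer in
    // between; exporting serializes entries directly into the JSON
    // output buffer.
    const Sink = union(enum) {
        mem: *std.ArrayList(u8), // stand-in for an in-memory tree builder
        json: *std.ArrayList(u8), // the JSON string being built

        fn addFile(self: Sink, name: []const u8, size: u64) !void {
            switch (self) {
                // the real thing would append a tree node, not text
                .mem => |t| try t.writer().print("{s}\n", .{name}),
                // write straight into the JSON string, no extra copy
                .json => |b| try b.writer().print("[\"{s}\",{d}]", .{ name, size }),
            }
        }
    };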
|
|
And it's not looking good; this implementation seems to be 3x slower in
the hot cache scenario with -J8, which is a major regression. There's
way too much lock contention and context switching.
I haven't tested with actual disk I/O yet, nor measured how much
parallelism this approach will actually get us in practice, or whether
its disk access patterns make a whole lot of sense. Maybe this
low-memory approach will not work out and I'll end up rewriting this to
scan disjoint subtrees after all.
TODO:
- Validate how much parallelism we can actually get with this algorithm
- Lots of benchmarking and tuning (and most likely some re-architecting)
- Re-implement exclude pattern matching
- Document -J option
- Make OOM handling thread-safe
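For context on the contention described above, a hypothetical sketch of
the shared-queue pattern (not ncdu's code; the std.Thread names follow
current Zig, the 0.9-era API differs slightly):

    const std = @import("std");

    // With a single mutex-guarded queue, every push/pop from all -J8
    // workers funnels through the same lock, which is where the lock
    // contention and context switching come from. Scanning disjoint
    // subtrees would give each worker its own queue instead.
    const DirQueue = struct {
        mutex: std.Thread.Mutex = .{},
        items: std.ArrayList([]const u8),

        fn push(self: *DirQueue, dir: []const u8) !void {
            self.mutex.lock();
            defer self.mutex.unlock();
            try self.items.append(dir);
        }

        fn pop(self: *DirQueue) ?[]const u8 {
            self.mutex.lock();
            defer self.mutex.unlock();
            return self.items.popOrNull();
        }
    };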
|
|
|
|
Broken in 7d2905952d956801050baaed08eb092fb22f661f
|
|
That *usually* doesn't take longer than a few milliseconds, but it can
take a few seconds for some extremely large dirs, on very slow computers
or with optimizations disabled. Better to display a message than to make
it seem as if ncdu has stopped doing anything.
|
|
And also adjust the graph width calculation to do a better job when the
largest item is smaller than the number of columns used for the graph,
which would previously draw either nothing (if size = 0) or a full bar
(if size > 0).
Fixes #172.
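The shape of the fix might look something like this (a sketch with
illustrative names, not the actual browser code):

    fn graphWidth(cols: u64, size: u64, largest: u64) u64 {
        if (size == 0 or largest == 0) return 0;
        const w = size * cols / largest; // truncates to 0 for small sizes
        return if (w == 0) 1 else w; // non-zero sizes get at least one cell
    }

With, say, 60 columns and a largest item of 3, a size of 1 now gets a
20-column bar instead of nothing or everything.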
|
|
Fixes #181, now also for Zig.
|
|
|
|
|
|
|
|
|
|
Fixes #185
|
|
Fixes #183
|
|
|
|
I'm tagging this as a "stable" 2.0 release because the 2.0-beta#
numbering will get confusing when I'm working on new features and fixes.
It's still only usable for people who can use the particular Zig version
that's required (0.9.0 currently) and it will certainly break on
different Zig versions. But once you have a working binary for a
supported arch, it's perfectly stable.
|
|
|
|
|
|
...by making sure that Context.parents is properly initialized to null
when not scanning to RAM.
Fixes #179.
|
|
Port of 96a923192726f4ce77b5168a17f7a8355e6f2238
|
|
Bit pointless to make these options nullable when you never assign null
to them.
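In Zig terms the change amounts to something like this (hypothetical
field, not the actual option):

    const Opts = struct {
        // before: nullable, yet null was never assigned, so every use
        // site paid for an `if (opts.si) |v| ...` unwrap:
        //     si: ?bool = null,
        // after: a plain field with a default
        si: bool = false,
    };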
|
|
|
|
|
|
Introduced in 53d3e4c112a475ecbaae42cc1e58d42b986d76fc
|
|
|
|
Not going to bloat the help output with all those settings...
|
|
Saves about 15k on the binary size. It does allocate a bit more, but it
also frees the memory this time.
|
|
|
|
|
|
+ reorder manpage a bit, since the scan options tend to be more relevant
than all those UI options.
Again, these are mainly useful with a config file.
|
|
Might as well keep it. The quick-config menu popup idea can always be
implemented later on; we're not running out of keys quite yet.
|
|
The --enable-* options also work for imported files, this fixes #120.
Most of the other options are not super useful on their own, but they
will be useful when there's a config file.
|
|
That was an oversight. Especially useless when there's no option to
disable -x.
|
|
Same thing as commit 376aad0d350657d959a3ac1713a4a86b20ae30d1 in the C
version.
|
|
|
|
|
|
|
|
|
|
As alluded to in the previous commit. This approach keeps track of hard
link information much the same way as ncdu 1.16, with the main
difference being that the actual /counting/ of hard link sizes is
deferred until the scan is complete, thus allowing the use of a more
efficient algorithm and amortizing the counting costs.
As an additional benefit, the links listing in the information window
now doesn't need a full scan through the in-memory tree anymore.
A few memory usage benchmarks:

                   1.16   2.0-beta1   this commit
    root:           429         162           164
    backup:        3969        1686          1601
    many links:     155         194           106
    many links2*:   155         602           106
(I'm surprised my backup dir had enough hard links for this to be an
improvement)
(* this is the same as the "many links" benchmarks, but with a few
parent directories added to increase the tree depth. 2.0-beta1 doesn't
like that at all)
Performance-wise, refresh and delete operations can still be improved a
bit.
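A sketch of the deferred pass (names and shapes are assumptions, not
ncdu's actual structures, using current Zig std conventions):

    const std = @import("std");

    const Link = struct { dev: u64, ino: u64, size: u64 };
    const Key = struct { dev: u64, ino: u64 };

    // During the scan, hard links are merely recorded; after the scan,
    // a single pass charges each unique (device, inode) pair once.
    fn sharedSizeOnce(alloc: std.mem.Allocator, links: []const Link) !u64 {
        var seen = std.AutoHashMap(Key, void).init(alloc);
        defer seen.deinit();
        var total: u64 = 0;
        for (links) |l| {
            const gop = try seen.getOrPut(.{ .dev = l.dev, .ino = l.ino });
            if (!gop.found_existing) total += l.size; // count each inode once
        }
        return total;
    }

Deferring the pass is what amortizes the cost: the hash lookups happen
once at the end rather than on every entry during the scan.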
|
|
While this simplifies the code a bit, it's a regression in the sense
that it increases memory use.
This commit is yak shaving for another hard link counting approach I'd
like to try out, which should be a *LOT* less memory hungry compared to
the current approach, even though it does, indeed, add the extra cost of
these parent node pointers.
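The cost in question is one extra pointer per tree node, roughly
(field names assumed):

    const Entry = struct {
        name: []const u8,
        size: u64,
        parent: ?*Entry, // the new per-node cost
    };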
|
|
|
|
|
|
|
|
|
|
It's a bit ugly, but appears to work. I've not tested the 32-bit ARM
version, but the others run.
The static binaries are about twice as large as the ncdu 1.x
counterparts.
|
|
I had planned to check out async functions here so I could avoid
recursing onto the stack altogether, but it's still unclear to me how to
safely call into libc from async functions, so let's wait for all that
to get fleshed out a bit more.
|
|
The rewrite is now at feature parity with ncdu 1.x. What remains is
bugfixing and polishing.
|
|
|
|
+ a failed initial attempt at producing static binaries.
|
|
Which is slightly simpler and should provide a minor performance
improvement.
|
|
Sticking to compile-time-known error types will essentially just bring
in *every* possible error anyway, so might as well take advantage of
@errorName.
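For example (the function name is illustrative):

    const std = @import("std");

    // With an inferred error set (or anyerror), @errorName yields the
    // error's identifier as a string at runtime, so there's no need to
    // enumerate every possible error by hand.
    fn reportError(err: anyerror) void {
        std.debug.print("error: {s}\n", .{@errorName(err)});
    }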
|