Age | Commit message | Author | Files | Lines |
|
|
...by making sure that Context.parents is properly initialized to null
when not scanning to RAM.
Fixes #179.
|
|
Port of 96a923192726f4ce77b5168a17f7a8355e6f2238
|
|
Bit pointless to make these options nullable when you never assign null
to them.
|
|
Introduced in 53d3e4c112a475ecbaae42cc1e58d42b986d76fc
|
|
Not going to bloat the help output with all those settings...
|
|
Saves about 15k on the binary size. It does allocate a bit more, but it
also frees the memory this time.
|
|
+ reorder manpage a bit, since the scan options tend to be more relevant
than all those UI options.
Again, these are mainly useful with a config file.
|
|
Might as well keep it. The quick-config menu popup idea can always be
implemented later on, we're not running out of keys quite yet.
|
|
The --enable-* options also work for imported files, this fixes #120.
Most other options are not super useful on their own, but these will be
useful when there's a config file.
|
|
That was an oversight. Especially useless when there's no option to
disable -x.
|
|
Same thing as commit 376aad0d350657d959a3ac1713a4a86b20ae30d1 in the C
version.
|
|
As alluded to in the previous commit. This approach keeps track of hard
link information much the same way as ncdu 1.16, with the main
difference being that the actual /counting/ of hard link sizes is
deferred until the scan is complete, thus allowing the use of a more
efficient algorithm and amortizing the counting costs.
As an additional benefit, the links listing in the information window
now doesn't need a full scan through the in-memory tree anymore.
A few memory usage benchmarks:
                 1.16  2.0-beta1  this commit
  root:           429        162          164
  backup:        3969       1686         1601
  many links:     155        194          106
  many links2*:   155        602          106
(I'm surprised my backup dir had enough hard links for this to be an
improvement)
(* this is the same as the "many links" benchmarks, but with a few
parent directories added to increase the tree depth. 2.0-beta1 doesn't
like that at all)
Performance-wise, refresh and delete operations can still be improved a
bit.
|
|
While this simplifies the code a bit, it's a regression in the sense
that it increases memory use.
This commit is yak shaving for another hard link counting approach I'd
like to try out, which should be a *LOT* less memory hungry compared to
the current approach. Even though it does, indeed, add an extra cost of
these parent node pointers.
|
|
It's a bit ugly, but appears to work. I've not tested the 32-bit ARM
version, but the others run.
The static binaries are about twice as large as the ncdu 1.x
counterparts.
|
|
I had planned to check out async functions here so I could avoid
recursing onto the stack altogether, but it's still unclear to me how
to safely call into libc from async functions so let's wait for all that
to get fleshed out a bit more.
|
|
The rewrite is now at feature parity with ncdu 1.x. What remains is
bugfixing and polishing.
|
|
+ a failed initial attempt at producing static binaries.
|
|
Which is slightly simpler and should provide a minor performance
improvement.
|
|
Sticking to "compiletime-known" error types will essentially just bring
in *every* possible error anyway, so might as well take advantage of
@errorName.
|
|
This complicated the scan code more than I had anticipated and has a
few inherent bugs with respect to calculating shared hardlink sizes.
Still, the merge approach avoids creating a full copy of the subtree, so
that's another memory usage related win compared to the C version.
On the other hand, it does leak memory if nodes can't be reused.
Not tested quite as well as it should have been, so I'm sure there are bugs.
|
|
Two differences compared to the C version:
- You can now select individual paths in the listing, pressing enter
will open the selected path in the browser window.
- Creating this listing is much slower and requires, in the worst case,
a full traversal through the in-memory tree. I've tested this without
the same-dev and shared-parent optimizations (i.e. worst case) on an
import with 30M files and performance was still quite acceptable - the
listing completed in a second - so I didn't bother adding a loading
indicator. On slower systems and even larger trees this may be a
little annoying, though.
(also, calling nonl() apparently breaks detection of the return key,
neither \n nor KEY_ENTER is emitted for some reason)
|
|
Doesn't display the item's path anymore (seems rather redundant) but
adds a few other fields.
|
|
The good news is: apart from this little thing, everything seems to just
work(tm) on FreeBSD. Think I had more trouble with C because of minor
header file differences.
|
|
I had used them as a HashSet with mutable keys already in order to avoid
padding problems. This is not always necessary anymore now that Zig's
new HashMap uses separate arrays for keys and values, but I still need
the HashSet trick for the link_count nodes table, as the key itself
would otherwise have padding.
|
|
It still feels kind of sluggish, but not entirely sure how to improve
it.
|
|
Under the assumption that there are no external references to files
mentioned in the dump, i.e. a file's nlink count matches the number of
times the file occurs in the dump.
This machinery could also be used for regular scans, when you want to
scan an individual directory without caring about external hard links.
Maybe that should be the default, even? Not sure...
|
|
In a similar way to the C version of ncdu: by wrapping malloc(). It's
simpler to handle allocation failures at the source to allow for easy
retries, pushing the retries up the stack will complicate code somewhat
more. Likewise, this is a best-effort approach to handling OOM,
allocation failures in ncurses aren't handled and display glitches may
occur when we get an OOM inside a drawing function.
This is a somewhat un-Zig-like way of handling errors and adds
scary-looking 'catch unreachable's all over the code, but that's okay.
|
|
Performance is looking great, but the code is rather ugly and
potentially buggy. Also doesn't handle hard links without an "nlink"
field yet.
Error handling of the import code is different from what I've been doing
until now. That's intentional, I'll change error handling of other
pieces to call ui.die() directly rather than propagating error enums.
The approach is less testable but conceptually simpler, it's perfectly
fine for a tiny application like ncdu.
|
|
I plan to add more display options, but ran out of keys to bind.
Probably going for a quick-select menu thingy so that we can keep the
old key bindings for people accustomed to them.
The graph width algorithm is slightly different, but I think this one's
a minor improvement.
|
|
The exported file format is fully compatible with ncdu 1.x, but has a
few minor differences. I've backported these changes in
ca51d4ed1a0f61042fc43d2a7ae8732351431654
|