ZTree.com  | ZEN  | About...  


Is getting size on disk quick?   [Wish]

By: Nuno       
Date: Jan 23,2019 at 11:23
In Response to: Is getting size on disk quick? (Peter Shute)

> > Please try out the Alt-Sort by e.g. 'accessed'.
> > The Date/Time column in FW changes according to your choice and
> > displays the selected date/time values, e.g. last-accessed for all
> > files !!
> I like the idea of being able to switch between size and size on disk
> the same way we can switch between the date stamp types, but I wonder if
> it's as simple. I suspect that Ztree retrieves and stores all the date
> stamp types during logging, and that it has little or no effect on the
> logging speed.
> This might not be true of retrieving the size on disk, so it might not
> be practical to collect it during logging. I suspect some of the extra
> information displayed by Alt-Info, eg owner, might take some time to
> retrieve, because sometimes I see it take a moment to come up.
> If this is the case for size on disk, adding it to Alt-Info might be
> the only practical option.
> I'm trying to remember whether XtreeNet used to log it all, and
> whether it was optional.

Hi Peter,

This thread actually started as a "Compressed Size" request, not really "Size on disk" as Windows Explorer shows it.
For compressed size there is an API, "GetCompressedFileSize", which gives exactly that.
Size on disk seems to have no direct public API.
To get (something similar to) it, you would need at least one extra API call (GetDiskFreeSpace, for example) to obtain the cluster size of that disk, and then use that value to round up all the sizes returned by the "normal" GetFileSizeEx / GetCompressedFileSize APIs.
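The round-up step is simple arithmetic. A minimal sketch, assuming the cluster size has already been obtained (e.g. bytes-per-sector times sectors-per-cluster from GetDiskFreeSpace):

```c
#include <stdint.h>

/* Round a logical file size up to a whole number of clusters.
   cluster_size would come from GetDiskFreeSpace; 4096 bytes is a
   common NTFS default, but it varies per volume. */
static uint64_t round_up_to_cluster(uint64_t size, uint64_t cluster_size)
{
    if (cluster_size == 0)
        return size;  /* defensive: avoid division by zero */
    return ((size + cluster_size - 1) / cluster_size) * cluster_size;
}
```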
Nevertheless, it might still not be exact, since very small files can be stored inside the MFT entries themselves. The MFT itself also occupies some space, and we have no knowledge of how it behaves.
There is some information on the web, but no definitive answer, because the space a filesystem actually needs to store each and every file depends on many things.
And there are further "issues" with OneDrive "offline" files, which seem to report a file size but, before being downloaded, may report a size on disk of zero.

But, to keep it simple, as it should be, we could just calculate SizeOnDisk based on:
- "GetCompressedFileSize", which may show interesting differences for big, compressible files (compressed by NTFS), and...
- rounding up to the cluster size, which could also give interesting metrics on folders with thousands, or millions, of small files.
- (optional) rounding very small files (<512 bytes) down to 0 (zero), since such files may be stored inside the MFT (note: the threshold may not be 512; that "depends")...
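The three rules above can be sketched as one small function. This is only an illustration: the input size would come from GetCompressedFileSize (which equals the plain size for uncompressed files), the cluster size from GetDiskFreeSpace, and the 512-byte MFT-resident threshold is an assumption, as noted.

```c
#include <stdint.h>

#define MFT_RESIDENT_LIMIT 512  /* assumed threshold; the real one "depends" */

/* Estimate size-on-disk from the size reported by GetCompressedFileSize
   and the volume's cluster size, applying the three rules above. */
static uint64_t estimate_size_on_disk(uint64_t compressed_size,
                                      uint64_t cluster_size)
{
    /* Rule 3 (optional): very small files may be MFT-resident. */
    if (compressed_size < MFT_RESIDENT_LIMIT)
        return 0;
    /* Rule 2: round up to a whole number of clusters. */
    return ((compressed_size + cluster_size - 1) / cluster_size)
           * cluster_size;
}
```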

I haven't tested it, but I would suppose that calling one extra API (GetCompressedFileSize) for each file will take some time (and occupy some memory), though it may not add much I/O overhead, thanks to the read cache.
Also, ZTree already knows whether a file is compressed (from the 'C' attribute), so it could call "GetCompressedFileSize" only for those files; the API is documented as returning the same value as "GetFileSize" for files that are not compressed.

For some memory optimization, maybe ZTree could store only the delta between the two sizes, but of course it would then pay some CPU penalty during sorting.
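A sketch of that delta idea, with a hypothetical record layout (not ZTree's actual structures): keep the logical size in full, store only a small signed delta to the compressed size, and reconstruct the second size on demand.

```c
#include <stdint.h>

/* Hypothetical per-file record: a 32-bit signed delta instead of a
   second full 64-bit size field. A real implementation would need a
   fallback for deltas that do not fit in 32 bits. */
typedef struct {
    uint64_t size;        /* logical size (GetFileSizeEx) */
    int32_t  comp_delta;  /* compressed_size minus size */
} FileRec;

/* Reconstruct the compressed size on demand; this arithmetic is the
   per-access CPU cost paid during sorting in exchange for the
   smaller record. */
static uint64_t compressed_size(const FileRec *f)
{
    return (uint64_t)((int64_t)f->size + f->comp_delta);
}
```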

