Binary needed, ICL compile line needed, standalone unpacker needed #7
Thanks for the interesting results, I am happy to see BriefLZ is doing okay. Regarding ICL, if it is compatible with the command-line options of MSVC cl, I imagine you could do

    set CC=icl
    nmake -f Makefile.vc

in the example folder to build blzpack. You could also simply list all the source files -- something like:

    icl /O2 /I..\include blzpack.c parg.c ..\src\brieflz.c ..\src\depack.c ..\src\depacks.c

Replace …
Thank you. Glad that I avoided automated 'make' and such; I rely entirely on manual (command-line) compilation. The x86 and x64 compiles are attached:

The results for the two Kenkyusha are under way...
Having looked into your code -- very clean and simple -- my appreciation for BriefLZ reaches new heights; thanks for your work. No matter how much time the compression takes, I am going... By the way, one suggestion from me: consider dedicating an old laptop with a working battery and just throw enwik9 at your supertoy; leave it alone, hee-hee, for a year if you must, and shock the [de]compression community :P For 3 days now my i5-7200U has been crunching the 190MB testset; a progress indicator is needed, so I added one. Printing the 'cur' from brieflz_ssparse.h does the feedback:
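A minimal sketch of such a progress print, assuming it sits in the match-search loop in brieflz_ssparse.h where 'cur' is the current source position; 'src_size' is an assumed name for the block size, and the reporting interval is arbitrary:

```c
/* Progress indicator sketch: report once per MiB of input processed.
 * 'src_size' is an assumed name; requires <stdio.h>. //Kaze
 */
if ((cur & ((1UL << 20) - 1)) == 0) {
	fprintf(stderr, "\rcur = %lu / %lu", (unsigned long) cur, (unsigned long) src_size);
	fflush(stderr);
}
```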
Hope you don't mind my new compile, which allows benchmarking decompression rates; all lines I added end in //Kaze. The fragment in the decompress_file function in blzpack.c is sketched below:
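A sketch of that fragment; apart from clocksWITHOUTfread, the names (header, packed, data, the sizes, the decompression call) are placeholders for whatever blzpack actually uses:

```c
clock_t clocksWITHOUTfread; /* requires <time.h> //Kaze */

/* first fread: the block header */
if (fread(header, 1, hdr_size, fin) != hdr_size) {
	break; /* end of input */
}

clocksWITHOUTfread = clock(); /* start timing //Kaze */

/* second fread: the compressed data */
if (fread(packed, 1, packed_size, fin) != packed_size) {
	break;
}

/* the actual decompression (blzpack may use a different call) */
blz_depack(packed, data, depacked_size);

clocksWITHOUTfread = clock() - clocksWITHOUTfread; /* clocks spent //Kaze */
```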
Oh, and here comes an excerpt of the two concatenated Kenkyusha; I hate it when the reader is not allowed to get a feel for the test data file:
As for the ongoing 190MB run, the current results are:
Will add the decompression speeds after finishing the -x -b200 ...
The actual invocations:
Thank you again for all the details. One thing I noticed in your blzpack modification is that there are two freads: one to read the block header, and one to read the compressed data. You put the line setting clocksWITHOUTfread between these two. If you want to get the time of decompression only, you will have to move it down after the second fread.
> If you want to get the time of decompression only you will have to move it down after the second fread.

Ugh, dummy me, a stupid oversight. Now moved right above the actual decompression (see below); edited the previous post and reuploaded the .zip.
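With the same assumed names as in the sketch above, the corrected fragment; only the timing line moves, now sitting after both freads:

```c
if (fread(header, 1, hdr_size, fin) != hdr_size) {
	break;
}
if (fread(packed, 1, packed_size, fin) != packed_size) {
	break;
}

clocksWITHOUTfread = clock(); /* moved here, right above the decompression //Kaze */

blz_depack(packed, data, depacked_size);

clocksWITHOUTfread = clock() - clocksWITHOUTfread; /* //Kaze */
```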
After nearly 10 days of crunching, I had to terminate the --optimal -b200m run, mostly because of the lack of a progress indicator; this is unacceptable, so I have to rerun it with my Oct 30 compile.
As for the decompression rate:
And for good measure, Razor and Zstd are added:
Razor proves its worth over and over again:
I still want to make a comprehensive, i.e. unabridged, roster with many options and decompressors; I don't know when. What could be the first test data file... do you have one in mind? My first choice would be The Complete Dostoyevskiy, ~40MB; the file size should be >> L3 (well beyond the L3 cache size).
Hi Jørgen,
Seeing your new stronger modes gladdened my eyes; your impressive --optimal mode will enter my ever-incoming roster of textual decompressors. Speaking of decompression, please consider making a standalone decompressor, something like 'PKUnzipJR', to make it easier to compile and benchmark; my wish is to compile it with 64-bit ICL. A sketch of what I mean follows.
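Not the tool itself, just a hedged sketch of the idea: a single-file unpacker built on the public blz_depack() from brieflz.h. It assumes the whole input is one raw BriefLZ block and that the decompressed size is passed on the command line; blzpack's own file format adds per-block headers, which this sketch does not parse, and the name 'unblz' is made up:

```c
/* unblz.c -- minimal standalone BriefLZ depacker sketch.
 * Usage: unblz IN OUT DEPACKED_SIZE
 */
#include <stdio.h>
#include <stdlib.h>

#include "brieflz.h"

int main(int argc, char **argv)
{
	FILE *fin, *fout;
	unsigned char *src, *dst;
	long src_size;
	unsigned long depacked_size;

	if (argc != 4) {
		fprintf(stderr, "usage: unblz IN OUT DEPACKED_SIZE\n");
		return 1;
	}

	depacked_size = strtoul(argv[3], NULL, 10);

	fin = fopen(argv[1], "rb");
	if (fin == NULL) { perror(argv[1]); return 1; }

	/* read the whole compressed file into memory */
	fseek(fin, 0, SEEK_END);
	src_size = ftell(fin);
	fseek(fin, 0, SEEK_SET);

	src = malloc((size_t) src_size);
	dst = malloc(depacked_size);

	if (src == NULL || dst == NULL
	 || fread(src, 1, (size_t) src_size, fin) != (size_t) src_size) {
		fprintf(stderr, "read error\n");
		return 1;
	}
	fclose(fin);

	/* decompress one raw BriefLZ block */
	if (blz_depack(src, dst, depacked_size) != depacked_size) {
		fprintf(stderr, "depack error\n");
		return 1;
	}

	fout = fopen(argv[2], "wb");
	if (fout == NULL
	 || fwrite(dst, 1, depacked_size, fout) != depacked_size) {
		fprintf(stderr, "write error\n");
		return 1;
	}
	fclose(fout);

	free(src);
	free(dst);
	return 0;
}
```

It should compile with a line analogous to the one earlier in the thread, e.g. icl /O2 /I..\include unblz.c ..\src\depack.c (assuming the same directory layout).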
I saw an unofficial binary on encode.ru and quickly tried it with one small textual corpus, the subtitles of all 4 seasons of the Lexx superseries:

AFAIK, the best compression ratio and the best decompression speed are held by Hamid's LzTurbo 29, with no entropy coding! On an i5-7200U the abridged showdown looks like this:
The quick micro-roster:
The TurboBench abridged roster:
Note: For Oodle, oo2core_6_win64.dll (1,085,952 bytes) was used.
The lzbench mini-roster:
The Zstd -22..22 modes roster:
If you are interested, I have one superb corpus, a definitive one for the Japanese language, revealing how strong a parser is at ~190MB depths; I could post how BriefLZ performs...