[Rpm-ecosystem] Initial pre-alpha version of zchunk available for testing and comments

Jonathan Dieter jdieter at gmail.com
Thu Mar 22 09:55:10 UTC 2018

I've got a working zchunk library, complete with some utilities at
https://github.com/jdieter/zchunk, but I wanted to get some feedback
before I went much further.  Its only dependencies are libcurl and
(optionally, but strongly recommended) libzstd.

There are test files in https://www.jdieter.net/downloads/zchunk-test,
and the dictionary I used is in https://www.jdieter.net/downloads.

What works:
 * Creating zchunk files (using zck)
 * Reading zchunk files (using unzck)
 * Downloading zchunk files (using zckdl)

What doesn't:
 * Resuming zchunk downloads
 * Using any of the tools to overwrite a file
 * Automatic detection of the maximum number of ranges allowed in a request
 * Streaming chunking in the library

The main thing I want to ask for advice on is the last item on that
list.  Currently, every piece of data sent to zck_compress() is
treated as a new chunk.

I'd prefer to have zck_compress() just keep streaming data and have a
zck_end_chunk() function that ends the current chunk, but zstd doesn't
support streamed compression with a dictionary in its dynamic library.
You have to use zstd's static library to get that functionality
(because it's not yet considered stable).

Any suggestions on how to deal with this?  Should I require the static
library, write my own wrapper that buffers the streamed data until
zck_end_chunk() is called, or just require each chunk to be sent in its
entirety?

