from io import BytesIO

import mmap
import os
import sys
import zlib

from gitdb.fun import (msb_size, stream_copy, apply_delta_data,
                       connect_deltas, delta_types)
from gitdb.util import (allocate_memory, LazyMixin, make_sha, write, close)
from gitdb.const import NULL_BYTE, BYTE_SPACE
from gitdb.utils.encoding import force_bytes

has_perf_mod = False
try:
    from gitdb_speedups._perf import apply_delta as c_apply_delta
    has_perf_mod = True
except ImportError:
    pass

__all__ = ('DecompressMemMapReader', 'FDCompressedSha1Writer', 'DeltaApplyReader',
           'Sha1Writer', 'FlexibleSha1Writer', 'ZippedStoreShaWriter',
           'FDStream', 'NullStream')
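
# The try/except above only records whether the optional C speedups module was
# importable. A sketch of how such an import-time feature flag is typically consumed
# later on (purely illustrative - `_resolve_delta`, `_pure_python_apply` and the call
# shapes shown here are assumptions, not this module's real API):
#
#   def _resolve_delta(base, delta):
#       if has_perf_mod:
#           return c_apply_delta(base, delta)       # fast path via the C extension
#       return _pure_python_apply(base, delta)      # hypothetical pure-Python fallback
#
# Branching on a flag set once at import time keeps the per-call cost of the check trivial.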


class DecompressMemMapReader(LazyMixin):

    """Reads data in chunks from a memory map and decompresses it. The client sees
    only the uncompressed data; the respective file-like read calls handle the
    buffered decompression on demand.

    A constraint on the total number of bytes to read is enforced, simulating
    a logical file within a possibly larger physical memory area.

    To read efficiently, you clearly don't want to read individual bytes; instead,
    read a few kilobytes at least.

    **Note:** The chunk size should be chosen carefully, as it involves quite a bit
        of string copying due to the way zlib is implemented. It's very wasteful,
        hence we try to find a good tradeoff between allocation time and the number
        of times we actually allocate. A dedicated zlib implementation would be good here
        to better support streamed reading - it would only need to keep the mmap
        and decompress it into chunks; that's all ..."""
    __slots__ = ('_m', '_zip', '_buf', '_buflen', '_br', '_cws', '_cwe',
                 '_s', '_close', '_cbr', '_phi')

    max_read_size = 512 * 1024  # currently unused

    def __init__(self, m, close_on_deletion, size=None):
        """Initialize with mmap for stream reading.

        :param m: must be content data - use ``new`` if you have object data and no size"""
        self._m = m                             # memory map holding the compressed stream
        self._zip = zlib.decompressobj()        # decompressor doing the actual work
        self._buf = None                        # buffer of decompressed bytes
        self._buflen = 0                        # length of bytes in the buffer
        if size is not None:
            self._s = size                      # size of uncompressed data to read in total
        self._br = 0                            # number of uncompressed bytes read
        self._cws = 0                           # start byte of the compression window
        self._cwe = 0                           # end byte of the compression window
        self._cbr = 0                           # number of compressed bytes read
        self._phi = False                       # have we parsed the header info yet?
        self._close = close_on_deletion         # close the memmap on deletion?
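
    # A minimal usage sketch (assumptions: `path` names a file holding zlib-compressed
    # content data and `uncompressed_size` is already known; neither exists in this module):
    #
    #   import mmap
    #   with open(path, 'rb') as fp:
    #       m = mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ)
    #   reader = DecompressMemMapReader(m, close_on_deletion=True, size=uncompressed_size)
    #   while True:
    #       chunk = reader.read(4096)       # a few KiB per call, decompressed on demand
    #       if not chunk:
    #           break
    #
    # For raw object data that still carries its header, the docstring above points to the
    # ``new`` constructor instead of calling the class directly.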