
python3.2  3.2.2
gzip.GzipFile Class Reference
Inheritance diagram for gzip.GzipFile: (diagram not reproduced here)
Collaboration diagram for gzip.GzipFile: (diagram not reproduced here)

List of all members.

Public Member Functions

def __init__
def filename
def __repr__
def write
def read
def peek
def closed
def close
def flush
def fileno
def rewind
def readable
def writable
def seekable
def seek
def readline
def __new__
def register
def __instancecheck__
def __subclasscheck__

Public Attributes

 mode
 extrabuf
 extrasize
 extrastart
 name
 min_readsize
 compress
 fileobj
 offset
 mtime
 crc
 size
 writebuf
 bufsize
 decompress

Static Public Attributes

 myfileobj = None
int max_read_chunk = 10 * 1024 * 1024

Private Member Functions

def _check_closed
def _init_write
def _write_gzip_header
def _init_read
def _read_gzip_header
def _unread
def _read
def _add_read_data
def _read_eof

Private Attributes

 _new_member

Detailed Description

The GzipFile class simulates most of the methods of a file object with
the exception of the readinto() and truncate() methods.

Definition at line 104 of file gzip.py.


Constructor & Destructor Documentation

def gzip.GzipFile.__init__ (   self,
  filename = None,
  mode = None,
  compresslevel = 9,
  fileobj = None,
  mtime = None 
)
Constructor for the GzipFile class.

At least one of fileobj and filename must be given a
non-trivial value.

The new class instance is based on fileobj, which can be a regular
file, a StringIO object, or any other object which simulates a file.
It defaults to None, in which case filename is opened to provide
a file object.

When fileobj is not None, the filename argument is only used to be
included in the gzip file header, which may include the original
filename of the uncompressed file.  It defaults to the filename of
fileobj, if discernible; otherwise, it defaults to the empty string,
and in this case the original filename is not included in the header.

The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', or 'wb',
depending on whether the file will be read or written.  The default
is the mode of fileobj if discernible; otherwise, the default is 'rb'.
Be aware that only the 'rb', 'ab', and 'wb' values should be used
for cross-platform portability.

The compresslevel argument is an integer from 1 to 9 controlling the
level of compression; 1 is fastest and produces the least compression,
and 9 is slowest and produces the most compression.  The default is 9.

The mtime argument is an optional numeric timestamp to be written
to the stream when compressing.  All gzip compressed streams
are required to contain a timestamp.  If omitted or None, the
current time is used.  This module ignores the timestamp when
decompressing; however, some programs, such as gunzip, make use
of it.  The format of the timestamp is the same as that of the
return value of time.time() and of the st_mtime member of the
object returned by os.stat().
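
As a usage illustration (a sketch, not part of gzip.py itself; the in-memory
stream, the payload, and the "example.txt" name are assumptions), the fileobj
and mtime arguments can be combined like this:

    import gzip
    import io

    buf = io.BytesIO()
    # fileobj is given, so "example.txt" is only recorded in the gzip header;
    # mtime=0 stores a fixed timestamp instead of the current time.
    gz = gzip.GzipFile(filename="example.txt", mode="wb", compresslevel=9,
                       fileobj=buf, mtime=0)
    gz.write(b"hello world\n")
    gz.close()          # flushes the compressor and writes the CRC/size trailer

    buf.seek(0)
    gz = gzip.GzipFile(fileobj=buf, mode="rb")
    assert gz.read() == b"hello world\n"
    gz.close()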

Definition at line 114 of file gzip.py.

00114     def __init__(self, filename=None, mode=None,
00115                  compresslevel=9, fileobj=None, mtime=None):
00116         """Constructor for the GzipFile class.
00117 
00118         At least one of fileobj and filename must be given a
00119         non-trivial value.
00120 
00121         The new class instance is based on fileobj, which can be a regular
00122         file, a StringIO object, or any other object which simulates a file.
00123         It defaults to None, in which case filename is opened to provide
00124         a file object.
00125 
00126         When fileobj is not None, the filename argument is only used to be
00127         included in the gzip file header, which may include the original
00128         filename of the uncompressed file.  It defaults to the filename of
00129         fileobj, if discernible; otherwise, it defaults to the empty string,
00130         and in this case the original filename is not included in the header.
00131 
00132         The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', or 'wb',
00133         depending on whether the file will be read or written.  The default
00134         is the mode of fileobj if discernible; otherwise, the default is 'rb'.
00135         Be aware that only the 'rb', 'ab', and 'wb' values should be used
00136         for cross-platform portability.
00137 
00138         The compresslevel argument is an integer from 1 to 9 controlling the
00139         level of compression; 1 is fastest and produces the least compression,
00140         and 9 is slowest and produces the most compression.  The default is 9.
00141 
00142         The mtime argument is an optional numeric timestamp to be written
00143         to the stream when compressing.  All gzip compressed streams
00144         are required to contain a timestamp.  If omitted or None, the
00145         current time is used.  This module ignores the timestamp when
00146         decompressing; however, some programs, such as gunzip, make use
00147         of it.  The format of the timestamp is the same as that of the
00148         return value of time.time() and of the st_mtime member of the
00149         object returned by os.stat().
00150 
00151         """
00152 
00153         # guarantee the file is opened in binary mode on platforms
00154         # that care about that sort of thing
00155         if mode and 'b' not in mode:
00156             mode += 'b'
00157         if fileobj is None:
00158             fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
00159         if filename is None:
00160             if hasattr(fileobj, 'name'): filename = fileobj.name
00161             else: filename = ''
00162         if mode is None:
00163             if hasattr(fileobj, 'mode'): mode = fileobj.mode
00164             else: mode = 'rb'
00165 
00166         if mode[0:1] == 'r':
00167             self.mode = READ
00168             # Set flag indicating start of a new member
00169             self._new_member = True
00170             # Buffer data read from gzip file. extrastart is offset in
00171             # stream where buffer starts. extrasize is number of
00172             # bytes remaining in buffer from current stream position.
00173             self.extrabuf = b""
00174             self.extrasize = 0
00175             self.extrastart = 0
00176             self.name = filename
00177             # Starts small, scales exponentially
00178             self.min_readsize = 100
00179             fileobj = _PaddedFile(fileobj)
00180 
00181         elif mode[0:1] == 'w' or mode[0:1] == 'a':
00182             self.mode = WRITE
00183             self._init_write(filename)
00184             self.compress = zlib.compressobj(compresslevel,
00185                                              zlib.DEFLATED,
00186                                              -zlib.MAX_WBITS,
00187                                              zlib.DEF_MEM_LEVEL,
00188                                              0)
00189         else:
00190             raise IOError("Mode " + mode + " not supported")
00191 
00192         self.fileobj = fileobj
00193         self.offset = 0
00194         self.mtime = mtime
00195 
00196         if self.mode == WRITE:
00197             self._write_gzip_header()


Member Function Documentation

def abc.ABCMeta.__instancecheck__ (   cls,
  instance 
) [inherited]
Override for isinstance(instance, cls).

Definition at line 158 of file abc.py.

00158 
00159     def __instancecheck__(cls, instance):
00160         """Override for isinstance(instance, cls)."""
00161         # Inline the cache checking
00162         subclass = instance.__class__
00163         if subclass in cls._abc_cache:
00164             return True
00165         subtype = type(instance)
00166         if subtype is subclass:
00167             if (cls._abc_negative_cache_version ==
00168                 ABCMeta._abc_invalidation_counter and
00169                 subclass in cls._abc_negative_cache):
00170                 return False
00171             # Fall back to the subclass check.
00172             return cls.__subclasscheck__(subclass)
00173         return any(cls.__subclasscheck__(c) for c in {subclass, subtype})


def abc.ABCMeta.__new__ (   mcls,
  name,
  bases,
  namespace 
) [inherited]

Definition at line 116 of file abc.py.

00116 
00117     def __new__(mcls, name, bases, namespace):
00118         cls = super().__new__(mcls, name, bases, namespace)
00119         # Compute set of abstract method names
00120         abstracts = {name
00121                      for name, value in namespace.items()
00122                      if getattr(value, "__isabstractmethod__", False)}
00123         for base in bases:
00124             for name in getattr(base, "__abstractmethods__", set()):
00125                 value = getattr(cls, name, None)
00126                 if getattr(value, "__isabstractmethod__", False):
00127                     abstracts.add(name)
00128         cls.__abstractmethods__ = frozenset(abstracts)
00129         # Set up inheritance registry
00130         cls._abc_registry = WeakSet()
00131         cls._abc_cache = WeakSet()
00132         cls._abc_negative_cache = WeakSet()
00133         cls._abc_negative_cache_version = ABCMeta._abc_invalidation_counter
00134         return cls


def gzip.GzipFile.__repr__ (   self)

Definition at line 206 of file gzip.py.

00206 
00207     def __repr__(self):
00208         fileobj = self.fileobj
00209         if isinstance(fileobj, _PaddedFile):
00210             fileobj = fileobj.file
00211         s = repr(fileobj)
00212         return '<gzip ' + s[1:-1] + ' ' + hex(id(self)) + '>'

def abc.ABCMeta.__subclasscheck__ (   cls,
  subclass 
) [inherited]
Override for issubclass(subclass, cls).

Definition at line 174 of file abc.py.

00174 
00175     def __subclasscheck__(cls, subclass):
00176         """Override for issubclass(subclass, cls)."""
00177         # Check cache
00178         if subclass in cls._abc_cache:
00179             return True
00180         # Check negative cache; may have to invalidate
00181         if cls._abc_negative_cache_version < ABCMeta._abc_invalidation_counter:
00182             # Invalidate the negative cache
00183             cls._abc_negative_cache = WeakSet()
00184             cls._abc_negative_cache_version = ABCMeta._abc_invalidation_counter
00185         elif subclass in cls._abc_negative_cache:
00186             return False
00187         # Check the subclass hook
00188         ok = cls.__subclasshook__(subclass)
00189         if ok is not NotImplemented:
00190             assert isinstance(ok, bool)
00191             if ok:
00192                 cls._abc_cache.add(subclass)
00193             else:
00194                 cls._abc_negative_cache.add(subclass)
00195             return ok
00196         # Check if it's a direct subclass
00197         if cls in getattr(subclass, '__mro__', ()):
00198             cls._abc_cache.add(subclass)
00199             return True
00200         # Check if it's a subclass of a registered class (recursive)
00201         for rcls in cls._abc_registry:
00202             if issubclass(subclass, rcls):
00203                 cls._abc_cache.add(subclass)
00204                 return True
00205         # Check if it's a subclass of a subclass (recursive)
00206         for scls in cls.__subclasses__():
00207             if issubclass(subclass, scls):
00208                 cls._abc_cache.add(subclass)
00209                 return True
00210         # No dice; update negative cache
00211         cls._abc_negative_cache.add(subclass)
00212         return False


def gzip.GzipFile._add_read_data (   self,
  data 
) [private]

Definition at line 419 of file gzip.py.

00419 
00420     def _add_read_data(self, data):
00421         self.crc = zlib.crc32(data, self.crc) & 0xffffffff
00422         offset = self.offset - self.extrastart
00423         self.extrabuf = self.extrabuf[offset:] + data
00424         self.extrasize = self.extrasize + len(data)
00425         self.extrastart = self.offset
00426         self.size = self.size + len(data)


def gzip.GzipFile._check_closed (   self) [private]
Raises a ValueError if the underlying file object has been closed.

Definition at line 213 of file gzip.py.

00213 
00214     def _check_closed(self):
00215         """Raises a ValueError if the underlying file object has been closed.
00216 
00217         """
00218         if self.closed:
00219             raise ValueError('I/O operation on closed file.')


def gzip.GzipFile._init_read (   self) [private]

Definition at line 252 of file gzip.py.

00252 
00253     def _init_read(self):
00254         self.crc = zlib.crc32(b"") & 0xffffffff
00255         self.size = 0


def gzip.GzipFile._init_write (   self,
  filename 
) [private]

Definition at line 220 of file gzip.py.

00220 
00221     def _init_write(self, filename):
00222         self.name = filename
00223         self.crc = zlib.crc32(b"") & 0xffffffff
00224         self.size = 0
00225         self.writebuf = []
00226         self.bufsize = 0

def gzip.GzipFile._read (   self,
  size = 1024 
) [private]

Definition at line 377 of file gzip.py.

00377 
00378     def _read(self, size=1024):
00379         if self.fileobj is None:
00380             raise EOFError("Reached EOF")
00381 
00382         if self._new_member:
00383             # If the _new_member flag is set, we have to
00384             # jump to the next member, if there is one.
00385             self._init_read()
00386             self._read_gzip_header()
00387             self.decompress = zlib.decompressobj(-zlib.MAX_WBITS)
00388             self._new_member = False
00389 
00390         # Read a chunk of data from the file
00391         buf = self.fileobj.read(size)
00392 
00393         # If the EOF has been reached, flush the decompression object
00394         # and mark this object as finished.
00395 
00396         if buf == b"":
00397             uncompress = self.decompress.flush()
00398             # Prepend the already read bytes to the fileobj so they can be
00399             # seen by _read_eof()
00400             self.fileobj.prepend(self.decompress.unused_data, True)
00401             self._read_eof()
00402             self._add_read_data( uncompress )
00403             raise EOFError('Reached EOF')
00404 
00405         uncompress = self.decompress.decompress(buf)
00406         self._add_read_data( uncompress )
00407 
00408         if self.decompress.unused_data != b"":
00409             # Ending case: we've come to the end of a member in the file,
00410             # so seek back to the start of the unused data, finish up
00411             # this member, and read a new gzip header.
00412             # Prepend the already read bytes to the fileobj so they can be
00413             # seen by _read_eof() and _read_gzip_header()
00414             self.fileobj.prepend(self.decompress.unused_data, True)
00415             # Check the CRC and file size, and set the flag so we read
00416             # a new member on the next call
00417             self._read_eof()
00418             self._new_member = True


def gzip.GzipFile._read_eof (   self) [private]

Definition at line 427 of file gzip.py.

00427 
00428     def _read_eof(self):
00429         # We've read to the end of the file
00430         # We check that the computed CRC and size of the
00431         # uncompressed data match the stored values.  Note that the size
00432         # stored is the true file size mod 2**32.
00433         crc32 = read32(self.fileobj)
00434         isize = read32(self.fileobj)  # may exceed 2GB
00435         if crc32 != self.crc:
00436             raise IOError("CRC check failed %s != %s" % (hex(crc32),
00437                                                          hex(self.crc)))
00438         elif isize != (self.size & 0xffffffff):
00439             raise IOError("Incorrect length of data produced")
00440 
00441         # Gzip files can be padded with trailing zeroes and still be valid archives.
00442         # Consume all zero bytes and set the file position to the first
00443         # non-zero byte. See http://www.gzip.org/#faq8
00444         c = b"\x00"
00445         while c == b"\x00":
00446             c = self.fileobj.read(1)
00447         if c:
00448             self.fileobj.prepend(c, True)


def gzip.GzipFile._read_gzip_header (   self) [private]

Definition at line 256 of file gzip.py.

00256 
00257     def _read_gzip_header(self):
00258         magic = self.fileobj.read(2)
00259         if magic == b'':
00260             raise EOFError("Reached EOF")
00261 
00262         if magic != b'\037\213':
00263             raise IOError('Not a gzipped file')
00264         method = ord( self.fileobj.read(1) )
00265         if method != 8:
00266             raise IOError('Unknown compression method')
00267         flag = ord( self.fileobj.read(1) )
00268         self.mtime = read32(self.fileobj)
00269         # extraflag = self.fileobj.read(1)
00270         # os = self.fileobj.read(1)
00271         self.fileobj.read(2)
00272 
00273         if flag & FEXTRA:
00274             # Read & discard the extra field, if present
00275             xlen = ord(self.fileobj.read(1))
00276             xlen = xlen + 256*ord(self.fileobj.read(1))
00277             self.fileobj.read(xlen)
00278         if flag & FNAME:
00279             # Read and discard a null-terminated string containing the filename
00280             while True:
00281                 s = self.fileobj.read(1)
00282                 if not s or s==b'\000':
00283                     break
00284         if flag & FCOMMENT:
00285             # Read and discard a null-terminated string containing a comment
00286             while True:
00287                 s = self.fileobj.read(1)
00288                 if not s or s==b'\000':
00289                     break
00290         if flag & FHCRC:
00291             self.fileobj.read(2)     # Read & discard the 16-bit header CRC
00292 
00293         unused = self.fileobj.unused()
00294         if unused:
00295             uncompress = self.decompress.decompress(unused)
00296             self._add_read_data(uncompress)


def gzip.GzipFile._unread (   self,
  buf 
) [private]

Definition at line 373 of file gzip.py.

00373 
00374     def _unread(self, buf):
00375         self.extrasize = len(buf) + self.extrasize
00376         self.offset -= len(buf)


def gzip.GzipFile._write_gzip_header (   self) [private]

Definition at line 227 of file gzip.py.

00227 
00228     def _write_gzip_header(self):
00229         self.fileobj.write(b'\037\213')             # magic header
00230         self.fileobj.write(b'\010')                 # compression method
00231         try:
00232             # RFC 1952 requires the FNAME field to be Latin-1. Do not
00233             # include filenames that cannot be represented that way.
00234             fname = os.path.basename(self.name)
00235             fname = fname.encode('latin-1')
00236             if fname.endswith(b'.gz'):
00237                 fname = fname[:-3]
00238         except UnicodeEncodeError:
00239             fname = b''
00240         flags = 0
00241         if fname:
00242             flags = FNAME
00243         self.fileobj.write(chr(flags).encode('latin-1'))
00244         mtime = self.mtime
00245         if mtime is None:
00246             mtime = time.time()
00247         write32u(self.fileobj, int(mtime))
00248         self.fileobj.write(b'\002')
00249         self.fileobj.write(b'\377')
00250         if fname:
00251             self.fileobj.write(fname + b'\000')


def gzip.GzipFile.close (   self)

Reimplemented in xmlrpc.client.GzipDecodedResponse.

Definition at line 453 of file gzip.py.

00453 
00454     def close(self):
00455         if self.fileobj is None:
00456             return
00457         if self.mode == WRITE:
00458             self.fileobj.write(self.compress.flush())
00459             write32u(self.fileobj, self.crc)
00460             # self.size may exceed 2GB, or even 4GB
00461             write32u(self.fileobj, self.size & 0xffffffff)
00462             self.fileobj = None
00463         elif self.mode == READ:
00464             self.fileobj = None
00465         if self.myfileobj:
00466             self.myfileobj.close()
00467             self.myfileobj = None


def gzip.GzipFile.closed (   self)

Definition at line 450 of file gzip.py.

00450 
00451     def closed(self):
00452         return self.fileobj is None


def gzip.GzipFile.filename (   self)

Definition at line 199 of file gzip.py.

00199 
00200     def filename(self):
00201         import warnings
00202         warnings.warn("use the name attribute", DeprecationWarning, 2)
00203         if self.mode == WRITE and self.name[-3:] != ".gz":
00204             return self.name + ".gz"
00205         return self.name


def gzip.GzipFile.fileno (   self)
Invoke the underlying file object's fileno() method.

This will raise AttributeError if the underlying file object
doesn't support fileno().

Definition at line 475 of file gzip.py.

00475 
00476     def fileno(self):
00477         """Invoke the underlying file object's fileno() method.
00478 
00479         This will raise AttributeError if the underlying file object
00480         doesn't support fileno().
00481         """
00482         return self.fileobj.fileno()


def gzip.GzipFile.flush (   self,
  zlib_mode = zlib.Z_SYNC_FLUSH 
)

Definition at line 468 of file gzip.py.

00468 
00469     def flush(self,zlib_mode=zlib.Z_SYNC_FLUSH):
00470         self._check_closed()
00471         if self.mode == WRITE:
00472             # Ensure the compressor's buffer is flushed
00473             self.fileobj.write(self.compress.flush(zlib_mode))
00474             self.fileobj.flush()
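
A sketch of how flush(zlib.Z_SYNC_FLUSH) can be used (not taken from gzip.py;
the record text is an assumption): it pushes the compressor's pending output
to the underlying file so that everything written so far can already be
decompressed, without ending the stream:

    import gzip
    import io
    import zlib

    buf = io.BytesIO()
    gz = gzip.GzipFile(fileobj=buf, mode="wb")
    gz.write(b"first record\n")
    gz.flush(zlib.Z_SYNC_FLUSH)

    # No FNAME field was written, so the header is exactly 10 bytes; the raw
    # deflate data after it decompresses cleanly even though the trailer is
    # still missing.
    d = zlib.decompressobj(-zlib.MAX_WBITS)
    assert d.decompress(buf.getvalue()[10:]) == b"first record\n"
    gz.close()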


def gzip.GzipFile.peek (   self,
  n 
)

Definition at line 351 of file gzip.py.

00351 
00352     def peek(self, n):
00353         if self.mode != READ:
00354             import errno
00355             raise IOError(errno.EBADF, "peek() on write-only GzipFile object")
00356 
00357         # Do not return ridiculously small buffers, for one common idiom
00358         # is to call peek(1) and expect more bytes in return.
00359         if n < 100:
00360             n = 100
00361         if self.extrasize == 0:
00362             if self.fileobj is None:
00363                 return b''
00364             try:
00365                 # 1024 is the same buffering heuristic used in read()
00366                 self._read(max(n, 1024))
00367             except EOFError:
00368                 pass
00369         offset = self.offset - self.extrastart
00370         remaining = self.extrasize
00371         assert remaining == len(self.extrabuf) - offset
00372         return self.extrabuf[offset:offset + n]
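
For illustration (a sketch, not taken from gzip.py; the payload is assumed),
peek() returns buffered bytes without advancing the uncompressed stream
position:

    import gzip
    import io

    buf = io.BytesIO()
    gz = gzip.GzipFile(fileobj=buf, mode="wb")
    gz.write(b"abcdef")
    gz.close()

    buf.seek(0)
    gz = gzip.GzipFile(fileobj=buf, mode="rb")
    head = gz.peek(1)              # may return more than one byte
    assert head.startswith(b"a")
    assert gz.read(3) == b"abc"    # peek() did not move the position
    gz.close()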


def gzip.GzipFile.read (   self,
  size = -1 
)

Definition at line 318 of file gzip.py.

00318 
00319     def read(self, size=-1):
00320         self._check_closed()
00321         if self.mode != READ:
00322             import errno
00323             raise IOError(errno.EBADF, "read() on write-only GzipFile object")
00324 
00325         if self.extrasize <= 0 and self.fileobj is None:
00326             return b''
00327 
00328         readsize = 1024
00329         if size < 0:        # get the whole thing
00330             try:
00331                 while True:
00332                     self._read(readsize)
00333                     readsize = min(self.max_read_chunk, readsize * 2)
00334             except EOFError:
00335                 size = self.extrasize
00336         else:               # just get some more of it
00337             try:
00338                 while size > self.extrasize:
00339                     self._read(readsize)
00340                     readsize = min(self.max_read_chunk, readsize * 2)
00341             except EOFError:
00342                 if size > self.extrasize:
00343                     size = self.extrasize
00344 
00345         offset = self.offset - self.extrastart
00346         chunk = self.extrabuf[offset: offset + size]
00347         self.extrasize = self.extrasize - size
00348 
00349         self.offset += size
00350         return chunk


def gzip.GzipFile.readable (   self)

Definition at line 495 of file gzip.py.

00495 
00496     def readable(self):
00497         return self.mode == READ

def gzip.GzipFile.readline (   self,
  size = -1 
)

Definition at line 529 of file gzip.py.

00529 
00530     def readline(self, size=-1):
00531         if size < 0:
00532             # Shortcut common case - newline found in buffer.
00533             offset = self.offset - self.extrastart
00534             i = self.extrabuf.find(b'\n', offset) + 1
00535             if i > 0:
00536                 self.extrasize -= i - offset
00537                 self.offset += i - offset
00538                 return self.extrabuf[offset: i]
00539 
00540             size = sys.maxsize
00541             readsize = self.min_readsize
00542         else:
00543             readsize = size
00544         bufs = []
00545         while size != 0:
00546             c = self.read(readsize)
00547             i = c.find(b'\n')
00548 
00549             # We set i=size to break out of the loop under two
00550             # conditions: 1) there's no newline, and the chunk is
00551             # larger than size, or 2) there is a newline, but the
00552             # resulting line would be longer than 'size'.
00553             if (size <= i) or (i == -1 and len(c) > size):
00554                 i = size - 1
00555 
00556             if i >= 0 or c == b'':
00557                 bufs.append(c[:i + 1])    # Add portion of last chunk
00558                 self._unread(c[i + 1:])   # Push back rest of chunk
00559                 break
00560 
00561             # Append chunk to list, decrease 'size',
00562             bufs.append(c)
00563             size = size - len(c)
00564             readsize = min(size, readsize * 2)
00565         if readsize > self.min_readsize:
00566             self.min_readsize = min(readsize, self.min_readsize * 2, 512)
00567         return b''.join(bufs) # Return resulting line
00568 
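
For illustration (a sketch, not taken from gzip.py; the line contents are
assumptions), readline() and line iteration operate on the decompressed
bytes just as they would on an ordinary binary file object:

    import gzip
    import io

    buf = io.BytesIO()
    gz = gzip.GzipFile(fileobj=buf, mode="wb")
    gz.write(b"first\nsecond\nthird\n")
    gz.close()

    buf.seek(0)
    gz = gzip.GzipFile(fileobj=buf, mode="rb")
    assert gz.readline() == b"first\n"
    assert list(gz) == [b"second\n", b"third\n"]
    gz.close()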


def abc.ABCMeta.register (   cls,
  subclass 
) [inherited]
Register a virtual subclass of an ABC.

Definition at line 135 of file abc.py.

00135 
00136     def register(cls, subclass):
00137         """Register a virtual subclass of an ABC."""
00138         if not isinstance(subclass, type):
00139             raise TypeError("Can only register classes")
00140         if issubclass(subclass, cls):
00141             return  # Already a subclass
00142         # Subtle: test for cycles *after* testing for "already a subclass";
00143         # this means we allow X.register(X) and interpret it as a no-op.
00144         if issubclass(cls, subclass):
00145             # This would create a cycle, which is bad for the algorithm below
00146             raise RuntimeError("Refusing to create an inheritance cycle")
00147         cls._abc_registry.add(subclass)
00148         ABCMeta._abc_invalidation_counter += 1  # Invalidate negative cache


def gzip.GzipFile.rewind (   self)
Return the uncompressed stream file position indicator to the
beginning of the file

Definition at line 483 of file gzip.py.

00483 
00484     def rewind(self):
00485         '''Return the uncompressed stream file position indicator to the
00486         beginning of the file'''
00487         if self.mode != READ:
00488             raise IOError("Can't rewind in write mode")
00489         self.fileobj.seek(0)
00490         self._new_member = True
00491         self.extrabuf = b""
00492         self.extrasize = 0
00493         self.extrastart = 0
00494         self.offset = 0


def gzip.GzipFile.seek (   self,
  offset,
  whence = 0 
)

Definition at line 504 of file gzip.py.

00504 
00505     def seek(self, offset, whence=0):
00506         if whence:
00507             if whence == 1:
00508                 offset = self.offset + offset
00509             else:
00510                 raise ValueError('Seek from end not supported')
00511         if self.mode == WRITE:
00512             if offset < self.offset:
00513                 raise IOError('Negative seek in write mode')
00514             count = offset - self.offset
00515             chunk = bytes(1024)
00516             for i in range(count // 1024):
00517                 self.write(chunk)
00518             self.write(bytes(count % 1024))
00519         elif self.mode == READ:
00520             if offset < self.offset:
00521                 # for negative seek, rewind and do positive seek
00522                 self.rewind()
00523             count = offset - self.offset
00524             for i in range(count // 1024):
00525                 self.read(1024)
00526             self.read(count % 1024)
00527 
00528         return self.offset
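
For illustration (a sketch, not taken from gzip.py; the payload is assumed),
seek() in read mode reaches the target position by decompressing and
discarding data; relative seeks (whence=1) are supported, but seeking from
the end is not:

    import gzip
    import io

    buf = io.BytesIO()
    gz = gzip.GzipFile(fileobj=buf, mode="wb")
    gz.write(b"0123456789")
    gz.close()

    buf.seek(0)
    gz = gzip.GzipFile(fileobj=buf, mode="rb")
    gz.seek(4)                     # absolute position in the uncompressed stream
    assert gz.read(2) == b"45"
    gz.seek(2, 1)                  # relative seek: skip two more bytes
    assert gz.read() == b"89"
    gz.close()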


def gzip.GzipFile.seekable (   self)

Definition at line 501 of file gzip.py.

00501 
00502     def seekable(self):
00503         return True

def gzip.GzipFile.writable (   self)

Definition at line 498 of file gzip.py.

00498 
00499     def writable(self):
00500         return self.mode == WRITE

def gzip.GzipFile.write (   self,
  data 
)

Definition at line 297 of file gzip.py.

00297 
00298     def write(self,data):
00299         self._check_closed()
00300         if self.mode != WRITE:
00301             import errno
00302             raise IOError(errno.EBADF, "write() on read-only GzipFile object")
00303 
00304         if self.fileobj is None:
00305             raise ValueError("write() on closed GzipFile object")
00306 
00307         # Convert data type if called by io.BufferedWriter.
00308         if isinstance(data, memoryview):
00309             data = data.tobytes()
00310 
00311         if len(data) > 0:
00312             self.size = self.size + len(data)
00313             self.crc = zlib.crc32(data, self.crc) & 0xffffffff
00314             self.fileobj.write( self.compress.compress(data) )
00315             self.offset += len(data)
00316 
00317         return len(data)
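
For illustration (a sketch, not taken from gzip.py; the payload is assumed),
write() accepts bytes or a memoryview and returns the number of uncompressed
bytes consumed:

    import gzip
    import io

    buf = io.BytesIO()
    gz = gzip.GzipFile(fileobj=buf, mode="wb")
    assert gz.write(b"hello ") == 6
    assert gz.write(memoryview(b"world")) == 5
    gz.close()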



Member Data Documentation

gzip.GzipFile._new_member [private]

Definition at line 168 of file gzip.py.

gzip.GzipFile.bufsize

Definition at line 225 of file gzip.py.

gzip.GzipFile.compress

Definition at line 183 of file gzip.py.

gzip.GzipFile.crc

Definition at line 222 of file gzip.py.

gzip.GzipFile.decompress

Definition at line 386 of file gzip.py.

gzip.GzipFile.extrabuf

Definition at line 172 of file gzip.py.

gzip.GzipFile.extrasize

Definition at line 173 of file gzip.py.

gzip.GzipFile.extrastart

Definition at line 174 of file gzip.py.

gzip.GzipFile.fileobj

Definition at line 191 of file gzip.py.

int gzip.GzipFile.max_read_chunk = 10 * 1024 * 1024 [static]

Definition at line 111 of file gzip.py.

gzip.GzipFile.min_readsize

Definition at line 177 of file gzip.py.

gzip.GzipFile.mode

Definition at line 166 of file gzip.py.

gzip.GzipFile.mtime

Definition at line 193 of file gzip.py.

gzip.GzipFile.myfileobj = None [static]

Definition at line 110 of file gzip.py.

gzip.GzipFile.name

Definition at line 175 of file gzip.py.

gzip.GzipFile.offset

Definition at line 192 of file gzip.py.

gzip.GzipFile.size

Definition at line 223 of file gzip.py.

gzip.GzipFile.writebuf

Definition at line 224 of file gzip.py.


The documentation for this class was generated from the following file:

gzip.py