
apport 2.4
apport.report.Report Class Reference
Inheritance diagram for apport.report.Report:
Collaboration diagram for apport.report.Report:


Public Member Functions

def __init__
def add_package_info
def add_os_info
def add_user_info
def add_proc_info
def add_proc_environ
def add_kernel_crash_info
def add_gdb_info
def add_hooks_info
def search_bug_patterns
def check_ignored
def mark_ignore
def has_useful_stacktrace
def stacktrace_top_function
def standard_title
def obsolete_packages
def crash_signature
def crash_signature_addresses
def anonymize
def load
def has_removed_fields
def write
def add_to_existing
def write_mime
def __setitem__
def new_keys

Public Attributes

 pid
 data
 old_keys

Private Member Functions

def _customized_package_suffix
def _check_interpreted
def _twistd_executable
def _python_module_path
def _gen_stacktrace_top
def _get_ignore_dom
def _address_to_offset
def _build_proc_maps_cache

Private Attributes

 _proc_maps_cache

Detailed Description

A problem report specific to apport (crash or bug).

This class wraps a standard ProblemReport and adds methods for collecting
standard debugging data.

Definition at line 180 of file report.py.


Constructor & Destructor Documentation

def apport.report.Report.__init__ (   self,
  type = 'Crash',
  date = None 
)
Initialize a fresh problem report.

date is the desired date/time string; if None (default), the current
local time is used.

If the report is attached to a process ID, this should be set in
self.pid, so that e.g. hooks can use it to collect additional data.

Reimplemented from problem_report.ProblemReport.

Definition at line 186 of file report.py.

00186 
00187     def __init__(self, type='Crash', date=None):
00188         '''Initialize a fresh problem report.
00189 
00190         date is the desired date/time string; if None (default), the current
00191         local time is used.
00192 
00193         If the report is attached to a process ID, this should be set in
00194         self.pid, so that e. g. hooks can use it to collect additional data.
00195         '''
00196         problem_report.ProblemReport.__init__(self, type, date)
00197         self.pid = None
00198         self._proc_maps_cache = None



Member Function Documentation

def problem_report.ProblemReport.__setitem__ (   self,
  k,
  v 
) [inherited]

Definition at line 559 of file problem_report.py.

00559 
00560     def __setitem__(self, k, v):
00561         assert hasattr(k, 'isalnum')
00562         assert k.replace('.', '').replace('-', '').replace('_', '').isalnum()
00563         # value must be a string or a CompressedValue or a file reference
00564         # (tuple (string|file [, bool]))
00565         assert (isinstance(v, CompressedValue) or hasattr(v, 'isalnum') or
00566                 (hasattr(v, '__getitem__') and (
00567                     len(v) == 1 or (len(v) >= 2 and v[1] in (True, False)))
00568                     and (hasattr(v[0], 'isalnum') or hasattr(v[0], 'read'))))
00569 
00570         return self.data.__setitem__(k, v)

def apport.report.Report._address_to_offset (   self,
  addr 
) [private]
Resolve a memory address to an ELF name and offset.

This can be used for building duplicate signatures from non-symbolic
stack traces. These often do not have enough symbols available to
resolve function names, but taking the raw addresses also is not
suitable due to ASLR. But the offsets within a library should be
constant between crashes (assuming the same version of all libraries).

This needs and uses the "ProcMaps" field to resolve addresses.

Return 'path+offset' when found, or None if address is not in any
mapped range.

Definition at line 1335 of file report.py.

01335 
01336     def _address_to_offset(self, addr):
01337         '''Resolve a memory address to an ELF name and offset.
01338 
01339         This can be used for building duplicate signatures from non-symbolic
01340         stack traces. These often do not have enough symbols available to
01341         resolve function names, but taking the raw addresses also is not
01342         suitable due to ASLR. But the offsets within a library should be
01343         constant between crashes (assuming the same version of all libraries).
01344 
01345         This needs and uses the "ProcMaps" field to resolve addresses.
01346 
01347         Return 'path+offset' when found, or None if address is not in any
01348         mapped range.
01349         '''
01350         self._build_proc_maps_cache()
01351 
01352         for (start, end, elf) in self._proc_maps_cache:
01353             if start <= addr and end >= addr:
01354                 return '%s+%x' % (elf, addr - start)
01355 
01356         return None
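The offset arithmetic above can be tried standalone. This is a minimal sketch with a hypothetical, pre-parsed mapping list standing in for the cache that `_build_proc_maps_cache` derives from the ProcMaps field; the addresses and paths are made up for illustration.

```python
# Hypothetical pre-parsed ProcMaps cache: (start, end, path) tuples.
maps = [
    (0x400000, 0x4a0000, '/usr/bin/example'),
    (0x7f3a10000000, 0x7f3a10200000, '/lib/x86_64-linux-gnu/libc.so.6'),
]

def address_to_offset(addr, maps):
    '''Return "path+hexoffset" for addr, or None if unmapped.'''
    for (start, end, elf) in maps:
        if start <= addr and end >= addr:
            return '%s+%x' % (elf, addr - start)
    return None

print(address_to_offset(0x401234, maps))        # → /usr/bin/example+1234
print(address_to_offset(0x7f3a10001000, maps))  # → .../libc.so.6+1000
print(address_to_offset(0x1, maps))             # → None
```

The `+%x` hex offset is what makes the signature stable across ASLR runs: the absolute address changes per process, but the offset within the mapped ELF object does not.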


def apport.report.Report._build_proc_maps_cache (   self) [private]
Generate self._proc_maps_cache from ProcMaps field.

This only gets done once.

Definition at line 1357 of file report.py.

01357 
01358     def _build_proc_maps_cache(self):
01359         '''Generate self._proc_maps_cache from ProcMaps field.
01360 
01361         This only gets done once.
01362         '''
01363         if self._proc_maps_cache:
01364             return
01365 
01366         assert 'ProcMaps' in self
01367         self._proc_maps_cache = []
01368         # library paths might have spaces, so we need to make some assumptions
01369         # about the intermediate fields. But we know that in between the pre-last
01370         # data field and the path there are many spaces, while between the
01371         # other data fields there is only one. So we take 4 or more spaces as
01372         # the separator of the last data field and the path.
01373         fmt = re.compile('^([0-9a-fA-F]+)-([0-9a-fA-F]+).*\s{4,}(\S.*$)')
01374         fmt_unknown = re.compile('^([0-9a-fA-F]+)-([0-9a-fA-F]+)\s')
01375 
01376         for line in self['ProcMaps'].splitlines():
01377             if not line.strip():
01378                 continue
01379             m = fmt.match(line)
01380             if not m:
01381                 # ignore lines with unknown ELF
01382                 if fmt_unknown.match(line):
01383                     continue
01384                 # but complain otherwise, as this means we encounter an
01385                 # architecture or new kernel version where the format changed
01386                 assert m, 'cannot parse ProcMaps line: ' + line
01387             self._proc_maps_cache.append((int(m.group(1), 16),
01388                                           int(m.group(2), 16), m.group(3)))
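The "4 or more spaces" heuristic in the regex above can be exercised against a sample maps line. The line content below is illustrative of the `/proc/<pid>/maps` format, not taken from a real report.

```python
import re

# The parsing regex from _build_proc_maps_cache: address range, then
# any intermediate fields, then 4+ spaces separating the final path.
fmt = re.compile(r'^([0-9a-fA-F]+)-([0-9a-fA-F]+).*\s{4,}(\S.*$)')

# Illustrative /proc/<pid>/maps line (fields: range perms offset dev inode path)
line = ('7f3a10000000-7f3a10200000 r-xp 00000000 08:01 1234567'
        '            /lib/x86_64-linux-gnu/libc.so.6')

m = fmt.match(line)
start = int(m.group(1), 16)
end = int(m.group(2), 16)
path = m.group(3)
print(hex(start), hex(end), path)
```

Note that a path containing single spaces would still parse correctly, since only the run of four-plus spaces before the path is treated as the separator.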


def apport.report.Report._check_interpreted (   self) [private]
Check if process is a script.

Use ExecutablePath, ProcStatus and ProcCmdline to determine if
process is an interpreted script. If so, set InterpreterPath
accordingly.

Definition at line 304 of file report.py.

00304 
00305     def _check_interpreted(self):
00306         '''Check if process is a script.
00307 
00308         Use ExecutablePath, ProcStatus and ProcCmdline to determine if
00309         process is an interpreted script. If so, set InterpreterPath
00310         accordingly.
00311         '''
00312         if 'ExecutablePath' not in self:
00313             return
00314 
00315         exebasename = os.path.basename(self['ExecutablePath'])
00316 
00317         # check if we consider ExecutablePath an interpreter; we have to do
00318         # this, otherwise 'gedit /tmp/foo.txt' would be detected as interpreted
00319         # script as well
00320         if not any(filter(lambda i: fnmatch.fnmatch(exebasename, i), interpreters)):
00321             return
00322 
00323         # first, determine process name
00324         name = None
00325         for l in self['ProcStatus'].splitlines():
00326             try:
00327                 (k, v) = l.split('\t', 1)
00328             except ValueError:
00329                 continue
00330             if k == 'Name:':
00331                 name = v
00332                 break
00333         if not name:
00334             return
00335 
00336         cmdargs = self['ProcCmdline'].split('\0')
00337         bindirs = ['/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/']
00338 
00339         # filter out interpreter options
00340         while len(cmdargs) >= 2 and cmdargs[1].startswith('-'):
00341             # check for -m
00342             if name.startswith('python') and cmdargs[1] == '-m' and len(cmdargs) >= 3:
00343                 path = self._python_module_path(cmdargs[2])
00344                 if path:
00345                     self['InterpreterPath'] = self['ExecutablePath']
00346                     self['ExecutablePath'] = path
00347                 else:
00348                     self['UnreportableReason'] = 'Cannot determine path of python module %s' % cmdargs[2]
00349                 return
00350 
00351             del cmdargs[1]
00352 
00353         # catch scripts explicitly called with interpreter
00354         if len(cmdargs) >= 2:
00355             # ensure that cmdargs[1] is an absolute path
00356             if cmdargs[1].startswith('.') and 'ProcCwd' in self:
00357                 cmdargs[1] = os.path.join(self['ProcCwd'], cmdargs[1])
00358             if os.access(cmdargs[1], os.R_OK):
00359                 self['InterpreterPath'] = self['ExecutablePath']
00360                 self['ExecutablePath'] = os.path.realpath(cmdargs[1])
00361 
00362         # catch directly executed scripts
00363         if 'InterpreterPath' not in self and name != exebasename:
00364             for p in bindirs:
00365                 if os.access(p + cmdargs[0], os.R_OK):
00366                     argvexe = p + cmdargs[0]
00367                     if os.path.basename(os.path.realpath(argvexe)) == name:
00368                         self['InterpreterPath'] = self['ExecutablePath']
00369                         self['ExecutablePath'] = argvexe
00370                     break
00371 
00372         # special case: crashes from twistd are usually the fault of the
00373         # launched program
00374         if 'InterpreterPath' in self and os.path.basename(self['ExecutablePath']) == 'twistd':
00375             self['InterpreterPath'] = self['ExecutablePath']
00376             exe = self._twistd_executable()
00377             if exe:
00378                 self['ExecutablePath'] = exe
00379             else:
00380                 self['UnreportableReason'] = 'Cannot determine twistd client program'
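The interpreter test at the top of this method can be sketched in isolation. The `interpreters` glob list below is made up for illustration (the real list lives at module level in report.py); the matching logic mirrors the `fnmatch` check above.

```python
import fnmatch

# Hypothetical interpreter glob list, similar in spirit to apport's.
interpreters = ['sh', 'bash', 'dash', 'python*', 'perl', 'ruby*']

def looks_interpreted(exebasename):
    '''True if the executable basename matches an interpreter glob.'''
    return any(fnmatch.fnmatch(exebasename, i) for i in interpreters)

print(looks_interpreted('python3.10'))  # → True
print(looks_interpreted('gedit'))       # → False
```

This is the guard that prevents e.g. 'gedit /tmp/foo.txt' from being misclassified as an interpreted script: only processes whose executable is itself a known interpreter proceed to the ProcCmdline analysis.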


def apport.report.Report._customized_package_suffix (   self,
  package 
) [private]
Return a string suitable for appending to Package/Dependencies.

If package has only unmodified files, return the empty string. If not,
return ' [modified: ...]' with a list of modified files.

Definition at line 199 of file report.py.

00199 
00200     def _customized_package_suffix(self, package):
00201         '''Return a string suitable for appending to Package/Dependencies.
00202 
00203         If package has only unmodified files, return the empty string. If not,
00204         return ' [modified: ...]' with a list of modified files.
00205         '''
00206         suffix = ''
00207         mod = packaging.get_modified_files(package)
00208         if mod:
00209             suffix += ' [modified: %s]' % ' '.join(mod)
00210         try:
00211             if not packaging.is_distro_package(package):
00212                 origin = packaging.get_package_origin(package)
00213                 if origin:
00214                     suffix += ' [origin: %s]' % origin
00215         except ValueError:
00216             # no-op for nonexisting packages
00217             pass
00218 
00219         return suffix
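The suffix formatting can be sketched without the packaging backend by substituting precomputed example values for the `get_modified_files`/`get_package_origin` queries; the file and origin names below are fabricated.

```python
# Standalone sketch of the suffix assembly, with packaging queries
# replaced by example inputs.
def package_suffix(modified_files, origin=None):
    suffix = ''
    if modified_files:
        suffix += ' [modified: %s]' % ' '.join(modified_files)
    if origin:
        suffix += ' [origin: %s]' % origin
    return suffix

print(package_suffix([]))                                      # → ''
print(package_suffix(['/etc/example.conf'], 'LP-PPA-example'))
```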


def apport.report.Report._gen_stacktrace_top (   self) [private]
Build field StacktraceTop as the top five functions of Stacktrace.

Signal handler invocations and related functions are skipped since they
are generally not useful for triaging and duplicate detection.

Definition at line 665 of file report.py.

00665 
00666     def _gen_stacktrace_top(self):
00667         '''Build field StacktraceTop as the top five functions of Stacktrace.
00668 
00669         Signal handler invocations and related functions are skipped since they
00670         are generally not useful for triaging and duplicate detection.
00671         '''
00672         unwind_functions = set(['g_logv', 'g_log', 'IA__g_log', 'IA__g_logv',
00673                                 'g_assert_warning', 'IA__g_assert_warning',
00674                                 '__GI_abort', '_XError'])
00675         toptrace = [''] * 5
00676         depth = 0
00677         unwound = False
00678         unwinding = False
00679         unwinding_xerror = False
00680         bt_fn_re = re.compile('^#(\d+)\s+(?:0x(?:\w+)\s+in\s+\*?(.*)|(<signal handler called>)\s*)$')
00681         bt_fn_noaddr_re = re.compile('^#(\d+)\s+(?:(.*)|(<signal handler called>)\s*)$')
00682         # some internal functions like the SSE stubs cause unnecessary jitter
00683         ignore_functions_re = re.compile('^(__.*_s?sse\d+(?:_\w+)?|__kernel_vsyscall)$')
00684 
00685         for line in self['Stacktrace'].splitlines():
00686             m = bt_fn_re.match(line)
00687             if not m:
00688                 m = bt_fn_noaddr_re.match(line)
00689                 if not m:
00690                     continue
00691 
00692             if not unwound or unwinding:
00693                 if m.group(2):
00694                     fn = m.group(2).split()[0].split('(')[0]
00695                 else:
00696                     fn = None
00697 
00698                 # handle XErrors
00699                 if unwinding_xerror:
00700                     if fn.startswith('_X') or fn in ['handle_response', 'handle_error', 'XWindowEvent']:
00701                         continue
00702                     else:
00703                         unwinding_xerror = False
00704 
00705                 if m.group(3) or fn in unwind_functions:
00706                     unwinding = True
00707                     depth = 0
00708                     toptrace = [''] * 5
00709                     if m.group(3):
00710                         # we stop unwinding when we found a <signal handler>,
00711                         # but we continue unwinding otherwise, as e. g. a glib
00712                         # abort is usually sitting on top of an XError
00713                         unwound = True
00714 
00715                     if fn == '_XError':
00716                         unwinding_xerror = True
00717                     continue
00718                 else:
00719                     unwinding = False
00720 
00721             frame = m.group(2) or m.group(3)
00722             function = frame.split()[0]
00723             if depth < len(toptrace) and not ignore_functions_re.match(function):
00724                 toptrace[depth] = frame
00725                 depth += 1
00726         self['StacktraceTop'] = '\n'.join(toptrace).strip()
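The two frame regexes above can be checked against typical gdb 'bt full' lines. The frame text below is made up but follows the usual gdb format.

```python
import re

# Frame regex from _gen_stacktrace_top: either an addressed frame
# ('#N 0x... in func (...)') or a '<signal handler called>' marker.
bt_fn_re = re.compile(
    r'^#(\d+)\s+(?:0x(?:\w+)\s+in\s+\*?(.*)|(<signal handler called>)\s*)$')

m = bt_fn_re.match('#0  0x00007f3a1008e428 in raise (sig=6) at raise.c:54')
print(m.group(1), '->', m.group(2))  # frame number and function text

m2 = bt_fn_re.match('#2  <signal handler called>')
print(m2.group(3))
```

Group 2 carries the function text for ordinary frames (the code then takes the first whitespace/paren-delimited token as the function name), while group 3 fires only for the signal-handler marker that triggers the unwinding logic.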


def apport.report.Report._get_ignore_dom (   self) [private]
Read ignore list XML file and return a DOM tree.

Return an empty DOM tree if file does not exist.

Raises ValueError if the file exists but is invalid XML.

Definition at line 856 of file report.py.

00856 
00857     def _get_ignore_dom(self):
00858         '''Read ignore list XML file and return a DOM tree.
00859 
00860         Return an empty DOM tree if file does not exist.
00861 
00862         Raises ValueError if the file exists but is invalid XML.
00863         '''
00864         ifpath = os.path.expanduser(_ignore_file)
00865         if not os.access(ifpath, os.R_OK) or os.path.getsize(ifpath) == 0:
00866             # create a document from scratch
00867             dom = xml.dom.getDOMImplementation().createDocument(None, 'apport', None)
00868         else:
00869             try:
00870                 dom = xml.dom.minidom.parse(ifpath)
00871             except ExpatError as e:
00872                 raise ValueError('%s has invalid format: %s' % (_ignore_file, str(e)))
00873 
00874         # remove whitespace so that writing back the XML does not accumulate
00875         # whitespace
00876         dom.documentElement.normalize()
00877         _dom_remove_space(dom.documentElement)
00878 
00879         return dom
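The from-scratch branch (taken when the ignore file is missing or empty) can be reproduced directly with the stdlib DOM API:

```python
import xml.dom
import xml.dom.minidom

# Create an empty document with an 'apport' root element, as
# _get_ignore_dom does when no ignore file exists.
dom = xml.dom.getDOMImplementation().createDocument(None, 'apport', None)
print(dom.documentElement.tagName)   # → apport
print(dom.documentElement.toxml())   # → <apport/>
```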


def apport.report.Report._python_module_path (   klass,
  module 
) [private]
Determine path of given Python module

Definition at line 404 of file report.py.

00404 
00405     def _python_module_path(klass, module):
00406         '''Determine path of given Python module'''
00407 
00408         try:
00409             m = __import__(module.replace('/', '.'))
00410             m
00411         except:
00412             return None
00413 
00414         # chop off the first component, as it's already covered by m
00415         path = eval('m.%s.__file__' % '.'.join(module.split('/')[1:]))
00416         if path.endswith('.pyc'):
00417             path = path[:-1]
00418         return path
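The same idea can be sketched with `importlib` instead of the `__import__`-plus-`eval` approach above. Note this is a variant, not the code shown: `importlib.import_module` returns the leaf module directly, so no attribute traversal or component-chopping is needed.

```python
import importlib

def python_module_path(module):
    '''Resolve a '/'-separated module name to its source file, or None.'''
    try:
        m = importlib.import_module(module.replace('/', '.'))
    except ImportError:
        return None
    path = m.__file__
    # chop compiled modules back to their source file
    if path.endswith('.pyc'):
        path = path[:-1]
    return path

print(python_module_path('xml/dom/minidom'))   # e.g. .../xml/dom/minidom.py
print(python_module_path('no/such/module'))    # → None
```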


def apport.report.Report._twistd_executable (   self) [private]
Determine the twistd client program from ProcCmdline.

Definition at line 381 of file report.py.

00381 
00382     def _twistd_executable(self):
00383         '''Determine the twistd client program from ProcCmdline.'''
00384 
00385         args = self['ProcCmdline'].split('\0')[2:]
00386 
00387         # search for a -f/--file, -y/--python or -s/--source argument
00388         while args:
00389             arg = args[0].split('=', 1)
00390             if arg[0].startswith('--file') or arg[0].startswith('--python') or arg[0].startswith('--source'):
00391                 if len(arg) == 2:
00392                     return arg[1]
00393                 else:
00394                     return args[1]
00395             elif len(arg[0]) > 1 and arg[0][0] == '-' and arg[0][1] != '-':
00396                 opts = arg[0][1:]
00397                 if 'f' in opts or 'y' in opts or 's' in opts:
00398                     return args[1]
00399 
00400             args.pop(0)
00401 
00402         return None
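The argument scan above can be exercised standalone against a fabricated ProcCmdline value ('\0'-separated, as captured from /proc/<pid>/cmdline):

```python
# Standalone sketch of the twistd client detection: skip the first two
# argv entries (interpreter and twistd itself), then look for a
# -f/--file, -y/--python or -s/--source argument.
def twistd_executable(proc_cmdline):
    args = proc_cmdline.split('\0')[2:]
    while args:
        arg = args[0].split('=', 1)
        if arg[0].startswith(('--file', '--python', '--source')):
            return arg[1] if len(arg) == 2 else args[1]
        elif len(arg[0]) > 1 and arg[0][0] == '-' and arg[0][1] != '-':
            opts = arg[0][1:]
            if 'f' in opts or 'y' in opts or 's' in opts:
                return args[1]
        args.pop(0)
    return None

# Fabricated command line for illustration.
cmdline = '/usr/bin/python\0/usr/bin/twistd\0--pidfile=x.pid\0-y\0/srv/app.tac'
print(twistd_executable(cmdline))  # → /srv/app.tac
```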


def apport.report.Report.add_gdb_info (   self,
  rootdir = None 
)
Add information from gdb.

This requires that the report has a CoreDump and an
ExecutablePath. This adds the following fields:
- Registers: Output of gdb's 'info registers' command
- Disassembly: Output of gdb's 'x/16i $pc' command
- Stacktrace: Output of gdb's 'bt full' command
- ThreadStacktrace: Output of gdb's 'thread apply all bt full' command
- StacktraceTop: simplified stacktrace (topmost 5 functions) for inline
  inclusion into bug reports and easier processing
- AssertionMessage: Value of __abort_msg, if present

The optional rootdir can specify a root directory which has the
executable, libraries, and debug symbols. This does not require
chroot() or root privileges, it just instructs gdb to search for the
files there.

Definition at line 570 of file report.py.

00570 
00571     def add_gdb_info(self, rootdir=None):
00572         '''Add information from gdb.
00573 
00574         This requires that the report has a CoreDump and an
00575         ExecutablePath. This adds the following fields:
00576         - Registers: Output of gdb's 'info registers' command
00577         - Disassembly: Output of gdb's 'x/16i $pc' command
00578         - Stacktrace: Output of gdb's 'bt full' command
00579         - ThreadStacktrace: Output of gdb's 'thread apply all bt full' command
00580         - StacktraceTop: simplified stacktrace (topmost 5 functions) for inline
00581           inclusion into bug reports and easier processing
00582         - AssertionMessage: Value of __abort_msg, if present
00583 
00584         The optional rootdir can specify a root directory which has the
00585         executable, libraries, and debug symbols. This does not require
00586         chroot() or root privileges, it just instructs gdb to search for the
00587         files there.
00588         '''
00589         if 'CoreDump' not in self or 'ExecutablePath' not in self:
00590             return
00591 
00592         unlink_core = False
00593         try:
00594             if hasattr(self['CoreDump'], 'find'):
00595                 (fd, core) = tempfile.mkstemp()
00596                 unlink_core = True
00597                 os.write(fd, self['CoreDump'])
00598                 os.close(fd)
00599             elif hasattr(self['CoreDump'], 'gzipvalue'):
00600                 (fd, core) = tempfile.mkstemp()
00601                 unlink_core = True
00602                 os.close(fd)
00603                 with open(core, 'wb') as f:
00604                     self['CoreDump'].write(f)
00605             else:
00606                 core = self['CoreDump'][0]
00607 
00608             gdb_reports = {'Registers': 'info registers',
00609                            'Disassembly': 'x/16i $pc',
00610                            'Stacktrace': 'bt full',
00611                            'ThreadStacktrace': 'thread apply all bt full',
00612                            'AssertionMessage': 'print __abort_msg->msg'}
00613 
00614             command = ['gdb', '--batch']
00615             executable = self.get('InterpreterPath', self['ExecutablePath'])
00616             if rootdir:
00617                 command += ['--ex', 'set debug-file-directory %s/usr/lib/debug' % rootdir,
00618                             '--ex', 'set solib-absolute-prefix ' + rootdir]
00619                 executable = rootdir + '/' + executable
00620             command += ['--ex', 'file "%s"' % executable, '--ex', 'core-file ' + core]
00621             # limit maximum backtrace depth (to avoid looped stacks)
00622             command += ['--ex', 'set backtrace limit 2000']
00623             value_keys = []
00624             # append the actual commands and something that acts as a separator
00625             for name, cmd in gdb_reports.items():
00626                 value_keys.append(name)
00627                 command += ['--ex', 'p -99', '--ex', cmd]
00628 
00629             assert os.path.exists(executable)
00630 
00631             # call gdb
00632             try:
00633                 out = _command_output(command).decode('UTF-8', errors='replace')
00634             except OSError:
00635                 return
00636 
00637             # split the output into the various fields
00638             part_re = re.compile('^\$\d+\s*=\s*-99$', re.MULTILINE)
00639             parts = part_re.split(out)
00640             # drop the gdb startup text prior to first separator
00641             parts.pop(0)
00642             for part in parts:
00643                 self[value_keys.pop(0)] = part.replace('\n\n', '\n.\n').strip()
00644         finally:
00645             if unlink_core:
00646                 os.unlink(core)
00647 
00648         # clean up AssertionMessage
00649         if 'AssertionMessage' in self:
00650             # chop off "$n = 0x...." prefix, drop empty ones
00651             m = re.match('^\$\d+\s+=\s+0x[0-9a-fA-F]+\s+"(.*)"\s*$',
00652                          self['AssertionMessage'])
00653             if m:
00654                 self['AssertionMessage'] = m.group(1)
00655                 if self['AssertionMessage'].endswith('\\n'):
00656                     self['AssertionMessage'] = self['AssertionMessage'][0:-2]
00657             else:
00658                 del self['AssertionMessage']
00659 
00660         if 'Stacktrace' in self:
00661             self._gen_stacktrace_top()
00662             addr_signature = self.crash_signature_addresses()
00663             if addr_signature:
00664                 self['StacktraceAddressSignature'] = addr_signature
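The '$N = -99' separator trick used above (interleaving a throwaway 'p -99' before each real command so one gdb batch run can be split into per-command outputs) can be demonstrated on fabricated gdb output:

```python
import re

# Split marker: gdb echoes 'p -99' as a line like '$1 = -99'.
part_re = re.compile(r'^\$\d+\s*=\s*-99$', re.MULTILINE)

# Fabricated gdb batch output with two command sections.
out = 'gdb startup banner\n$1 = -99\nrax 0x0\n$2 = -99\n#0 main ()'

parts = part_re.split(out)
parts.pop(0)  # drop the gdb startup text before the first separator
print(parts)
```

Each remaining element then corresponds positionally to one entry of `value_keys`, which is why the code pops keys and parts in lockstep.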


def apport.report.Report.add_hooks_info (   self,
  ui,
  package = None,
  srcpackage = None 
)
Run hook script for collecting package specific data.

A hook script needs to be in _hook_dir/<Package>.py or in
_common_hook_dir/*.py and has to contain a function 'add_info(report,
ui)' that takes and modifies a Report, and gets an UserInterface
reference for interactivity.

return True if the hook requested to stop the report filing process,
False otherwise.

Definition at line 727 of file report.py.

00727 
00728     def add_hooks_info(self, ui, package=None, srcpackage=None):
00729         '''Run hook script for collecting package specific data.
00730 
00731         A hook script needs to be in _hook_dir/<Package>.py or in
00732         _common_hook_dir/*.py and has to contain a function 'add_info(report,
00733         ui)' that takes and modifies a Report, and gets an UserInterface
00734         reference for interactivity.
00735 
00736         return True if the hook requested to stop the report filing process,
00737         False otherwise.
00738         '''
00739         symb = {}
00740 
00741         # common hooks
00742         for hook in glob.glob(_common_hook_dir + '/*.py'):
00743             try:
00744                 with open(hook) as fd:
00745                     exec(compile(fd.read(), hook, 'exec'), symb)
00746                 try:
00747                     symb['add_info'](self, ui)
00748                 except TypeError as e:
00749                     if str(e).startswith('add_info()'):
00750                         # older versions of apport did not pass UI, and hooks that
00751                         # do not require it don't need to take it
00752                         symb['add_info'](self)
00753                     else:
00754                         raise
00755             except StopIteration:
00756                 return True
00757             except:
00758                 apport.error('hook %s crashed:', hook)
00759                 traceback.print_exc()
00760                 pass
00761 
00762         # binary package hook
00763         if not package:
00764             package = self.get('Package')
00765         if package:
00766             hook = '%s/%s.py' % (_hook_dir, package.split()[0])
00767             if os.path.exists(hook):
00768                 try:
00769                     with open(hook) as fd:
00770                         exec(compile(fd.read(), hook, 'exec'), symb)
00771                     try:
00772                         symb['add_info'](self, ui)
00773                     except TypeError as e:
00774                         if str(e).startswith('add_info()'):
00775                             # older versions of apport did not pass UI, and hooks that
00776                             # do not require it don't need to take it
00777                             symb['add_info'](self)
00778                         else:
00779                             raise
00780                 except StopIteration:
00781                     return True
00782                 except:
00783                     apport.error('hook %s crashed:', hook)
00784                     traceback.print_exc()
00785                     pass
00786 
00787         # source package hook
00788         if not srcpackage:
00789             srcpackage = self.get('SourcePackage')
00790         if srcpackage:
00791             hook = '%s/source_%s.py' % (_hook_dir, srcpackage.split()[0])
00792             if os.path.exists(hook):
00793                 try:
00794                     with open(hook) as fd:
00795                         exec(compile(fd.read(), hook, 'exec'), symb)
00796                     try:
00797                         symb['add_info'](self, ui)
00798                     except TypeError as e:
00799                         if str(e).startswith('add_info()'):
00800                             # older versions of apport did not pass UI, and hooks that
00801                             # do not require it don't need to take it
00802                             symb['add_info'](self)
00803                         else:
00804                             raise
00805                 except StopIteration:
00806                     return True
00807                 except:
00808                     apport.error('hook %s crashed:', hook)
00809                     traceback.print_exc()
00810                     pass
00811 
00812         return False
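A minimal hook matching the contract described above (a module-level `add_info(report, ui)` in `_hook_dir/<Package>.py`) might look like this; the 'ExampleConfig' field name and its value are made up for illustration, and a plain dict stands in for the dict-like Report object.

```python
# Hypothetical package hook: add_hooks_info exec()s this file and
# calls its add_info(report, ui). Raising StopIteration here would
# abort the report filing process.
def add_info(report, ui):
    report['ExampleConfig'] = 'contents of a package config file'

# Simulate what add_hooks_info does after exec'ing the hook.
report = {}
add_info(report, None)
print(report)
```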


def apport.report.Report.add_kernel_crash_info (   self,
  debugdir = None 
)
Add information from kernel crash.

This needs a VmCore in the Report.

Definition at line 529 of file report.py.

00529 
00530     def add_kernel_crash_info(self, debugdir=None):
00531         '''Add information from kernel crash.
00532 
00533         This needs a VmCore in the Report.
00534         '''
00535         if 'VmCore' not in self:
00536             return
00537         unlink_core = False
00538         ret = False
00539         try:
00540             if hasattr(self['VmCore'], 'find'):
00541                 (fd, core) = tempfile.mkstemp()
00542                 os.write(fd, self['VmCore'])
00543                 os.close(fd)
00544                 unlink_core = True
00545             kver = self['Uname'].split()[1]
00546             command = ['crash',
00547                        '/usr/lib/debug/boot/vmlinux-%s' % kver,
00548                        core,
00549                        ]
00550             try:
00551                 p = subprocess.Popen(command,
00552                                      stdin=subprocess.PIPE,
00553                                      stdout=subprocess.PIPE,
00554                                      stderr=subprocess.STDOUT)
00555             except OSError:
00556                 return False
00557             p.stdin.write('bt -a -f\n')
00558             p.stdin.write('ps\n')
00559             p.stdin.write('runq\n')
00560             p.stdin.write('quit\n')
00561             # FIXME: split it up nicely etc
00562             out = p.stdout.read()
00563             ret = (p.wait() == 0)
00564             if ret:
00565                 self['Stacktrace'] = out
00566         finally:
00567             if unlink_core:
00568                 os.unlink(core)
00569         return ret

def apport.report.Report.add_os_info (   self)
Add operating system information.

This adds:
- DistroRelease: lsb_release -sir output
- Architecture: system architecture in distro specific notation
- Uname: uname -srm output
- NonfreeKernelModules: loaded kernel modules which are not free (if
    there are none, this field will not be present)

Definition at line 274 of file report.py.

00274 
00275     def add_os_info(self):
00276         '''Add operating system information.
00277 
00278         This adds:
00279         - DistroRelease: lsb_release -sir output
00280         - Architecture: system architecture in distro specific notation
00281         - Uname: uname -srm output
00282         - NonfreeKernelModules: loaded kernel modules which are not free (if
00283             there are none, this field will not be present)
00284         '''
00285         p = subprocess.Popen(['lsb_release', '-sir'], stdout=subprocess.PIPE,
00286                              stderr=subprocess.PIPE)
00287         self['DistroRelease'] = p.communicate()[0].decode().strip().replace('\n', ' ')
00288 
00289         u = os.uname()
00290         self['Uname'] = '%s %s %s' % (u[0], u[2], u[4])
00291         self['Architecture'] = packaging.get_system_architecture()
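The Uname field assembly shown above picks sysname, release and machine out of `os.uname()` (the output naturally depends on the running system):

```python
import os

# Build the 'uname -srm'-style string as add_os_info does:
# indices 0, 2, 4 are sysname, release and machine.
u = os.uname()
uname_field = '%s %s %s' % (u[0], u[2], u[4])
print(uname_field)  # e.g. 'Linux 5.15.0-91-generic x86_64'
```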

def apport.report.Report.add_package_info (   self,
  package = None 
)
Add packaging information.

If package is not given, the report must have ExecutablePath.
This adds:
- Package: package name and installed version
- SourcePackage: source package name
- PackageArchitecture: processor architecture this package was built
  for
- Dependencies: package names and versions of all dependencies and
  pre-dependencies; this also checks if the files are unmodified and
  appends a list of all modified files

Definition at line 220 of file report.py.

00220 
00221     def add_package_info(self, package=None):
00222         '''Add packaging information.
00223 
00224         If package is not given, the report must have ExecutablePath.
00225         This adds:
00226         - Package: package name and installed version
00227         - SourcePackage: source package name
00228         - PackageArchitecture: processor architecture this package was built
00229           for
00230         - Dependencies: package names and versions of all dependencies and
00231           pre-dependencies; this also checks if the files are unmodified and
00232           appends a list of all modified files
00233         '''
00234         if not package:
00235             # the kernel does not have an executable path but a package
00236             if not 'ExecutablePath' in self and self['ProblemType'] == 'KernelCrash':
00237                 package = self['Package']
00238             else:
00239                 package = apport.fileutils.find_file_package(self['ExecutablePath'])
00240             if not package:
00241                 return
00242 
00243         try:
00244             version = packaging.get_version(package)
00245         except ValueError:
00246             # package not installed
00247             version = None
00248         self['Package'] = '%s %s%s' % (package, version or '(not installed)',
00249                                        self._customized_package_suffix(package))
00250         if version or 'SourcePackage' not in self:
00251             self['SourcePackage'] = packaging.get_source(package)
00252         if not version:
00253             return
00254 
00255         self['PackageArchitecture'] = packaging.get_architecture(package)
00256 
00257         # get set of all transitive dependencies
00258         dependencies = set([])
00259         _transitive_dependencies(package, dependencies)
00260 
00261         # get dependency versions
00262         self['Dependencies'] = ''
00263         for dep in sorted(dependencies):
00264             try:
00265                 v = packaging.get_version(dep)
00266             except ValueError:
00267                 # can happen with uninstalled alternate dependencies
00268                 continue
00269 
00270             if self['Dependencies']:
00271                 self['Dependencies'] += '\n'
00272             self['Dependencies'] += '%s %s%s' % (
00273                 dep, v, self._customized_package_suffix(dep))
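The dependency loop above accumulates "name version" lines into a single newline-separated field, silently skipping packages whose version lookup fails. A minimal standalone sketch of that accumulation, with a plain mapping as a hypothetical stand-in for `packaging.get_version`:

```python
def format_dependencies(versions):
    """versions: mapping of package name -> version string, or None for
    an uninstalled alternate dependency.  Returns the newline-separated
    'Dependencies' text, sorted by name, skipping uninstalled entries."""
    field = ''
    for dep in sorted(versions):
        v = versions[dep]
        if v is None:
            # corresponds to the ValueError branch: skip uninstalled deps
            continue
        if field:
            field += '\n'
        field += '%s %s' % (dep, v)
    return field
```

The incremental `'\n'`-before-append pattern avoids a trailing newline on the field, the same reason the listing checks `if self['Dependencies']` first.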


def apport.report.Report.add_proc_environ (   self,
  pid = None,
  extraenv = [] 
)
Add environment information.

If pid is not given, it defaults to the process' current pid.

This adds the following fields:
- ProcEnviron: A subset of the process' environment (only some standard
  variables that do not disclose potentially sensitive information, plus
  the ones mentioned in extraenv)

Definition at line 488 of file report.py.

00488 
00489     def add_proc_environ(self, pid=None, extraenv=[]):
00490         '''Add environment information.
00491 
00492         If pid is not given, it defaults to the process' current pid.
00493 
00494         This adds the following fields:
00495         - ProcEnviron: A subset of the process' environment (only some standard
00496           variables that do not disclose potentially sensitive information, plus
00497           the ones mentioned in extraenv)
00498         '''
00499         safe_vars = ['SHELL', 'TERM', 'LANGUAGE', 'LANG', 'LC_CTYPE',
00500                      'LC_COLLATE', 'LC_TIME', 'LC_NUMERIC', 'LC_MONETARY',
00501                      'LC_MESSAGES', 'LC_PAPER', 'LC_NAME', 'LC_ADDRESS',
00502                      'LC_TELEPHONE', 'LC_MEASUREMENT', 'LC_IDENTIFICATION',
00503                      'LOCPATH'] + extraenv
00504 
00505         if not pid:
00506             pid = os.getpid()
00507         pid = str(pid)
00508 
00509         self['ProcEnviron'] = ''
00510         env = _read_file('/proc/' + pid + '/environ').replace('\n', '\\n')
00511         if env.startswith('Error:'):
00512             self['ProcEnviron'] = env
00513         else:
00514             for l in env.split('\0'):
00515                 if l.split('=', 1)[0] in safe_vars:
00516                     if self['ProcEnviron']:
00517                         self['ProcEnviron'] += '\n'
00518                     self['ProcEnviron'] += l
00519                 elif l.startswith('PATH='):
00520                     p = l.split('=', 1)[1]
00521                     if '/home' in p or '/tmp' in p:
00522                         if self['ProcEnviron']:
00523                             self['ProcEnviron'] += '\n'
00524                         self['ProcEnviron'] += 'PATH=(custom, user)'
00525                     elif p != '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games':
00526                         if self['ProcEnviron']:
00527                             self['ProcEnviron'] += '\n'
00528                         self['ProcEnviron'] += 'PATH=(custom, no user)'
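The filtering above keeps only whitelisted variables and reduces PATH to one of two sanitized markers. A standalone sketch of that logic applied to a NUL-separated environ blob (the `safe_vars` list here is abbreviated from the full one in the listing):

```python
def filter_environ(environ_text, extraenv=()):
    """Apply the safe-variable filter from add_proc_environ() to a
    NUL-separated environ blob; returns the ProcEnviron text."""
    # abbreviated subset of the listing's safe_vars
    safe_vars = ['SHELL', 'TERM', 'LANGUAGE', 'LANG', 'LC_CTYPE'] + list(extraenv)
    default_path = '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games'
    result = ''
    for entry in environ_text.split('\0'):
        name = entry.split('=', 1)[0]
        if name in safe_vars:
            result += ('\n' if result else '') + entry
        elif entry.startswith('PATH='):
            p = entry.split('=', 1)[1]
            if '/home' in p or '/tmp' in p:
                result += ('\n' if result else '') + 'PATH=(custom, user)'
            elif p != default_path:
                result += ('\n' if result else '') + 'PATH=(custom, no user)'
    return result
```

Note that a PATH mentioning `/home` or `/tmp` is treated as potentially user-identifying and replaced wholesale, while any other non-default PATH is reported only as customized.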


def apport.report.Report.add_proc_info (   self,
  pid = None,
  extraenv = [] 
)
Add /proc/pid information.

If neither pid nor self.pid are given, it defaults to the process'
current pid and sets self.pid.

This adds the following fields:
- ExecutablePath: /proc/pid/exe contents; if the crashed process is
  interpreted, this contains the script path instead
- InterpreterPath: /proc/pid/exe contents if the crashed process is
  interpreted; otherwise this key does not exist
- ExecutableTimestamp: time stamp of ExecutablePath, for comparing at
  report time
- ProcEnviron: A subset of the process' environment (only some standard
  variables that do not disclose potentially sensitive information, plus
  the ones mentioned in extraenv)
- ProcCmdline: /proc/pid/cmdline contents
- ProcStatus: /proc/pid/status contents
- ProcMaps: /proc/pid/maps contents
- ProcAttrCurrent: /proc/pid/attr/current contents, if not "unconfined"

Definition at line 419 of file report.py.

00419 
00420     def add_proc_info(self, pid=None, extraenv=[]):
00421         '''Add /proc/pid information.
00422 
00423         If neither pid nor self.pid are given, it defaults to the process'
00424         current pid and sets self.pid.
00425 
00426         This adds the following fields:
00427         - ExecutablePath: /proc/pid/exe contents; if the crashed process is
00428           interpreted, this contains the script path instead
00429         - InterpreterPath: /proc/pid/exe contents if the crashed process is
00430           interpreted; otherwise this key does not exist
00431         - ExecutableTimestamp: time stamp of ExecutablePath, for comparing at
00432           report time
00433         - ProcEnviron: A subset of the process' environment (only some standard
00434           variables that do not disclose potentially sensitive information, plus
00435           the ones mentioned in extraenv)
00436         - ProcCmdline: /proc/pid/cmdline contents
00437         - ProcStatus: /proc/pid/status contents
00438         - ProcMaps: /proc/pid/maps contents
00439         - ProcAttrCurrent: /proc/pid/attr/current contents, if not "unconfined"
00440         '''
00441         if not pid:
00442             pid = self.pid or os.getpid()
00443         if not self.pid:
00444             self.pid = int(pid)
00445         pid = str(pid)
00446 
00447         try:
00448             self['ProcCwd'] = os.readlink('/proc/' + pid + '/cwd')
00449         except OSError:
00450             pass
00451         self.add_proc_environ(pid, extraenv)
00452         self['ProcStatus'] = _read_file('/proc/' + pid + '/status')
00453         self['ProcCmdline'] = _read_file('/proc/' + pid + '/cmdline').rstrip('\0')
00454         self['ProcMaps'] = _read_maps(int(pid))
00455         try:
00456             self['ExecutablePath'] = os.readlink('/proc/' + pid + '/exe')
00457         except OSError as e:
00458             if e.errno == errno.ENOENT:
00459                 raise ValueError('invalid process')
00460             else:
00461                 raise
00462         for p in ('rofs', 'rwfs', 'squashmnt', 'persistmnt'):
00463             if self['ExecutablePath'].startswith('/%s/' % p):
00464                 self['ExecutablePath'] = self['ExecutablePath'][len('/%s' % p):]
00465                 break
00466         assert os.path.exists(self['ExecutablePath'])
00467 
00468         # check if we have an interpreted program
00469         self._check_interpreted()
00470 
00471         self['ExecutableTimestamp'] = str(int(os.stat(self['ExecutablePath']).st_mtime))
00472 
00473         # make ProcCmdline ASCII friendly, do shell escaping
00474         self['ProcCmdline'] = self['ProcCmdline'].replace('\\', '\\\\').replace(' ', '\\ ').replace('\0', ' ')
00475 
00476         # grab AppArmor or SELinux context
00477         # If no LSM is loaded, reading will return -EINVAL
00478         try:
00479             # On Linux 2.6.28+, 'current' is world readable, but read() gives
00480             # EPERM; Python 2.5.3+ crashes on that (LP: #314065)
00481             if os.getuid() == 0:
00482                 with open('/proc/' + pid + '/attr/current') as fd:
00483                     val = fd.read().strip()
00484                 if val != 'unconfined':
00485                     self['ProcAttrCurrent'] = val
00486         except (IOError, OSError):
00487             pass
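The ProcCmdline sanitization above turns the NUL-separated `/proc/pid/cmdline` contents into a single shell-escaped line: backslashes are doubled, embedded spaces escaped, and the NUL separators become plain spaces. That one transformation can be isolated as:

```python
def escape_cmdline(raw):
    """Convert NUL-separated /proc/pid/cmdline contents into the
    ASCII-friendly, shell-escaped single line used for ProcCmdline."""
    return raw.replace('\\', '\\\\').replace(' ', '\\ ').replace('\0', ' ')
```

The replacement order matters: backslashes must be doubled first, spaces inside arguments escaped second, and only then can NULs safely become argument separators.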

Here is the call graph for this function:

def problem_report.ProblemReport.add_to_existing (   self,
  reportfile,
  keep_times = False 
) [inherited]
Add this report's data to an already existing report file.

The file will be temporarily chmod'ed to 000 to prevent frontends
from picking up a half-updated report file. If keep_times
is True, then the file's atime and mtime are restored after updating.

Definition at line 402 of file problem_report.py.

00402 
00403     def add_to_existing(self, reportfile, keep_times=False):
00404         '''Add this report's data to an already existing report file.
00405 
00406         The file will be temporarily chmod'ed to 000 to prevent frontends
00407         from picking up a half-updated report file. If keep_times
00408         is True, then the file's atime and mtime are restored after updating.
00409         '''
00410         st = os.stat(reportfile)
00411         try:
00412             f = open(reportfile, 'ab')
00413             os.chmod(reportfile, 0)
00414             self.write(f)
00415             f.close()
00416         finally:
00417             if keep_times:
00418                 os.utime(reportfile, (st.st_atime, st.st_mtime))
00419             os.chmod(reportfile, st.st_mode)
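The stat/chmod-000/restore dance above can be exercised against a scratch file; this sketch reproduces the same ordering (open before chmod, restore times before mode in the `finally` block):

```python
import os
import tempfile

def append_protected(path, data, keep_times=False):
    """Append data to path while it is chmod'ed to 000, then restore
    the original mode (and optionally atime/mtime), as
    add_to_existing() does for report files."""
    st = os.stat(path)
    try:
        f = open(path, 'ab')   # open first, so the write survives chmod 000
        os.chmod(path, 0)
        f.write(data)
        f.close()
    finally:
        if keep_times:
            os.utime(path, (st.st_atime, st.st_mtime))
        os.chmod(path, st.st_mode)

# scratch file to demonstrate on (mkstemp creates it with mode 0600)
fd, demo_path = tempfile.mkstemp()
os.write(fd, b'Key: value\n')
os.close(fd)
append_protected(demo_path, b'More: data\n')
```

Opening the file before dropping its permissions is what lets the append proceed: the already-open descriptor is unaffected by the chmod, while other processes see an unreadable file until the mode is restored.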


def apport.report.Report.add_user_info (   self)
Add information about the user.


This adds:
- UserGroups: system groups the user is in

Definition at line 292 of file report.py.

00292 
00293     def add_user_info(self):
00294         '''Add information about the user.
00295 
00296         This adds:
00297         - UserGroups: system groups the user is in
00298         '''
00299         user = pwd.getpwuid(os.getuid()).pw_name
00300         groups = [name for name, p, gid, memb in grp.getgrall()
00301                   if user in memb and gid < 1000]
00302         groups.sort()
00303         self['UserGroups'] = ' '.join(groups)
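The group scan above keeps only system groups (gid below 1000) that the user belongs to. A sketch of the same filter against sample data, so it does not depend on the live `grp` database (the tuples below are hypothetical):

```python
def system_groups(user, all_groups):
    """all_groups: iterable of (name, gid, members) tuples.  Returns
    the space-separated, sorted names of system groups (gid < 1000)
    containing user -- the UserGroups field format."""
    groups = [name for name, gid, memb in all_groups
              if user in memb and gid < 1000]
    groups.sort()
    return ' '.join(groups)
```

The gid cutoff excludes the user's own primary group and other ordinary user groups, which usually start at 1000 on Debian-style systems.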

def apport.report.Report.anonymize (   self)
Remove user identifying strings from the report.

This particularly removes the user name, host name, and IPs
from attributes which contain data read from the environment, and
removes the ProcCwd attribute completely.

Definition at line 1294 of file report.py.

01294 
01295     def anonymize(self):
01296         '''Remove user identifying strings from the report.
01297 
01298         This particularly removes the user name, host name, and IPs
01299         from attributes which contain data read from the environment, and
01300         removes the ProcCwd attribute completely.
01301         '''
01302         replacements = []
01303         if (os.getuid() > 0):
01304             # do not replace "root"
01305             p = pwd.getpwuid(os.getuid())
01306             if len(p[0]) >= 2:
01307                 replacements.append((re.compile('\\b%s\\b' % p[0]), 'username'))
01308             replacements.append((re.compile('\\b%s\\b' % p[5]), '/home/username'))
01309 
01310             for s in p[4].split(','):
01311                 s = s.strip()
01312                 if len(s) > 2:
01313                     replacements.append((re.compile('\\b%s\\b' % s), 'User Name'))
01314 
01315         hostname = os.uname()[1]
01316         if len(hostname) >= 2:
01317             replacements.append((re.compile('\\b%s\\b' % hostname), 'hostname'))
01318 
01319         try:
01320             del self['ProcCwd']
01321         except KeyError:
01322             pass
01323 
01324         for k in self:
01325             is_proc_field = k.startswith('Proc') and not k in [
01326                 'ProcCpuinfo', 'ProcMaps', 'ProcStatus', 'ProcInterrupts', 'ProcModules']
01327             if is_proc_field or 'Stacktrace' in k or k in ['Traceback', 'PythonArgs', 'Title']:
01328                 if not hasattr(self[k], 'isspace'):
01329                     continue
01330                 for (pattern, repl) in replacements:
01331                     if type(self[k]) == bytes:
01332                         self[k] = pattern.sub(repl, self[k].decode('UTF-8', errors='replace')).encode('UTF-8')
01333                     else:
01334                         self[k] = pattern.sub(repl, self[k])
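The core of the anonymization above is a word-boundary regex substitution of user-identifying strings. A sketch of that substitution for a single user name (adding `re.escape`, a hardening assumption not present in the listing, in case the name contains regex metacharacters):

```python
import re

def anonymize_text(text, username):
    """Replace occurrences of the user name on word boundaries with a
    placeholder, as anonymize() does for Proc*, Stacktrace, Traceback,
    PythonArgs and Title fields."""
    pattern = re.compile('\\b%s\\b' % re.escape(username))
    return pattern.sub('username', text)
```

The `\b` anchors keep the substitution from mangling longer words that merely contain the user name as a substring.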

def apport.report.Report.check_ignored (   self)
Check if current report should not be presented.

Reports can be suppressed by per-user blacklisting in
~/.apport-ignore.xml (in the real UID's home) and
/etc/apport/blacklist.d/. For environments where you are only
interested in crashes of some programs, you can also create a whitelist
in /etc/apport/whitelist.d/, everything which does not match gets
ignored then.

This requires the ExecutablePath attribute. Throws a ValueError if the
file has an invalid format.

Definition at line 880 of file report.py.

00880 
00881     def check_ignored(self):
00882         '''Check if current report should not be presented.
00883 
00884         Reports can be suppressed by per-user blacklisting in
00885         ~/.apport-ignore.xml (in the real UID's home) and
00886         /etc/apport/blacklist.d/. For environments where you are only
00887         interested in crashes of some programs, you can also create a whitelist
00888         in /etc/apport/whitelist.d/, everything which does not match gets
00889         ignored then.
00890 
00891         This requires the ExecutablePath attribute. Throws a ValueError if the
00892         file has an invalid format.
00893         '''
00894         assert 'ExecutablePath' in self
00895 
00896         # check blacklist
00897         try:
00898             for f in os.listdir(_blacklist_dir):
00899                 try:
00900                     with open(os.path.join(_blacklist_dir, f)) as fd:
00901                         for line in fd:
00902                             if line.strip() == self['ExecutablePath']:
00903                                 return True
00904                 except IOError:
00905                     continue
00906         except OSError:
00907             pass
00908 
00909         # check whitelist
00910         try:
00911             whitelist = set()
00912             for f in os.listdir(_whitelist_dir):
00913                 try:
00914                     with open(os.path.join(_whitelist_dir, f)) as fd:
00915                         for line in fd:
00916                             whitelist.add(line.strip())
00917                 except IOError:
00918                     continue
00919 
00920             if whitelist and self['ExecutablePath'] not in whitelist:
00921                 return True
00922         except OSError:
00923             pass
00924 
00925         dom = self._get_ignore_dom()
00926 
00927         try:
00928             cur_mtime = int(os.stat(self['ExecutablePath']).st_mtime)
00929         except OSError:
00930             # if it does not exist any more, do nothing
00931             return False
00932 
00933         # search for existing entry and update it
00934         for ignore in dom.getElementsByTagName('ignore'):
00935             if ignore.getAttribute('program') == self['ExecutablePath']:
00936                 if float(ignore.getAttribute('mtime')) >= cur_mtime:
00937                     return True
00938 
00939         return False
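The blacklist scan above reads every file in a directory and ignores the report if any line matches the executable path, tolerating unreadable files and a missing directory. That scan can be sketched and exercised against a temporary directory:

```python
import os
import tempfile

def is_ignored(executable_path, blacklist_dir):
    """Return True if any file in blacklist_dir lists executable_path,
    mirroring the blacklist loop in check_ignored()."""
    try:
        for f in os.listdir(blacklist_dir):
            try:
                with open(os.path.join(blacklist_dir, f)) as fd:
                    for line in fd:
                        if line.strip() == executable_path:
                            return True
            except IOError:
                continue          # unreadable entry: skip, like the listing
    except OSError:
        pass                      # missing directory: nothing is blacklisted
    return False

# demonstrate with a temporary blacklist directory
d = tempfile.mkdtemp()
with open(os.path.join(d, 'example'), 'w') as fd:
    fd.write('/usr/bin/noisy-app\n')
```

The whitelist check in the listing is the mirror image: collect all lines into a set, and ignore the report when the set is non-empty and does not contain the executable path.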


def apport.report.Report.crash_signature (   self)
Get a signature string for a crash.

This is suitable for identifying duplicates.

For signal crashes this is the concatenation of ExecutablePath, Signal
number, and StacktraceTop function names, separated by a colon. If
StacktraceTop has unknown functions or the report lacks any of those
fields, return None. In this case, you can use
crash_signature_addresses() to get a less precise duplicate signature
based on addresses instead of symbol names.

For assertion failures, it is the concatenation of ExecutablePath
and assertion message, separated by colons.

For Python crashes, this concatenates the ExecutablePath, exception
name, and Traceback function names, again separated by a colon.

Definition at line 1151 of file report.py.

01151 
01152     def crash_signature(self):
01153         '''Get a signature string for a crash.
01154 
01155         This is suitable for identifying duplicates.
01156 
01157         For signal crashes this is the concatenation of ExecutablePath, Signal
01158         number, and StacktraceTop function names, separated by a colon. If
01159         StacktraceTop has unknown functions or the report lacks any of those
01160         fields, return None. In this case, you can use
01161         crash_signature_addresses() to get a less precise duplicate signature
01162         based on addresses instead of symbol names.
01163 
01164         For assertion failures, it is the concatenation of ExecutablePath
01165         and assertion message, separated by colons.
01166 
01167         For Python crashes, this concatenates the ExecutablePath, exception
01168         name, and Traceback function names, again separated by a colon.
01169         '''
01170         if 'ExecutablePath' not in self and not self['ProblemType'] == 'KernelCrash':
01171             return None
01172 
01173         # kernel crash
01174         if 'Stacktrace' in self and self['ProblemType'] == 'KernelCrash':
01175             sig = 'kernel'
01176             regex = re.compile('^\s*\#\d+\s\[\w+\]\s(\w+)')
01177             for line in self['Stacktrace'].splitlines():
01178                 m = regex.match(line)
01179                 if m:
01180                     sig += ':' + (m.group(1))
01181             return sig
01182 
01183         # assertion failures
01184         if self.get('Signal') == '6' and 'AssertionMessage' in self:
01185             sig = self['ExecutablePath'] + ':' + self['AssertionMessage']
01186             # filter out addresses, to help match duplicates more sanely
01187             return re.sub(r'0x[0-9a-f]{6,}', 'ADDR', sig)
01188 
01189         # signal crashes
01190         if 'StacktraceTop' in self and 'Signal' in self:
01191             sig = '%s:%s' % (self['ExecutablePath'], self['Signal'])
01192             bt_fn_re = re.compile('^(?:([\w:~]+).*|(<signal handler called>)\s*)$')
01193 
01194             lines = self['StacktraceTop'].splitlines()
01195             if len(lines) < 2:
01196                 return None
01197 
01198             for line in lines:
01199                 m = bt_fn_re.match(line)
01200                 if m:
01201                     sig += ':' + (m.group(1) or m.group(2))
01202                 else:
01203                     # this will also catch ??
01204                     return None
01205             return sig
01206 
01207         # Python crashes
01208         if 'Traceback' in self:
01209             trace = self['Traceback'].splitlines()
01210 
01211             sig = ''
01212             if len(trace) == 1:
01213                 # sometimes, Python exceptions do not have file references
01214                 m = re.match('(\w+): ', trace[0])
01215                 if m:
01216                     return self['ExecutablePath'] + ':' + m.group(1)
01217                 else:
01218                     return None
01219             elif len(trace) < 3:
01220                 return None
01221 
01222             loc_re = re.compile('^\s+File "([^"]+).*line (\d+).*\sin (.*)$')
01223             for l in trace:
01224                 m = loc_re.match(l)
01225                 if m:
01226                     # if we have a function name, use this; for a a crash
01227                     # outside of a function/method, fall back to the source
01228                     # file location
01229                     if m.group(3) != '<module>':
01230                         sig += ':' + m.group(3)
01231                     else:
01232                         sig += ':%s@%s' % (m.group(1), m.group(2))
01233 
01234             return self['ExecutablePath'] + ':' + trace[-1].split(':')[0] + sig
01235 
01236         return None
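The signal-crash branch above builds the signature by matching each StacktraceTop line against a function-name pattern and bailing out on any unmatched (e.g. `??`) frame. That branch in isolation, using the same regex as the listing:

```python
import re

def signature_from_stacktrace_top(executable, signal, stacktrace_top):
    """Build the ExecutablePath:Signal:fn1:fn2:... signature used for
    signal crashes; returns None for unknown ('??') frames or a trace
    shorter than two lines, as crash_signature() does."""
    bt_fn_re = re.compile('^(?:([\\w:~]+).*|(<signal handler called>)\\s*)$')
    sig = '%s:%s' % (executable, signal)
    lines = stacktrace_top.splitlines()
    if len(lines) < 2:
        return None
    for line in lines:
        m = bt_fn_re.match(line)
        if m:
            sig += ':' + (m.group(1) or m.group(2))
        else:
            # unmatched frame, e.g. '?? ()': no usable signature
            return None
    return sig
```

Because every frame must resolve to a symbol, a single stripped library on the stack is enough to make this return None, which is when `crash_signature_addresses()` becomes the fallback.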

def apport.report.Report.crash_signature_addresses (   self)
Compute heuristic duplicate signature for a signal crash.

This should be used if crash_signature() fails, i. e. Stacktrace does
not have enough symbols.

This approach only uses addresses in the stack trace and does not rely
on symbol resolution. As we can't unwind these stack traces, we cannot
limit them to the top five frames and thus will end up with several or
many different signatures for a particular crash. But these can be
computed and synchronously checked with a crash database at the client
side, which avoids having to upload and process the full report. So on
the server-side crash database we will only have to deal with all the
equivalence classes (i. e. same crash producing a number of possible
signatures) instead of every single report.

Return None when signature cannot be determined.

Definition at line 1237 of file report.py.

01237 
01238     def crash_signature_addresses(self):
01239         '''Compute heuristic duplicate signature for a signal crash.
01240 
01241         This should be used if crash_signature() fails, i. e. Stacktrace does
01242         not have enough symbols.
01243 
01244         This approach only uses addresses in the stack trace and does not rely
01245         on symbol resolution. As we can't unwind these stack traces, we cannot
01246         limit them to the top five frames and thus will end up with several or
01247         many different signatures for a particular crash. But these can be
01248         computed and synchronously checked with a crash database at the client
01249         side, which avoids having to upload and process the full report. So on
01250         the server-side crash database we will only have to deal with all the
01251         equivalence classes (i. e. same crash producing a number of possible
01252         signatures) instead of every single report.
01253 
01254         Return None when signature cannot be determined.
01255         '''
01256         if not 'ProcMaps' in self or not 'Stacktrace' in self or not 'Signal' in self:
01257             return None
01258 
01259         stack = []
01260         failed = 0
01261         for line in self['Stacktrace'].splitlines():
01262             if line.startswith('#'):
01263                 addr = line.split()[1]
01264                 if not addr.startswith('0x'):
01265                     continue
01266                 addr = int(addr, 16)  # we do want to know about ValueErrors here, so don't catch
01267                 offset = self._address_to_offset(addr)
01268                 if offset:
01269                     # avoid ':' in ELF paths, we use that as separator
01270                     stack.append(offset.replace(':', '..'))
01271                 else:
01272                     failed += 1
01273 
01274             # stack unwinding chops off ~ 5 functions, and we need some more
01275             # accuracy because we do not have symbols; but beyond a depth of 15
01276             # we get too much noise, so we can abort there
01277             if len(stack) >= 15:
01278                 break
01279 
01280         # we only accept a small minority (< 20%) of failed resolutions, otherwise we
01281         # discard
01282         if failed > 0 and len(stack) / failed < 4:
01283             return None
01284 
01285         # we also discard if the trace is too short
01286         if (failed == 0 and len(stack) < 3) or (failed > 0 and len(stack) < 6):
01287             return None
01288 
01289         return '%s:%s:%s:%s' % (
01290             self['ExecutablePath'],
01291             self['Signal'],
01292             os.uname()[4],
01293             ':'.join(stack))
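The acceptance thresholds at the end of the listing decide whether the collected address stack is trustworthy: at most a small minority of frames may fail to resolve, and the resolved trace must not be too short. Those two checks in isolation:

```python
def accept_address_stack(stack, failed):
    """Apply the acceptance thresholds from crash_signature_addresses():
    reject when more than ~20% of frames failed to resolve to an
    ELF offset, or when the resolved trace is too short."""
    # len(stack)/failed < 4 means failed > ~20% of all frames
    if failed > 0 and len(stack) / failed < 4:
        return False
    # a clean trace needs >= 3 frames; a partly-failed one needs >= 6
    if (failed == 0 and len(stack) < 3) or (failed > 0 and len(stack) < 6):
        return False
    return True
```

The stricter minimum length for partly-failed traces compensates for the lower confidence of each remaining frame.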


def problem_report.ProblemReport.has_removed_fields (   self) [inherited]
Check if the report has any keys which were not loaded.

This could happen when using binary=False in load().

Definition at line 191 of file problem_report.py.

00191 
00192     def has_removed_fields(self):
00193         '''Check if the report has any keys which were not loaded.
00194 
00195         This could happen when using binary=False in load().
00196         '''
00197         return ('' in self.values())

def apport.report.Report.has_useful_stacktrace (   self)
Check whether StackTrace can be considered 'useful'.

The current heuristic is to consider it useless if it either is shorter
than three lines and has any unknown function, or for longer traces, a
minority of known functions.

Definition at line 980 of file report.py.

00980 
00981     def has_useful_stacktrace(self):
00982         '''Check whether StackTrace can be considered 'useful'.
00983 
00984         The current heuristic is to consider it useless if it either is shorter
00985         than three lines and has any unknown function, or for longer traces, a
00986         minority of known functions.
00987         '''
00988         if not self.get('StacktraceTop'):
00989             return False
00990 
00991         unknown_fn = [f.startswith('??') for f in self['StacktraceTop'].splitlines()]
00992 
00993         if len(unknown_fn) < 3:
00994             return unknown_fn.count(True) == 0
00995 
00996         return unknown_fn.count(True) <= len(unknown_fn) / 2.
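The heuristic above marks each StacktraceTop frame as known or unknown (`??`) and applies a length-dependent tolerance. A standalone sketch of the same decision:

```python
def useful_stacktrace_top(stacktrace_top):
    """The 'useful' heuristic from has_useful_stacktrace(): a short
    trace (< 3 frames) must have no unknown ('??') functions; a longer
    one may have at most half unknown."""
    if not stacktrace_top:
        return False
    unknown_fn = [f.startswith('??') for f in stacktrace_top.splitlines()]
    if len(unknown_fn) < 3:
        return unknown_fn.count(True) == 0
    return unknown_fn.count(True) <= len(unknown_fn) / 2.
```

A two-frame trace with one `??` is thus useless, while a six-frame trace with three `??` frames still passes.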

def problem_report.ProblemReport.load (   self,
  file,
  binary = True 
) [inherited]
Initialize problem report from a file-like object.

If binary is False, binary data is not loaded; the dictionary key is
created, but its value will be an empty string. If it is True, it is
transparently uncompressed and available as dictionary byte array values.
If binary is 'compressed', the compressed value is retained, and the
dictionary value will be a CompressedValue object. This is useful if
the compressed value is still useful (to avoid recompression if the
file needs to be written back).

file needs to be opened in binary mode.

Files are in RFC822 format.

Definition at line 109 of file problem_report.py.

00109 
00110     def load(self, file, binary=True):
00111         '''Initialize problem report from a file-like object.
00112 
00113         If binary is False, binary data is not loaded; the dictionary key is
00114         created, but its value will be an empty string. If it is True, it is
00115         transparently uncompressed and available as dictionary byte array values.
00116         If binary is 'compressed', the compressed value is retained, and the
00117         dictionary value will be a CompressedValue object. This is useful if
00118         the compressed value is still useful (to avoid recompression if the
00119         file needs to be written back).
00120 
00121         file needs to be opened in binary mode.
00122 
00123         Files are in RFC822 format.
00124         '''
00125         self._assert_bin_mode(file)
00126         self.data.clear()
00127         key = None
00128         value = None
00129         b64_block = False
00130         bd = None
00131         for line in file:
00132             # continuation line
00133             if line.startswith(b' '):
00134                 if b64_block and not binary:
00135                     continue
00136                 assert (key is not None and value is not None)
00137                 if b64_block:
00138                     l = base64.b64decode(line)
00139                     if bd:
00140                         value += bd.decompress(l)
00141                     else:
00142                         if binary == 'compressed':
00143                             # check gzip header; if absent, we have legacy zlib
00144                             # data
00145                             if value.gzipvalue == b'' and not l.startswith(b'\037\213\010'):
00146                                 value.legacy_zlib = True
00147                             value.gzipvalue += l
00148                         else:
00149                             # lazy initialization of bd
00150                             # skip gzip header, if present
00151                             if l.startswith(b'\037\213\010'):
00152                                 bd = zlib.decompressobj(-zlib.MAX_WBITS)
00153                                 value = bd.decompress(self._strip_gzip_header(l))
00154                             else:
00155                                 # legacy zlib-only format used default block
00156                                 # size
00157                                 bd = zlib.decompressobj()
00158                                 value += bd.decompress(l)
00159                 else:
00160                     if len(value) > 0:
00161                         value += b'\n'
00162                     if line.endswith(b'\n'):
00163                         value += line[1:-1]
00164                     else:
00165                         value += line[1:]
00166             else:
00167                 if b64_block:
00168                     if bd:
00169                         value += bd.flush()
00170                     b64_block = False
00171                     bd = None
00172                 if key:
00173                     assert value is not None
00174                     self.data[key] = self._try_unicode(value)
00175                 (key, value) = line.split(b':', 1)
00176                 if not _python2:
00177                     key = key.decode('ASCII')
00178                 value = value.strip()
00179                 if value == b'base64':
00180                     if binary == 'compressed':
00181                         value = CompressedValue(key.encode())
00182                         value.gzipvalue = b''
00183                     else:
00184                         value = b''
00185                     b64_block = True
00186 
00187         if key is not None:
00188             self.data[key] = self._try_unicode(value)
00189 
00190         self.old_keys = set(self.data.keys())
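Ignoring the base64/compression branches, the core of load() is parsing an RFC822-style format: `Key: value` lines, with continuation lines marked by a leading space. That text-only core can be sketched as (assuming well-formed input, i.e. no continuation line before the first key):

```python
def parse_report_text(text):
    """Parse 'Key: value' lines with space-indented continuation lines
    into a dict -- the non-binary core of ProblemReport.load()."""
    data = {}
    key = None
    value = None
    for line in text.splitlines():
        if line.startswith(' '):
            # continuation line: append without the leading space
            if value:
                value += '\n'
            value += line[1:]
        else:
            if key is not None:
                data[key] = value
            key, value = line.split(':', 1)
            value = value.strip()
    if key is not None:
        data[key] = value          # flush the final key, as the listing does
    return data
```

A multi-line value is therefore written with an empty first line (`Traceback:`) followed by one indented line per value line, which is exactly the shape `write()` produces.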


def apport.report.Report.mark_ignore (   self)
Ignore future crashes of this executable.

Add an ignore list entry for this report to ~/.apport-ignore.xml, so
that future reports for this ExecutablePath are not presented to the
user any more.

Throws a ValueError if the file already exists and has an invalid
format.

Definition at line 940 of file report.py.

00940 
00941     def mark_ignore(self):
00942         '''Ignore future crashes of this executable.
00943 
00944         Add an ignore list entry for this report to ~/.apport-ignore.xml, so
00945         that future reports for this ExecutablePath are not presented to the
00946         user any more.
00947 
00948         Throws a ValueError if the file already exists and has an invalid
00949         format.
00950         '''
00951         assert 'ExecutablePath' in self
00952 
00953         dom = self._get_ignore_dom()
00954         try:
00955             mtime = str(int(os.stat(self['ExecutablePath']).st_mtime))
00956         except OSError as e:
00957             # file went away underneath us, ignore
00958             if e.errno == errno.ENOENT:
00959                 return
00960             else:
00961                 raise
00962 
00963         # search for existing entry and update it
00964         for ignore in dom.getElementsByTagName('ignore'):
00965             if ignore.getAttribute('program') == self['ExecutablePath']:
00966                 ignore.setAttribute('mtime', mtime)
00967                 break
00968         else:
00969             # no entry exists yet, create a new ignore node
00970             e = dom.createElement('ignore')
00971             e.setAttribute('program', self['ExecutablePath'])
00972             e.setAttribute('mtime', mtime)
00973             dom.documentElement.appendChild(e)
00974 
00975         # write back file
00976         with open(os.path.expanduser(_ignore_file), 'w') as fd:
00977             dom.writexml(fd, addindent='  ', newl='\n')
00978 
00979         dom.unlink()
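The ignore file maintained above has a simple XML structure. A sketch of building one entry with xml.dom.minidom (the root element name, executable path, and mtime are illustrative):

```python
import xml.dom.minidom

# Build a minimal ignore document like the one mark_ignore() maintains.
impl = xml.dom.minidom.getDOMImplementation()
dom = impl.createDocument(None, 'apport', None)  # root element name is illustrative
entry = dom.createElement('ignore')
entry.setAttribute('program', '/usr/bin/example')  # illustrative executable path
entry.setAttribute('mtime', '1234567890')          # illustrative mtime stamp
dom.documentElement.appendChild(entry)
xml_text = dom.documentElement.toxml()
```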

def problem_report.ProblemReport.new_keys (   self) [inherited]
Return newly added keys.

Return the set of keys which have been added to the report since it
was constructed or loaded.

Definition at line 571 of file problem_report.py.

00571 
00572     def new_keys(self):
00573         '''Return newly added keys.
00574 
00575         Return the set of keys which have been added to the report since it
00576         was constructed or loaded.
00577         '''
00578         return set(self.data.keys()) - self.old_keys
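In practice new_keys() is what lets callers write only freshly collected data, e.g. via write(only_new=True). A stand-alone sketch of the bookkeeping it relies on (MiniReport is an illustrative stand-in for ProblemReport):

```python
class MiniReport:
    # Illustrative stand-in for the data/old_keys bookkeeping above.
    def __init__(self):
        self.data = {'ProblemType': 'Crash'}
        # load()/__init__() snapshot the keys present at that time
        self.old_keys = set(self.data.keys())

    def new_keys(self):
        # keys added since the snapshot
        return set(self.data.keys()) - self.old_keys

r = MiniReport()
r.data['Stacktrace'] = '...'
added = r.new_keys()
```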

def apport.report.Report.obsolete_packages (   self)
Return list of obsolete packages in Package and Dependencies.

Definition at line 1138 of file report.py.

01138 
01139     def obsolete_packages(self):
01140         '''Return list of obsolete packages in Package and Dependencies.'''
01141 
01142         obsolete = []
01143         for l in (self.get('Package', '') + '\n' + self.get('Dependencies', '')).splitlines():
01144             if not l:
01145                 continue
01146             pkg, ver = l.split()[:2]
01147             avail = packaging.get_available_version(pkg)
01148             if ver is not None and ver != 'None' and avail is not None and packaging.compare_versions(ver, avail) < 0:
01149                 obsolete.append(pkg)
01150         return obsolete
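The loop above can be exercised in isolation. In this sketch a plain dict replaces the packaging backend, and naive string comparison stands in for packaging.compare_versions(), so it is only valid for versions where lexicographic order matches version order:

```python
def find_obsolete(package_lines, available):
    # Mirrors obsolete_packages(): a package is obsolete if a newer
    # version than the installed one is available. String comparison
    # stands in for packaging.compare_versions() here (illustrative).
    obsolete = []
    for line in package_lines.splitlines():
        if not line:
            continue
        pkg, ver = line.split()[:2]
        avail = available.get(pkg)
        if ver != 'None' and avail is not None and ver < avail:
            obsolete.append(pkg)
    return obsolete

stale = find_obsolete('bash 4.1\ncoreutils 8.5',
                      {'bash': '4.2', 'coreutils': '8.5'})
```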

def apport.report.Report.search_bug_patterns (   self,
  url 
)
Check bug patterns loaded from the specified url.

Return bug URL on match, or None otherwise.

The url must refer to a valid XML document with the following syntax:
root element := <patterns>
patterns := <pattern url="http://bug.url"> *
pattern := <re key="report_key">regular expression*</re> +

For example:
<?xml version="1.0"?>
<patterns>
    <pattern url="http://bugtracker.net/bugs/1">
        <re key="Foo">ba.*r</re>
    </pattern>
    <pattern url="http://bugtracker.net/bugs/2">
        <re key="Package">^\S* 1-2$</re> <!-- test for a particular version -->
        <re key="Foo">write_(hello|goodbye)</re>
    </pattern>
</patterns>

Definition at line 813 of file report.py.

00813 
00814     def search_bug_patterns(self, url):
00815         '''Check bug patterns loaded from the specified url.
00816 
00817         Return bug URL on match, or None otherwise.
00818 
00819         The url must refer to a valid XML document with the following syntax:
00820         root element := <patterns>
00821         patterns := <pattern url="http://bug.url"> *
00822         pattern := <re key="report_key">regular expression*</re> +
00823 
00824         For example:
00825         <?xml version="1.0"?>
00826         <patterns>
00827             <pattern url="http://bugtracker.net/bugs/1">
00828                 <re key="Foo">ba.*r</re>
00829             </pattern>
00830             <pattern url="http://bugtracker.net/bugs/2">
00831                 <re key="Package">^\S* 1-2$</re> <!-- test for a particular version -->
00832                 <re key="Foo">write_(hello|goodbye)</re>
00833             </pattern>
00834         </patterns>
00835         '''
00836         # some distros might not want to support these
00837         if not url:
00838             return
00839 
00840         try:
00841             f = urlopen(url)
00842             patterns = f.read().decode('UTF-8', errors='replace')
00843             f.close()
00844         except (IOError, URLError):
00845             # doesn't exist or failed to load
00846             return
00847 
00848         if '<title>404 Not Found' in patterns:
00849             return
00850 
00851         url = _check_bug_patterns(self, patterns)
00852         if url:
00853             return url
00854 
00855         return None
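The pattern syntax documented above can be tried stand-alone. A simplified matcher (URL fetching and error handling omitted; it assumes, as the syntax suggests, that every &lt;re&gt; element of a pattern must match):

```python
import re
import xml.dom.minidom

def match_patterns(report, patterns_xml):
    # A pattern applies when all of its <re> elements match the
    # corresponding report keys; return its bug URL, else None.
    dom = xml.dom.minidom.parseString(patterns_xml)
    for pattern in dom.getElementsByTagName('pattern'):
        res = pattern.getElementsByTagName('re')
        if res and all(
                r.getAttribute('key') in report and
                re.search(r.firstChild.nodeValue, report[r.getAttribute('key')])
                for r in res):
            return pattern.getAttribute('url')
    return None

patterns = '''<?xml version="1.0"?>
<patterns>
  <pattern url="http://bugtracker.net/bugs/1">
    <re key="Foo">ba.*r</re>
  </pattern>
</patterns>'''
hit = match_patterns({'Foo': 'bazaar'}, patterns)
miss = match_patterns({'Foo': 'nothing'}, patterns)
```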

def apport.report.Report.stacktrace_top_function (   self)
Return topmost function in StacktraceTop.

Definition at line 997 of file report.py.

00997 
00998     def stacktrace_top_function(self):
00999         '''Return topmost function in StacktraceTop'''
01000 
01001         for l in self.get('StacktraceTop', '').splitlines():
01002             fname = l.split('(')[0].strip()
01003             if fname != '??':
01004                 return fname
01005 
01006         return None
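The parsing above can be tried in isolation. StacktraceTop lines typically look like 'function_name (args...)', with '??' for frames gdb could not resolve (the example frames below are illustrative):

```python
def top_function(stacktrace_top):
    # Return the first frame whose function name could be resolved,
    # mirroring stacktrace_top_function().
    for line in stacktrace_top.splitlines():
        fname = line.split('(')[0].strip()
        if fname != '??':
            return fname
    return None

top = top_function('?? ()\ng_main_loop_run ()\nmain ()')
```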

def apport.report.Report.standard_title (   self)
Create an appropriate title for a crash database entry.

This contains the topmost function name from the stack trace and the
signal (for signal crashes) or the Python exception (for unhandled
Python exceptions).

Return None if the report is not a crash or a default title could not
be generated.

Definition at line 1007 of file report.py.

01007 
01008     def standard_title(self):
01009         '''Create an appropriate title for a crash database entry.
01010 
01011         This contains the topmost function name from the stack trace and the
01012         signal (for signal crashes) or the Python exception (for unhandled
01013         Python exceptions).
01014 
01015         Return None if the report is not a crash or a default title could not
01016         be generated.
01017         '''
01018         # assertion failure
01019         if self.get('Signal') == '6' and \
01020                 'ExecutablePath' in self and \
01021                 'AssertionMessage' in self:
01022             return '%s assert failure: %s' % (
01023                 os.path.basename(self['ExecutablePath']),
01024                 self['AssertionMessage'])
01025 
01026         # signal crash
01027         if 'Signal' in self and 'ExecutablePath' in self and 'StacktraceTop' in self:
01028 
01029             signal_names = {
01030                 '4': 'SIGILL',
01031                 '6': 'SIGABRT',
01032                 '8': 'SIGFPE',
01033                 '11': 'SIGSEGV',
01034                 '13': 'SIGPIPE'}
01035 
01036             fn = self.stacktrace_top_function()
01037             if fn:
01038                 fn = ' in %s()' % fn
01039             else:
01040                 fn = ''
01041 
01042             arch_mismatch = ''
01043             if 'Architecture' in self and 'PackageArchitecture' in self and self['Architecture'] != self['PackageArchitecture'] and self['PackageArchitecture'] != 'all':
01044                 arch_mismatch = ' [non-native %s package]' % self['PackageArchitecture']
01045 
01046             return '%s crashed with %s%s%s' % (
01047                 os.path.basename(self['ExecutablePath']),
01048                 signal_names.get(self.get('Signal'), 'signal ' + self.get('Signal')),
01049                 fn, arch_mismatch
01050             )
01051 
01052         # Python exception
01053         if 'Traceback' in self and 'ExecutablePath' in self:
01054 
01055             trace = self['Traceback'].splitlines()
01056 
01057             if len(trace) < 1:
01058                 return None
01059             if len(trace) < 3:
01060                 return '%s crashed with %s' % (
01061                     os.path.basename(self['ExecutablePath']),
01062                     trace[0])
01063 
01064             trace_re = re.compile('^\s*File\s*"(\S+)".* in (.+)$')
01065             i = len(trace) - 1
01066             function = 'unknown'
01067             while i >= 0:
01068                 m = trace_re.match(trace[i])
01069                 if m:
01070                     module_path = m.group(1)
01071                     function = m.group(2)
01072                     break
01073                 i -= 1
01074 
01075             path = os.path.basename(self['ExecutablePath'])
01076             last_line = trace[-1]
01077             exception = last_line.split(':')[0]
01078             m = re.match('^%s: (.+)$' % re.escape(exception), last_line)
01079             if m:
01080                 message = m.group(1)
01081             else:
01082                 message = None
01083 
01084             if function == '<module>':
01085                 if module_path == self['ExecutablePath']:
01086                     context = '__main__'
01087                 else:
01088                     # Maybe use os.path.basename?
01089                     context = module_path
01090             else:
01091                 context = '%s()' % function
01092 
01093             title = '%s crashed with %s in %s' % (
01094                 path,
01095                 exception,
01096                 context
01097             )
01098 
01099             if message:
01100                 title += ': %s' % message
01101 
01102             return title
01103 
01104         # package problem
01105         if self.get('ProblemType') == 'Package' and 'Package' in self:
01106 
01107             title = 'package %s failed to install/upgrade' % \
01108                 self['Package']
01109             if self.get('ErrorMessage'):
01110                 title += ': ' + self['ErrorMessage'].splitlines()[-1]
01111 
01112             return title
01113 
01114         if self.get('ProblemType') == 'KernelOops' and 'OopsText' in self:
01115 
01116             oops = self['OopsText']
01117             if oops.startswith('------------[ cut here ]------------'):
01118                 title = oops.split('\n', 2)[1]
01119             else:
01120                 title = oops.split('\n', 1)[0]
01121 
01122             return title
01123 
01124         if self.get('ProblemType') == 'KernelOops' and 'Failure' in self:
01125             # Title the report with suspend or hibernate as appropriate,
01126             # and mention any non-free modules loaded up front.
01127             title = ''
01128             if 'MachineType' in self:
01129                 title += '[' + self['MachineType'] + '] '
01130             title += self['Failure'] + ' failure'
01131             if 'NonfreeKernelModules' in self:
01132                 title += ' [non-free: ' + self['NonfreeKernelModules'] + ']'
01133             title += '\n'
01134 
01135             return title
01136 
01137         return None
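The "signal crash" branch above can be illustrated stand-alone. In this sketch the report is a plain dict, and the 'TopFunction' key is an illustrative stand-in for the result of stacktrace_top_function():

```python
import os

# Signal number to name map, copied from the listing above.
SIGNAL_NAMES = {'4': 'SIGILL', '6': 'SIGABRT', '8': 'SIGFPE',
                '11': 'SIGSEGV', '13': 'SIGPIPE'}

def signal_title(report):
    # Mirrors the "signal crash" branch of standard_title().
    fn = report.get('TopFunction')  # stand-in for stacktrace_top_function()
    suffix = ' in %s()' % fn if fn else ''
    sig = SIGNAL_NAMES.get(report['Signal'], 'signal ' + report['Signal'])
    return '%s crashed with %s%s' % (
        os.path.basename(report['ExecutablePath']), sig, suffix)

title = signal_title({'ExecutablePath': '/usr/bin/gedit',
                      'Signal': '11', 'TopFunction': 'g_str_hash'})
```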

def problem_report.ProblemReport.write (   self,
  file,
  only_new = False 
) [inherited]
Write information into the given file-like object.

If only_new is True, only keys which have been added since the last
load() are written (i. e. those returned by new_keys()).

If a value is a string, it is written directly. Otherwise it must be a
tuple of the form (file, encode=True, limit=None, fail_on_empty=False).
The first argument can be a file name or a file-like object,
which will be read and its content will become the value of this key.
'encode' specifies whether the contents will be
gzip compressed and base64-encoded (this defaults to True). If limit is
set to a positive integer, the file is not attached if it's larger
than the given limit, and the entire key will be removed. If
fail_on_empty is True, reading zero bytes will cause an IOError.

file needs to be opened in binary mode.

Files are written in RFC822 format.

Definition at line 231 of file problem_report.py.

00231 
00232     def write(self, file, only_new=False):
00233         '''Write information into the given file-like object.
00234 
00235         If only_new is True, only keys which have been added since the last
00236         load() are written (i. e. those returned by new_keys()).
00237 
00238         If a value is a string, it is written directly. Otherwise it must be a
00239         tuple of the form (file, encode=True, limit=None, fail_on_empty=False).
00240         The first argument can be a file name or a file-like object,
00241         which will be read and its content will become the value of this key.
00242         'encode' specifies whether the contents will be
00243         gzip compressed and base64-encoded (this defaults to True). If limit is
00244         set to a positive integer, the file is not attached if it's larger
00245         than the given limit, and the entire key will be removed. If
00246         fail_on_empty is True, reading zero bytes will cause an IOError.
00247 
00248         file needs to be opened in binary mode.
00249 
00250         Files are written in RFC822 format.
00251         '''
00252         self._assert_bin_mode(file)
00253 
00254         # sort keys into ASCII non-ASCII/binary attachment ones, so that
00255         # the base64 ones appear last in the report
00256         asckeys = []
00257         binkeys = []
00258         for k in self.data.keys():
00259             if only_new and k in self.old_keys:
00260                 continue
00261             v = self.data[k]
00262             if hasattr(v, 'find'):
00263                 if self._is_binary(v):
00264                     binkeys.append(k)
00265                 else:
00266                     asckeys.append(k)
00267             else:
00268                 if not isinstance(v, CompressedValue) and len(v) >= 2 and not v[1]:
00269                     # force uncompressed
00270                     asckeys.append(k)
00271                 else:
00272                     binkeys.append(k)
00273 
00274         asckeys.sort()
00275         if 'ProblemType' in asckeys:
00276             asckeys.remove('ProblemType')
00277             asckeys.insert(0, 'ProblemType')
00278         binkeys.sort()
00279 
00280         # write the ASCII keys first
00281         for k in asckeys:
00282             v = self.data[k]
00283 
00284             # if it's a tuple, we have a file reference; read the contents
00285             if not hasattr(v, 'find'):
00286                 if len(v) >= 3 and v[2] is not None:
00287                     limit = v[2]
00288                 else:
00289                     limit = None
00290 
00291                 fail_on_empty = len(v) >= 4 and v[3]
00292 
00293                 if hasattr(v[0], 'read'):
00294                     v = v[0].read()  # file-like object
00295                 else:
00296                     with open(v[0], 'rb') as f:  # file name
00297                         v = f.read()
00298 
00299                 if fail_on_empty and len(v) == 0:
00300                     raise IOError('did not get any data for field ' + k)
00301 
00302                 if limit is not None and len(v) > limit:
00303                     del self.data[k]
00304                     continue
00305 
00306             if _python2:
00307                 if isinstance(v, unicode):
00308                     # unicode → str
00309                     v = v.encode('UTF-8')
00310             else:
00311                 if isinstance(v, str):
00312                     # unicode → str
00313                     v = v.encode('UTF-8')
00314 
00315             file.write(k.encode('ASCII'))
00316             if b'\n' in v:
00317                 # multiline value
00318                 file.write(b':\n ')
00319                 file.write(v.replace(b'\n', b'\n '))
00320             else:
00321                 file.write(b': ')
00322                 file.write(v)
00323             file.write(b'\n')
00324 
00325         # now write the binary keys with gzip compression and base64 encoding
00326         for k in binkeys:
00327             v = self.data[k]
00328             limit = None
00329             size = 0
00330 
00331             curr_pos = file.tell()
00332             file.write(k.encode('ASCII'))
00333             file.write(b': base64\n ')
00334 
00335             # CompressedValue
00336             if isinstance(v, CompressedValue):
00337                 file.write(base64.b64encode(v.gzipvalue))
00338                 file.write(b'\n')
00339                 continue
00340 
00341             # write gzip header
00342             gzip_header = b'\037\213\010\010\000\000\000\000\002\377' + k.encode('UTF-8') + b'\000'
00343             file.write(base64.b64encode(gzip_header))
00344             file.write(b'\n ')
00345             crc = zlib.crc32(b'')
00346 
00347             bc = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS,
00348                                   zlib.DEF_MEM_LEVEL, 0)
00349             # direct value
00350             if hasattr(v, 'find'):
00351                 size += len(v)
00352                 crc = zlib.crc32(v, crc)
00353                 outblock = bc.compress(v)
00354                 if outblock:
00355                     file.write(base64.b64encode(outblock))
00356                     file.write(b'\n ')
00357             # file reference
00358             else:
00359                 if len(v) >= 3 and v[2] is not None:
00360                     limit = v[2]
00361 
00362                 if hasattr(v[0], 'read'):
00363                     f = v[0]  # file-like object
00364                 else:
00365                     f = open(v[0], 'rb')  # file name
00366                 while True:
00367                     block = f.read(1048576)
00368                     size += len(block)
00369                     crc = zlib.crc32(block, crc)
00370                     if limit is not None:
00371                         if size > limit:
00372                             # roll back
00373                             file.seek(curr_pos)
00374                             file.truncate(curr_pos)
00375                             del self.data[k]
00376                             crc = None
00377                             break
00378                     if block:
00379                         outblock = bc.compress(block)
00380                         if outblock:
00381                             file.write(base64.b64encode(outblock))
00382                             file.write(b'\n ')
00383                     else:
00384                         break
00385                 if not hasattr(v[0], 'read'):
00386                     f.close()
00387 
00388                 if len(v) >= 4 and v[3]:
00389                     if size == 0:
00390                         raise IOError('did not get any data for field %s from %s' % (k, str(v[0])))
00391 
00392             # flush compressor and write the rest
00393             if not limit or size <= limit:
00394                 block = bc.flush()
00395                 # append gzip trailer: crc (32 bit) and size (32 bit)
00396                 if crc:
00397                     block += struct.pack('<L', crc & 0xFFFFFFFF)
00398                     block += struct.pack('<L', size & 0xFFFFFFFF)
00399 
00400                 file.write(base64.b64encode(block))
00401                 file.write(b'\n')
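The ASCII-key part of the on-disk format produced above can be sketched stand-alone (single-line values on one line, multi-line values indented by one space, ProblemType first; base64 attachments omitted):

```python
import io

def write_simple(data, file):
    # Simplified version of the ASCII-key loop in write():
    # ProblemType sorts first, multi-line values get continuation lines.
    for k in sorted(data, key=lambda k: (k != 'ProblemType', k)):
        v = data[k].encode('UTF-8')
        file.write(k.encode('ASCII'))
        if b'\n' in v:
            # multiline value: indent continuation lines with one space
            file.write(b':\n ' + v.replace(b'\n', b'\n '))
        else:
            file.write(b': ' + v)
        file.write(b'\n')

buf = io.BytesIO()
write_simple({'ProblemType': 'Crash', 'Traceback': 'line1\nline2'}, buf)
out = buf.getvalue().decode()
```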


def problem_report.ProblemReport.write_mime (   self,
  file,
  attach_treshold = 5,
  extra_headers = {},
  skip_keys = None,
  priority_fields = None 
) [inherited]
Write MIME/Multipart RFC 2822 formatted data into file.

file must be a file-like object, not a path.  It needs to be opened in
binary mode.

If a value is a string or a CompressedValue, it is written directly.
Otherwise it must be a tuple containing the source file and an optional
boolean value (in that order); the first argument can be a file name or
a file-like object, which will be read and its content will become the
value of this key.  The file will be gzip compressed, unless the key
already ends in .gz.

attach_treshold specifies the maximum number of lines for a value to be
included into the first inline text part. All bigger values (as well as
all non-ASCII ones) will become an attachment, as well as text
values bigger than 1 kB.

Extra MIME preamble headers can be specified, too, as a dictionary.

skip_keys is a set/list specifying keys which are filtered out and not
written to the destination file.

priority_fields is a set/list specifying the order in which keys should
appear in the destination file.

Definition at line 421 of file problem_report.py.

00421     def write_mime(self, file, attach_treshold=5, extra_headers={},
00422                    skip_keys=None, priority_fields=None):
00423         '''Write MIME/Multipart RFC 2822 formatted data into file.
00424 
00425         file must be a file-like object, not a path.  It needs to be opened in
00426         binary mode.
00427 
00428         If a value is a string or a CompressedValue, it is written directly.
00429         Otherwise it must be a tuple containing the source file and an optional
00430         boolean value (in that order); the first argument can be a file name or
00431         a file-like object, which will be read and its content will become the
00432         value of this key.  The file will be gzip compressed, unless the key
00433         already ends in .gz.
00434 
00435         attach_treshold specifies the maximum number of lines for a value to be
00436         included into the first inline text part. All bigger values (as well as
00437         all non-ASCII ones) will become an attachment, as well as text
00438         values bigger than 1 kB.
00439 
00440         Extra MIME preamble headers can be specified, too, as a dictionary.
00441 
00442         skip_keys is a set/list specifying keys which are filtered out and not
00443         written to the destination file.
00444 
00445         priority_fields is a set/list specifying the order in which keys should
00446         appear in the destination file.
00447         '''
00448         self._assert_bin_mode(file)
00449 
00450         keys = sorted(self.data.keys())
00451 
00452         text = b''
00453         attachments = []
00454 
00455         if 'ProblemType' in keys:
00456             keys.remove('ProblemType')
00457             keys.insert(0, 'ProblemType')
00458 
00459         if priority_fields:
00460             counter = 0
00461             for priority_field in priority_fields:
00462                 if priority_field in keys:
00463                     keys.remove(priority_field)
00464                     keys.insert(counter, priority_field)
00465                     counter += 1
00466 
00467         for k in keys:
00468             if skip_keys and k in skip_keys:
00469                 continue
00470             v = self.data[k]
00471             attach_value = None
00472 
00473             # compressed values are ready for attaching in gzip form
00474             if isinstance(v, CompressedValue):
00475                 attach_value = v.gzipvalue
00476 
00477             # if it's a tuple, we have a file reference; read the contents
00478             # and gzip it
00479             elif not hasattr(v, 'find'):
00480                 attach_value = ''
00481                 if hasattr(v[0], 'read'):
00482                     f = v[0]  # file-like object
00483                 else:
00484                     f = open(v[0], 'rb')  # file name
00485                 if k.endswith('.gz'):
00486                     attach_value = f.read()
00487                 else:
00488                     io = BytesIO()
00489                     gf = gzip.GzipFile(k, mode='wb', fileobj=io)
00490                     while True:
00491                         block = f.read(1048576)
00492                         if block:
00493                             gf.write(block)
00494                         else:
00495                             gf.close()
00496                             break
00497                     attach_value = io.getvalue()
00498                 f.close()
00499 
00500             # binary value
00501             elif self._is_binary(v):
00502                 if k.endswith('.gz'):
00503                     attach_value = v
00504                 else:
00505                     attach_value = CompressedValue(v, k).gzipvalue
00506 
00507             # if we have an attachment value, create an attachment
00508             if attach_value:
00509                 att = MIMEBase('application', 'x-gzip')
00510                 if k.endswith('.gz'):
00511                     att.add_header('Content-Disposition', 'attachment', filename=k)
00512                 else:
00513                     att.add_header('Content-Disposition', 'attachment', filename=k + '.gz')
00514                 att.set_payload(attach_value)
00515                 encode_base64(att)
00516                 attachments.append(att)
00517             else:
00518                 # plain text value
00519                 size = len(v)
00520 
00521                 # ensure that byte arrays are valid UTF-8
00522                 if type(v) == bytes:
00523                     v = v.decode('UTF-8', 'replace')
00524                 # convert unicode to UTF-8 str
00525                 if _python2:
00526                     assert isinstance(v, unicode)
00527                 else:
00528                     assert isinstance(v, str)
00529                 v = v.encode('UTF-8')
00530 
00531                 lines = len(v.splitlines())
00532                 if size <= 1000 and lines == 1:
00533                     v = v.rstrip()
00534                     text += k.encode() + b': ' + v + b'\n'
00535                 elif size <= 1000 and lines <= attach_treshold:
00536                     text += k.encode() + b':\n '
00537                     if not v.endswith(b'\n'):
00538                         v += b'\n'
00539                     text += v.strip().replace(b'\n', b'\n ') + b'\n'
00540                 else:
00541                     # too large, separate attachment
00542                     att = MIMEText(v, _charset='UTF-8')
00543                     att.add_header('Content-Disposition', 'attachment', filename=k + '.txt')
00544                     attachments.append(att)
00545 
00546         # create initial text attachment
00547         att = MIMEText(text, _charset='UTF-8')
00548         att.add_header('Content-Disposition', 'inline')
00549         attachments.insert(0, att)
00550 
00551         msg = MIMEMultipart()
00552         for k, v in extra_headers.items():
00553             msg.add_header(k, v)
00554         for a in attachments:
00555             msg.attach(a)
00556 
00557         file.write(msg.as_string().encode('UTF-8'))
00558         file.write(b'\n')
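A usage sketch of the message structure write_mime() emits: a multipart message whose first part is the inline text with the short values, followed by gzipped attachments (attachments omitted here; the extra header name is illustrative, and us-ascii keeps the body readable):

```python
import io
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Assemble a minimal message of the shape described above.
msg = MIMEMultipart()
msg.add_header('X-Example-Header', 'value')  # illustrative extra header
inline = MIMEText('ProblemType: Crash\n')    # inline text part with short values
inline.add_header('Content-Disposition', 'inline')
msg.attach(inline)

buf = io.BytesIO()
buf.write(msg.as_string().encode('UTF-8'))
serialized = buf.getvalue().decode()
```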



Member Data Documentation

apport.report.Report.pid

Definition at line 197 of file report.py.

problem_report.ProblemReport.data [inherited]

Definition at line 104 of file problem_report.py.

problem_report.ProblemReport.old_keys [inherited]

Definition at line 107 of file problem_report.py.

apport.report.Report._proc_maps_cache

Definition at line 196 of file report.py.


The documentation for this class was generated from the following file: