
python3.2  3.2.2
Public Member Functions | Public Attributes | Static Public Attributes | Private Member Functions | Private Attributes | Static Private Attributes
doctest.DocTestRunner Class Reference
Inheritance diagram for doctest.DocTestRunner:
Collaboration diagram for doctest.DocTestRunner:


Public Member Functions

def __init__
def report_start
def report_success
def report_failure
def report_unexpected_exception
def run
def summarize
def merge

Public Attributes

 optionflags
 original_optionflags
 tries
 failures
 test
 debugger
 save_linecache_getlines

Static Public Attributes

string DIVIDER = "*" * 70

Private Member Functions

def _failure_header
def __run
def __record_outcome
def __patched_linecache_getlines

Private Attributes

 _checker
 _verbose
 _name2ft
 _fakeout

Static Private Attributes

tuple __LINECACHE_FILENAME_RE

Detailed Description

  DocTest Runner
    A class used to run DocTest test cases, and accumulate statistics.
    The `run` method is used to process a single DocTest case.  It
    returns a tuple `(f, t)`, where `t` is the number of test cases
    tried, and `f` is the number of test cases that failed.
    
        >>> tests = DocTestFinder().find(_TestClass)
        >>> runner = DocTestRunner(verbose=False)
        >>> tests.sort(key = lambda test: test.name)
        >>> for test in tests:
        ...     print(test.name, '->', runner.run(test))
        _TestClass -> TestResults(failed=0, attempted=2)
        _TestClass.__init__ -> TestResults(failed=0, attempted=2)
        _TestClass.get -> TestResults(failed=0, attempted=2)
        _TestClass.square -> TestResults(failed=0, attempted=1)
    
    The `summarize` method prints a summary of all the test cases that
    have been run by the runner, and returns an aggregated `(f, t)`
    tuple:
    
        >>> runner.summarize(verbose=1)
        4 items passed all tests:
           2 tests in _TestClass
           2 tests in _TestClass.__init__
           2 tests in _TestClass.get
           1 tests in _TestClass.square
        7 tests in 4 items.
        7 passed and 0 failed.
        Test passed.
        TestResults(failed=0, attempted=7)
    
    The aggregated number of tried examples and failed examples is
    also available via the `tries` and `failures` attributes:
    
        >>> runner.tries
        7
        >>> runner.failures
        0
    
    The comparison between expected outputs and actual outputs is done
    by an `OutputChecker`.  This comparison may be customized with a
    number of option flags; see the documentation for `testmod` for
    more information.  If the option flags are insufficient, then the
    comparison may also be customized by passing a subclass of
    `OutputChecker` to the constructor.
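    For illustration only (the subclass name below is invented, not part
    of doctest), a checker that falls back to a whitespace-insensitive
    comparison when the standard comparison fails might be sketched as:

```python
import doctest

class WhitespaceInsensitiveChecker(doctest.OutputChecker):
    """Illustrative subclass: fall back to comparing with collapsed
    whitespace when the standard comparison fails."""

    def check_output(self, want, got, optionflags):
        # First try the normal doctest comparison.
        if doctest.OutputChecker.check_output(self, want, got, optionflags):
            return True
        # Otherwise compare with all whitespace runs collapsed.
        normalize = lambda s: ' '.join(s.split())
        return normalize(want) == normalize(got)

# Pass the custom checker to the constructor.
runner = doctest.DocTestRunner(checker=WhitespaceInsensitiveChecker())
```

    The runner then uses this checker for every expected-vs-actual
    comparison in the tests it runs.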
    
    The test runner's display output can be controlled in two ways.
    First, an output function (`out`) can be passed to
    `DocTestRunner.run`; this function will be called with strings that
    should be displayed.  It defaults to `sys.stdout.write`.  If
    capturing the output is not sufficient, then the display output
    can also be customized by subclassing DocTestRunner, and
    overriding the methods `report_start`, `report_success`,
    `report_unexpected_exception`, and `report_failure`.
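    As a sketch of the subclassing route (the `QuietRunner` name and its
    behavior are invented for this example; the doctest string is also
    made up), a runner that records failures instead of writing them:

```python
import doctest

class QuietRunner(doctest.DocTestRunner):
    """Illustrative subclass: collect failures instead of printing them."""

    def __init__(self, *args, **kwargs):
        doctest.DocTestRunner.__init__(self, *args, **kwargs)
        self.failed_examples = []

    def report_failure(self, out, test, example, got):
        # Record the failure; deliberately do not call out().
        self.failed_examples.append((test.name, example.source, got))

# An intentionally failing doctest, parsed from a string.
test = doctest.DocTestParser().get_doctest(
    ">>> 1 + 1\n3\n", {}, "demo", None, 0)
runner = QuietRunner(verbose=False)
results = runner.run(test)
```

    After `run` returns, `results.failed` is 1 and the failure details
    sit in `runner.failed_examples` rather than on stdout.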
    

Definition at line 1044 of file doctest.py.


Constructor & Destructor Documentation

def doctest.DocTestRunner.__init__ (   self,
  checker = None,
  verbose = None,
  optionflags = 0 
)
Create a new test runner.

Optional keyword arg `checker` is the `OutputChecker` that
should be used to compare the expected outputs and actual
outputs of doctest examples.

Optional keyword arg 'verbose' prints lots of stuff if true,
only failures if false; by default, it's true iff '-v' is in
sys.argv.

Optional argument `optionflags` can be used to control how the
test runner compares expected output to actual output, and how
it displays failures.  See the documentation for `testmod` for
more information.

Definition at line 1104 of file doctest.py.

    def __init__(self, checker=None, verbose=None, optionflags=0):
        """
        Create a new test runner.

        Optional keyword arg `checker` is the `OutputChecker` that
        should be used to compare the expected outputs and actual
        outputs of doctest examples.

        Optional keyword arg 'verbose' prints lots of stuff if true,
        only failures if false; by default, it's true iff '-v' is in
        sys.argv.

        Optional argument `optionflags` can be used to control how the
        test runner compares expected output to actual output, and how
        it displays failures.  See the documentation for `testmod` for
        more information.
        """
        self._checker = checker or OutputChecker()
        if verbose is None:
            verbose = '-v' in sys.argv
        self._verbose = verbose
        self.optionflags = optionflags
        self.original_optionflags = optionflags

        # Keep track of the examples we've run.
        self.tries = 0
        self.failures = 0
        self._name2ft = {}

        # Create a fake output target for capturing doctest output.
        self._fakeout = _SpoofOut()
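A brief usage sketch of the constructor (the doctest string and the name
`ellipsis_demo` are invented for illustration; `ELLIPSIS` is a real
doctest option flag):

```python
import doctest

# A runner that tolerates "..." in expected output, with per-example
# chatter suppressed.
runner = doctest.DocTestRunner(verbose=False,
                               optionflags=doctest.ELLIPSIS)
test = doctest.DocTestParser().get_doctest(
    ">>> list(range(20))\n[0, 1, ...]\n", {}, "ellipsis_demo", None, 0)
results = runner.run(test)
print(results)
```

With `ELLIPSIS` set, the abbreviated expected output matches the full
list, so the single example passes.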



Member Function Documentation

def doctest.DocTestRunner.__patched_linecache_getlines (   self,
  filename,
  module_globals = None 
) [private]

Definition at line 1331 of file doctest.py.

    def __patched_linecache_getlines(self, filename, module_globals=None):
        m = self.__LINECACHE_FILENAME_RE.match(filename)
        if m and m.group('name') == self.test.name:
            example = self.test.examples[int(m.group('examplenum'))]
            return example.source.splitlines(True)
        else:
            return self.save_linecache_getlines(filename, module_globals)

def doctest.DocTestRunner.__record_outcome (   self,
  test,
  f,
  t 
) [private]
Record the fact that the given DocTest (`test`) generated `f`
failures out of `t` tried examples.

Definition at line 1318 of file doctest.py.

    def __record_outcome(self, test, f, t):
        """
        Record the fact that the given DocTest (`test`) generated `f`
        failures out of `t` tried examples.
        """
        f2, t2 = self._name2ft.get(test.name, (0,0))
        self._name2ft[test.name] = (f+f2, t+t2)
        self.failures += f
        self.tries += t


def doctest.DocTestRunner.__run (   self,
  test,
  compileflags,
  out 
) [private]
Run the examples in `test`.  Write the outcome of each example
with one of the `DocTestRunner.report_*` methods, using the
writer function `out`.  `compileflags` is the set of compiler
flags that should be used to execute examples.  Return a tuple
`(f, t)`, where `t` is the number of examples tried, and `f`
is the number of examples that failed.  The examples are run
in the namespace `test.globs`.

Definition at line 1195 of file doctest.py.

    def __run(self, test, compileflags, out):
        """
        Run the examples in `test`.  Write the outcome of each example
        with one of the `DocTestRunner.report_*` methods, using the
        writer function `out`.  `compileflags` is the set of compiler
        flags that should be used to execute examples.  Return a tuple
        `(f, t)`, where `t` is the number of examples tried, and `f`
        is the number of examples that failed.  The examples are run
        in the namespace `test.globs`.
        """
        # Keep track of the number of failures and tries.
        failures = tries = 0

        # Save the option flags (since option directives can be used
        # to modify them).
        original_optionflags = self.optionflags

        SUCCESS, FAILURE, BOOM = range(3) # `outcome` state

        check = self._checker.check_output

        # Process each example.
        for examplenum, example in enumerate(test.examples):

            # If REPORT_ONLY_FIRST_FAILURE is set, then suppress
            # reporting after the first failure.
            quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and
                     failures > 0)

            # Merge in the example's options.
            self.optionflags = original_optionflags
            if example.options:
                for (optionflag, val) in example.options.items():
                    if val:
                        self.optionflags |= optionflag
                    else:
                        self.optionflags &= ~optionflag

            # If 'SKIP' is set, then skip this example.
            if self.optionflags & SKIP:
                continue

            # Record that we started this example.
            tries += 1
            if not quiet:
                self.report_start(out, test, example)

            # Use a special filename for compile(), so we can retrieve
            # the source code during interactive debugging (see
            # __patched_linecache_getlines).
            filename = '<doctest %s[%d]>' % (test.name, examplenum)

            # Run the example in the given context (globs), and record
            # any exception that gets raised.  (But don't intercept
            # keyboard interrupts.)
            try:
                # Don't blink!  This is where the user's code gets run.
                exec(compile(example.source, filename, "single",
                             compileflags, 1), test.globs)
                self.debugger.set_continue() # ==== Example Finished ====
                exception = None
            except KeyboardInterrupt:
                raise
            except:
                exception = sys.exc_info()
                self.debugger.set_continue() # ==== Example Finished ====

            got = self._fakeout.getvalue()  # the actual output
            self._fakeout.truncate(0)
            outcome = FAILURE   # guilty until proved innocent or insane

            # If the example executed without raising any exceptions,
            # verify its output.
            if exception is None:
                if check(example.want, got, self.optionflags):
                    outcome = SUCCESS

            # The example raised an exception:  check if it was expected.
            else:
                exc_msg = traceback.format_exception_only(*exception[:2])[-1]
                if not quiet:
                    got += _exception_traceback(exception)

                # If `example.exc_msg` is None, then we weren't expecting
                # an exception.
                if example.exc_msg is None:
                    outcome = BOOM

                # We expected an exception:  see whether it matches.
                elif check(example.exc_msg, exc_msg, self.optionflags):
                    outcome = SUCCESS

                # Another chance if they didn't care about the detail.
                elif self.optionflags & IGNORE_EXCEPTION_DETAIL:
                    m1 = re.match(r'(?:[^:]*\.)?([^:]*:)', example.exc_msg)
                    m2 = re.match(r'(?:[^:]*\.)?([^:]*:)', exc_msg)
                    if m1 and m2 and check(m1.group(1), m2.group(1),
                                           self.optionflags):
                        outcome = SUCCESS

            # Report the outcome.
            if outcome is SUCCESS:
                if not quiet:
                    self.report_success(out, test, example, got)
            elif outcome is FAILURE:
                if not quiet:
                    self.report_failure(out, test, example, got)
                failures += 1
            elif outcome is BOOM:
                if not quiet:
                    self.report_unexpected_exception(out, test, example,
                                                     exception)
                failures += 1
            else:
                assert False, ("unknown outcome", outcome)

        # Restore the option flags (in case they were modified)
        self.optionflags = original_optionflags

        # Record and return the number of failures and tries.
        self.__record_outcome(test, failures, tries)
        return TestResults(failures, tries)


def doctest.DocTestRunner._failure_header (   self,
  test,
  example 
) [private]

Definition at line 1175 of file doctest.py.

    def _failure_header(self, test, example):
        out = [self.DIVIDER]
        if test.filename:
            if test.lineno is not None and example.lineno is not None:
                lineno = test.lineno + example.lineno + 1
            else:
                lineno = '?'
            out.append('File "%s", line %s, in %s' %
                       (test.filename, lineno, test.name))
        else:
            out.append('Line %s, in %s' % (example.lineno+1, test.name))
        out.append('Failed example:')
        source = example.source
        out.append(_indent(source))
        return '\n'.join(out)


def doctest.DocTestRunner.merge (   self,
  other 
)

Definition at line 1467 of file doctest.py.

    def merge(self, other):
        d = self._name2ft
        for name, (f, t) in other._name2ft.items():
            if name in d:
                # Don't print here by default, since doing
                #     so breaks some of the buildbots
                #print("*** DocTestRunner.merge: '" + name + "' in both" \
                #    " testers; summing outcomes.")
                f2, t2 = d[name]
                f = f + f2
                t = t + t2
            d[name] = f, t
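A sketch of merging the per-test tallies of two runners so that a single
summary covers both (the doctest strings and the name "spam" are invented
for illustration):

```python
import doctest

parser = doctest.DocTestParser()
r1 = doctest.DocTestRunner(verbose=False)
r2 = doctest.DocTestRunner(verbose=False)

# Two runners each run a passing doctest registered under the same
# name, as might happen when tests are split across workers.
r1.run(parser.get_doctest(">>> 1 + 1\n2\n", {}, "spam", None, 0))
r2.run(parser.get_doctest(">>> 2 + 2\n4\n", {}, "spam", None, 0))

r1.merge(r2)                      # sums the per-name (f, t) tallies
summary = r1.summarize(verbose=False)
```

After the merge, `summarize` on `r1` reports 2 attempted and 0 failed
under the shared name.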

def doctest.DocTestRunner.report_failure (   self,
  out,
  test,
  example,
  got 
)
Report that the given example failed.

Reimplemented in doctest.DebugRunner.

Definition at line 1161 of file doctest.py.

    def report_failure(self, out, test, example, got):
        """
        Report that the given example failed.
        """
        out(self._failure_header(test, example) +
            self._checker.output_difference(example, got, self.optionflags))


def doctest.DocTestRunner.report_start (   self,
  out,
  test,
  example 
)
Report that the test runner is about to process the given
example.  (Only displays a message if verbose=True)

Definition at line 1140 of file doctest.py.

    def report_start(self, out, test, example):
        """
        Report that the test runner is about to process the given
        example.  (Only displays a message if verbose=True)
        """
        if self._verbose:
            if example.want:
                out('Trying:\n' + _indent(example.source) +
                    'Expecting:\n' + _indent(example.want))
            else:
                out('Trying:\n' + _indent(example.source) +
                    'Expecting nothing\n')


def doctest.DocTestRunner.report_success (   self,
  out,
  test,
  example,
  got 
)
Report that the given example ran successfully.  (Only
displays a message if verbose=True)

Definition at line 1153 of file doctest.py.

    def report_success(self, out, test, example, got):
        """
        Report that the given example ran successfully.  (Only
        displays a message if verbose=True)
        """
        if self._verbose:
            out("ok\n")


def doctest.DocTestRunner.report_unexpected_exception (   self,
  out,
  test,
  example,
  exc_info 
)
Report that the given example raised an unexpected exception.

Reimplemented in doctest.DebugRunner.

Definition at line 1168 of file doctest.py.

    def report_unexpected_exception(self, out, test, example, exc_info):
        """
        Report that the given example raised an unexpected exception.
        """
        out(self._failure_header(test, example) +
            'Exception raised:\n' + _indent(_exception_traceback(exc_info)))


def doctest.DocTestRunner.run (   self,
  test,
  compileflags = None,
  out = None,
  clear_globs = True 
)
Run the examples in `test`, and display the results using the
writer function `out`.

The examples are run in the namespace `test.globs`.  If
`clear_globs` is true (the default), then this namespace will
be cleared after the test runs, to help with garbage
collection.  If you would like to examine the namespace after
the test completes, then use `clear_globs=False`.

`compileflags` gives the set of flags that should be used by
the Python compiler when running the examples.  If not
specified, then it will default to the set of future-import
flags that apply to `globs`.

The output of each example is checked using
`DocTestRunner.check_output`, and the results are formatted by
the `DocTestRunner.report_*` methods.

Reimplemented in doctest.DebugRunner.

Definition at line 1339 of file doctest.py.

    def run(self, test, compileflags=None, out=None, clear_globs=True):
        """
        Run the examples in `test`, and display the results using the
        writer function `out`.

        The examples are run in the namespace `test.globs`.  If
        `clear_globs` is true (the default), then this namespace will
        be cleared after the test runs, to help with garbage
        collection.  If you would like to examine the namespace after
        the test completes, then use `clear_globs=False`.

        `compileflags` gives the set of flags that should be used by
        the Python compiler when running the examples.  If not
        specified, then it will default to the set of future-import
        flags that apply to `globs`.

        The output of each example is checked using
        `DocTestRunner.check_output`, and the results are formatted by
        the `DocTestRunner.report_*` methods.
        """
        self.test = test

        if compileflags is None:
            compileflags = _extract_future_flags(test.globs)

        save_stdout = sys.stdout
        if out is None:
            encoding = save_stdout.encoding
            if encoding is None or encoding.lower() == 'utf-8':
                out = save_stdout.write
            else:
                # Use backslashreplace error handling on write
                def out(s):
                    s = str(s.encode(encoding, 'backslashreplace'), encoding)
                    save_stdout.write(s)
        sys.stdout = self._fakeout

        # Patch pdb.set_trace to restore sys.stdout during interactive
        # debugging (so it's not still redirected to self._fakeout).
        # Note that the interactive output will go to *our*
        # save_stdout, even if that's not the real sys.stdout; this
        # allows us to write test cases for the set_trace behavior.
        save_set_trace = pdb.set_trace
        self.debugger = _OutputRedirectingPdb(save_stdout)
        self.debugger.reset()
        pdb.set_trace = self.debugger.set_trace

        # Patch linecache.getlines, so we can see the example's source
        # when we're inside the debugger.
        self.save_linecache_getlines = linecache.getlines
        linecache.getlines = self.__patched_linecache_getlines

        # Make sure sys.displayhook just prints the value to stdout
        save_displayhook = sys.displayhook
        sys.displayhook = sys.__displayhook__

        try:
            return self.__run(test, compileflags, out)
        finally:
            sys.stdout = save_stdout
            pdb.set_trace = save_set_trace
            linecache.getlines = self.save_linecache_getlines
            sys.displayhook = save_displayhook
            if clear_globs:
                test.globs.clear()
                import builtins
                builtins._ = None
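A sketch of keeping the test namespace around for inspection (the
doctest string and the name "globs_demo" are invented for illustration):

```python
import doctest

test = doctest.DocTestParser().get_doctest(
    ">>> x = 21 * 2\n>>> x\n42\n", {}, "globs_demo", None, 0)
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test, clear_globs=False)  # keep test.globs around

# The names bound by the examples are still inspectable:
x_value = test.globs['x']
```

With the default `clear_globs=True`, `test.globs` would have been
emptied before `run` returned.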


def doctest.DocTestRunner.summarize (   self,
  verbose = None 
)
Print a summary of all the test cases that have been run by
this DocTestRunner, and return a tuple `(f, t)`, where `f` is
the total number of failed examples, and `t` is the total
number of tried examples.

The optional `verbose` argument controls how detailed the
summary is.  If the verbosity is not specified, then the
DocTestRunner's verbosity is used.

Definition at line 1410 of file doctest.py.

    def summarize(self, verbose=None):
        """
        Print a summary of all the test cases that have been run by
        this DocTestRunner, and return a tuple `(f, t)`, where `f` is
        the total number of failed examples, and `t` is the total
        number of tried examples.

        The optional `verbose` argument controls how detailed the
        summary is.  If the verbosity is not specified, then the
        DocTestRunner's verbosity is used.
        """
        if verbose is None:
            verbose = self._verbose
        notests = []
        passed = []
        failed = []
        totalt = totalf = 0
        for x in self._name2ft.items():
            name, (f, t) = x
            assert f <= t
            totalt += t
            totalf += f
            if t == 0:
                notests.append(name)
            elif f == 0:
                passed.append( (name, t) )
            else:
                failed.append(x)
        if verbose:
            if notests:
                print(len(notests), "items had no tests:")
                notests.sort()
                for thing in notests:
                    print("   ", thing)
            if passed:
                print(len(passed), "items passed all tests:")
                passed.sort()
                for thing, count in passed:
                    print(" %3d tests in %s" % (count, thing))
        if failed:
            print(self.DIVIDER)
            print(len(failed), "items had failures:")
            failed.sort()
            for thing, (f, t) in failed:
                print(" %3d of %3d in %s" % (f, t, thing))
        if verbose:
            print(totalt, "tests in", len(self._name2ft), "items.")
            print(totalt - totalf, "passed and", totalf, "failed.")
        if totalf:
            print("***Test Failed***", totalf, "failures.")
        elif verbose:
            print("Test passed.")
        return TestResults(totalf, totalt)


Member Data Documentation

tuple doctest.DocTestRunner.__LINECACHE_FILENAME_RE [static, private]

Initial value:
re.compile(r'<doctest '
                                         r'(?P<name>.+)'
                                         r'\[(?P<examplenum>\d+)\]>$')

Definition at line 1328 of file doctest.py.
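This pattern matches the special filenames that `__run` passes to
`compile()` (e.g. `<doctest name[examplenum]>`), so the patched
`linecache.getlines` can recover example source. For illustration
(rebuilding the regex from the initial value above, since the real
attribute is name-mangled and private):

```python
import re

# Reconstructed from the initial value shown above.
LINECACHE_FILENAME_RE = re.compile(r'<doctest '
                                   r'(?P<name>.+)'
                                   r'\[(?P<examplenum>\d+)\]>$')

m = LINECACHE_FILENAME_RE.match('<doctest _TestClass.square[0]>')
print(m.group('name'), m.group('examplenum'))
```

The named groups give back the test name and the index of the example
within that test.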



The documentation for this class was generated from the following file: doctest.py