
python3.2  3.2.2
pybench.Benchmark Class Reference

Benchmark base class. More...



Public Member Functions

def __init__
def get_timer
def compatible
def load_tests
def calibrate
def run
def stat
def print_header
def print_benchmark
def print_comparison

Public Attributes

 name
 verbose
 warp
 calibration_runs
 tests
 version
 roundtimes

Static Public Attributes

string name = ''
int rounds = 1
int warp = 1
int roundtime = 0
float version = 2.1
int verbose = 0
 machine_details = None
 timer = TIMER_PLATFORM_DEFAULT

Detailed Description

Benchmark base class.

Definition at line 392 of file pybench.py.


Constructor & Destructor Documentation

def pybench.Benchmark.__init__ (   self,
  name,
  verbose = None,
  timer = None,
  warp = None,
  calibration_runs = None 
)

Definition at line 419 of file pybench.py.

00419     def __init__(self, name, verbose=None, timer=None, warp=None,
00420                  calibration_runs=None):
00421 
00422         if name:
00423             self.name = name
00424         else:
00425             self.name = '%04i-%02i-%02i %02i:%02i:%02i' % \
00426                         (time.localtime(time.time())[:6])
00427         if verbose is not None:
00428             self.verbose = verbose
00429         if timer is not None:
00430             self.timer = timer
00431         if warp is not None:
00432             self.warp = warp
00433         if calibration_runs is not None:
00434             self.calibration_runs = calibration_runs
00435 
00436         # Init vars
00437         self.tests = {}
00438         if _debug:
00439             print('Getting machine details...')
00440         self.machine_details = get_machine_details()
00441 
00442         # Make .version an instance attribute to have it saved in the
00443         # Benchmark pickle
00444         self.version = self.version

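For orientation, a minimal construction sketch grounded in the listing above; the benchmark names are placeholders:

    import pybench

    # An empty name triggers the timestamp fallback shown above,
    # e.g. '2011-09-03 14:05:12' built from time.localtime().
    bench = pybench.Benchmark('')

    # Explicit arguments override the class-level defaults.
    bench = pybench.Benchmark('baseline', verbose=1, warp=10,
                              calibration_runs=20)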

Member Function Documentation

def pybench.Benchmark.calibrate (   self)

Definition at line 498 of file pybench.py.

00498 
00499     def calibrate(self):
00500 
00501         print('Calibrating tests. Please wait...', end=' ')
00502         sys.stdout.flush()
00503         if self.verbose:
00504             print()
00505             print()
00506             print('Test                              min      max')
00507             print('-' * LINE)
00508         tests = sorted(self.tests.items())
00509         for i in range(len(tests)):
00510             name, test = tests[i]
00511             test.calibrate_test()
00512             if self.verbose:
00513                 print('%30s:  %6.3fms  %6.3fms' % \
00514                       (name,
00515                        min(test.overhead_times) * MILLI_SECONDS,
00516                        max(test.overhead_times) * MILLI_SECONDS))
00517         if self.verbose:
00518             print()
00519             print('Done with the calibration.')
00520         else:
00521             print('done.')
00522         print()

def pybench.Benchmark.compatible (   self,
  other 
)
Return 1/0 depending on whether the benchmark is
    compatible with the other Benchmark instance or not.

Definition at line 452 of file pybench.py.

00452 
00453     def compatible(self, other):
00454 
00455         """ Return 1/0 depending on whether the benchmark is
00456             compatible with the other Benchmark instance or not.
00457 
00458         """
00459         if self.version != other.version:
00460             return 0
00461         if (self.machine_details == other.machine_details and
00462             self.timer != other.timer):
00463             return 0
00464         if (self.calibration_runs == 0 and
00465             other.calibration_runs != 0):
00466             return 0
00467         if (self.calibration_runs != 0 and
00468             other.calibration_runs == 0):
00469             return 0
00470         return 1

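Since Benchmark instances are saved as pickles (see the comment in __init__ above), a typical use is to check a fresh run against a saved one before comparing; a sketch, with the file name being an assumption:

    import pickle

    with open('previous.pybench', 'rb') as f:  # hypothetical file name
        other = pickle.load(f)

    if bench.compatible(other):
        bench.print_comparison(other)
    else:
        print('Benchmark runs are not comparable.')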
def pybench.Benchmark.get_timer (   self)

Return the timer function to use for the test.

Definition at line 445 of file pybench.py.

00445 
00446     def get_timer(self):
00447 
00448         """ Return the timer function to use for the test.
00449 
00450         """
00451         return get_timer(self.timer)

def pybench.Benchmark.load_tests (   self,
  setupmod,
  limitnames = None 
)

Definition at line 471 of file pybench.py.

00471 
00472     def load_tests(self, setupmod, limitnames=None):
00473 
00474         # Add tests
00475         if self.verbose:
00476             print('Searching for tests ...')
00477             print('--------------------------------------')
00478         for testclass in setupmod.__dict__.values():
00479             if not hasattr(testclass, 'is_a_test'):
00480                 continue
00481             name = testclass.__name__
00482             if  name == 'Test':
00483                 continue
00484             if (limitnames is not None and
00485                 limitnames.search(name) is None):
00486                 continue
00487             self.tests[name] = testclass(
00488                 warp=self.warp,
00489                 calibration_runs=self.calibration_runs,
00490                 timer=self.timer)
00491         l = sorted(self.tests)
00492         if self.verbose:
00493             for name in l:
00494                 print('  %s' % name)
00495             print('--------------------------------------')
00496             print('  %i tests found' % len(l))
00497             print()

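limitnames is only ever used through its .search() method, so the expected argument is a compiled regular expression object (or None to load all tests); a sketch, assuming pybench's bundled Setup module:

    import re
    import Setup  # pybench's test-definition module (assumption)

    # Load only the test classes whose names match the pattern.
    bench.load_tests(Setup, limitnames=re.compile('String'))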
def pybench.Benchmark.print_benchmark (   self,
  hidenoise = 0,
  limitnames = None 
)

Definition at line 595 of file pybench.py.

00595 
00596     def print_benchmark(self, hidenoise=0, limitnames=None):
00597 
00598         print('Test                          '
00599                '   minimum  average  operation  overhead')
00600         print('-' * LINE)
00601         tests = sorted(self.tests.items())
00602         total_min_time = 0.0
00603         total_avg_time = 0.0
00604         for name, test in tests:
00605             if (limitnames is not None and
00606                 limitnames.search(name) is None):
00607                 continue
00608             (min_time,
00609              avg_time,
00610              total_time,
00611              op_avg,
00612              min_overhead) = test.stat()
00613             total_min_time = total_min_time + min_time
00614             total_avg_time = total_avg_time + avg_time
00615             print('%30s:  %5.0fms  %5.0fms  %6.2fus  %7.3fms' % \
00616                   (name,
00617                    min_time * MILLI_SECONDS,
00618                    avg_time * MILLI_SECONDS,
00619                    op_avg * MICRO_SECONDS,
00620                    min_overhead *MILLI_SECONDS))
00621         print('-' * LINE)
00622         print('Totals:                        '
00623                ' %6.0fms %6.0fms' %
00624                (total_min_time * MILLI_SECONDS,
00625                 total_avg_time * MILLI_SECONDS,
00626                 ))
00627         print()

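Note that the five-tuple unpacked above comes from the per-test stat() method of the objects stored in self.tests, not from the Benchmark.stat() method documented below, which summarizes whole rounds.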
def pybench.Benchmark.print_comparison (   self,
  compare_to,
  hidenoise = 0,
  limitnames = None 
)

Definition at line 628 of file pybench.py.

00628 
00629     def print_comparison(self, compare_to, hidenoise=0, limitnames=None):
00630 
00631         # Check benchmark versions
00632         if compare_to.version != self.version:
00633             print('* Benchmark versions differ: '
00634                    'cannot compare this benchmark to "%s" !' %
00635                    compare_to.name)
00636             print()
00637             self.print_benchmark(hidenoise=hidenoise,
00638                                  limitnames=limitnames)
00639             return
00640 
00641         # Print header
00642         compare_to.print_header('Comparing with')
00643         print('Test                          '
00644                '   minimum run-time        average  run-time')
00645         print('                              '
00646                '   this    other   diff    this    other   diff')
00647         print('-' * LINE)
00648 
00649         # Print test comparisons
00650         tests = sorted(self.tests.items())
00651         total_min_time = other_total_min_time = 0.0
00652         total_avg_time = other_total_avg_time = 0.0
00653         benchmarks_compatible = self.compatible(compare_to)
00654         tests_compatible = 1
00655         for name, test in tests:
00656             if (limitnames is not None and
00657                 limitnames.search(name) is None):
00658                 continue
00659             (min_time,
00660              avg_time,
00661              total_time,
00662              op_avg,
00663              min_overhead) = test.stat()
00664             total_min_time = total_min_time + min_time
00665             total_avg_time = total_avg_time + avg_time
00666             try:
00667                 other = compare_to.tests[name]
00668             except KeyError:
00669                 other = None
00670             if other is None:
00671                 # Other benchmark doesn't include the given test
00672                 min_diff, avg_diff = 'n/a', 'n/a'
00673                 other_min_time = 0.0
00674                 other_avg_time = 0.0
00675                 tests_compatible = 0
00676             else:
00677                 (other_min_time,
00678                  other_avg_time,
00679                  other_total_time,
00680                  other_op_avg,
00681                  other_min_overhead) = other.stat()
00682                 other_total_min_time = other_total_min_time + other_min_time
00683                 other_total_avg_time = other_total_avg_time + other_avg_time
00684                 if (benchmarks_compatible and
00685                     test.compatible(other)):
00686                     # Both benchmark and tests are comparable
00687                     min_diff = ((min_time * self.warp) /
00688                                 (other_min_time * other.warp) - 1.0)
00689                     avg_diff = ((avg_time * self.warp) /
00690                                 (other_avg_time * other.warp) - 1.0)
00691                     if hidenoise and abs(min_diff) < 10.0:
00692                         min_diff = ''
00693                     else:
00694                         min_diff = '%+5.1f%%' % (min_diff * PERCENT)
00695                     if hidenoise and abs(avg_diff) < 10.0:
00696                         avg_diff = ''
00697                     else:
00698                         avg_diff = '%+5.1f%%' % (avg_diff * PERCENT)
00699                 else:
00700                     # Benchmark or tests are not comparable
00701                     min_diff, avg_diff = 'n/a', 'n/a'
00702                     tests_compatible = 0
00703             print('%30s: %5.0fms %5.0fms %7s %5.0fms %5.0fms %7s' % \
00704                   (name,
00705                    min_time * MILLI_SECONDS,
00706                    other_min_time * MILLI_SECONDS * compare_to.warp / self.warp,
00707                    min_diff,
00708                    avg_time * MILLI_SECONDS,
00709                    other_avg_time * MILLI_SECONDS * compare_to.warp / self.warp,
00710                    avg_diff))
00711         print('-' * LINE)
00712 
00713         # Summarise test results
00714         if not benchmarks_compatible or not tests_compatible:
00715             min_diff, avg_diff = 'n/a', 'n/a'
00716         else:
00717             if other_total_min_time != 0.0:
00718                 min_diff = '%+5.1f%%' % (
00719                     ((total_min_time * self.warp) /
00720                      (other_total_min_time * compare_to.warp) - 1.0) * PERCENT)
00721             else:
00722                 min_diff = 'n/a'
00723             if other_total_avg_time != 0.0:
00724                 avg_diff = '%+5.1f%%' % (
00725                     ((total_avg_time * self.warp) /
00726                      (other_total_avg_time * compare_to.warp) - 1.0) * PERCENT)
00727             else:
00728                 avg_diff = 'n/a'
00729         print('Totals:                       '
00730                '  %5.0fms %5.0fms %7s %5.0fms %5.0fms %7s' %
00731                (total_min_time * MILLI_SECONDS,
00732                 (other_total_min_time * compare_to.warp/self.warp
00733                  * MILLI_SECONDS),
00734                 min_diff,
00735                 total_avg_time * MILLI_SECONDS,
00736                 (other_total_avg_time * compare_to.warp/self.warp
00737                  * MILLI_SECONDS),
00738                 avg_diff
00739                ))
00740         print()
00741         print('(this=%s, other=%s)' % (self.name,
00742                                        compare_to.name))
00743         print()

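The printed percentages are relative differences of warp-adjusted times, so runs recorded at different warp factors remain comparable; positive values mean this run is slower. A worked example, assuming PERCENT == 100 and equal warp factors:

    # this min_time = 0.120 s, other min_time = 0.100 s, warp factors equal:
    min_diff = (0.120 * 1) / (0.100 * 1) - 1.0   # 0.20
    print('%+5.1f%%' % (min_diff * 100))         # prints '+20.0%'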
def pybench.Benchmark.print_header (   self,
  title = 'Benchmark' 
)

Definition at line 581 of file pybench.py.

00581 
00582     def print_header(self, title='Benchmark'):
00583 
00584         print('-' * LINE)
00585         print('%s: %s' % (title, self.name))
00586         print('-' * LINE)
00587         print()
00588         print('    Rounds: %s' % self.rounds)
00589         print('    Warp:   %s' % self.warp)
00590         print('    Timer:  %s' % self.timer)
00591         print()
00592         if self.machine_details:
00593             print_machine_details(self.machine_details, indent='    ')
00594             print()

def pybench.Benchmark.run (   self)

Definition at line 523 of file pybench.py.

00523 
00524     def run(self):
00525 
00526         tests = sorted(self.tests.items())
00527         timer = self.get_timer()
00528         print('Running %i round(s) of the suite at warp factor %i:' % \
00529               (self.rounds, self.warp))
00530         print()
00531         self.roundtimes = []
00532         for i in range(self.rounds):
00533             if self.verbose:
00534                 print(' Round %-25i  effective   absolute  overhead' % (i+1))
00535             total_eff_time = 0.0
00536             for j in range(len(tests)):
00537                 name, test = tests[j]
00538                 if self.verbose:
00539                     print('%30s:' % name, end=' ')
00540                 test.run()
00541                 (eff_time, abs_time, min_overhead) = test.last_timing
00542                 total_eff_time = total_eff_time + eff_time
00543                 if self.verbose:
00544                     print('    %5.0fms    %5.0fms %7.3fms' % \
00545                           (eff_time * MILLI_SECONDS,
00546                            abs_time * MILLI_SECONDS,
00547                            min_overhead * MILLI_SECONDS))
00548             self.roundtimes.append(total_eff_time)
00549             if self.verbose:
00550                 print('                   '
00551                        '               ------------------------------')
00552                 print('                   '
00553                        '     Totals:    %6.0fms' %
00554                        (total_eff_time * MILLI_SECONDS))
00555                 print()
00556             else:
00557                 print('* Round %i done in %.3f seconds.' % (i+1,
00558                                                             total_eff_time))
00559         print()

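run() iterates self.rounds times over the sorted tests and appends each round's total effective (overhead-adjusted) time to self.roundtimes. It is normally preceded by load_tests() and calibrate(); a sketch of the expected ordering, with Setup as above:

    bench = pybench.Benchmark('baseline', warp=10)
    bench.load_tests(Setup)  # populate self.tests
    bench.calibrate()        # measure per-test overhead first
    bench.run()              # fills self.roundtimes, one entry per round
    bench.print_header()
    bench.print_benchmark()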
def pybench.Benchmark.stat (   self)
Return benchmark run statistics as tuple:

    (minimum round time,
     average round time,
     maximum round time)

    XXX Currently not used, since the benchmark does test
        statistics across all rounds.

Definition at line 560 of file pybench.py.

00560 
00561     def stat(self):
00562 
00563         """ Return benchmark run statistics as tuple:
00564 
00565             (minimum round time,
00566              average round time,
00567              maximum round time)
00568 
00569             XXX Currently not used, since the benchmark does test
00570                 statistics across all rounds.
00571 
00572         """
00573         runs = len(self.roundtimes)
00574         if runs == 0:
00575             return 0.0, 0.0
00576         min_time = min(self.roundtimes)
00577         total_time = sum(self.roundtimes)
00578         avg_time = total_time / float(runs)
00579         max_time = max(self.roundtimes)
00580         return (min_time, avg_time, max_time)

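A usage sketch; note that the early return for an empty roundtimes list yields only two zeros, so the three-way unpacking below assumes run() completed at least one round:

    bench.run()
    min_time, avg_time, max_time = bench.stat()
    print('round times: min %.3fs, avg %.3fs, max %.3fs'
          % (min_time, avg_time, max_time))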

Member Data Documentation

pybench.Benchmark.calibration_runs

Definition at line 433 of file pybench.py.

pybench.Benchmark.machine_details = None [static]

Definition at line 413 of file pybench.py.

string pybench.Benchmark.name = '' [static]

Definition at line 395 of file pybench.py.

pybench.Benchmark.name

Definition at line 422 of file pybench.py.

int pybench.Benchmark.rounds = 1 [static]

Definition at line 398 of file pybench.py.

int pybench.Benchmark.roundtime = 0 [static]

Definition at line 404 of file pybench.py.

pybench.Benchmark.roundtimes

Definition at line 530 of file pybench.py.

pybench.Benchmark.tests

Definition at line 436 of file pybench.py.

pybench.Benchmark.timer = TIMER_PLATFORM_DEFAULT [static]

Definition at line 416 of file pybench.py.

int pybench.Benchmark.verbose = 0 [static]

Definition at line 410 of file pybench.py.

pybench.Benchmark.verbose

Definition at line 427 of file pybench.py.

float pybench.Benchmark.version = 2.1 [static]

Definition at line 407 of file pybench.py.

pybench.Benchmark.version

Definition at line 443 of file pybench.py.

int pybench.Benchmark.warp = 1 [static]

Definition at line 401 of file pybench.py.

pybench.Benchmark.warp

Definition at line 431 of file pybench.py.


The documentation for this class was generated from the following file:

pybench.py