
moin 1.9.0~rc2
MoinMoin.support.pygments.lexer Namespace Reference

Classes

class  LexerMeta
class  Lexer
class  DelegatingLexer
class  include
class  combined
class  _PseudoMatch
class  _This
class  RegexLexerMeta
class  RegexLexer
class  LexerContext
class  ExtendedRegexLexer

Functions

def bygroups
def using
def do_insertions

Variables

list __all__
tuple _default_analyse = staticmethod(lambda x: 0.0)
tuple this = _This()

Class Documentation

class MoinMoin::support::pygments::lexer::include
Indicates that a state should include rules from another state.

Definition at line 217 of file lexer.py.
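As a sketch of typical usage (the lexer and rules below are hypothetical, not from this file): a state splices in all rules of another state via ``include``, so shared rules need only be written once.

    from pygments.lexer import RegexLexer, include
    from pygments.token import Comment, Text

    class ExampleLexer(RegexLexer):   # hypothetical lexer for illustration
        tokens = {
            'comments': [
                (r'#.*$', Comment.Single),
            ],
            'root': [
                include('comments'),  # splice in every rule from 'comments'
                (r'\s+', Text),
                (r'.', Text),
            ],
        }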

class MoinMoin::support::pygments::lexer::_This
Special singleton used for indicating the caller class.
Used by ``using``.

Definition at line 288 of file lexer.py.
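Since a class name is not yet bound while its own body is being executed, ``this`` acts as a placeholder for the lexer class inside its own ``tokens`` table. A minimal hypothetical sketch:

    from pygments.lexer import RegexLexer, bygroups, using, this
    from pygments.token import Comment, Text

    class QuoteLexer(RegexLexer):     # hypothetical lexer for illustration
        tokens = {
            'root': [
                # 'this' stands in for QuoteLexer, which is not yet bound
                # at class-definition time; the text after '>' is re-lexed
                # with this same lexer
                (r'(>)(.*\n)', bygroups(Comment, using(this))),
                (r'[^>]+', Text),
            ],
        }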


Function Documentation

def MoinMoin.support.pygments.lexer.bygroups ( *args )
Callback that yields multiple actions for each group in the match.

Definition at line 264 of file lexer.py.

00264 
00265 def bygroups(*args):
00266     """
00267     Callback that yields multiple actions for each group in the match.
00268     """
00269     def callback(lexer, match, ctx=None):
00270         for i, action in enumerate(args):
00271             if action is None:
00272                 continue
00273             elif type(action) is _TokenType:
00274                 data = match.group(i + 1)
00275                 if data:
00276                     yield match.start(i + 1), action, data
00277             else:
00278                 if ctx:
00279                     ctx.pos = match.start(i + 1)
00280                 for item in action(lexer, _PseudoMatch(match.start(i + 1),
00281                                    match.group(i + 1)), ctx):
00282                     if item:
00283                         yield item
00284         if ctx:
00285             ctx.pos = match.end()
00286     return callback
00287 
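A typical use, sketched with a hypothetical lexer: one regex matches several parts of a declaration, and ``bygroups`` assigns each capture group its own token type.

    from pygments.lexer import RegexLexer, bygroups
    from pygments.token import Keyword, Name, Text

    class ExampleLexer(RegexLexer):   # hypothetical lexer for illustration
        tokens = {
            'root': [
                # group 1 -> Keyword, group 2 -> Text, group 3 -> Name.Function
                (r'(def)(\s+)([a-zA-Z_]\w*)',
                 bygroups(Keyword, Text, Name.Function)),
                (r'\s+', Text),
                (r'.', Text),
            ],
        }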

def MoinMoin.support.pygments.lexer.do_insertions ( insertions, tokens )
Helper for lexers which must combine the results of several
sublexers.

``insertions`` is a list of ``(index, itokens)`` pairs.
Each ``itokens`` iterable should be inserted at position
``index`` into the token stream given by the ``tokens``
argument.

The result is a combined token stream.

TODO: clean up the code here.

Definition at line 599 of file lexer.py.

00599 
00600 def do_insertions(insertions, tokens):
00601     """
00602     Helper for lexers which must combine the results of several
00603     sublexers.
00604 
00605     ``insertions`` is a list of ``(index, itokens)`` pairs.
00606     Each ``itokens`` iterable should be inserted at position
00607     ``index`` into the token stream given by the ``tokens``
00608     argument.
00609 
00610     The result is a combined token stream.
00611 
00612     TODO: clean up the code here.
00613     """
00614     insertions = iter(insertions)
00615     try:
00616         index, itokens = insertions.next()
00617     except StopIteration:
00618         # no insertions
00619         for item in tokens:
00620             yield item
00621         return
00622 
00623     realpos = None
00624     insleft = True
00625 
00626     # iterate over the token stream where we want to insert
00627     # the tokens from the insertion list.
00628     for i, t, v in tokens:
00629         # first iteration. store the position of first item
00630         if realpos is None:
00631             realpos = i
00632         oldi = 0
00633         while insleft and i + len(v) >= index:
00634             tmpval = v[oldi:index - i]
00635             yield realpos, t, tmpval
00636             realpos += len(tmpval)
00637             for it_index, it_token, it_value in itokens:
00638                 yield realpos, it_token, it_value
00639                 realpos += len(it_value)
00640             oldi = index - i
00641             try:
00642                 index, itokens = insertions.next()
00643             except StopIteration:
00644                 insleft = False
00645                 break  # not strictly necessary
00646         yield realpos, t, v[oldi:]
00647         realpos += len(v) - oldi
00648 
00649     # leftover tokens
00650     if insleft:
00651         # no normal tokens, set realpos to zero
00652         realpos = realpos or 0
00653         for p, t, v in itokens:
00654             yield realpos, t, v
00655             realpos += len(v)

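A usage sketch, modeled on how console-style lexers weave prompt tokens into a sublexer's output (the input data here is made up):

    from pygments.lexer import do_insertions
    from pygments.lexers import PythonLexer
    from pygments.token import Generic

    code = 'print 1\n'
    # splice the prompt token in at position 0 of the sublexer's stream
    insertions = [(0, [(0, Generic.Prompt, '>>> ')])]
    gen = do_insertions(insertions,
                        PythonLexer().get_tokens_unprocessed(code))
    result = [tok for tok in gen if tok[2]]   # drop zero-length slices
    # result starts with (0, Generic.Prompt, '>>> '), followed by the
    # PythonLexer tokens for 'print 1\n' with positions shifted past it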
def MoinMoin.support.pygments.lexer.using ( _other, **kwargs )
Callback that processes the match with a different lexer.

The keyword arguments are forwarded to the lexer, except `state` which
is handled separately.

`state` specifies the state that the new lexer will start in, and can
be an enumerable such as ('root', 'inline', 'string') or a simple
string which is assumed to be on top of the root state.

Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.

Definition at line 296 of file lexer.py.

00296 
00297 def using(_other, **kwargs):
00298     """
00299     Callback that processes the match with a different lexer.
00300 
00301     The keyword arguments are forwarded to the lexer, except `state` which
00302     is handled separately.
00303 
00304     `state` specifies the state that the new lexer will start in, and can
00305     be an enumerable such as ('root', 'inline', 'string') or a simple
00306     string which is assumed to be on top of the root state.
00307 
00308     Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
00309     """
00310     gt_kwargs = {}
00311     if 'state' in kwargs:
00312         s = kwargs.pop('state')
00313         if isinstance(s, (list, tuple)):
00314             gt_kwargs['stack'] = s
00315         else:
00316             gt_kwargs['stack'] = ('root', s)
00317 
00318     if _other is this:
00319         def callback(lexer, match, ctx=None):
00320             # if keyword arguments are given the callback
00321             # function has to create a new lexer instance
00322             if kwargs:
00323                 # XXX: cache that somehow
00324                 kwargs.update(lexer.options)
00325                 lx = lexer.__class__(**kwargs)
00326             else:
00327                 lx = lexer
00328             s = match.start()
00329             for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
00330                 yield i + s, t, v
00331             if ctx:
00332                 ctx.pos = match.end()
00333     else:
00334         def callback(lexer, match, ctx=None):
00335             # XXX: cache that somehow
00336             kwargs.update(lexer.options)
00337             lx = _other(**kwargs)
00338 
00339             s = match.start()
00340             for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
00341                 yield i + s, t, v
00342             if ctx:
00343                 ctx.pos = match.end()
00344     return callback
00345 
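A sketch of both forms (the lexer and rules are illustrative; JavascriptLexer merely stands in for any other lexer class):

    from pygments.lexer import RegexLexer, bygroups, using, this
    from pygments.lexers import JavascriptLexer
    from pygments.token import Name, Text

    class ExampleLexer(RegexLexer):   # hypothetical lexer for illustration
        tokens = {
            'root': [
                # delegate the script body to a different lexer class
                (r'(<script>)(.*?)(</script>)',
                 bygroups(Name.Tag, using(JavascriptLexer), Name.Tag)),
                # re-lex the span with this same lexer, starting in the
                # 'tag' state stacked on top of 'root', as described above
                (r'\[\[.*?\]\]', using(this, state='tag')),
                (r'.', Text),
            ],
            'tag': [
                (r'.', Text),
            ],
        }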

Variable Documentation

list MoinMoin.support.pygments.lexer.__all__

Initial value:
00001 ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
00002            'LexerContext', 'include', 'flags', 'bygroups', 'using', 'this']

Definition at line 25 of file lexer.py.

tuple MoinMoin.support.pygments.lexer._default_analyse = staticmethod(lambda x: 0.0)

Definition at line 29 of file lexer.py.

tuple MoinMoin.support.pygments.lexer.this = _This()

Definition at line 293 of file lexer.py.