
moin  1.9.0~rc2
MoinMoin.support.pygments.lexer.ExtendedRegexLexer Class Reference


Public Member Functions

def get_tokens_unprocessed
def get_tokens_unprocessed
def __repr__
def add_filter
def analyse_text
def get_tokens

Public Attributes

 options
 stripnl
 stripall
 tabsize
 encoding
 filters

Static Public Attributes

 flags = re.MULTILINE
dictionary tokens = {}
 name = None
list aliases = []
list filenames = []
list alias_filenames = []
list mimetypes = []

Detailed Description

A RegexLexer that uses a context object to store its state.

Definition at line 537 of file lexer.py.
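
The role of the context object can be sketched with an illustrative stand-in. `MiniContext` below is hypothetical, not the real `LexerContext`; only the attribute names (`text`, `pos`, `end`, `stack`) mirror the `ctx` object used by `get_tokens_unprocessed`:

```python
# Illustrative stand-in for the LexerContext role: a mutable cursor over
# the text plus a state stack, so lexing can be suspended and resumed.
class MiniContext:
    def __init__(self, text, pos=0, stack=None, end=None):
        self.text = text
        self.pos = pos                               # current cursor position
        self.end = len(text) if end is None else end
        self.stack = stack or ['root']               # state stack

ctx = MiniContext('hello\n')
ctx.pos = 5                  # a rule callback advanced the cursor
ctx.stack.append('string')   # a rule pushed a new state
print(ctx.pos, ctx.stack)    # → 5 ['root', 'string']
```

Because the caller can hold on to such an object between calls, lexing can resume mid-text in a non-root state, which is the point of this class.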


Member Function Documentation

def MoinMoin.support.pygments.lexer.Lexer.__repr__ (   self  ) [inherited]

Definition at line 92 of file lexer.py.

    def __repr__(self):
        if self.options:
            return '<pygments.lexers.%s with %r>' % (self.__class__.__name__,
                                                     self.options)
        else:
            return '<pygments.lexers.%s>' % self.__class__.__name__

def MoinMoin.support.pygments.lexer.Lexer.add_filter (   self,
  filter_,
  options 
) [inherited]
Add a new stream filter to this lexer.

Definition at line 99 of file lexer.py.

    def add_filter(self, filter_, **options):
        """
        Add a new stream filter to this lexer.
        """
        if not isinstance(filter_, Filter):
            filter_ = get_filter_by_name(filter_, **options)
        self.filters.append(filter_)
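
The accept-instance-or-name dispatch used by `add_filter` can be sketched with stand-ins. `Filter`, `UppercaseFilter`, `FILTER_REGISTRY` and `MiniLexer` below are hypothetical placeholders, not the real pygments `get_filter_by_name` machinery:

```python
# Minimal sketch of the add_filter dispatch pattern: accept either a
# ready-made Filter instance or a registry name plus options.
class Filter:
    def __init__(self, **options):
        self.options = options

class UppercaseFilter(Filter):
    def filter(self, tokens):
        for ttype, value in tokens:
            yield ttype, value.upper()

# Hypothetical name-to-class registry standing in for get_filter_by_name.
FILTER_REGISTRY = {'uppercase': UppercaseFilter}

class MiniLexer:
    def __init__(self):
        self.filters = []

    def add_filter(self, filter_, **options):
        # Same shape as Lexer.add_filter: resolve names through the registry.
        if not isinstance(filter_, Filter):
            filter_ = FILTER_REGISTRY[filter_](**options)
        self.filters.append(filter_)

lex = MiniLexer()
lex.add_filter('uppercase')          # by registry name
lex.add_filter(UppercaseFilter())    # or by instance
print(len(lex.filters))              # → 2
```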

def MoinMoin.support.pygments.lexer.Lexer.analyse_text (   text  ) [inherited]

Has to return a float between ``0`` and ``1`` that indicates
if a lexer wants to highlight this text. Used by ``guess_lexer``.
If this method returns ``0`` it won't highlight it in any case, if
it returns ``1`` highlighting with this lexer is guaranteed.

The `LexerMeta` metaclass automatically wraps this function so
that it works like a static method (no ``self`` or ``cls``
parameter) and the return value is automatically converted to
`float`. If the return value is an object that is boolean `False`
it's the same as if the return value was ``0.0``.

Definition at line 107 of file lexer.py.

    def analyse_text(text):
        """
        Has to return a float between ``0`` and ``1`` that indicates
        if a lexer wants to highlight this text. Used by ``guess_lexer``.
        If this method returns ``0`` it won't highlight it in any case, if
        it returns ``1`` highlighting with this lexer is guaranteed.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`
        it's the same as if the return value was ``0.0``.
        """
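
How a ``guess_lexer``-style chooser consumes these scores can be sketched as follows; ``guess``, ``IniLike`` and ``Plain`` are hypothetical illustrations, not pygments code:

```python
# Sketch of score-based lexer selection: a falsy return counts as 0.0,
# a 1.0 short-circuits the search (highlighting guaranteed).
def guess(text, lexers):
    best, best_score = None, 0.0
    for lexer in lexers:
        score = float(lexer.analyse_text(text) or 0.0)  # False -> 0.0
        if score == 1.0:
            return lexer            # guaranteed match, stop searching
        if score > best_score:
            best, best_score = lexer, score
    return best

class IniLike:
    @staticmethod
    def analyse_text(text):
        return 0.8 if text.lstrip().startswith('[') else 0

class Plain:
    @staticmethod
    def analyse_text(text):
        return 0.1

print(guess('[section]\nkey = 1', [Plain, IniLike]).__name__)  # → IniLike
```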

def MoinMoin.support.pygments.lexer.Lexer.get_tokens (   self,
  text,
  unfiltered = False 
) [inherited]
Return an iterable of (tokentype, value) pairs generated from
`text`. If `unfiltered` is set to `True`, the filtering mechanism
is bypassed even if filters are defined.

Also preprocess the text, i.e. expand tabs, strip it if
wanted, and apply registered filters.

Definition at line 121 of file lexer.py.

    def get_tokens(self, text, unfiltered=False):
        """
        Return an iterable of (tokentype, value) pairs generated from
        `text`. If `unfiltered` is set to `True`, the filtering mechanism
        is bypassed even if filters are defined.

        Also preprocess the text, i.e. expand tabs, strip it if
        wanted, and apply registered filters.
        """
        if not isinstance(text, unicode):
            if self.encoding == 'guess':
                try:
                    text = text.decode('utf-8')
                    if text.startswith(u'\ufeff'):
                        text = text[len(u'\ufeff'):]
                except UnicodeDecodeError:
                    text = text.decode('latin1')
            elif self.encoding == 'chardet':
                try:
                    import chardet
                except ImportError:
                    raise ImportError('To enable chardet encoding guessing, '
                                      'please install the chardet library '
                                      'from http://chardet.feedparser.org/')
                enc = chardet.detect(text)
                text = text.decode(enc['encoding'])
            else:
                text = text.decode(self.encoding)
        # text now *is* a unicode string
        text = text.replace('\r\n', '\n')
        text = text.replace('\r', '\n')
        if self.stripall:
            text = text.strip()
        elif self.stripnl:
            text = text.strip('\n')
        if self.tabsize > 0:
            text = text.expandtabs(self.tabsize)
        if not text.endswith('\n'):
            text += '\n'

        def streamer():
            for i, t, v in self.get_tokens_unprocessed(text):
                yield t, v
        stream = streamer()
        if not unfiltered:
            stream = apply_filters(stream, self.filters, self)
        return stream
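
The preprocessing steps in the middle of `get_tokens` (newline normalization, stripping, tab expansion, guaranteed trailing newline) can be extracted into a standalone sketch; `preprocess` is a hypothetical helper whose keyword arguments mirror the `stripall`/`stripnl`/`tabsize` lexer options:

```python
# Standalone sketch of the text preprocessing done by Lexer.get_tokens.
def preprocess(text, stripall=False, stripnl=True, tabsize=0):
    text = text.replace('\r\n', '\n').replace('\r', '\n')  # normalize EOLs
    if stripall:
        text = text.strip()          # strip all surrounding whitespace
    elif stripnl:
        text = text.strip('\n')      # strip only surrounding newlines
    if tabsize > 0:
        text = text.expandtabs(tabsize)
    if not text.endswith('\n'):
        text += '\n'                 # lexers rely on a trailing newline
    return text

print(repr(preprocess('\n\tfoo\r\nbar\r', tabsize=4)))  # → '    foo\nbar\n'
```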


def MoinMoin.support.pygments.lexer.Lexer.get_tokens_unprocessed (   self,
  text 
) [inherited]
Return an iterable of (tokentype, value) pairs.
In subclasses, implement this method as a generator to
maximize effectiveness.

Reimplemented in MoinMoin.support.pygments.lexer.DelegatingLexer.

Definition at line 169 of file lexer.py.

    def get_tokens_unprocessed(self, text):
        """
        Return an iterable of (tokentype, value) pairs.
        In subclasses, implement this method as a generator to
        maximize effectiveness.
        """
        raise NotImplementedError


def MoinMoin.support.pygments.lexer.ExtendedRegexLexer.get_tokens_unprocessed (   self,
  text = None,
  context = None 
)
Split ``text`` into (tokentype, text) pairs.
If ``context`` is given, use this lexer context instead.

Reimplemented from MoinMoin.support.pygments.lexer.RegexLexer.

Definition at line 542 of file lexer.py.

    def get_tokens_unprocessed(self, text=None, context=None):
        """
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
        """
        tokendefs = self._tokens
        if not context:
            ctx = LexerContext(text, 0)
            statetokens = tokendefs['root']
        else:
            ctx = context
            statetokens = tokendefs[ctx.stack[-1]]
            text = ctx.text
        while 1:
            for rexmatch, action, new_state in statetokens:
                m = rexmatch(text, ctx.pos, ctx.end)
                if m:
                    if type(action) is _TokenType:
                        yield ctx.pos, action, m.group()
                        ctx.pos = m.end()
                    else:
                        for item in action(self, m, ctx):
                            yield item
                        if not new_state:
                            # altered the state stack?
                            statetokens = tokendefs[ctx.stack[-1]]
                    # CAUTION: callback must set ctx.pos!
                    if new_state is not None:
                        # state transition
                        if isinstance(new_state, tuple):
                            ctx.stack.extend(new_state)
                        elif isinstance(new_state, int):
                            # pop
                            del ctx.stack[new_state:]
                        elif new_state == '#push':
                            ctx.stack.append(ctx.stack[-1])
                        else:
                            assert False, "wrong state def: %r" % new_state
                        statetokens = tokendefs[ctx.stack[-1]]
                    break
            else:
                try:
                    if ctx.pos >= ctx.end:
                        break
                    if text[ctx.pos] == '\n':
                        # at EOL, reset state to "root"
                        ctx.pos += 1
                        ctx.stack = ['root']
                        statetokens = tokendefs['root']
                        yield ctx.pos, Text, u'\n'
                        continue
                    yield ctx.pos, Error, text[ctx.pos]
                    ctx.pos += 1
                except IndexError:
                    break
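
The state-transition arm of the loop above (a tuple pushes states, a negative int pops, `'#push'` duplicates the top) can be isolated as a small sketch; `apply_transition` is a hypothetical helper, not pygments code:

```python
# Sketch of the new_state handling in get_tokens_unprocessed.
def apply_transition(stack, new_state):
    if isinstance(new_state, tuple):
        stack.extend(new_state)        # push one or more states
    elif isinstance(new_state, int):
        del stack[new_state:]          # e.g. -1 pops one state
    elif new_state == '#push':
        stack.append(stack[-1])        # duplicate the current state
    else:
        raise AssertionError('wrong state def: %r' % (new_state,))
    return stack

stack = ['root']
apply_transition(stack, ('string',))   # push
apply_transition(stack, '#push')       # duplicate top
apply_transition(stack, -1)            # pop
print(stack)                           # → ['root', 'string']
```

Note that after every transition the real loop re-reads `tokendefs[ctx.stack[-1]]`, so the rules of the new top-of-stack state take effect immediately.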



Member Data Documentation

list MoinMoin.support.pygments.lexer.Lexer.alias_filenames = [] [static, inherited]

Definition at line 74 of file lexer.py.

list MoinMoin.support.pygments.lexer.Lexer.aliases = [] [static, inherited]

Definition at line 68 of file lexer.py.

MoinMoin.support.pygments.lexer.Lexer.encoding [inherited]

Definition at line 86 of file lexer.py.

list MoinMoin.support.pygments.lexer.Lexer.filenames = [] [static, inherited]

Definition at line 71 of file lexer.py.

MoinMoin.support.pygments.lexer.Lexer.filters [inherited]

Definition at line 88 of file lexer.py.

MoinMoin.support.pygments.lexer.RegexLexer.flags = re.MULTILINE [static, inherited]

Definition at line 446 of file lexer.py.

list MoinMoin.support.pygments.lexer.Lexer.mimetypes = [] [static, inherited]

Definition at line 77 of file lexer.py.

MoinMoin.support.pygments.lexer.Lexer.name = None [static, inherited]

Definition at line 65 of file lexer.py.

MoinMoin.support.pygments.lexer.Lexer.options [inherited]

Definition at line 82 of file lexer.py.

MoinMoin.support.pygments.lexer.Lexer.stripall [inherited]

Definition at line 84 of file lexer.py.

MoinMoin.support.pygments.lexer.Lexer.stripnl [inherited]

Definition at line 83 of file lexer.py.

MoinMoin.support.pygments.lexer.Lexer.tabsize [inherited]

Definition at line 85 of file lexer.py.

dictionary MoinMoin.support.pygments.lexer.RegexLexer.tokens = {} [static, inherited]

Definition at line 465 of file lexer.py.


The documentation for this class was generated from the following file:

lexer.py