Grammalecte Check-in [ff58bafc4d]

Overview
Comment: [core] gc engine: dictionary of tokens position for disambiguation
SHA3-256: ff58bafc4d92aec3b974dabc01e7348355d1c9ab7123f58a4898c8f30a6f4ba8
User & Date: olr on 2018-06-13 07:57:29
Context
2018-06-13 09:26  [build][core] regex rules now use tokens for disambiguation (check-in: 1184e8ba6d, user: olr, tags: core, build, rg)
2018-06-13 07:57  [core] gc engine: dictionary of tokens position for disambiguation (check-in: ff58bafc4d, user: olr, tags: core, rg)
2018-06-13 06:02  [misc] SublimeText theme update (check-in: 655e0c4bf5, user: olr, tags: misc, rg)
Changes

Modified gc_core/py/lang_core/gc_engine.py from [acd000b8ca] to [f48195d439].

@@ -584,14 +584,15 @@
 class TokenSentence:
 
     def __init__ (self, sSentence, sSentence0, nOffset):
         self.sSentence = sSentence
         self.sSentence0 = sSentence0
         self.nOffset = nOffset
         self.lToken = list(_oTokenizer.genTokens(sSentence, True))
+        self.dTokenPos = { dToken["nStart"]: dToken  for dToken in self.lToken }
         self.createError = self._createWriterError  if _bWriterError  else self._createDictError
 
     def update (self, sSentence):
         self.sSentence = sSentence
         self.lToken = list(_oTokenizer.genTokens(sSentence, True))
 
     def _getNextMatchingNodes (self, dToken, dGraph, dNode):
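The added line indexes the sentence's tokens by their start offset, so the disambiguation code can look a token up from a character position in constant time instead of scanning the token list. A minimal sketch of the idea, using a stand-in regex tokenizer (Grammalecte's actual `_oTokenizer` is not shown on this page; only the `nStart` key is taken from the diff above):

```python
import re

def genTokens(sSentence):
    """Stand-in tokenizer: yields dicts with an "nStart" offset,
    mirroring the token shape the commit relies on."""
    for m in re.finditer(r"\w+|\S", sSentence):
        yield {"sValue": m.group(), "nStart": m.start(), "nEnd": m.end()}

sSentence = "Les tokens sont indexés."
lToken = list(genTokens(sSentence))

# The change in this check-in: map each token's start position to the token
# itself, giving O(1) access from a character offset.
dTokenPos = { dToken["nStart"]: dToken  for dToken in lToken }

print(dTokenPos[4]["sValue"])  # token starting at offset 4: "tokens"
```

Note that in the committed code `update()` rebuilds `self.lToken` but not `self.dTokenPos`, so the index reflects the tokens computed at construction time.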