Overview
| Comment: | [build] add startswith/endswith functions |
|---|---|
| SHA3-256: | 786d886c3bce191be6a94b9ba0efbcf0 |
| User & Date: | olr on 2018-07-09 18:14:15 |
Context
2018-07-10
| 14:08 | [fr] conversion: regex rules -> graph rules | check-in: 305446b23f | user: olr | tags: fr, rg |

2018-07-09
| 18:14 | [build] add startswith/endswith functions | check-in: 786d886c3b | user: olr | tags: build, rg |
| 18:11 | [fr] conversion: regex rules -> graph rules | check-in: 060199513b | user: olr | tags: fr, rg |
Changes
Modified compile_rules_graph.py from [9ae99796bb] to [fa2c85d15b].
︙
s = re.sub(r"(morph|analyse|value|displayInfo)[(]\\(\d+)", 'g_\\1(lToken[\\2+nTokenOffset]', s)
s = re.sub(r"(select|exclude|define)[(][\\](\d+)", 'g_\\1(lToken[\\2+nTokenOffset]', s)
s = re.sub(r"(tag_before|tag_after)[(][\\](\d+)", 'g_\\1(lToken[\\2+nTokenOffset], dTags', s)
s = re.sub(r"(switchGender|has(?:(?:Mas|Fem)Form)|Simil)[(]\\(\d+)", '\\1(lToken[\\2+nTokenOffset]["sValue"]', s)
s = re.sub(r"(morph|analyse|value)\(>1", 'g_\\1(lToken[nLastToken+1]', s) # next token
s = re.sub(r"(morph|analyse|value)\(<1", 'g_\\1(lToken[nTokenOffset]', s) # previous token
s = re.sub(r"[\\](\d+)\.is(upper|lower|title)\(\)", 'lToken[\\1+nTokenOffset]["sValue"].is\\2()', s)
s = re.sub(r"[\\](\d+)\.(startswith|endswith)\(", 'lToken[\\1+nTokenOffset]["sValue"].\\2(', s)
s = re.sub(r"\bspell *[(]", '_oSpellChecker.isValid(', s)
s = re.sub(r"\bbefore\(\s*", 'look(sSentence[:lToken[1+nTokenOffset]["nStart"]], ', s) # before(s)
s = re.sub(r"\bafter\(\s*", 'look(sSentence[lToken[nLastToken]["nEnd"]:], ', s) # after(s)
s = re.sub(r"\bbefore0\(\s*", 'look(sSentence0[:lToken[1+nTokenOffset]["nStart"]], ', s) # before0(s)
s = re.sub(r"\bafter0\(\s*", 'look(sSentence[lToken[nLastToken]["nEnd"]:], ', s) # after0(s)
if bTokenValue:
# token values are used as parameters
︙
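The line added in this check-in is the `startswith|endswith` substitution: a back-reference such as `\2.startswith(` in a rule condition is rewritten into a call on the surface form (`sValue`) of the corresponding matched token, offset by `nTokenOffset` like the other substitutions in the diff. A minimal sketch of that single transformation follows; the helper name and the sample condition string are invented for illustration, only the regex and the `lToken`/`nTokenOffset` names come from the diff above.

```python
import re

def convert_startswith_endswith(sCode):
    # Same substitution as the added line: \N.startswith( / \N.endswith(
    # becomes a lookup on the matched token's surface form ("sValue").
    return re.sub(r"[\\](\d+)\.(startswith|endswith)\(",
                  'lToken[\\1+nTokenOffset]["sValue"].\\2(', sCode)

# Invented example of a rule condition before conversion:
sCondition = r'\2.startswith("re") and \3.endswith("tion")'
print(convert_startswith_endswith(sCondition))
# lToken[2+nTokenOffset]["sValue"].startswith("re") and lToken[3+nTokenOffset]["sValue"].endswith("tion")
```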