Overview
Comment: [build] graph builder: sContext
SHA3-256: fc68092e827b8d8695e0ced3b237f20b
User & Date: olr on 2018-07-24 21:22:57
Context
2018-07-24
21:23  [fr] conversion: regex rules -> graph rules  (check-in: e4f2abc13a, user: olr, tags: fr, rg)
21:22  [build] graph builder: sContext  (check-in: fc68092e82, user: olr, tags: build, rg)
2018-07-23
18:33  [fr] trois quarts  (check-in: 073641bf70, user: olr, tags: fr, rg)
Changes
Modified compile_rules_graph.py from [7bc95b954c] to [b0a5afa0c5].
︙

 dFUNCTIONS = {}

 def prepareFunction (s):
     "convert simple rule syntax to a string of Python code"
     s = s.replace("__also__", "bCondMemo")
     s = s.replace("__else__", "not bCondMemo")
+    s = s.replace("sContext", "_sAppContext")
     s = re.sub(r"(morph|morphVC|analyse|value|displayInfo)[(]\\(\d+)", 'g_\\1(lToken[\\2+nTokenOffset]', s)
     s = re.sub(r"(select|exclude|define|define_from)[(][\\](\d+)", 'g_\\1(lToken[\\2+nTokenOffset]', s)
     s = re.sub(r"(tag_before|tag_after)[(][\\](\d+)", 'g_\\1(lToken[\\2+nTokenOffset], dTags', s)
     s = re.sub(r"space_after[(][\\](\d+)", 'g_space_between_tokens(lToken[\\1+nTokenOffset], lToken[\\1+nTokenOffset+1]', s)
     s = re.sub(r"analyse_with_next[(][\\](\d+)", 'g_merged_analyse(lToken[\\1+nTokenOffset], lToken[\\1+nTokenOffset+1]', s)
     s = re.sub(r"(morph|analyse|value)\(>1", 'g_\\1(lToken[nLastToken+1]', s)  # next token
     s = re.sub(r"(morph|analyse|value)\(<1", 'g_\\1(lToken[nTokenOffset]', s)  # previous token

︙
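
The added line extends prepareFunction, which rewrites conditions written in the simple rule syntax into Python source, so a condition may now mention sContext; at compile time that name is mapped to the module-level _sAppContext. Below is a minimal standalone sketch of the rewriting, not the project's actual compiler: the excerpted substitutions come from the diff above, while the sample condition sCond and the "Writer" value are assumptions invented for illustration.

import re

def prepareFunction(s):
    # Excerpt of the substitutions shown in the diff above.
    s = s.replace("__also__", "bCondMemo")
    s = s.replace("__else__", "not bCondMemo")
    # The line added by this check-in: sContext in a rule condition is
    # rewritten to _sAppContext (presumably the host application context).
    s = s.replace("sContext", "_sAppContext")
    # Token-test calls such as morph(\2, ...) become calls on the token list.
    s = re.sub(r"(morph|morphVC|analyse|value|displayInfo)[(]\\(\d+)",
               'g_\\1(lToken[\\2+nTokenOffset]', s)
    return s

# Hypothetical rule condition in the simple rule syntax:
sCond = r'morph(\2, ":V") and sContext != "Writer"'
print(prepareFunction(sCond))
# -> g_morph(lToken[2+nTokenOffset], ":V") and _sAppContext != "Writer"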