Overview
Comment: [doc] syntax update
SHA3-256: 0ac7e8df1f3bd93ca31a01b17613fc3e
User & Date: olr on 2020-04-13 09:25:23
Changes
Modified doc/syntax.txt from [c2a94b6c6f] to [97c9a7a212].
# Writing rules for Grammalecte #

Note: This documentation is a __draft__. Information may be obsolete or incomplete.

## FILES REQUIRED ##

The rules file for your language must be named `rules.grx` in the folder `gc_lang/<lang>/`.
The settings file must be named `config.ini`.

These files are simple UTF-8 text files.

## PRINCIPLES ##

Grammalecte is a two-pass grammar checker engine. On the first pass, the engine checks
the text paragraph by paragraph. On the second pass, the engine checks the text
sentence by sentence.

You may alter how sentences are split by removing punctuation marks during the first pass.
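For instance, a minimal sketch of such a rule (the pattern is hypothetical, and it uses the `~>>` text-rewriting action described later in this document): rewriting “e.g.” internally without its periods prevents the sentence splitter from cutting a sentence at that abbreviation.

    e[.]g[.] <<- ~>> eg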
︙
* flags “LCR” for the regex word boundaries and case sensitiveness
* [optional] option name (the rule is active only if the option defined by user or config is active)
* [optional] rule name (named rules can be disabled by user or by config)
* [optional] priority number
* a regex pattern trigger
* a list of actions

A token rule is defined by:

* rule name
* [optional] priority number
* one or several lists of tokens
* a list of actions (the action is active only if the option defined by user or config is active)

Token rules must be defined within a graph.

Each graph is defined within the second pass with the command:

    @@@@GRAPH: graph_name|graph_code

A graph ends when another graph is defined or when the following command is found:

    @@@@END_GRAPH

There is no limit to the number of actions and the type of actions a rule can launch.
Each action has its own condition to be triggered.

There are several kinds of actions:

* Error warning, with a message, and optionally suggestions, and optionally a URL
* Text transformation, modifying internally the checked text
* Disambiguation action
* [second pass only] Tagging token
* [second pass only] Immunity rules

On the first pass, you can only write regex rules. On the second pass, you can write
regex rules and token rules. All token rules must be written within a graph.

## REGEX RULE SYNTAX ##

    __LCR/option(rulename)!priority__
        pattern
        <<- condition ->> error_suggestions                # message_error|URL
        <<- condition ~>> text_rewriting
        <<- condition =>> commands_for_disambiguation
        ...

Patterns are written with the Python syntax for regular expressions:
http://docs.python.org/library/re.html

There can be one or several actions for each rule, executed in the order they are written.

Optional: option, rulename, priority, condition, URL

The LCR flags mean:

* L: Left boundary for the regex
* C: Case sensitiveness
* R: Right boundary for the regex
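To make the template above concrete, here is a hedged sketch of a complete regex rule. The flags `[i]`, the option name `conf`, the rule name, the priority, the message and the URL are illustrative only, not taken from an actual rules file:

    __[i]/conf(sketch_informations)!4__
        informations
        <<- ->> information                # “Information” is usually an uncountable noun.|http://en.wikipedia.org/wiki/Mass_noun

The condition between `<<-` and `->>` is left empty here, so the suggestion is proposed whenever the pattern matches.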
︙
Example. Back reference in messages.

    (fooo) bar <<- ->> foo      # “\1” should be:

### Pattern matching

Repeating pattern matching of a single rule continues after the previous match,
so instead of general multiword patterns like

    (\w+) (\w+) <<- some_check(\1, \2) ->> \1, \2 # foo

use

    (\w+) <<- some_check(\1, word(1)) ->> \1, # foo

## TOKEN RULES ##

Token rules must be defined within a graph.

### Token rules syntax

    __rulename!priority__
        list_of_tokens
        list_of_tokens
        list_of_tokens
        ...
            <<- /option/ condition ->> suggestions|URL
            <<- /option/ condition ~>> rewriting
            <<- /option/ condition =>> disambiguation
            <<- /option/ condition />> tagging
            <<- /option/ condition !>> <immunity>
            ...
        list_of_tokens
        ...
            <<- action1
            <<- action2
            ...

With token rules, for one rule name, you can define several blocks of token lists with
different kinds of actions. Each block must be separated by an empty line.

Optional: priority, option, condition, URL

### Tokens

Tokens can be defined in several ways:

* Value (the text of the token). Examples: `word`, `<start>`, `<end>`, `,`.
* Lemma: `>lemma`.
* Regex: `~pattern`, `~pattern¬antipattern`.
* Regex on morphologies: `@pattern`, `@pattern¬antipattern`.
* Tags: `/tag`.
* Metatags: *NAME. Examples: `*WORD`, `*NUM`, `*SIGN`, etc.
* Jump over token: `<>`

Selection of tokens: `[value1|value2||>lemma|~pattern|@pattern|*META|/tag|…]`

Conditional token: `?token¿`

Conditional selection of token: `?[token1|token2|…]¿`

### Token references

Positive references are defined by a positive integer (> 0). Examples: `\1`, `\2`, `\3`, etc.
If there is at least one token set between parentheses, these numbers refer to the tokens
between parentheses, ignoring all others. If there is no token between parentheses, these
numbers refer to the tokens in the order defined by the triggered rule.

Negative references are defined by a negative integer (< 0). Examples: `\-1`, `\-2`, `\-3`, etc.
These numbers refer to the tokens counted backwards from the last one found by the triggered rule.
Examples:

    tokens:          alpha       beta        gamma       delta       epsilon
    positive refs:   1           2           3           4           5
    negative refs:   -5          -4          -3          -2          -1

    tokens:          alpha       (beta)      gamma       (delta)     epsilon
    positive refs:               1                       2
    negative refs:   -5          -4          -3          -2          -1

    tokens:          alpha       (beta)      ?gamma¿     (delta)     epsilon
    positive refs:               1                       2
    negative refs:   (-4/-5)     (-3/-4)     (-3/none)   -2          -1

## CONDITIONS ##

Conditions are Python expressions; they must return a value, which will be evaluated
as a boolean. You can use the usual Python syntax and libraries.

With regex rules, you can call pattern subgroups via `\1`, `\2`… `\0` is the full pattern.

Example:

    these (\w+) <<- \1 == "man" -1>> men        # Man is a singular noun. Use the plural form:

You can also apply functions to subgroups like: `\1.startswith("a")` or `\3.islower()` or `re.search("pattern", \2)`.

With token rules, you can also call each token with its reference, like `\1`, `\2`… or `\-1`, `\-2`…

Example:

    foo [really|often|sometimes] bar <<- ->> \1 \-1     # We say “foo bar”.

### Functions for regex rules

`word(n)`

> Catches the nth word after the pattern (separated only by white spaces).
> Returns None if no word is caught.

`word(-n)`

> Catches the nth word before the pattern (separated only by white spaces).
> Returns None if no word is caught.

`textarea(regex[, neg_regex])`

> Checks if the full text of the checked area (paragraph or sentence) matches the regex.

`morph(n, regex[, neg_regex][, no_word=False])`

> Checks if all tags of the word in group n match the regex.
> If neg_regex = "*", returns True only if all morphologies match the regex.
> If there is no word at position n, returns the value of no_word.

`analyse(n, regex[, neg_regex][, no_word=False])`

> Checks if all tags of the word in group n match the regex.
> If neg_regex = "*", returns True only if all morphologies match the regex.
> If there is no word at position n, returns the value of no_word.

### Functions for token rules

`value(n, values_string)`

> Analyses the value of the nth token.
> The <values_string> contains values separated by the sign `|`.
> Example: `"|foo|bar|"`

`morph(n, "regex", "neg_regex")`
`analyse(n, "regex", "neg_regex")`

> Same as morph() and analyse() for regex rules.

### Functions for regex and token rules

`__also__`

> Returns True if the previous condition returned True.
> Example: `<<- __also__ and condition2 ->>`

`__else__`

> Returns False if the previous condition returned False.
> Example: `<<- __else__ and condition2 ->>`

`option(option_name)`

> Returns True if <option_name> is activated, else False.

Note: the analysis is done on the preprocessed text.

`after(regex[, neg_regex])`

> Checks if the text after the pattern matches the regex.

`before(regex[, neg_regex])`

> Checks if the text before the pattern matches the regex.

### Default variables

`sCountry`

> Contains the current country locale of the checked paragraph.

    colour <<- sCountry == "US" ->> color       # Use American English spelling.

## ACTIONS ##

There are 5 kinds of actions:

1. Suggestions. The grammar checker suggests corrections.
2. Text processor. An internal process to modify the checked text internally. This is used to simplify grammar checking.
    * text rewriting
    * text deletion
    * token rewriting
    * token merging
    * token deletion
3. Disambiguation. Select, exclude or define morphologies of tokens.
4. Tagging. Add information on tokens.
5. Immunity. Prevent suggestions from being triggered.

### Positioning

Positioning is valid for suggestions, text processing, tagging and immunity.

By default, the full pattern will be underlined with blue.
You can shorten the underlined text area by specifying a back reference group of the pattern.
Instead of writing `->>`, write `-n>>`, n being the number of a back reference group.
Actually, `->>` is similar to `-0>>`.

Example:
︙
With the rule B, only the first group is underlined:

    ying and yang
    ^^^^

### Suggestions

#### Multiple suggestions

Use `|` in the replacement text to add multiple suggestions:

Example. Foo, FOO, Bar and BAR suggestions for the input word “foo”.

    foo <<- ->> Foo|FOO|Bar|BAR      # Did you mean:

#### No suggestion

You can display a message without making any suggestion. For this purpose, use a single
character _ in the suggestion field.

Example. No suggestion.

    foobar <<- ->> _                 # Message

#### Longer explanations with URLs

Warning messages can contain an optional URL for longer explanations.

    your’s <<- ->> yours             # Possessive pronoun:|http://en.wikipedia.org/wiki/Possessive_pronoun

#### Expressions in suggestion or replacement

Suggestions starting with an equal sign are Python string expressions, extended with
possible back references and named definitions.

Example:

    <<- ->> ='"' + \1.upper() + '"'      # With uppercase letters and quotation marks
    <<- ~>> =\1.upper()

### Text rewriting

Example. Replacing a string by another.

    Mr. [A-Z]\w+ <<- ~>> Mister
︙
`~>> *`

> Replace with whitespaces.

`~>> @`

> Replace with at signs. Useful mostly on the first pass, where it is advised to check
> the usage of punctuation and whitespaces.
> Successions of @ are automatically removed at the beginning of the second pass.

`~>> _`

> Replace with underscores. Just a filler.
> These characters won’t be removed at the beginning of the second pass.

You can use positioning with text rewriting actions.

    Mr(. [A-Z]\w+) <<- ~1>> *

You can also call Python expressions.

    __[s]__ Mr. ([a-z]\w+) <<- ~1>> =\1.upper()

The text processor is useful to simplify texts and write simpler checking rules.

For example, sentences with the same grammar mistake:

    These “cats” are blacks.
    These cats are “blacks”.
    These cats are absolutely blacks.
    These stupid “cats” are all blacks.
    These unknown cats are as per usual blacks.

Instead of writing complex rules or several rules to find mistakes in all possible cases,
you can use the text preprocessor to simplify the text.

To remove the characters “ and ”, write:

    [“”] ~>> *

The * means: replace the text with whitespaces.
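Continuing this example, a hedged sketch (the second rewriting rule, the checking rule and its message are hypothetical): together with the `[“”]` rule above, one more rewriting rule strips the extra words, so that a single checking rule then covers every variant.

    absolutely|stupid|all|unknown|as per usual <<- ~>> *

    # the “ +” quantifiers absorb the whitespaces left by the preprocessor
    These +cats +are +(blacks) <<- -1>> black       # An adjective has no plural form: use “black”.

Because the removed characters are replaced with whitespaces, the positions in the original text are preserved, so the error can still be underlined at the right place.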
︙
    Mister <<- ~>> Mr

    (Mrs?)[.] <<- ~>> \1

### Disambiguation

When Grammalecte analyses a word with `morph()`, before requesting the POS tags from the
dictionary, it checks if there is a stored marker for the position where the word is.
If there is a marker, Grammalecte uses the stored data and doesn’t query the dictionary.

The disambiguation commands store POS tags at the position of a word.

There are 3 commands for disambiguation.
︙
If select() and exclude() generate an empty list, no marker is set.

With define, you must set a list of POS tags. Example:

    define(\1, ["po:nom is:plur", "po:adj is:sing", "po:adv"])

### Tagging

**Only for token rules**

### Immunity

**Only for token rules**

## OTHER COMMANDS ##

### Comments

Lines beginning with `#` are comments.

### End of parsing

With the command `#END` at the beginning of a line, the parser won’t go further.
Whatever is written afterwards will be considered as comments.

### Definitions

Grammalecte supports definitions to simplify the description of complex rules.

Definition:

    DEF: name definition

Usage: `{name}` will be replaced by its definition.

Example:

    DEF: word_3_letters     \w\w\w+
    DEF: uppercase_token    ~^[A-Z]+$
    DEF: month_token        [January|February|March|April|May|June|July|August|September|October|November|December]

    ({word_3_letters}) (\w+) <<- condition ->> suggestion      # message|URL

    {uppercase_token} {month_token} <<- condition ->> message  # message|URL