Grammalecte Diff

Differences From Artifact [f728cc54d9] To Artifact [cbb922d3e1]:

# Writing rules for Grammalecte

Note: This documentation is a __draft__. Information may be obsolete or incomplete.


## Files required

The rules file for your language must be named `rules.grx` in the folder `gc_lang/<lang>/`.
The settings file must be named `config.ini`.

These files are simple UTF-8 text files.
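
For example, with a hypothetical language code `xx`, and assuming `config.ini` sits in the same folder as `rules.grx`, this gives:

    gc_lang/xx/rules.grx
    gc_lang/xx/config.ini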


## Principles

Grammalecte is a two-pass grammar checker engine. On the first pass, the
engine checks the text paragraph by paragraph. On the second pass, the engine
checks the text sentence by sentence.

You may alter how sentences are split by removing punctuation marks during the first pass.

The command to switch to the second pass is `[++]`.

In each pass, you can write as many rules as you need.

There are two kinds of rules:

* regex rules (triggered by a regular expression)
* token rules (triggered by a succession of tokens)

A regex rule is defined by:

* flags “LCR” for the regex word boundaries and case sensitivity
* [optional] option name (the rule is active only if the option defined by user or config is active)
* [optional] rule name (named rules can be disabled by user or by config)
* [optional] priority number
* a regex pattern trigger
* a list of actions

A token rule is defined by:

* rule name
* [optional] priority number
* one or several lists of tokens
* a list of actions (the action is active only if the option defined by user or config is active)

Token rules must be defined within a graph.

Each graph is defined within the second pass with the command:

        @@@@GRAPH: graph_name|graph_code

A graph ends when another graph is defined or when the following command is found:

        @@@@END_GRAPH

There is no limit to the number and type of actions a rule can launch. Each
action has its own condition that triggers it.

There are several kinds of actions:

* Error warning, with a message, optional suggestions, and an optional URL
* Text transformation, modifying internally the checked text
* [second pass only] Disambiguation action
* [second pass only] Tagging token
* [second pass only] Immunity rules



On the first pass, you can only write regex rules.
On the second pass, you can write regex rules and token rules. All token rules must be written within a graph.
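
To see how these pieces fit together, here is a minimal, hypothetical sketch of a `rules.grx` layout (the patterns, messages, graph name and graph code are invented for illustration):

    # First pass: regex rules, checked paragraph by paragraph.
    __[s]__ foo foo <<- ->> foo             # Duplicate word.

    [++]

    # Second pass: regex rules and token rules, checked sentence by sentence.
    __[i]__ teh <<- ->> the                 # Did you mean:

    # Token rules must be written within a graph.
    @@@@GRAPH: my_graph|my_code

    # … token rules …

    @@@@END_GRAPH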

## Syntax details




### Comments

Lines beginning with `#` are comments.


### End of parsing





With the command `#END` at the beginning of a line, the parser won’t go further.
Whatever is written after it is treated as comments.


## Regex rule syntax

    __LCR/option(rulename)!priority__
        pattern
            <<- condition ->> error_suggestions             # message_error|URL
            <<- condition ~>> text_rewriting
            <<- condition =>> commands_for_disambiguation
            ...

Patterns are written with the Python syntax for regular expressions:
http://docs.python.org/library/re.html

There can be one or several actions for each rule, executed in the order they are
written.

Optional:

* option
* rulename
* priority
* conditions
* URL
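
For instance, a rule header can combine all of these optional parts (the option name, rule name and priority below are purely illustrative):

    __[i]/typo(rule_example)!3__
        pattern
            <<- condition ->> suggestion        # Message.|http://example.org/explanation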


The LCR flags mean:

* L: Left boundary for the regex
* C: Case sensitivity
* R: Right boundary for the regex

>   `u`     uppercase allowed for lowercase characters

>>          i.e.:  "Word"  becomes  "W[oO][rR][dD]"

Examples:

    __[i]__
    __<s]__
    __[u>__
    __<s>__


Activating or deactivating a rule via a user option is possible with an option name placed
just after the LCR flags, i.e.:

    __[i]/option1__
    __[u]/option2__
    __[s>/option1__
    __<u>/option3__
    __<i>/option3__

Rules can be named:

    __[i]/option1(name1)__
    __[u]/option2(name2)__
    __[s>/option1(name3)__
    __<u>(name4)__
    __<i>(name5)__

Each rule name must be unique.


The LCR flags are also optional. If you don’t set these flags, the default LCR
flags will be:

    __[i]__

Example. Report “foo” in the text and suggest “bar”:

    foo <<- ->> bar         # Use bar instead of foo.

Example. Recognize a missing hyphen, suggest it, and rewrite the text internally
with the hyphen:

    __[s]__
        foo bar
            <<- ->> foo-bar     # Missing hyphen.
            <<- ~>> foo-bar


### Single-line or multi-line rules

Rules can be broken into multiple lines by indenting the continuation lines with leading spaces.
You should use 4 spaces.

Examples:

    __<s>__ pattern <<- condition ->> replacement # message

    __<s>__
        pattern
            <<- condition ->> replacement
            # message
            <<- condition ->> suggestion # message
            <<- condition ~>> text_rewriting
            <<- =>> disambiguation




### Whitespaces at the border of patterns or suggestions

Example: Recognize two or more consecutive spaces and suggest a single space:

    __<s>__  "  +" <<- ->> " "      # Remove extra space(s).

The `"` characters protect spaces in the pattern and in the replacement text.


### Pattern groups and back references

It is often useful to retrieve parts of the matched pattern. Simply use
parentheses in the pattern to create groups, which can then be retrieved with back references.

Example. Suggest a word with correct quotation marks:

    \"(\w+)\" <<- ->> “\1”      # Correct quotation marks.

Example. Suggest the missing space after the signs `!`, `?` or `.`:

    __<i]__  \b([?!.])([A-Z]+) <<- ->> \1 \2     # Missing space?

Example. Back reference in messages.

    (fooo) bar <<- ->> foo      # “\1” should be:


### Pattern matching

Repeated pattern matching of a single rule continues after the previous match, so
instead of general multiword patterns like

    (\w+) (\w+) <<- some_check(\1, \2) ->> \1, \2 # foo

use

    (\w+) <<- some_check(\1, word(1)) ->> \1, # foo


### Definitions

Grammalecte supports definitions to simplify the description of complex rules.


Example:

    DEF: name pattern

Usage in the rules:

    ({name}) (\w+) ->> "\1-\2"          # Missing hyphen?
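
As a sketch, a definition can factor out a sub-pattern shared by several rules (the definition name and the rule below are illustrative, not taken from an actual rules file):

    DEF: sign_punct     [?!.]

    __<i]__  \b({sign_punct})([A-Z]+) <<- ->> \1 \2     # Missing space?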


### Multiple suggestions

Use `|` in the replacement text to add multiple suggestions:

Example. Foo, FOO, Bar and BAR suggestions for the input word "foo".

    foo <<- ->> Foo|FOO|Bar|BAR         # Did you mean:


### No suggestion

You can display a message without making any suggestion. For this purpose,
use the single character `_` in the suggestion field.

Example. No suggestion.

    foobar <<- ->> _                    # Message


### Positioning

Positioning is valid only for error creation and text rewriting.

By default, the full pattern will be underlined in blue. You can shorten the
underlined text area by specifying a back reference group of the pattern.
Instead of writing `->>`, write `-n>>`, n being the number of a back reference
group. In fact, `->>` is equivalent to `-0>>`.

Example:

    (ying) and yang <<- -1>> yin # Did you mean:

    __[s]__ (Mr.) [A-Z]\w+ <<- ~1>> Mr


**Comparison**

Rule A:

    ying and yang       <<- ->>     yin and yang        # Did you mean:

Rule B:

    (ying) and yang     <<- -1>>    yin                 # Did you mean:

With the rule A, the full pattern is underlined:

    ying and yang
    ^^^^^^^^^^^^^

With the rule B, only the first group is underlined:

    ying and yang
    ^^^^


### Longer explanations with URLs

Warning messages can contain an optional URL for longer explanations.

    your’s
        <<- ->> yours
            # Possessive pronoun:|http://en.wikipedia.org/wiki/Possessive_pronoun



### Text rewriting

Example. Replacing a string by another.

    Mr. [A-Z]\w+ <<- ~>> Mister

**WARNING**: The replacing text must be shorter than the replaced text or have the
same length. Breaking this rule will misplace subsequent error reports. You
have to ensure yourself that the rules comply with this constraint; Grammalecte
won’t do it for you.
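
For instance (illustrative rules, not taken from an actual rules file):

    # Fine: the replacement is shorter than the replaced text.
    Mister <<- ~>> Mr

    # Wrong: the replacement is longer; later error positions would shift.
    Mr <<- ~>> Mister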

Specific commands for text rewriting:

`~>> *`

>   Replace with whitespaces

`~>> @`

>   Replace with arrobas, useful mostly at first pass, where it is advised to
>   check usage of punctuations and whitespaces.
>   Successions of @ are automatically removed at the beginning of the second pass.

`~>> _`

>   Replace with underscores. Just a filler.
>   These characters won’t be removed at the beginning of the second pass.
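
As a quick illustration of the three fillers (the last rule is invented; the first two reappear in the Text processing section below):

    [“”] <<- ~>> *
    https?://\S+ <<- ~>> @
    \d+ <<- ~>> _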

You can use positioning with text rewriting actions.

    Mr(. [A-Z]\w+) <<- ~1>> *

You can also call Python expressions.

    __[s]__ Mr. ([a-z]\w+) <<- ~1>> =\1.upper()



### Text processing



The text processor is useful to simplify texts and write simpler checking
rules.

For example, sentences with the same grammar mistake:

    These “cats” are blacks.
    These cats are “blacks”.
    These cats are absolutely blacks.
    These stupid “cats” are all blacks.
    These unknown cats are as per usual blacks.

Instead of writing complex rules or several rules to find mistakes in all possible
cases, you can use the text preprocessor to simplify the text.

To remove the characters “ and ”, write:

    [“”] ~>> *

The `*` means: replace the text with whitespaces.

Similarly to grammar rules, you can add conditions:

    \w+ly <<- morph(\0, "adverb") ~>> *

You can also remove a group reference:

    these (\w+) (\w+) <<- morph(\1, "adjective") and morph(\2, "noun") ~1>> *
    (am|are|is|were|was) (all) <<- ~2>> *
    as per usual <<- ~>> *

With these rules, you get the following sentences:

    These  cats  are blacks.
    These cats are  blacks .
    These cats are            blacks.
    These         cats  are     blacks.
    These         cats are              blacks.

These grammar mistakes can be detected with one simple rule:

    these +(\w+) +are +(\w+s)
        <<- morph(\1, "noun") and morph(\2, "plural")
        -2>> _              # Adjectives are invariable.

Instead of replacing text with whitespaces, you can replace text with @.

    https?://\S+ <<- ~>> @

This is useful if, on the first pass, you write rules to check successive whitespaces.
The @ characters are automatically removed at the second pass.

You can also replace any text as you wish.

    Mister <<- ~>> Mr
    (Mrs?)[.] <<- ~>> \1


### Disambiguation

When Grammalecte analyses a word with morph, before requesting the
POS tags from the dictionary, it checks whether there is a stored marker for the
position of the word. If there is a marker, Grammalecte uses the stored
data and doesn’t query the dictionary.

The disambiguation commands store POS tags at the position of a word.

`define(n, [definitions])`

>   stores at position n the POS tags in definitions (a list of strings).

Examples:

    =>> select(\1, "po:noun is:pl")
    =>> exclude(\1, "po:verb")
    =>> define(\1, ["po:adv"])
    =>> exclude(\1, "po:verb") and define(\2, ["po:adv"]) and select(\3, "po:adv")

Note: select(), exclude() and define() ALWAYS return True.

If select() and exclude() generate an empty list, no marker is set.

With define(), you must provide a list of POS tags. Example:

    define(\1, ["po:nom is:plur", "po:adj is:sing", "po:adv"])


### Conditions

Conditions are Python expressions; they must return a value, which will be
evaluated as a boolean. You can use the usual Python syntax and libraries.

You can call pattern subgroups via \0, \1, \2…

Example:

    these (\w+)
        <<- \1 == "man" -1>> men        # Man is a singular noun. Use the plural form:

You can also apply functions to subgroups like:

    \1.startswith("a")
    \3.islower()
    re.search("pattern", \2)
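
A sketch of a rule whose condition applies such a method (the words and message are illustrative):

    __[s]__ a (\w+)
        <<- \1.startswith("a") ->> an \1        # “an” is expected before a vowel.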


### Standard functions

`word(n)`

>   catches the nth next word after the pattern (separated only by whitespaces).
>   returns None if no word is caught

`word(-n)`

`morph(n, regex[, neg_regex][, no_word=False])`

>   checks if all tags of the word in group n match the regex.
>   if neg_regex = "*", returns True only if all morphologies match the regex.
>   if there is no word at position n, returns the value of no_word.

`morph0(n, regex[, neg_regex][, no_word=False])`

>   checks if all tags of the word in group n match the regex.
>   if neg_regex = "*", returns True only if all morphologies match the regex.
>   if there is no word at position n, returns the value of no_word.


`option(option_name)`

>   returns True if option_name is activated else False

Note: the analysis is done on the preprocessed text.
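
For example, an action can be made conditional on a user option from within the condition itself (the option name “typography” is illustrative):

    (\w+)'(\w+)
        <<- option("typography") ->> \1’\2      # Use a typographical apostrophe.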


### Default variables

`sCountry`

>   It contains the current country locale of the checked paragraph.

    colour <<- sCountry == "US" ->> color       # Use American English spelling.



### Expressions in suggestion or replacement

Suggestions starting with an equal sign are Python string expressions,
extended with possible back references and named definitions:

Example:

    <<- ->> ='"' + \1.upper() + '"'         # With uppercase letters and quotation marks
    <<- ~>> =\1.upper()


## Token rules

Token rules must be defined within a graph.


### Tokens

Tokens can be defined in several ways:

* Value (meaning the text of the token). Examples: `word`, `<start>`, `<end>`, `,`.
* Lemma: `>lemma`.
* Regex: `~pattern`, `~pattern¬antipattern`.
* Regex on morphologies: `@pattern`, `@pattern¬antipattern`.
* Tags: `/tag`.
* Metatags: `*NAME`. Examples: `*WORD`, `*NUM`, `*SIGN`, etc.

Selection of tokens: `[token1|token2|>lemma1|>lemma2|~pattern1|@pattern1|…]`

Conditional token: `?token¿`

Conditional selection of token: `?[token1|token2|…]¿`
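
As a sketch, a token sequence can combine these forms; the line below is only a trigger, not a complete rule, and the tokens are illustrative:

    <start> >be ?very¿ [good|well]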


### Token references

Positive references are defined by a positive integer `>= 1`. Examples: \1, \2, \3, etc.
If at least one token is set between parentheses, these numbers refer to the tokens between parentheses, ignoring all others.
If no token is set between parentheses, these numbers refer to the tokens found, in the order defined by the triggered rule.

Negative references are defined by a negative integer `<= -1`. Examples: \-1, \-2, \-3, etc.
These numbers refer to the tokens counted from the last one found by the triggered rule.

Examples:

    tokens:             alpha       beta        gamma       delta       epsilon
    positive refs:      1           2           3           4           5
    negative refs:      -5          -4          -3          -2          -1

    tokens:             alpha       (beta)      gamma       (delta)     epsilon
    positive refs:                  1                       2
    negative refs:      -5          -4          -3          -2          -1

    tokens:             alpha       (beta)      ?gamma¿     (delta)     epsilon
    positive refs:                  1                       2
    negative refs:      (-4/-5)     (-3/-4)     (-3/none)   -2          -1