Grammalecte Diff

Differences From Artifact [223b361257]:

To Artifact [f728cc54d9]:


WRITING RULES FOR GRAMMALECTE

Note: This documentation is a draft. Information may be obsolete or incomplete.

# Principles #

Grammalecte is a two-pass grammar checker engine. On the first pass, the
engine checks the text paragraph by paragraph. On the second pass, the engine
checks the text sentence by sentence.

The command to switch to the second pass is `[++]`.
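
For instance, a rules file may be laid out like this (a minimal sketch; the
patterns, suggestions and messages are placeholders, not real rules):

        # checked paragraph by paragraph (first pass)
        __[i]__ pattern1 <<- ->> suggestion1        # Message 1.

        [++]

        # checked sentence by sentence (second pass)
        __<s>__ pattern2 <<- ->> suggestion2        # Message 2.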

In each pass, you can write as many rules as you need.

There are two kinds of rules:

* regex rules (triggered by a regular expression)
* token rules (triggered by a succession of tokens)

A regex rule is defined by:

* [optional] flags “LCR” for the regex word boundaries and case sensitivity
* a regex pattern trigger
* a list of actions
* [optional] option name (the rule is active only if the option defined by user or config is active)
* [optional] rule name (named rules can be disabled by user or by config)

A token rule is defined by:

* rule name
* one or several lists of tokens (triggers)
* a list of actions (the action is active only if the option defined by user or config is active)

Token rules must be defined within a graph.

Each graph is defined within the second pass with the command:

        @@@@GRAPH: graph_name

A graph ends when another graph is defined or when the following command is found:

        @@@@END_GRAPH
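
A graph may therefore be laid out as follows (a minimal sketch; the rule body
is illustrative, and the token and action syntax is described later in this
document):

        @@@@GRAPH: my_graph

        __my_token_rule__
            these  cats
                <<- condition ->> suggestion        # message

        @@@@END_GRAPH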

There is no limit to the number and type of actions a rule can launch. Each
action has its own condition to be triggered.

There are several kinds of actions:

* Error warning, with a message, optional suggestions, and an optional URL
* Text transformation, modifying the checked text internally
* Disambiguation action, setting POS tags on a position
* Tagging


The rules file for your language must be named `rules.grx`.
The settings file must be named `config.ini`.

All these files are plain UTF-8 text files. UTF-8 is mandatory.


# Comments #

Lines beginning with `#` are comments.


# End of file #

With the command:

        #END

at the beginning of a line, the parser won’t go further.
Whatever is written afterwards is considered a comment.


# Regex rule syntax #

        __LCR/option(rulename)__
            pattern
            <<- condition ->> error_suggestions  # message_error|http://awebsite.net...
            <<- condition ~>> text_rewriting
            <<- condition =>> commands_for_disambiguation
            ...

Patterns are written with the Python syntax for regular expressions:
http://docs.python.org/library/re.html
Examples:

        __<s>__ pattern
            <<- condition ->> suggestion # message
            <<- condition ~>> text_rewriting
            <<- =>> disambiguation

        __<s>__ pattern <<- condition ->> replacement # message




## Whitespaces at the border of patterns or suggestions ##

Example: recognize two or more spaces and suggest a single space:

        __<s>__  "  +" <<- ->> " "      # Extra space(s).

ASCII " characters protect spaces in the pattern and in the replacement text.
Characters `"` protect spaces in the pattern and in the replacement text.


## Pattern groups and back references ##

It is often useful to retrieve parts of the matched pattern. Simply use
parentheses in the pattern to get groups with back references.

Example. Suggest a word with correct quotation marks:

        \"(\w+)\" <<- ->> “\1”      # Correct quotation marks.

Example. Suggest the missing space after the !, ? or . signs:

        __<i]__ \b([?!.])([A-Z]+) <<- ->> \1 \2     # Missing space?

Example. Back references in messages:

        (fooo) bar <<- ->> foo      # “\1” should be:


## Pattern matching ##

Repeated pattern matching of a single rule continues after the previous match,
so instead of general multiword patterns, like

        (\w+) (\w+) <<- some_check(\1, \2) ->> \1, \2 # foo

use

        (\w+) <<- some_check(\1, word(1)) ->> \1, # foo


## Name definitions ##

Grammalecte supports name definitions to simplify the description of complex
rules.

Example:
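
A minimal sketch, assuming definitions are declared with the `DEF:` directive
and referenced as `{name}` inside patterns (the name and the rule below are
illustrative):

        DEF: determiner     (?:the|this|that|these|those)

        __[i]__ ({determiner}) ({determiner}) <<- ->> \1        # Duplicate determiner.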

A rewriting action can also target a single group: with `~1>>`, only the text
of the first group is rewritten internally. For instance:

        Mr(. [A-Z]\w+) <<- ~1>> *

You can also call Python expressions.

        __[s]__ Mr. ([a-z]\w+) <<- ~1>> =\1.upper()


# Text preprocessing and multi-pass checking #

On each pass, Grammalecte uses rules written in the text preprocessor to
modify the text internally before checking it.

The text preprocessor is useful to simplify texts and write simpler checking
rules.

For example, sentences with the same grammar mistake:

        These “cats” are blacks.
        These cats are “blacks”.
        These cats are absolutely blacks.
        These stupid “cats” are all blacks.
        These unknown cats are as per usual blacks.

Instead of writing complex rules or several rules to find mistakes for all
possible cases, you can use the text preprocessor to simplify the text.

To remove the characters “ and ”, write:

        [“”] ->> *

The * means: replace the text with whitespaces.

Similarly to grammar rules, you can add conditions:

        \w+ly <<- morph(\0, "adverb") ->> *

You can also remove only a referenced group:

        these (\w+) (\w+) <<- morph(\1, "adjective") and morph(\2, "noun") -1>> *
        (am|are|is|were|was) (all) <<- -2>> *

With these rules, you get the following sentences:

        These  cats  are blacks.
        These cats are  blacks .
        These cats are            blacks.
        These         cats  are     blacks.
        These         cats are              blacks.

These grammar mistakes can be detected with one simple rule:

        these +(\w+) +are +(\w+s)
            <<- morph(\1, "noun") and morph(\2, "plural")
            -2>> _              # Adjectives are invariable.

Instead of replacing text with whitespaces, you can replace text with @.

        https?://\S+ ->> @

This is useful if on the first pass you write rules to check successive
whitespaces. The @ characters are automatically removed on the second pass.

You can also replace any text as you wish.

        Mister <<- ->> Mr
        (Mrs?)[.] <<- ->> \1

With multi-pass checking and the text preprocessor, it is advisable to remove
or simplify the text which has already been checked on a previous pass.


# Disambiguation #

When Grammalecte analyses a word with morph, before requesting the
POS tags from the dictionary, it checks if there is a stored marker for the
position of the word. If there is a marker, Grammalecte uses the stored
data and doesn’t query the dictionary.

The disambiguation commands store POS tags at the position of a word.

There are 3 commands for disambiguation.

`select(n, pattern)`

>   stores at position n only the POS tags of the word matching the pattern.

`exclude(n, pattern)`

>   stores at position n the POS tags of the word, except those matching the
    pattern.

`define(n, [definitions])`

>   stores at position n the POS tags in definitions (a list of strings).

Examples:

        =>> select(\1, "po:noun is:pl")
        =>> exclude(\1, "po:verb")
        =>> define(\1, ["po:adv"])
        =>> exclude(\1, "po:verb") and define(\2, ["po:adv"]) and select(\3, "po:adv")

Note: select, exclude and define ALWAYS return True.

If select and exclude generate an empty list, no marker is set.

With define, you must set a list of POS tags. Example:

        define(\1, "po:nom is:plur|po:adj is:sing|po:adv")

This will store a list of tags at the position of the first group:

        ["po:nom is:plur", "po:adj is:sing", "po:adv"]
        define(\1, ["po:nom is:plur", "po:adj is:sing", "po:adv"])



# Conditions #

Conditions are Python expressions; they must return a value, which will be
evaluated as a boolean. You can use the usual Python syntax and libraries.
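
For instance, back references can be combined with ordinary Python tests
(a minimal sketch):

        __[s]__ (\w+) (\w+) <<- \1.lower() == \2.lower() ->> \1        # Duplicated word?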


`before(regex[, neg_regex])`

>   checks if the text before the pattern matches the regex.

`textarea(regex[, neg_regex])`

>    checks if the full text of the checked area (paragraph or sentence) matches the regex.

`morph(n, regex[, neg_regex][, no_word=False])`

>   checks if one of the tags of the word in group n matches the regex and,
>   if neg_regex is given, if no tag matches it.
>   if neg_regex = "*", returns True only if all morphologies match the regex.
>   if there is no word at position n, returns the value of no_word.

`analyse(n, regex[, neg_regex][, no_word=False])`

>   like morph, but analyses the word without using the disambiguation data.
>   if neg_regex = "*", returns True only if all morphologies match the regex.
>   if there is no word at position n, returns the value of no_word.


`option(option_name)`

>   returns True if option_name is activated, False otherwise.

Note: the analysis is done on the preprocessed text.
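
A sketch combining these functions in one condition (the option name and the
tags are illustrative):

        __[s]__ (\w+) +(\w+)
            <<- option("agreement") and morph(\1, "po:noun is:pl")
                and analyse(\2, "po:verb is:sg", "po:noun") ->> _       # Possible agreement error.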


# Default variables #

`sCountry`

>   It contains the current country locale of the checked paragraph.

        colour <<- sCountry == "US" ->> color       # Use American English spelling.



# Expressions in the suggestions #

Suggestions started by an equal sign are Python string expressions
extended with possible back references and named definitions:

Example:

        (foo\w+) <<- ->> = '"' + \1.upper() + '"'     # With uppercase letters and quotation marks

All words beginning with “foo” will be recognized, and the suggestion is
the uppercase form of the string with ASCII quotation marks: e.g. foom ->> "FOOM".


# Token rules #

Token rules must be defined within a graph.


## Tokens ##

Tokens can be defined in several ways:

* Value (meaning the text of the token). Examples: `word`, `<start>`, `<end>`, `,`.
* Lemma: `>lemma`.
* Regex: `~pattern`, `~pattern¬antipattern`.
* Regex on morphologies: `@pattern`, `@pattern¬antipattern`.
* Tags: `/tag`.
* Metatags: `*NAME`. Examples: `*WORD`, `*NUM`, `*SIGN`, etc.

Selection of tokens: `[token1|token2|>lemma1|>lemma2|~pattern1|@pattern1|…]`

Conditional token: `?token¿`

Conditional selection of tokens: `?[token1|token2|…]¿`
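
A sketch of a token line mixing several of these forms (the lemmas, the regex
and the conditional token are illustrative):

        __my_token_rule__
            these  ?stupid¿  [>cat|>dog]  are  ~\w+s$
                <<- condition ->> suggestion        # message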

## Token references ##

Positive references are defined by a positive integer `>= 1`. Examples: \1, \2, \3, etc.
If there is at least one token set between parentheses, these numbers refer to the
tokens between parentheses, ignoring all others.
If there is no token between parentheses, these numbers refer to the tokens in the
order defined by the triggered rule.

Negative references are defined by a negative integer `<= -1`. Examples: \-1, \-2, \-3, etc.
These numbers refer to the tokens counted backwards from the last one found by the
triggered rule.

Examples:

        tokens:             alpha       beta        gamma       delta       epsilon
        positive refs:      1           2           3           4           5
        negative refs:      -5          -4          -3          -2          -1

        tokens:             alpha       (beta)      gamma       (delta)     epsilon
        positive refs:                  1                       2
        negative refs:      -5          -4          -3          -2          -1

        tokens:             alpha       (beta)      ?gamma¿     (delta)     epsilon
        positive refs:                  1                       2
        negative refs:      (-4/-5)     (-3/-4)     (-3/none)   -2          -1
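
A sketch showing a reference used by an action (shape only; `-1>>` targets the
first parenthesized token):

        __my_rule__
            these  (~\w+s$)  are
                <<- condition -1>> suggestion        # message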