Grammalecte: Check-in [6fb05b3dd9]

Overview
Comment: [doc] small documentation update
SHA3-256: 6fb05b3dd91f3c5ec398103597f5350bda80b530e89f697365366a30b6eacfb4
User & Date: olr on 2019-02-27 13:59:42
Context
2019-02-27
14:48 [doc] small documentation update (check-in: ea6194b8b7, user: olr, tags: trunk, doc)
13:59 [doc] small documentation update (check-in: 6fb05b3dd9, user: olr, tags: trunk, doc)
10:23 [fr] false positive (check-in: 8ef55c4469, user: olr, tags: trunk, fr)
Changes

Modified doc/build.md from [a9bb1bbf94] to [783266048e].

# How to build Grammalecte

## Required ##

* Python 3.6
* Firefox Developer
* Firefox Nightly
* NodeJS
  * npm
  * web-ext : `https://developer.mozilla.org/fr/Add-ons/WebExtensions/Getting_started_with_web-ext`
* Thunderbird


## Commands ##

**Build a language**

`make.py LANG`

> Generates the LibreOffice extension and the package folder.
> LANG is the language code (ISO 639).

> This script uses the file `config.ini` in the folder `gc_lang/LANG`.

**First build**

`make.py LANG -js`

> This command is required to generate all necessary files.
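
For example, a first build of the French grammar checker (a hypothetical invocation, assuming Python 3 is on the PATH and the command is run from the repository root) could look like:

`python3 make.py fr -js`
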
`-i --install`

> Install the LibreOffice extension.
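
For instance, to build the French extension and install it into LibreOffice in one step (again a hypothetical invocation, assuming LibreOffice is installed locally):

`python3 make.py fr -i`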

`-fx --firefox`

> Launch Firefox Developer.
> Unit tests can be launched from the menu (Tests section).

`-we --webext`

> Launch Firefox Nightly.
> Unit tests can be launched from the menu (Tests section).

`-tb --thunderbird`

> Launch Thunderbird.


## Examples ##

Modified doc/syntax.txt from [223b361257] to [fe117296de].

WRITING RULES FOR GRAMMALECTE

Note: This documentation is a draft. Information may be obsolete.

# Principles #

Grammalecte is a two-pass grammar checker engine. On the first pass, the
engine checks the text paragraph by paragraph. On the second pass, it
checks the text sentence by sentence.

The command to switch to the second pass is `[++]`.

In each pass, you can write as many rules as you need.
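
For example, a minimal sketch of a rules file layout (the comments are placeholders for real rules):

        # rules applied paragraph by paragraph (first pass)

        [++]

        # rules applied sentence by sentence (second pass)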

There are two kinds of rules:

* regex rules (triggered by a regular expression)
* token rules (triggered by a succession of tokens)

A regex rule is defined by:

* [optional] “LCR” flags for the regex word boundaries and case sensitivity
* a regex pattern trigger
* a list of actions
* [optional] option name (the rule is active only if the option defined by the user or the config is active)
* [optional] rule name (named rules can be disabled by the user or the config)

A token rule is defined by:

* rule name
* one or several lists of tokens (triggers)
* a list of actions (an action is active only if the option defined by the user or the config is active)

Token rules must be defined within a graph.

Each graph is defined within the second pass with the command:

        @@@@GRAPH: graph_name

A graph ends when another graph is defined or when the following command occurs:

        @@@@END_GRAPH
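
For example, a skeleton (the graph name is illustrative; the token rules are placeholders):

        @@@@GRAPH: my_graph

        # ... token rules ...

        @@@@END_GRAPH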

There is no limit to the number and types of actions a rule can
launch. Each action has its own condition to be triggered.

There are three kinds of actions:

* Error warning, with a message, optional suggestions, and an optional URL
* Text transformation, which internally modifies the checked text
* Disambiguation action, which sets tags at a given position


The rules file for your language must be named “rules.grx”.
The settings file must be named “config.ini”.

All these files are plain UTF-8 text files.
UTF-8 encoding is mandatory.


# Comments #

Lines beginning with `#` are comments.


# End of file #

With the command:

        #END

at the beginning of a line, the parser won’t go further.
Everything written afterwards is treated as comments.


# Regex rule syntax #

        __LCR/option(rulename)__
            pattern
            <<- condition ->> error_suggestions  # message_error|http://awebsite.net...
            <<- condition ~>> text_rewriting
            <<- condition =>> commands_for_disambiguation
            ...

Patterns are written with the Python syntax for regular expressions:
http://docs.python.org/library/re.html
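
For example, a hypothetical rule that flags a doubled word (the option and rule names are invented for illustration):

        __[i]/typo(doubled_word)__
            \b(\w+) +\1\b
            <<- ->> \1                  # Doubled word.
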
Examples:

        __<s>__ pattern
            <<- condition ->> replacement
            # message
            <<- condition ->> suggestion # message
            <<- condition ~>> text_rewriting
            <<- =>> disambiguation

        __<s>__ pattern <<- condition ->> replacement # message

















## Whitespaces at the border of patterns or suggestions ##

Example: recognize two or more spaces and suggest a single space:

        __<s>__  "  +" <<- ->> " "      # Extra space(s).

        __<i]__ \b([?!.])([A-Z]+) <<- ->> \1 \2     # Missing space?

Example: back references in messages.

        (fooo) bar <<- ->> foo      # “\1” should be:


## Pattern matching ##

Repeated pattern matching of a single rule continues after the previous match, so
instead of general multiword patterns like

        (\w+) (\w+) <<- some_check(\1, \2) ->> \1, \2 # foo

use

        (\w+) <<- some_check(\1, word(1)) ->> \1, # foo


## Name definitions ##

Grammalecte supports name definitions to simplify the description of
complex rules.

Example:

        Mr(. [A-Z]\w+) <<- ~1>> *

You can also call Python expressions.

        __[s]__ Mr. ([a-z]\w+) <<- ~1>> =\1.upper()


# Text preprocessing and multi-passes checking #

On each pass, Lightproof uses rules written in the text preprocessor to
internally modify the text before checking it.

The text preprocessor is useful to simplify texts and to write simpler checking
rules.

For example, sentences with the same grammar mistake:

        These “cats” are blacks.
        These cats are “blacks”.
        These cats are absolutely blacks.
        These stupid “cats” are all blacks.
        These unknown cats are as per usual blacks.

Instead of writing complex rules or several rules to find mistakes for all possible
cases, you can use the text preprocessor to simplify the text.

To remove the characters “ and ”, write:

        [“”] ->> *

The * means: replace the text with whitespace.

Similarly to grammar rules, you can add conditions:

        \w+ly <<- morph(\0, "adverb") ->> *

You can also remove a group reference:

        these (\w+) (\w+) <<- morph(\1, "adjective") and morph(\2, "noun") -1>> *
        (am|are|is|were|was) (all) <<- -2>> *

With these rules, you get the following sentences:

        These  cats  are blacks.
        These cats are  blacks .
        These cats are            blacks.
        These         cats  are     blacks.
        These         cats are              blacks.

These grammar mistakes can be detected with one simple rule:

        these +(\w+) +are +(\w+s)
            <<- morph(\1, "noun") and morph(\2, "plural")
            -2>> _              # Adjectives are invariable.

Instead of replacing text with whitespaces, you can replace text with @.

        https?://\S+ ->> @

This is useful if, on the first pass, you write rules that check successive whitespace.
The @ characters are automatically removed on the second pass.

You can also replace any text as you wish.

        Mister <<- ->> Mr
        (Mrs?)[.] <<- ->> \1


# Disambiguation #

When Grammalecte analyses a word with morph or morphex, before requesting its
POS tags from the dictionary, it checks whether there is a stored marker for the
position of the word. If there is a marker, Grammalecte uses the stored
data and doesn’t query the dictionary.
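
A minimal sketch of a disambiguation action (assuming a select() command that keeps only the morphologies matching the given regex; the command name and the analysis are illustrative, so check the engine’s actual disambiguation commands):

        the (\w+) <<- =>> select(\1, "noun")
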
>   checks if the text before the pattern matches the regex.

`textarea(regex[, neg_regex])`

>    checks if the full text of the checked area (paragraph or sentence) matches the regex.

`morph(n, regex[, neg_regex][, no_word=False])`

>   checks if all tags of the word in group n match the regex.
>   if neg_regex = "*", returns True only if all morphologies match the regex.
>   if there is no word at position n, returns the value of no_word.

`analyse(n, regex[, neg_regex][, no_word=False])`

>   checks if all tags of the word in group n match the regex.
>   if neg_regex = "*", returns True only if all morphologies match the regex.
>   if there is no word at position n, returns the value of no_word.


`option(option_name)`

>   returns True if option_name is activated, False otherwise

Note: the analysis is done on the preprocessed text.
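
For example, a rule that fires only when a hypothetical user option named "style" is active:

        very very <<- option("style") ->> very      # Avoid repeating “very”.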


# Default variables #

`sCountry`

>   It contains the current country locale of the checked paragraph.

        colour <<- sCountry == "US" ->> color       # Use American English spelling.



# Expressions in the suggestions #

Suggestions starting with an equal sign are Python string expressions,
extended with possible back references and named definitions:

Example:

        <<- ->> = '"' + \1.upper() + '"'     # With uppercase letters and quotation marks


# Token rules #

Token rules must be defined within a graph.

## Tokens ##

Tokens can be defined in several ways:

* Value (meaning the text of the token). Examples: `word`, `<start>`, `<end>`, `,`.
* Lemma: `>lemma`
* Regex: `~pattern`
* Regex on morphologies: `@pattern`, `@pattern¬antipattern`.
* Metatags: *NAME. Examples: `*WORD`, `*SIGN`, etc.
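
A hypothetical token rule combining these notations inside a graph (the exact in-graph rule syntax is not described above, so treat this purely as an illustration; all names are invented):

        @@@@GRAPH: illustrative_graph

        __illustrative_rule__
            >be ~\w+ly
                <<- condition ->> suggestion        # message

        @@@@END_GRAPH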