Grammalecte Diff

Differences From Artifact [ae208bbf9d]:

To Artifact [f728cc54d9]:


WRITING RULES FOR GRAMMALECTE

Note: This documentation is a draft. Information may be obsolete or incomplete.

# Principles #

Grammalecte is a two-pass grammar checker engine. On the first pass, the
engine checks the text paragraph by paragraph. On the second pass, the engine
checks the text sentence by sentence.

The command to switch to the second pass is `[++]`.

In each pass, you can write as many rules as you need.
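For instance, a rules file can be laid out as follows (a minimal sketch that
reuses two rules shown later in this document; assigning these particular
rules to these particular passes is purely illustrative):

        __<s>__  "  +" <<- ->> " "      # Extra space(s).

        [++]

        Mister <<- ->> Mr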

There are two kinds of rules:

* regex rules (triggered by a regular expression)
* token rules (triggered by a succession of tokens)

A regex rule is defined by:

* [optional] flags “LCR” for the regex word boundaries and case sensitivity

Token rules must be defined within a graph.

Each graph is defined within the second pass with the command:

        @@@@GRAPH: graph_name

A graph ends when another graph is defined or when the following command is found:

        @@@@END_GRAPH

There is no limit to the number or type of actions a rule can launch. Each
action has its own condition that triggers it.

There are several kinds of actions (two of them are illustrated just after
this list):

* Error warning, with a message, optional suggestions, and optionally a URL
* Text transformation, modifying the checked text internally
* Disambiguation action, setting tags at a position
* Tagging
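
For instance, the two rules below are taken verbatim from later sections of
this document: the first one raises an error warning (a pattern, a `<<-`
condition, an action targeting group 2, and a message after `#`), the second
one is a text transformation used by the text preprocessor:

        these +(\w+) +are +(\w+s)
            <<- morph(\1, "noun") and morph(\2, "plural")
            -2>> _              # Adjectives are invariable.

        [“”] ->> *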


The rules file for your language must be named `rules.grx`.
The settings file must be named `config.ini`.

All these files are simple UTF-8 text files.
UTF-8 is mandatory.


# Comments #


## Whitespaces at the border of patterns or suggestions ##

Example: recognize two or more consecutive spaces and suggest a single space:

        __<s>__  "  +" <<- ->> " "      # Extra space(s).

The `"` characters protect the spaces in the pattern and in the replacement
text.
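As another sketch following the same convention (the pattern, the message and
the reuse of the `__<s>__` flags are only illustrative), the quotes protect
the leading space of a pattern that targets a space placed before a comma:

        __<s>__  " +," <<- ->> ","      # No space before a comma.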


## Pattern groups and back references ##

It is often useful to retrieve parts of the matched pattern. Simply use
parentheses in the pattern to create groups that can be recalled with back
references.
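For instance, both rules below appear later in this document: the first one
captures a group and reuses it in the replacement with `\1`; the second one
references its two groups inside the condition and then acts only on the
first group with `-1>>`:

        (Mrs?)[.] <<- ->> \1

        these (\w+) (\w+) <<- morph(\1, "adjective") and morph(\2, "noun") -1>> *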


## Pattern matching ##

Repeated pattern matching of a single rule continues after the previous
match, so instead of general multiword patterns like

        (\w+) (\w+) <<- some_check(\1, \2) ->> \1, \2 # foo

use

        (\w+) <<- some_check(\1, word(1)) ->> \1, # foo


## Name definitions ##

Grammalecte supports name definitions to simplify the description of complex
rules.

internally the text before checking the text.

The text preprocessor is useful for simplifying the text and writing simpler
checking rules.

For example, sentences with the same grammar mistake:

        These “cats” are blacks.
        These cats are “blacks”.
        These cats are absolutely blacks.
        These stupid “cats” are all blacks.
        These unknown cats are as per usual blacks.

Instead of writing complex rules, or several rules to catch all possible
cases, you can use the text preprocessor to simplify the text.

To remove the characters “ and ”, write:

        [“”] ->> *

You can also remove just the text matched by a referenced group:

        these (\w+) (\w+) <<- morph(\1, "adjective") and morph(\2, "noun") -1>> *
        (am|are|is|were|was) (all) <<- -2>> *

With these rules, you get the following sentences:

        These  cats  are blacks.
        These cats are  blacks .
        These cats are            blacks.
        These         cats  are     blacks.
        These         cats are              blacks.

These grammar mistakes can be detected with one simple rule:

        these +(\w+) +are +(\w+s)
            <<- morph(\1, "noun") and morph(\2, "plural")
            -2>> _              # Adjectives are invariable.


        Mister <<- ->> Mr
        (Mrs?)[.] <<- ->> \1


# Disambiguation #

When Grammalecte analyses a word with morph, before requesting the POS tags
from the dictionary, it checks whether there is a stored marker for the
position of the word. If there is a marker, Grammalecte uses the stored data
and doesn’t query the dictionary.

The disambiguation commands store POS tags at the position of a word.

There are 3 commands for disambiguation.

`select(n, pattern)`

>   stores at position n only the POS tags of the word matching the pattern.

`exclude(n, pattern)`

>   stores at position n the POS tags of the word, except those matching the
    pattern.

`define(n, [definitions])`

>   stores at position n the POS tags in definitions (a list of strings).

Examples:

        =>> select(\1, "po:noun is:pl")
        =>> exclude(\1, "po:verb")
        =>> define(\1, ["po:adv"])
        =>> exclude(\1, "po:verb") and define(\2, ["po:adv"]) and select(\3, "po:adv")

Note: select, exclude and define ALWAYS return True.

If select and exclude generate an empty list, no marker is set.

With define, you must set a list of POS tags. Example:

        define(\1, ["po:nom is:plur", "po:adj is:sing", "po:adv"])
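
As a sketch of how a disambiguation action can be attached to a rule
(assuming it follows a `<<-` condition after the pattern, like the other
actions shown in this document; the pattern and the tags are invented for
illustration):

        (am|are|is|were|was) +(\w+)
            <<- morph(\2, "po:adj") =>> exclude(\2, "po:verb")

After such an action runs, any later morph test on the word at that position
uses the stored tags instead of querying the dictionary, as described at the
top of this section.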



# Conditions #

Conditions are Python expressions; they must return a value, which will be
evaluated as a boolean. You can use the usual Python syntax and libraries.
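For example, the checking rule used earlier in this document can be refined
with ordinary Python boolean operators; here an extra `not morph(...)` clause
keeps the rule from firing when the second word is a genuine plural noun (the
refinement itself is only an illustration):

        these +(\w+) +are +(\w+s)
            <<- morph(\1, "noun") and morph(\2, "plural") and not morph(\2, "noun")
            -2>> _              # Adjectives are invariable.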