Abstract: We present a new corpus for the task of grammatical error correction in Russian. In contrast to previous work, our data consists of middle-school essays written by native speakers. In total, the training data includes more than 4,500 sentences and the test partition more than 1,000 sentences. The corpus provides a detailed annotation of grammatical errors in the .M2 format; fine-grained error types are also available. The distribution of errors in our data differs from that of other corpora, with more punctuation errors and fewer word-form errors. We evaluate several models on our data and find that a fine-tuned YandexGPT model performs best, achieving an F0.5 score of about 73%. Among smaller models, the highest score, 71%, is reached by the ranker-generator pipeline.