Compiled on 2023-12-17 from rev dcda231188dc.
Warning: this build differs from the latest v0.21 release state by 2 changes.
We use a major.minor version schema. Until the dictionary reaches 5000 words the major version stays at 0; the minor version is updated from time to time.
Cloning repository:
$ hg clone http://hg.defun.work/gadict gadict
$ hg clone http://hg.code.sf.net/p/gadict/code gadict-hg
Pushing changes:
$ hg push ssh://$USER@hg.defun.work/gadict
$ hg push ssh://$USER@hg.code.sf.net/p/gadict/code
$ hg push https://$USER:$PASS@hg.code.sf.net/p/gadict/code
- http://hg.defun.work/gadict
- hgweb at the project home page.
- http://hg.code.sf.net/p/gadict/code
- hgweb at the old home page (still supported as a mirror).
- https://sourceforge.net/p/gadict/code/
- SourceForge Allura interface (not primary, a mirror).
The gadict project provides dictionaries encoded in a custom format. In order to process them you need GNU Make, Python 2.7, and possibly other tools.
To produce dictionaries in dictd format, install the dictd distribution with the dictfmt and dictzip utilities:
$ sudo apt install dictfmt dictzip
and run:
$ make dict
To make Anki decks, check out the Anki sources:
$ git clone https://github.com/dae/anki.git
$ cd anki
and update to a specific revision (the last one before the hard dependency on pyaudio, which is not available on Cygwin):
$ git co 1d75cff5e7458c6538a4e75728c16bef8b7adb3e^
$ git show 1d75cff5e7458c6538a4e75728c16bef8b7adb3e
commit 1d75cff5e7458c6538a4e75728c16bef8b7adb3e
Author: Damien Elmes <git@ichi2.net>
Date:   2016-06-23 12:04:48 +1000

    pyaudio is no longer optional
Previously the build used Python 2 and depended on an earlier source revision (before the port to Python 3):
$ git co 15b349e3^
$ git show 15b349e3
commit 15b349e3a8b34bf80c134b406c9b90f61250ee9e
Author: Damien Elmes <git@ichi2.net>
Date:   2016-05-12 14:45:35 +1000

    start port to python 3
and put the path to the Anki project source directory into Makefile.config:
ANKI_PY_DIR := $(HOME)/devel/anki
The build command to make Anki decks is:
$ make anki
The gadict project used the dictd C5 source file format in the past. The C5 format has several issues:
- C5 is not a structured format, so deriving other forms and converting to other formats is not possible.
- C5 has no markup for links, nor any other markup.
Before that, the project used the dictd TAB file format, which requires placing each article on a single long line. That format is not suited for human editing at all.
Other dictionary source file formats were considered as alternatives, like TEI, ISO, XDXF, and MDF. XML-like formats are also not suited for human editing. Moreover, XML lacks syntax locality: the whole file has to be scanned to validate a local change.
Note that the StarDict, ABBYY Lingvo, Babylon, and dictd formats were not considered because they are all about presentation, not structure. They are target formats for compilation.
A fancier-looking analog of MDF + C5 was developed instead.
The beginning of the file describes the dictionary information.
Each article is separated by \n__\n\n and consists of two parts (a splitting sketch follows the list below):
- word variations with pronunciation
- word translations, with supplementary information like part of speech, synonyms, antonyms, and examples of usage
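Since the separator is fixed, splitting a source file into articles is mechanical. A minimal Python sketch, assuming a hypothetical file name (the real build scripts may work differently):

# Split a gadict source file into header and articles.
# "gadict.txt" is a hypothetical name used for illustration.
with open("gadict.txt") as f:
    chunks = f.read().split("\n__\n\n")

header, articles = chunks[0], chunks[1:]
print("articles: %d" % len(articles))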
Word variations are:
Parts of speech (ordered by preference):
Note: I try to keep the word meanings in an article in the above POS order.
Each meaning may refer to topics, like:
Word relations (ordered by preference):
Translations are marked by a lowercase ISO 639-1 code followed by a : (colon) character, like:
Examples are marked by a lowercase ISO 639-1 code followed by a > (greater-than) character.
Explanations or glosses are marked by a lowercase ISO 639-1 code followed by an = (equals) character.
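Putting these markers together, a hypothetical article might look like this (the headword, pronunciation, the POS marker n, and all content lines are invented for illustration):

example [ɪɡˈzɑːmpl]

n
ru: пример
en= a thing that illustrates a general rule
en> This entry merely illustrates the markup.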
Pronunciation variants are marked by:
The rare attribute on the first headword is used as a marker that a word has low frequency. SRS file writers skip entries marked as rare. I found it convenient to check frequency with:
As the cut-off point I chose the word beseech; all less frequent words receive the rare marker.
The gaphrase and gadialog files keep data for generating one-sided Anki cards.
Both use the same numbering schema, which allows merging updated articles with the originals without losing learning progress:
gadialog additionally maintains a dialog; each part is marked by a line starting with - TEXT.
In the past the dictd C5 file format was used as the source file format. See:
$ man 1 dictfmt
In short (a sketch follows the list):
- Headwords are preceded by 5 or more underscore characters (_) and a blank line.
- An article may have several headwords; in that case they are placed on one line and separated by ;<SPACE>.
- All text until the next headword is considered the definition.
- Any leading @ characters are stripped out, but the file is otherwise unchanged.
- UTF-8 encoding is supported, at least by GoldenDict.
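A hypothetical C5 fragment following these rules (headwords and definition text are invented for illustration):

_____

headword; variant
The definition is all of the text placed here,
continuing until the next run of underscores.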
The gadict project used the C5 format in the past but switched to its own format.
Entries or parts of text that are not yet completed are marked by keywords:
- TODO
- incomplete
- XXX
- urgent incomplete
The todo Makefile rule finds these occurrences in the sources:
$ make todo
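The exact rule lives in the project's Makefile; an equivalent manual search might look like this (the search root is an assumption):

$ grep -rnE 'TODO|XXX|incomplete' .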
- http://en.wikipedia.org/wiki/Dictionary_writing_system
- Dictionary writing system
- http://www.sil.org/computing/shoebox/mdf.html
- Multi-Dictionary Formatter (MDF). It defines about 100 data field markers.
- http://fieldworks.sil.org/flex/
- FieldWorks Language Explorer (or FLEx, for short) is designed to help field linguists perform many common language documentation and analysis tasks.
- http://code.google.com/p/lift-standard/
- LIFT (Lexicon Interchange FormaT) is an XML format for storing lexical information, as used in the creation of dictionaries. It's not necessarily the format for your lexicon.
- http://www.lexiquepro.com/
- Lexique Pro is an interactive lexicon viewer and editor, with hyperlinks between entries, category views, dictionary reversal, search, and export tools. It's designed to display your data in a user-friendly format so you can distribute it to others.
- http://deb.fi.muni.cz/index.php
- DEBII — Dictionary Editor and Browser
National Corpus of the Russian Language. It contains parallel Russian-Ukrainian texts. Search by keywords, grammatical function, thesaurus properties, and other properties.
Corpus of the mova.info project. There are literal search and search by word family.
Frequency wordlists use several statistics:
- number of word occurrences in the corpus, usually marked by F
- adjusted number of occurrences per 1,000,000 words in the corpus, usually marked by U
- Standard Frequency Index (SFI), a logarithmic measure of frequency (see the formula note after this list):

SFI | Freq
---|---
90 | 1 per 10
80 | 1 per 100
70 | 1 per 1,000
60 | 1 per 10,000
50 | 1 per 100,000
40 | 1 per 1,000,000
30 | 1 per 10,000,000

- deviation of word frequency across documents in the corpus, usually marked by D
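The SFI table is consistent with a logarithmic definition: assuming U is the adjusted frequency per 1,000,000 words, SFI = 40 + 10 * log10(U), so U = 1 gives SFI 40 (1 per 1,000,000) and U = 100,000 gives SFI 90 (1 per 10). A quick check in Python:

import math

# SFI = 40 + 10*log10(U); assumes U is occurrences per million words.
for u in (1, 100000):
    print("U=%d SFI=%.0f" % (u, 40 + 10 * math.log10(u)))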
Sorting numerically in reverse order on the first column:
$ sort -k 1nr,2 <$IN >$OUT
The Open American National Corpus (OANC) is a roughly 15 million word subset of the ANC Second Release that is unrestricted in terms of usage and redistribution.
I got the OANC from: http://www.anc.org/OANC/OANC-1.0.1-UTF8.zip
After unpacking only the .txt files:
$ unzip OANC-1.0.1-UTF8.zip '*.txt'
$ cd OANC; find . -type f | xargs cat | wc
2090929 14586935 96737202
I built the frequency list with:
I manually removed single- and double-letter words, filtered out misspelled words with the en_US hunspell spell-checker, and merged word variations to their base forms using WordNet. See details in obsolete/oanc.py.
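For illustration, a simplified sketch of the counting stage (the real pipeline, including the hunspell filtering and WordNet merging, is in obsolete/oanc.py; the tokenizer and script name here are assumptions):

import collections, re, sys

# Count word occurrences in text read from stdin; crude tokenizer.
counts = collections.Counter()
for line in sys.stdin:
    for word in re.findall(r"[a-z']+", line.lower()):
        if len(word) > 2:  # drop single- and double-letter words
            counts[word] += 1

for word, n in counts.most_common():
    print("%d %s" % (n, word))

It could be run over the unpacked corpus with something like:

$ find OANC -type f | xargs cat | python wordfreq.py > freq.txt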
https://en.wikipedia.org/wiki/Word_lists_by_frequency
Useful word lists:
Obsolete or proprietary word lists:
The updated GSL (General Service List) was obtained from:
The first column represents the number of occurrences per 1,000,000 words of the Brown corpus, based on counting word families.
The NGSL (New General Service List) was obtained from:
The first column represents the adjusted frequency per 1,000,000 words, counting base word families.
The Academic Word List (AWL) was published in the Summer 2000 issue of the TESOL Quarterly (v. 34, no. 2). It was developed by Averil Coxhead, of Victoria University of Wellington, in New Zealand. The AWL is a replacement for the University Word List (published by Paul Nation in 1984).
The AWL was obtained from:
Its structure is a headword followed by a frequency level (from 1, most frequent, to 10, least frequent).
The frequency word list was obtained from:
The SFI and D columns were deleted, and the U and Word columns were swapped. The data was sorted by the U column (adjusted frequency per 1,000,000 words).
The NSWL headword list with word variations was obtained from:
It is encoded in Latin-1 and was recoded into UTF-8 (because of the É symbol).
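The recoding can be done with iconv (file names are hypothetical):

$ iconv -f latin1 -t utf-8 NSWL.txt > NSWL-utf8.txt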
See also:
The 1700 words of the BSL 1.01 version give up to 97% coverage of general business English materials when combined with the 2800 words of the NGSL.
The wordlist with variations was obtained from:
Based on a 1.5 million word corpus of various TOEIC preparation materials, the 1200 words of the TSL 1.1 version give up to 99% coverage of TOEIC materials and tests when combined with the 2800 words of the NGSL.
The wordlist with variations was obtained from:
The KET Vocabulary List gives teachers a guide to the vocabulary needed when preparing students for the KET and KET for Schools examinations.
The list covers vocabulary appropriate to the A2 level on the CEFR.
The Preliminary and Preliminary for Schools Vocabulary List gives teachers a guide to the vocabulary needed when preparing students for the Preliminary and Preliminary for Schools examinations.
The list covers vocabulary appropriate to the B1 level on the CEFR.
Paul Nation prepared a frequency wordlist from the combined BNC and COCA corpora:
It has 25,000 base words (each base word comes with variations) split into chunks of 1,000 words.
I got the list from:
The Dolch word list is a list of frequently used English words compiled by Edward William Dolch. The list was prepared in 1936 and was originally published in his book Problems in Reading in 1948. Dolch compiled the list based on children's books of his era. The list contains 220 "service words". The compilation excludes nouns, which comprise a separate 95-word list.
The Dolch wordlist is already covered by gadict.
The Leipzig-Jakarta list is a 100-word word list used by linguists to test the degree of chronological separation of languages by comparing words that are resistant to borrowing. The Leipzig-Jakarta list became available in 2009.
The Leipzig-Jakarta wordlist is already covered by gadict.
The words in the Swadesh lists were chosen for their universal, culturally independent availability in as many languages as possible. Swadesh's final list, published in 1971, contains 100 terms.
The Swadesh wordlist is already covered by gadict, except for some rare words.
For entering IPA characters use the IPA input method. To enable it, type:
C-u C-\ ipa <enter>
All characters from the alphabet are typed as usual. To type special IPA characters use the following key bindings (or read the help in Emacs via M-x describe-input-method or C-h I).
For vowels:
- æ: ae
- ɑ: o| or A
- ɒ: |o or /A
- ʊ: U
- ɛ: /3 or E
- ɔ: /c
- ə: /e
- ʌ: /v
- ɪ: I
For consonants:
- θ: th
- ð: dh
- ʃ: sh
- ʧ: tsh
- ʒ: zh or 3
- ŋ: ng
- ɡ: g
- ɹ: /r
Special characters:
- ː: : (colon)
- ˈ: ' (quote)
- ˌ: ` (back quote)
Alternatively, use the ipa-x-sampa or ipa-kirshenbaum input method (for help type C-h I ipa-x-sampa RET or C-h I ipa-kirshenbaum RET).