Project details
Description
This project focuses on one of the basic assumptions of linguistics: the linguistic equi-complexity dogma.
When asked whether all languages are equally complex, most twentieth-century theoretical linguists have answered with the principle of invariance in the level of complexity, defending the equilibrium hypothesis, which states that the total complexity of a language is fixed because sub-complexities in linguistic sub-systems trade off. This idea of equi-complexity, seen for decades as an indisputable axiom of linguistics, has begun to be explicitly questioned in recent years.
Many models have been proposed to confirm or refute the hypothesis of linguistic equi-complexity. The tools, criteria and measures used to quantify the level of complexity of languages vary and depend both on the specific research interests and on the definition of complexity adopted. Currently, there is no established method for quantifying the complexity of languages, and each of the proposed models has advantages and disadvantages.
The main objective of our project is to show the differences in the levels of complexity of natural languages by providing an objective and meaningful method to calculate linguistic complexity. To achieve this goal, we propose an interdisciplinary solution that uses a solidly defined computational model to quantify the cost/difficulty of acquiring different languages, showing that it is not identical in all cases. The computational model we propose is inspired by the process of language acquisition and belongs to the field of grammatical inference, a subdiscipline of machine learning.
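To make the idea concrete, the following is a minimal, hypothetical sketch of how a grammatical-inference-based complexity measure could look: a simple grammar (here, a prefix-tree acceptor over word forms) is inferred from a sample of each language, and the size of the inferred model is used as a rough proxy for the cost of acquisition. The toy corpora, the function names, and the "complexity = number of states and transitions" measure are illustrative assumptions, not the project's actual model.

```python
def build_prefix_tree_acceptor(samples):
    """Build a prefix-tree acceptor (a trie-shaped DFA) from positive samples."""
    transitions = {}          # (state, symbol) -> state
    accepting = set()
    next_state = 1            # state 0 is the initial state
    for word in samples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting, next_state  # next_state equals the number of states

def acquisition_cost(samples):
    """Illustrative proxy for learning cost: size of the inferred acceptor."""
    transitions, accepting, n_states = build_prefix_tree_acceptor(samples)
    return {"states": n_states, "transitions": len(transitions)}

# Toy corpora of inflected word forms (purely illustrative, not project data).
corpus_a = ["walk", "walks", "walked", "walking"]
corpus_b = ["ir", "voy", "vas", "va", "vamos", "van", "fui", "fuiste"]

print("Language A:", acquisition_cost(corpus_a))
print("Language B:", acquisition_cost(corpus_b))
```

A fuller model along these lines would replace the raw prefix tree with a state-merging learner and richer corpora, so that the number of generalisation steps, rather than sheer trie size, reflects the difficulty of acquisition.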
The model we propose can be seen as an alternative to the methods that have been used so far to calculate linguistic complexity, and presents the following advantages over them: its interdisciplinarity (the model combines ideas from linguistics with computational models); its motivation (the tool we propose is a computational model based on how humans acquire language); its results (it provides quantifiable experimental results); and its ability to support cross-linguistic analysis.
Given these characteristics of the proposed model, we expect its implementation to show that languages vary in their level of complexity; that it is possible to design tools to quantify linguistic complexity; and that the difference in the level of complexity of languages, and its measurement, is relevant to the understanding of natural languages.
Acronym | INGRACOMLEN
---|---
Status | Finished
Effective start/end date | 01/01/2016 → 31/12/2018
Collaborative partners
- Roskilde Universitet
- Jean Monnet University (Project partner)
- Universitat Rovira i Virgili (Project partner) (lead)
- Stockholm University (Project partner)