
Corpus Linguistics

[Please note: these pages are no longer maintained and may be out of date.]





These pages were created as part of the W3-Corpora Project at the University of Essex.


Corpus-Related Research
This is a short introduction to some of the research areas where corpora can be and have been used.

Computational Linguistics
Cultural Studies
Discourse Analysis and Pragmatics
Grammar/Syntax
Historical Linguistics
Language Acquisition
Language Teaching
Language Variation
Lexicography
Linguistics
Machine Translation
Natural Language Processing (NLP)
Psycholinguistics
Semantics
Social Psychology
Sociolinguistics
Speech
Stylistics
* Links to web pages made to supplement the book "Corpus Linguistics" by Tony McEnery and Andrew Wilson.

Computational Linguistics
"Computational Linguistics is an interdisciplinary field which
centers around the use of computers to process or produce human
language"C. Ball In some ways, computational linguistics and
corpus linguistics can be seen as overlapping disciplines.
Computational linguists are dependent on computer-readable
linguistic data to use in their research, while corpus linguists often
use computational methods when analysing their data. One main
difference can be said to be that in corpus linguistics it is the data in
the corpus that is the main object of study. In computational
linguistics, corpora are not studied as such but used as a resource to
solve various problems.
Computational Linguistics is a broad term. You can read more about
it in Catherine Ball's lecture About Computational Linguistics or
at this page (University of Saarland).
Cultural Studies (*)
The existence of comparable corpora makes it possible to compare
the language use in, for example, different countries. The result of
such comparisons can point to differences in culture. It has been
suggested, for example, that the lower proportion of expressions of
future in the Kolhapur Corpus of Indian English, as compared to
LOB and Brown Corpora, can be explained with cultural
differences. "Maybe the Indian mind is not given to thinking much in
terms of the future..." (Shastri 1988:18 in ICAME Journal 12:15-26).
So far, the use of corpora in cultural studies is not a particularly well-developed field. Perhaps the ongoing work of compiling twenty corpora of different varieties of English within the ICE project (International Corpus of English) will help make this a more fruitful research area
in the future.
Discourse Analysis and Pragmatics (*)
"Pragmatics is the study of the way language is used in particular
situations, and is therefore concerned with the functions of words as
opposed to their forms. It deals with the intentions of the speaker,
and the way in which the hearer interprets what is said"
(from The Collins Cobuild English Language Dictionary (1987).
Corpora have not been used much in discourse analysis or pragmatic studies. One explanation is that it has been difficult to find material suitable for this kind of research. As more corpora are compiled and annotated with the relevant information, more corpus-based research is being performed in this area. Examples of such studies can be found among the work done by scholars in Bergen (Norway) on their corpus of London teenage language, COLT. See, for example, "They like wanna see how we talk and all that. The use of like as a discourse marker in London teenage speech" and "More trends in teenage talk. A corpus-based investigation of the discourse items cos and innit".
Read more about using corpora in discourse analysis and pragmatics
in, for example,
o An Introduction to Spoken Interaction by Anna-Brita
Stenstrom.
o Questions and Responses in English Conversation by Anna-Brita Stenstrom.
o The Discourse Resource Initiative project
o COLT-based research with abstracts of papers/articles based
on COLT material (The Bergen Corpus of London Teenage
Language).
Grammar/Syntax (*)
Much research on grammar and syntax has been based on the
researcher's intuition about the language, on his/her 'competence'.
The existence of large corpora has made it easier to study the
language as it is produced, to study the 'performance' of many
people.
"Every (formal) grammar is initially written on the basis of intuitive
data; by confronting the grammar with unrestricted corpus data it
can be tested on its correctness and its completeness" (Jan Aarts 1991).
Corpus data are being used to varying extents in the production of grammar books. One example of a book based completely on corpus evidence is An Empirical Grammar of the English Verb: Modal Verbs by Dieter Mindt (1995) (external link).
Another example of corpus-based research on grammar and syntax is Clause patterns in Modern British English: A corpus-based (quantitative) study by N. Oostdijk and P. de Haan (1994), in ICAME Journal 18 (abstract).
Historical Linguistics (*)
The possibility of having representative samples of the language at
different points in history in machine-readable form allows
historical linguists to conduct their research faster and more
efficiently. The Helsinki Corpus is a well-known and much used
corpus of texts from different periods.
The Lampeter Corpus of Early Modern English Tracts contains a
collection of pamphlets published between 1640 and 1740. For
an example of research on the Lampeter Corpus, click here.
Language Acquisition
The ICLE corpus (International Corpus of Learner English) contains
data produced by learners of English as a foreign language from
different countries. It is being used for a variety of research purposes,
some of which were presented at the AILA96 conference (abstracts).
Learn more about this research in the book Learner English on
Computer by Sylviane Granger.
The CHILDES database contains transcripts of language spoken by
children. This material can be used for research in a number of
fields, language acquisition being one. An annotated bibliography of
research in child language and language disorders can be found by
using this link.
Language Teaching (*)
There are several examples of how corpora can be, and have been, used in language teaching. See, for example, the Classroom Concordancing / Data-driven Learning Bibliography, "references to the direct use of data from linguistic corpora for language teaching and language learning".
Language Variation (*)
Much work with corpora concerns language variation. Corpora are
used to study how language varies between different text types,
domains, times, regions, speakers, writers, etc. In these kinds of
studies, one variant of the language is compared to another. These
'variants' can be different parts of one and the same corpus or similar
parts of different corpora. An example of the former would be the Science Fiction texts in the LOB corpus compared to the Romantic Fiction texts in the same corpus. An example of a study of variation between two corpora would be an examination of the Science Fiction texts in the LOB corpus as compared to the Science Fiction texts in the Brown corpus.
Language variation can also concern how speakers vary their
production depending on the situation, how the language has
changed over time, or how the language varies within an area
(dialect).
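As a toy illustration of comparing two such 'variants', the following Python sketch contrasts the relative frequency of one word across two samples; the file names are hypothetical placeholders for, say, the two Science Fiction subcorpora mentioned above.

from collections import Counter

def rel_freq(path, word):
    # Relative frequency of `word`, normalised to occurrences per million tokens.
    tokens = open(path, encoding='utf-8').read().lower().split()
    return Counter(tokens)[word] / len(tokens) * 1_000_000

for sample in ('lob_science_fiction.txt', 'brown_science_fiction.txt'):
    print(sample, round(rel_freq(sample, 'shall'), 1), 'per million words')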
Lexicography (*)
Corpora are increasingly used in lexicography today. The first to start making extensive use of large corpora in dictionary and grammar book production was the Collins Cobuild team. Longman has consulted the British National Corpus (BNC) and the Longman Corpus Network for the latest edition of the Longman Dictionary of Contemporary English.
You can read more about the use of corpora in dictionary making
in Computer Corpus Lexicography by Vincent B. Y. Ooi. For
examples of corpus based studies in the field of lexicography, see,
for example, the Bibliography of papers by Cobuild staff members.
Linguistics
Corpora are important sources of data for a number of areas within
the wide scope of Linguistics. Some of these are dealt with in more
detail elsewhere on this page.
"[A]nalyses of language use provide an important complementary
perspective to traditional linguistic descriptions". Douglas Biber,
in IJCL 1:2
Psycholinguistics (*)
Observing the language found in a corpus can contribute to the formation of hypotheses about the way language is processed by the mind.
The use of corpora can also contribute to research in language
pathologies. In order to analyse a particular language impairment it
is important to have a very clear picture of the structural and formal
differences between the impaired form and its unimpaired counterpart.
Semantics (*)
There are various ways in which you can study the meaning of
words/utterances. One way is to look at the context in which the
word/phrase occurs. Concordances and collocations (*) are often
used for this.
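A concordance, for instance, simply lists every occurrence of a word together with its immediate context. A minimal KWIC (key word in context) sketch in Python, assuming a hypothetical plain-text file corpus.txt:

import re

def concordance(text, keyword, width=40):
    # Show each hit of `keyword` with `width` characters of context either side.
    text = ' '.join(text.split())  # normalise whitespace for tidy display
    for m in re.finditer(r'\b%s\b' % re.escape(keyword), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()].rjust(width)
        right = text[m.end():m.end() + width]
        print('%s [%s] %s' % (left, m.group(), right))

concordance(open('corpus.txt', encoding='utf-8').read(), 'bank')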
Attempts have been made at annotating corpora with semantic information, but the availability of such material is limited. An example of how such information can be given is WordNet, a semantically annotated lexical database for English.
Social Psychology (*)
Sociolinguistics (*)
Corpora provided with sociolinguistic information about the speakers and/or authors of their texts have opened up the possibility of using corpora in sociolinguistic research.
The British National Corpus (BNC) has been extensively annotated for various sociolinguistic parameters, such as speakers' age, sex, and social class, and writers' age, sex, domicile, etc. This information is used in a number of studies, for example by Paul Rayson et al.
in Social Differentiation in the Use of English Vocabulary: Some
Analyses of the Conversational Component of the British National
Corpus (IJCL 1997:2:1). Another corpus with sociolinguistic
annotation is the COLT Corpus of London teenager language. See,
for example, Girls' conflict talk: a sociolinguistic investigation of
variation in the verbal disputes of adolescent females by A-B
Stenstrom and I.K. Hasund.
Historical corpora are also being used for sociolinguistic research.
See, for example, the Sociolinguistics and Language
History Project.
Speech (*)
The first computer-readable corpus of spoken discourse was
the London-Lund Corpus (LLC). It contains about 500,000 words of
spoken British English, transcribed and provided with prosodic
annotation. The LLC was compiled primarily for linguistic research. Since then, many corpora of spoken text have been
compiled for various other research tasks, especially for use within
the fields of speech science and speech technology.
There are several areas where spoken corpora are used. One is
looking at spoken language as a variety of natural language,
sometimes in comparison to written language. In such studies, a
corpus of orthographically transcribed language, such as the BNC,
can be (and has been) used.
Spoken corpora are also used within the fields of speech technology
and speech science. Examples of such research tasks are teaching
computers to produce and understand speech. The study of acoustic
and phonetic phenomena of speech is important in, for example, the
expanding commercial sphere of telecommunication.
Producing transcribed spoken corpora with detailed annotation is a
time-consuming and, therefore, costly procedure, which is why few
such corpora are freely available.
Read more about spoken language corpora activities here (external link).
Stylistics (*)
The availability of corpora with large collections of texts from
different genres, authors, media, etc. opens up new possibilities in
the research area of Stylistics. Texts of different kinds can be
compared to each other to find text type specific features. General
corpora can serve as a frame of reference, something to compare
other texts with.
One area of Stylistics where corpora have been used is in authorship
attribution (see, for example, P. de Haan, 1997 ).
Another example of how corpora can be used in Stylistics is given in
T. Tabata's essay on the use of statistical methods to investigate changes of style in a corpus of Dickens's writings.

W3-Corpora project 1996-98: This page is no longer maintained.


Using the Google Ngram Corpus to Measure
Cultural Complexity

Juola, Patrick, Duquesne University, USA, juola@mathcs.duq.edu
It is common to hear assertions that culture is complex, that language is complex, and that while the
complexity of language is a universal constant, the complexity of culture is increasing as technology and the
increased pace of modern life create new complexities. These statements are usually based on subjective
assessments, often tinged by nostalgia for the good old days. Can questions of cultural complexity be
addressed quantitatively?
Previous work (e.g. Juola 1998, 2008) has been able to use information theory to address this question. The
basic idea is that a sample of language is complex if it contains a lot of information, defined formally as the
size of the computer program that would be necessary to (re)create the sample from scratch, a measure
more formally known as Kolmogorov complexity. This can be approximated by compressing the text sample and looking at the size of the resulting file: the larger the resulting file, the more complex the original. Alternatively, one can compute complexity directly using Shannon's (1948) famous formula for information entropy, based on a model of the underlying linguistic events. In any case, linguistic complexity can be measured by observing discourse-controlled samples of language, essentially by comparing several (linguistic) versions of the same text, such as translations of the Bible or of a specific novel, and observing whether one language yields systematically larger measurements than another. Previous work suggests that no such systematic pattern exists, and that all languages are indeed roughly equal in complexity.
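The compression approximation is easy to sketch; the following Python fragment compares compression ratios for two discourse-controlled samples (the file names are hypothetical placeholders, e.g. the same Bible passage in two languages):

import zlib

def compression_ratio(path):
    # Compressed size over original size: a crude stand-in for Kolmogorov
    # complexity. The harder a sample is to compress, the more information
    # it contains.
    data = open(path, 'rb').read()
    return len(zlib.compress(data, 9)) / len(data)

for sample in ('genesis_english.txt', 'genesis_french.txt'):
    print(sample, round(compression_ratio(sample), 3))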
Key to this approach is the idea of discourse control; we are measuring how difficult it is to express a specific
fixed concept in a given language and comparing it to the same concept expressed in another language.
Culture, however, can be treated as the set of concepts that people choose to express. By eliminating the
restriction of discourse control and instead investigating language chosen freely by the cultural participants,
we may be able to tease apart the interaction between cultural and linguistic complexity. In particular, we
can distinguish between linguistic and cultural complexity as follows: a language is complex if there is a lot of
information contained in a topic-controlled discourse. A culture is complex if there is a large range of topics
for discourse, or alternatively a lot of information contained in topical choice. Therefore, if we compare the
complexity (however measured) of two language samples that are not topic-controlled, but instead are in
some sense representative of the breadth of discourse present in a culture, we can calculate the differences
attributable to discourse variation, and hence to cultural complexity.
As an illustrative example, we follow the approach of Spencer (1900; cited by Denton 2004), in that 'complex' means containing many different interdependent parts. A complex political system has many parties and
power groups, many different roles and offices, and many relationships among them. In a political discourse,
many if not most of these parties and power groups would need to be explicitly named and distinguished
from each other. By contrast, an autocratic monarchy is relatively simple: there is the monarch and then
everyone else. A game is complex if it has many rules and strategies. A culture is complex if it contains many
heterogeneous aspects such as technological specifications, social stratification, multilevel administrative
hierarchies, or a large number of objects or object-types. Continuing this line of reasoning, a complex culture is one with lots of 'stuff', where people do lots of things to or with 'stuff'; 'stuff' here refers not only to physical objects but also to people, groups, activities, abstractions, and so forth: anything that can be discussed among the group.
We therefore apply the previous methodology to a different sort of corpus; an uncontrolled corpus that
represents the breadth of cultural experience. If the information contained in such a corpus is high, then we
can say the culture is complex. Several corpora may be suitable for this purpose; we have chosen to study the
Google Books Ngram Corpus (Michel et al. 2010). This contains all of the n-grams from the millions of books
in the Google Books database, something like 20 million books, or approximately 4% of all books ever
printed. While not strictly speaking representative (for example, publishing was a relatively rare event in the
16th and 17th centuries, and many more books are published in modern years), and of course typically only
the literate can write or publish books, this nevertheless gives us a time-stamped window into the scope of
culture. Furthermore, by focusing on n-grams (and specifically on 2-grams, word pairs), we can observe not
only the distribution of 'stuff', but also some of the relationships between 'stuff': for example, the number and range of word pairs beginning with 'expensive' will inform us about changing opinions regarding money and the types of goods considered luxurious and pricey.
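Such a query amounts to a filter over the raw 2-gram frequency files. A sketch of that filter in Python; the tab-separated column layout (ngram, year, match count, volume count) follows the published description of the version 2 files, and the file name is a hypothetical placeholder:

def pairs_beginning_with(first_word, year, path):
    # Collect the match counts of every 2-gram whose first word is
    # `first_word` in the given year.
    hits = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            ngram, y, match_count, _volumes = line.rstrip('\n').split('\t')
            if int(y) == year and ngram.split(' ')[0] == first_word:
                hits[ngram] = int(match_count)
    return hits

expensive = pairs_beginning_with('expensive', 2000, 'eng-us-all-2gram.txt')
print(sorted(expensive.items(), key=lambda kv: kv[1], reverse=True)[:20])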
We therefore used the Google Books American 2-Gram Corpus to measure changes in the complexity of
American culture at ten-year intervals between 1900 and 2000. This corpus simply contains a frequency list
of all two-word phrases used in American-published books in any given year. For example, the phrase 'hamburgers with' appeared only 8 times in print in 1940, compared to 45 in the year 2000. Focusing strictly
on the US during the 20th century avoids many of the problems with mass culture, as publishing was a well-
established industry and literacy was widespread. However, the number of books published in this time of
course tended to increase. Our first observation, then, is that cultural complexity may be increasing simply through the number of different things to talk about. The number of different word-pair types per year increased
dramatically, more than doubling from 1900 to 2000, as given in table 1.
Table 1
Year # types
1900 17,769,755
1910 22,834,741
1920 22,409,426
1930 19,745,549
1940 20,369,679
1950 23,632,749
1960 27,379,411
1970 34,218,686
1980 34,458,083
1990 37,796,626
2000 41,654,264
This alone indicates an increase in the complexity of written culture, although this process is not continuous and some years during the Depression show a loss. To confirm the overall upward trend, we have also calculated the Shannon entropy of the 2-gram distributions, given in table 2.
Table 2
Year Entropy (bits)
1900 17.942357
1910 18.072880
1920 18.072325
1930 18.133058
1940 18.241048
1950 18.336162
1960 18.391872
1970 18.473447
1980 18.692278
1990 18.729807
2000 18.742085
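Both measures (the type counts of table 1 and the entropies of table 2) can be computed from a single yearly frequency list. A minimal sketch, with a made-up toy distribution standing in for the real corpus:

import math

def types_and_entropy(counts):
    # `counts` maps each word pair to its match count for one year.
    total = sum(counts.values())
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
    return len(counts), entropy

n_types, h = types_and_entropy({'hamburgers with': 8, 'the cat': 120, 'cat sat': 45})
print(n_types, 'types,', round(h, 6), 'bits')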
This further analysis illustrates that a more sophisticated measure of complexity shows a continuous process
of increasing complexity, even in times when (for example due to economic downturn) the actual volume of
words published decreases. Even when people are writing less, they still have more 'stuff' about which to write, showing the cumulative nature of culture (today's current events are tomorrow's history, but still suitable material for discussion and analysis, and hence part of culture).
We acknowledge that this is a preliminary study only. Google Books offers cultural snapshots at much greater
frequency than ten-year intervals. Google Books also offers corpora in other languages (including German,
French, Spanish, Russian, and Hebrew) as well as from another English-speaking culture. Use of a more balanced
corpus (such as the Google Books English Million corpus, a corpus balanced at about 1 million words/year to
offset increased publication), or the BYU Corpus of Historical American English might help clarify the effects
of publication volume. Analysis of n-grams at sizes other than 2 would illustrate other types of complexity; in particular, 1-grams (words) would show changes in lexical but not syntactic complexity, and hence an analysis of 'stuff' but not of what people do with 'stuff'. Despite these weaknesses, we still feel this paper
illustrates that culture-wide analysis of abstractions like increasing complexity is both practical and fruitful.
References
Denton, T. (2004). Cultural Complexity Revisited. Cross-Cultural Research 38(1): 3-26.
Juola, P. (1998). Measuring Linguistic Complexity: The Morphological Tier. Journal of Quantitative Linguistics
5(3): 206-213.
Juola, P. (2008). Assessing Linguistic Complexity. In M. Miestamo, K. Sinnemaki, and F. Karlsson (eds.),
Language Complexity: Typology, Contact, Change. Amsterdam: John Benjamins.
Michel, J.-B., Y. K. Shen, A. Presser Aiden, A. Veres, M. K. Gray, W. Brockman, The Google Books Team, J. P. Pickett, D. Hoiberg, D. Clancy, P. Norvig, J. Orwant, S. Pinker, M. A. Nowak, and E. Lieberman Aiden (2010). Quantitative Analysis of Culture Using Millions of Digitized Books. Science (published online ahead of print: 16 December 2010).
Spencer, H. (1900). First principles (6th ed.). Akron, OH: Werner.


Assembling a corpus
A language corpus (pl. corpora) is a collection of language data selected according to some organising
principle. This organising principle is enshrined in the sampling frame which is used to select materials
for the corpus. The materials gathered are typically stored in the form of machine readable texts to
facilitate rapid searching of the data.
For example, the sampling frame may be newspaper materials of late twentieth century Britain. The sampling frame helps the builder of the corpus decide what may or may not be included in the corpus. It also aids the eventual end user of the corpus: if there is a mismatch between the sampling frame of the corpus and the research question the user wishes to investigate, then the corpus will not allow the pursuit of that research question. So, for example, if a researcher looked at our putative corpus of late twentieth century newspapers and was interested in exploring academic discourse practices, the newspaper corpus would not allow that research question to be pursued directly.
The nature of the research question
Assuming that the researcher's sampling frame and the corpus match, the ease with which a corpus can
be used, at this moment in time, depends largely on the nature of the research question. Corpus
retrieval tools, such as concordance programmes, are typically very good at searching for words and
groups of words. Hence if one's research question can be expressed via simple lexical searches, the corpus will prove a relatively easy-to-use research tool. However, where linguistic processing is
required in order to exploit a corpus, the research question may be much harder to pursue using corpus
data. Some corpora include linguistic analyses, encoded in the corpus by means of some markup
language such as SGML. Such corpora are called annotated corpora. Corpus annotation may aid the
process of corpus exploitation. For example, consider a researcher who is interested in looking for all
modal verbs in a corpus of modern British English. Rather than looking for all of the word forms
associated with modal verbs in English, the researcher can instead look for all words which have the part
of speech 'modal verb' associated with them. Corpus annotation comes in many forms and in principle
any linguistic analysis could be encoded in a corpus.
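As a concrete sketch of that modal-verb search: the fragment below scans a hypothetical corpus stored in the common word_TAG format; 'VM' as the modal-verb tag is a CLAWS-style assumption, and other annotation schemes label modals differently.

import re
from collections import Counter

# 'annotated_corpus.txt' is a hypothetical file of tagged text,
# e.g. "We_PPIS2 would_VM go_VVI home_NN1".
text = open('annotated_corpus.txt', encoding='utf-8').read()
modals = Counter(m.lower() for m in re.findall(r'(\w+)_VM\b', text))
for word, count in modals.most_common():
    print(word, count)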
Annotating a corpus
What happens if a researcher finds a corpus with the right sampling frame but it is unannotated? Where
the annotation is necessary to the pursuit of the research question, the researcher may add the
annotations themselves or seek to add them using a computer program which can undertake the annotation automatically. At present, some forms of automated linguistic annotation, such as part-of-speech analysis, are available for a wide range of languages. These programs are typically quite accurate, reporting success rates in the region of 90%+. Other forms of annotation, such as automated word-sense analysis, are now becoming available too. However, for many forms of linguistic annotation, automated analysis will not be readily available in the foreseeable future. Hence to pursue certain
research questions with specific un-annotated corpora a great deal of time may have to be invested by
the researcher in annotating the corpus.
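By way of illustration, an off-the-shelf tagger such as NLTK's can add part-of-speech annotation in a few lines (Penn Treebank tags, where 'MD' marks modal verbs); accuracy is broadly in the 90%+ region mentioned above, depending on text type.

import nltk
# One-time model downloads:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

sentence = "The corpus must be annotated before it can be searched."
print(nltk.pos_tag(nltk.word_tokenize(sentence)))
# e.g. [('The', 'DT'), ('corpus', 'NN'), ('must', 'MD'), ...]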
So far I have only considered monolingual corpora. Yet corpus data are becoming increasingly multilingual, with more and more data available in an ever wider range of languages. Corpora are also
being developed which encode an original text and its translation into one or more other languages.
These so-called parallel corpora are increasingly being used in contrastive language studies, language
pedagogy and translation studies.
Using corpora for teaching
The use of corpora in teaching allows students to test linguistic hypotheses against large bodies of
naturally occurring language data. The capacity to do this may, for example, guide students in developing
a description of some linguistic feature, studying language variation or contrasting two languages.
Beyond linguistics, students of modern languages may find that corpora have a role to play in the
language teaching classroom, where corpora can be used as a guide to curriculum planning or act as a
resource for students.
The use of corpora in the classroom presents a choice: does one teach students to exploit corpora themselves, allowing them to undertake discovery learning, or does the teacher exploit the corpus in order to inform their own teaching? While taking both approaches in combination is a possibility, many teachers simply exploit corpora to teach. However, where teaching students to exploit corpora is the preferred option, corpora are best introduced early in the teaching curriculum so that students can use them, on their own initiative, across the curriculum of a degree scheme from as early a stage as possible.
If teaching students to exploit corpora, one should minimally teach them how to select corpus data to
match a research question and how to use corpus retrieval software to interrogate a corpus appropriately.
It may also be desirable to teach students about how to construct and annotate their own corpora.
Bibliography
There are now a number of basic textbooks covering corpus-based approaches to the study of language.
Of the following suggestions, Biber et al. is of most interest to those wishing to pursue an approach to
corpus linguistics based upon Biber's multi-feature/multi-dimension approach to corpus data. Kennedy is
of most interest to those readers interested in the use of corpora in ELT. McEnery and Wilson is probably
of most interest to those approaching corpora from computational linguistics, or readers with an interest in
multilingual corpora. Meyer's book is slim, informative and a good introduction to English corpus
linguistics. Finally, Stubbs's volume is a neo-Firthian account of the use of corpus data in linguistics.
Biber, D., S. Conrad & R. Reppen (1998) Corpus Linguistics: investigating language structure and use.
Cambridge: CUP.
Kennedy, G.D. (1998) An Introduction to Corpus Linguistics. London: Longman.
McEnery, T. & A. Wilson (2001, 2nd ed) Corpus Linguistics. Edinburgh : Edinburgh University Press.
Meyer, C. (2002) English Corpus Linguistics: an introduction. Cambridge: Cambridge University Press.
Stubbs, M. (1996) Text and Corpus Analysis: computer assisted studies of language and culture. Oxford:
Blackwell.
Related links
Two generally useful URLs to follow are:
http://www.ling.lancs.ac.uk/monkey/ihe/linguistics/contents.htm
http://devoted.to/corpora
The first of these URLs is an on-line accompaniment to the McEnery & Wilson Corpus Linguistics
textbook. The second URL is a collection of useful URLs covering a wide range of topics of interest to
people using, or interested in using, corpora.
For those interested in exploring the role of corpora in teaching, Tim Johns' data driven learning page
(http://web.bham.ac.uk/johnstf/timconc.htm) is a valuable resource. Michael Barlow's page
(http://www.ruf.rice.edu/~barlow/corpus.html) is also valuable, both because it contains some information
regarding teaching and language corpora and because it contains a host of links to corpora in a number
of languages, amongst other things.
Mike Scott's homepage is a good place to visit to explore a popular concordancer, WordSmith
(http://www.liv.ac.uk/~ms2928/homepage.html) while the possibility of using the Sara programme,
released with the British National Corpus, is best explored by visiting http://www.hcu.ox.ac.uk/BNC/.
