Presentation on “Assessing pragmatic competence in EFL users”

At the GAL conference 2016 in Koblenz, we presented initial results based on the first round of data collection in our perception study. The sample was not yet large enough to generalize from these first findings, but the analysis revealed a number of intriguing tendencies that we now intend to confirm by gathering more data.

The central issue raised in our talk was how to assess pragmatic competence in a systematic and controlled fashion based only on linguistic performance, which we see as an inherent requirement of language testing and certification. Despite the criticism raised against a “native speaker ideal”, we argued that if pragmatic competence is to be tied to the goal of communicative success, then accordance with native speaker norms and/or expectations is an essential aspect of pragmatically competent linguistic behavior. As a first step, we tested this by analyzing the lexical material used to formulate requests by a group of German learners (elicited in the DCT format included in the QEU) and comparing the lemmas they used with the range of lemmas occurring in corresponding requests by native speakers of English. This comparison also allowed us to select items from the learner and native speaker data pools to serve as stimuli in the perception study: learner items were selected for their similarity to the target native speaker group, and native speaker items for their representativeness of their own linguistic community.
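The kind of lexical comparison described above can be sketched in a few lines. This is an illustrative sketch only: the lemma lists and the similarity measure (Jaccard overlap of lemma sets) are hypothetical stand-ins and do not reproduce the actual study data or selection criteria.

```python
def lemma_overlap(learner_lemmas, native_lemmas):
    """Jaccard overlap between the lemma sets of two request formulations."""
    learner, native = set(learner_lemmas), set(native_lemmas)
    if not learner | native:
        return 0.0
    return len(learner & native) / len(learner | native)

# Hypothetical example: lemmas from one learner request compared with
# pooled native speaker lemmas for the same DCT scenario.
learner = ["could", "you", "possibly", "lend", "me", "your", "note"]
native = ["could", "you", "lend", "me", "your", "note", "please"]
print(round(lemma_overlap(learner, native), 2))  # → 0.75
```

A score like this could then rank learner items by closeness to the native speaker pool when selecting stimuli.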

The items thus selected were then presented to native speakers in our online study, who were asked whether a given item had been produced by a native speaker or a learner of English, without any additional information. Initial results show that this is not a straightforward task: learners who used lexical material similar to that of the native speaker group, for example, were likely to be misidentified as native speakers. In a second step, we asked native speakers to rate the same requests on several perceptual dimensions, e.g. politeness, intelligibility, acceptability, perceived frequency, and expected probability of communicative success. Our initial findings suggest that a positive or negative evaluation on these dimensions alone is not enough to tell native speakers and learners apart. However, several of these dimensions predict an attribution of native speaker status for positively rated items: learner requests perceived as highly intelligible, acceptable, and frequently occurring will often be classified as having been produced by native speakers.
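The predictive relationship between ratings and attribution can be illustrated with a toy rule. This is a hedged sketch, assuming per-item mean ratings on a 1–5 scale; the item values, the 3.5 threshold, and the simple averaging rule are invented for illustration and do not reproduce the study's actual data or statistical model.

```python
from statistics import mean

def predict_native(ratings, threshold=3.5):
    """Predict a 'native speaker' attribution when the mean rating on the
    dimensions found to drive attribution (intelligibility, acceptability,
    perceived frequency) exceeds a threshold."""
    key_dims = ("intelligibility", "acceptability", "frequency")
    return mean(ratings[d] for d in key_dims) > threshold

# Hypothetical items: a request rated highly on the key dimensions is
# predicted to be attributed to a native speaker, regardless of who
# actually produced it.
item_a = {"intelligibility": 4.6, "acceptability": 4.2, "frequency": 4.0}
item_b = {"intelligibility": 3.1, "acceptability": 2.8, "frequency": 2.5}
print(predict_native(item_a), predict_native(item_b))  # → True False
```

In the actual analysis one would of course fit the relationship statistically rather than fix a threshold by hand; the sketch only makes the direction of the finding concrete.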

In our view, these are striking findings, relevant both for language learning and teaching and for potential applications in language testing. Most importantly, they suggest that native speaker intuition alone is not enough to distinguish (advanced) learners from native speakers on the basis of written performance, and that a model of pragmatic competence compatible with our data must centrally account for the role of normative expectations in native speakers. More detailed information on this research project will be published in the near future.
