
Computer Assisted Language Learning

ISSN: 0958-8221 (Print) 1744-3210 (Online) Journal homepage: http://www.tandfonline.com/loi/ncal20

Developing a technological pedagogical content knowledge (TPACK) assessment for preservice teachers learning to teach English as a foreign language

Derya Baser, Theodore J. Kopcha & M. Yasar Ozden

To cite this article: Derya Baser, Theodore J. Kopcha & M. Yasar Ozden (2016) Developing a
technological pedagogical content knowledge (TPACK) assessment for preservice teachers
learning to teach English as a foreign language, Computer Assisted Language Learning, 29:4,
749-764, DOI: 10.1080/09588221.2015.1047456

To link to this article: http://dx.doi.org/10.1080/09588221.2015.1047456

Published online: 04 Jun 2015.


Developing a technological pedagogical content knowledge (TPACK) assessment for preservice teachers learning to teach English as a foreign language

Derya Baser (a,b)*, Theodore J. Kopcha (c) and M. Yasar Ozden (d)

(a) Department of Computer Education and Instructional Technology, Middle East Technical University, Ankara, Turkey; (b) Department of Computer Education and Instructional Technology, Abant Izzet Baysal University, Bolu, Turkey; (c) Department of Career and Information Studies, Learning, Design, and Technology Program, University of Georgia, Athens, United States; (d) Department of Computer Education and Instructional Technology, Eastern Mediterranean University, Famagusta-TRNC, Mersin-10, Turkey

This paper reports the development and validation process of a self-assessment survey
that examines technological pedagogical content knowledge (TPACK) among
preservice teachers learning to teach English as a foreign language (EFL). The survey,
called TPACK-EFL, aims to provide an assessment tool for preservice foreign
language teachers that addresses subject-specific pedagogies and technologies. Using a
mixed methods approach, survey items were first generated using qualitative methods
(e.g. expert interviews and document analysis). The content validity of the items was
established through expert and preservice teacher reviews. The survey was then
validated through two rounds of exploratory factor analysis (EFA), the first with 174
preservice EFL teachers and the second with 204 preservice EFL teachers. The results
of the first round indicated a five-factor structure: technological knowledge (TK),
content knowledge (CK), pedagogical knowledge (PK), pedagogical content
knowledge (PCK) and a fifth factor that combined TCK, TPK, and TPACK items.
After revising the survey, the second round of EFA results showed a seven-factor
structure that was consistent with the TPACK framework. The final TPACK-EFL
survey included a total of 39 items: 9 TK, 5 CK, 6 PK, 5 PCK, 3 TCK, 7 TPK, and 4
TPACK. The results offer survey developers and teacher educators insight into
establishing clear boundaries between the TPACK constructs. In particular, subject-
specific strategies were used to generate clear and distinct items within the TCK and
TPK constructs. Implications for developing other subject-specific TPACK surveys
and using the TPACK-EFL survey in other countries are discussed.
Keywords: English as a foreign language; technology integration; technological
pedagogical content knowledge; teacher education; assessment

1. Introduction
There has been tremendous interest from the teacher education community in the techno-
logical pedagogical content knowledge (TPACK) framework since its popularization by
Mishra and Koehler (2006). The framework offers teacher educators a way of conceptualizing and assessing preservice teachers' knowledge and abilities to integrate technology into their own teaching. This is important in the context of teacher education. With the rapid increase of technology in today's K-12 classrooms, there is a need for a framework like TPACK to support teachers as they learn to use the technologies that are available for learning.

*Corresponding author. Email: dbaser@metu.edu.tr

© 2015 Taylor & Francis



According to Mishra and Koehler (2006), the TPACK framework consists of seven
types of knowledge associated with the integration of technology in instruction (see
Figure 1): technological knowledge (TK), pedagogical knowledge (PK), content knowl-
edge (CK), pedagogical content knowledge (PCK), technological pedagogical knowledge
(TPK), technological content knowledge (TCK), and TPACK. TPACK itself focuses on
teachers' knowledge and use of technology, pedagogy, and content interactively; that
is, meaningful uses of technology to support instructional practices within a particular
content area. This stands in contrast to frameworks that oversimplify technology integra-
tion by separating technology from pedagogy and content knowledge (Koehler & Mishra,
2008). The constituent pieces of TPACK include the overlapping elements of TK, PK,
and CK. These represent one's knowledge of subject-specific curriculum and strategies
(PCK), the affordances of subject-specific technology (TCK), and the use of technology
to improve instructional strategies (TPK). These constructs are created by combining the
separate components of technological, pedagogical, and content knowledge.

Figure 1. The TPACK framework (reproduced by permission of the publisher, © 2012 by http://tpack.org).

One of the most popular forms of assessing preservice teacher TPACK is with surveys
(Koehler, Shin, & Mishra, 2012). A TPACK survey offers teacher educators a quick and
cost-effective method for assessing knowledge and skills among a very large audience
(Graham, 2011). An effective TPACK survey is important in teacher education because it
provides a mechanism for measuring and improving our own effectiveness as teacher
educators. There are currently several surveys that have been developed as valid and reli-
able measures of TPACK (Koehler et al., 2012).
There are, however, several known issues associated with existing TPACK surveys.
Chief among those issues is a lack of clarity between the boundaries of the TPACK con-
structs (Angeli & Valanides, 2009; Archambault & Barnett, 2010; Cox, 2008; Cox & Gra-
ham, 2009; Graham, 2011). As a result, the existing TPACK surveys often fail to assess
each construct independently (see Archambault & Barnett, 2010). Even among the sur-
veys that attend to validity and reliability (e.g. Archambault & Barnett, 2010; Chai, Koh,
& Tsai, 2010; Koh, Chai, & Tsai, 2010; Sahin, 2011; Schmidt et al., 2009; Yurdakul et
al., 2012), the items are often written in a way that is somewhat general. While this
improves their applicability across a wider audience, items written in a broad or general
manner are also more susceptible to bias and more often misrepresent the constructs upon
which they are built (Desimone, 2009). More importantly, each content area values differ-
ent pedagogical strategies when integrating technology (Graham et al., 2009). Items that
are written to apply to multiple content areas fail to address content-specific pedagogical
and technological practices associated within a given subject matter. There is a current
interest in developing surveys that are content specific and address the known issues of
validity and reliability.
The purpose of this research was to develop a TPACK survey that assesses preservice teachers' knowledge of using technology to teach English as a foreign language (EFL).
The survey, called TPACK-EFL, is a self-assessment for preservice teachers that focuses
specifically on TPACK within the EFL content area. The need for such a survey is high.
Although some studies (Abbitt, Perry, & Edwards, 2011; Graham et al., 2009; Handal,
Campbell, Cavanagh, Petocz, & Kelly, 2012; Landry, 2010) have reported on the success-
ful development of content-specific TPACK surveys in a variety of subject-matter areas
such as science and mathematics, there has yet to be a valid and reliable TPACK survey
developed for EFL teachers specifically. English is the most frequently learned foreign
language in the country where this study took place, as well as many other parts of the
world such as China, Russia, Germany, Spain, Egypt, and Brazil (Crystal, 2003). An
EFL-specific TPACK survey would give teacher educators a tool for assessing future
teachers and their potential to integrate technology into their teaching of EFL.
Additionally, technology is valued in EFL for specific and unique types of learning
activities. For example, social media such as blogs and wikis are promoted as a way for
learners to use language in a meaningful way while constructing knowledge with other
learners (Jonassen, Howland, Marra, & Crismond, 2008). Computer-based communica-
tion tools can enhance students language skills by enabling interaction between students
and native speakers (Golonka, Bowles, Frank, Richardson, & Freynik, 2014). Audio and
video materials allow students to see and hear language used in real contexts by different
speakers including native and non-native speakers (Bernhardt, 2010). A survey such as
TPACK-EFL offers teacher educators a valid and reliable instrument that addresses the
specific pedagogical and technological approaches to instruction that the EFL community
values most. Such a survey can be translated into multiple languages and used to assess
TPACK among a variety of students who will teach English in the future.

In addition to offering an EFL-specific TPACK survey, this study suggests a robust


process for developing valid and reliable subject-specific TPACK surveys. The TPACK-EFL survey was developed using Creswell and Plano Clark's (2007) instrument development model. The model may help reduce the ambiguity among construct boundaries
because it uses mixed methods to improve both the internal and external validity of the
survey items. Few, if any, TPACK surveys have been developed with this level of attention
to the issues of validity and reliability (Koehler et al., 2012). The results of this study
offer other researchers much needed insight into a process for developing subject-specific
TPACK surveys in a way that attends to validity and reliability throughout the develop-
ment process.

2. Method
The researchers in this study were interested in developing and validating a subject-specific survey to assess preservice teachers' perceptions of their TPACK within the context of teaching EFL. To do this, we created the TPACK-EFL survey to collect data on preservice teachers' self-assessment of the seven types of knowledge represented within TPACK.

2.1. Instrument development


Creswell and Plano Clark's (2007) instrument development model consists of two distinct phases: development and validation. During the development phase, qualitative methods are used to develop survey items. During the validation phase, quantitative methods
are used to test the survey items generated during the development phase. Creswell and
Plano Clark suggested that using qualitative methods during the development phase
would improve the validity of the survey because survey developers would more fully
understand the phenomena of interest before attempting to validate the items associated
with the phenomena.
For our survey development, we engaged in both phases of Creswell and Plano Clark's instrument development model. Although not specified in the model, we applied
multiple qualitative and quantitative data sources during each phase in an effort to
improve the validity of the survey. Figure 2 contains the two phases of the instrument
development model and descriptions of the specific qualitative and quantitative methods
that we applied within each phase. Those methods are described in detail below.

2.1.1. Development phase


The development phase of the TPACK survey began with qualitative data collection (see
Figure 2), including interviews with experts and a literature and document analysis. The
purpose of this phase was to understand how the TPACK constructs were defined by dif-
ferent researchers, how the constructs differed from each other in EFL, and how the con-
structs were measured in other surveys. Survey items were then generated from this data
and the content validity of the items was established through expert review and a cogni-
tive interview with a member of the target audience.

2.1.1.1. Interviews. The development phase began with semi-structured interviews con-
ducted with six instructors, five from the Department of Computer Education and Instruc-
tional Technology and one from Foreign Language Education. The interviews explored
the knowledge and skills that preservice foreign language teachers need to integrate technology. Instructors were drawn through purposeful sampling such that they had experience with technology-integration coursework within the Foreign Language Education department. Because interviewees were purposefully selected, six instructors provided a level of redundancy in responses that indicated an adequate sample for obtaining this qualitative data (Patton, 1990).

Figure 2. The phases of development for the TPACK-EFL survey.

2.1.1.2. Literature and document analysis. The development phase also included a lit-
erature and document analysis. As shown in Table 1, this included 11 national and inter-
national standards that addressed technology integration broadly and/or the use of
technology to support EFL (e.g. ISTE NETS, Turkish Ministry of Education teacher qual-
ifications, etc.). Existing TPACK instruments (e.g. Chai et al., 2010; Jamieson-Proctor,
Finger, & Albion, 2010; Koh et al., 2010; Lee & Tsai, 2010; Sahin, 2011; Schmidt et al.,
2009) were also analyzed in terms of their content, structure, item specificity, development procedures, validity, and reliability evidence.

2.1.1.3. Item generation and content validity. The qualitative data were then analyzed
to generate items for each of the seven proposed constructs in the TPACK framework.
An initial item pool was generated through content analysis of expert interviews. Items in
the initial pool were then revised or eliminated based on an analysis of the language of
the standards and the items contained in current TPACK surveys. These analyses resulted
in the generation of 50 items, including 13 TK items, 7 CK items, 6 PK items, 6 PCK
items, 6 TCK items, 6 TPK items, and 6 TPACK items.

Table 1. National and international standards used to generate the TPACK-EFL survey.

Sources                                                                          TK CK PK PCK TCK TPK TPACK

Technology competencies of INTIME: Facilitator of Quality Education Model        X
Standards for Foreign Language Learning: Preparing for the Twenty-first Century  X X
The Partnership for Twenty-first Century Skills (P21) for the core subject
  of world languages                                                             X X X
The ISTE NETS and Performance Indicators for Teachers                            X X X
Turkish Ministry of Education Teacher Qualifications                             X
National Board for Professional Teaching Standards                               X X
The framework of professional standards for teachers (Training and
  Development Agency)                                                            X X
Iowa Teaching Standards                                                          X
Wisconsin Educator Standards (Teachers)                                          X
Turkish Ministry of Education English Teacher Qualifications                     X
Teachers of English to Speakers of Other Languages (TESOL) Technology
  Standards Framework                                                            X X X X
A group of seven instructors then reviewed the items to improve the content validity
of the survey. These experts were independent from the six instructors who were inter-
viewed earlier in the development phase and held expertise in at least one of the following
areas: foreign language, pedagogy, instructional technology, and measurement and evalu-
ation. Each instructor was provided with a form in which s/he could provide suggestions
for improving each item. The researchers used the reviews to make revisions to the items
such as providing examples for technological terms, clarifying the nature of specific
items, and avoiding pedagogical terms for the TPACK domains not related to pedagogy.
A preservice foreign language teacher participated in a cognitive interview to provide
the researchers with an understanding of how the intended audience might interpret the
items. The purpose of this interview was to improve the structure and content of items
that were confusing to or misinterpreted by a member of the intended audience. Because
the researchers aimed to improve the interpretation and readability of the items rather
than validate them, only one interview was needed to identify areas where the language
of the items needed clarification or revision. The preservice teacher read each item aloud
and explained her understanding of each item. The results of the cognitive interview sug-
gested making several changes such as reducing the length of the survey instructions and
eliminating, replacing, or explaining ambiguous terms used within several items.
The survey in this study employed a nine-point rating scale that ranged from "nothing/none" (1) to "very little" (3) to "some" (5) to "quite a bit" (7) to "a great deal" (9). Although
other TPACK surveys use a five-point scale, a nine-point scale can help improve the accuracy of preservice teachers' self-assessments. Other measures that use a nine-point scale to assess a teacher's sense of their own knowledge and abilities include Tschannen-Moran and Hoy's (2001) Teacher Sense of Efficacy Scale.

2.1.2. Validation phase


Validation occurred over two rounds of testing; both rounds employed the quantitative
method of exploratory factor analysis (EFA) with maximum likelihood estimation (MLE)
and oblique rotation. The first round of validation is described here as part of the Instru-
ment Development section because it led to revisions in the survey. The survey was tested
a second time after making revisions based on the results of the first round. The results of the second round of validation are presented in the Results section.

2.1.2.1. Round one. In the first round of validation, 174 preservice foreign language
teachers in their third and fourth year of an EFL program at a major university in Turkey
completed the TPACK-EFL survey. The teachers had completed a semester-long course
on technology integration in the previous year of the program. The results of an EFA indi-
cated a five-factor solution that explained 69.21% of the variance. Similar to Schmidt et al. (2009), factors were determined by applying the Kaiser-Guttman rule; that is, by selecting factors with an eigenvalue greater than one. The first four factors were TK, CK,
PK, and PCK, and the fifth factor merged the items from TCK, TPK, and TPACK.
Participant feedback was collected from six of the students who participated in a 20-
minute interview after completing the TPACK-EFL survey. The focus of the interview
was on how the students perceived TCK, TPK, and TPACK. Their responses indicated
that the TCK, TPK, and TPACK items needed clarity and improvements in wording and
structure. In particular, participants found it difficult to see how the TCK items were dis-
tinct from TPK and TPACK items. When interpreting the TCK items, the participants
envisioned the technologies being used for teaching purposes. For example, they interpreted the TCK item, "I can design multimedia (slide show, video, etc.) to present an English topic," as an assessment of whether they could design multimedia to teach that English topic.
The TCK items were rephrased or removed entirely. The rephrased items were written
such that they focused on participants' own abilities to use technology to learn EFL-
related content (e.g. software to practice pronunciation skills; online tools to practice
speaking with others; and watching movies in a foreign language to develop listening
skills). This specific and narrower focus on TCK falls within Mishra and Koehler's (2006) definition of TCK, which includes knowledge of tools such as discussion boards and other social media and Web 2.0 tools. It also reflects one's TCK within the content area of EFL. The manner in which a teacher learns to speak a language strongly influences the way in which he/she teaches that language to others (Lin, 2010). The researchers
therefore anticipated that the focus on personal use of technology to develop preservice
teachers' own EFL knowledge and skills would assess TCK while more clearly articulat-
ing the distinction between the use of technology for presenting specific content (TCK)
and the use of technology for teaching that content (TPACK).
In addition to new TCK items, some items within the other TPACK domains were
revised, deleted, or reworded based on the EFA results of the first round of validation.
This resulted in a 50-item survey for validation in Round Two: 10 TK items, 7 CK items,
7 PK items, 6 PCK items, 5 TCK items, 5 TPK items, and 10 TPACK items.

2.1.2.2. Round two. The 50-item survey was then tested in round two of the validation
phase. The participants in the second round of validation were 204 foreign language pre-
service teachers in their third and fourth years of an EFL program at a major university in
Turkey. Hair, Black, Babin, and Anderson (2010) suggest that an appropriate sample size for factor analysis is five times as many cases as the number of items in the survey; thus, 250 participants are recommended for a 50-item survey. While the number of participants was lower than recommended by Hair et al., it was more than adequate for the final number of items: a 39-item survey would need 195 participants for factor analysis. The sample of 204 participants also fits the general recommendation that factor analysis should be
conducted with a minimum sample size of 200 (Meyers, Gamst, & Guarino, 2006).
Therefore, sample size in the second round of validation was considered appropriate for
factor analysis. The results of the analysis are presented below.
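The sample-size reasoning above combines two published heuristics, and can be sketched as a small helper (an illustration only; the function name and its defaults are ours, not from the paper or its sources):

```python
def required_sample(n_items, ratio=5, minimum=200):
    """Recommended minimum cases for factor analysis:
    `ratio` cases per survey item (Hair et al., 2010),
    subject to an absolute floor (Meyers et al., 2006)."""
    return max(n_items * ratio, minimum)

# The 50-item Round Two survey calls for 250 cases by the 5:1 rule;
# the final 39-item survey needs 195, which the 200-case floor raises to 200.
print(required_sample(50))  # 250
print(required_sample(39))  # 200
```

On either reading, the 204 participants in Round Two clear the threshold for the final 39-item survey.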

3. Results
Before conducting EFA on the survey items from Round Two of the validation phase (see
Figure 2), we computed the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of
sphericity. These tests provided evidence of the appropriateness of factor analysis
and the presence of correlations among variables. The KMO value was calculated as .93.
Tabachnick and Fidell (2001) suggest that when this value is relatively large (greater than
.60), there is an underlying structure to the survey and factor analysis is warranted
for the sample size associated with the measure. Bartlett's test of sphericity (BTS value
= 5837.00, p < 0.001) was found to be significant, supporting the claim that correlations
among variables were not zero. The internal consistency reliability coefficients (Cronbach's
alphas) within each construct were also satisfactory (see Table 2); Fraenkel and Wallen
(2008) suggested that values above .70 are acceptable.
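For readers unfamiliar with Bartlett's test of sphericity, its statistic compares the observed correlation matrix against an identity matrix. A minimal self-contained sketch (the example matrices are invented for illustration, and the cofactor determinant is only practical for small matrices):

```python
import math

def bartlett_sphericity(corr, n):
    """Bartlett's test statistic: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|,
    where R is the p x p correlation matrix and n the number of cases.
    Large values (small p-values) reject the hypothesis that R is identity."""
    def det(m):  # determinant via cofactor expansion (illustrative only)
        if len(m) == 1:
            return m[0][0]
        return sum(((-1) ** j) * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
                   for j in range(len(m)))

    p = len(corr)
    return -(n - 1 - (2 * p + 5) / 6) * math.log(det(corr))

# An identity correlation matrix (no correlations) yields a statistic of 0,
# so factor analysis would not be warranted.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(abs(bartlett_sphericity(identity, 204)))  # 0.0

# Correlated variables shrink the determinant and push the statistic up.
corr = [[1.0, 0.6, 0.5], [0.6, 1.0, 0.4], [0.5, 0.4, 1.0]]
print(round(bartlett_sphericity(corr, 204), 2))
```

The very large statistic reported above (5837.00 for 50 items and 204 cases) follows the same logic at full scale.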
The 50 items on the survey were analyzed through EFA with MLE. Oblique rotation
with direct oblimin method was used to interpret the factor structure of the scale rather
than orthogonal rotation because orthogonal rotation produces factors that are uncorre-
lated (Costello & Osborne, 2005). Given the nature of the TPACK constructs, the likeli-
hood of correlations among factors was strong and warranted an analysis that considered
the relationships among the factors. The initial results indicated that 11 items had loadings that were too low or that loaded on two factors. These items were eliminated and the remaining 39 items (see Appendix 1) were reanalyzed using the same procedure. Table 2 reports the factor loadings and reliability coefficients of the items.
A visual examination of the scree plot indicated that a seven-factor structure exhibited
the best fit for our 39 items. However, the Kaiser-Guttman rule (eigenvalues greater
than 1) suggested that a six-factor solution was present. After exploring both, the seven-
factor solution exhibited the best fit for several reasons. First, the seven-factor solution
was more consistent with the framework upon which the survey was constructed. Second,
researchers recommend using solutions that maximize the number of loadings higher than
.30 (Hair, Anderson, Tatham, & Black, 1995; Stevens, 2002). In our case, this was the
seven-factor solution. Finally, the seven-factor solution explained the largest percent of
the variance in the model (70.42%) while having the fewest cross-loaded items.
Combined, these reasons suggested that the seven-factor solution was the most reasonable
interpretation (Meyers et al., 2006) for the TPACK-EFL survey.

Table 2. Survey factors, factor loadings, and reliability coefficients by TPACK construct.

#    Item                                                Loading

TK: alpha = .89
3    Use computer peripherals                            .84
4    Troubleshoot problems                               .83
2    Adjust settings                                     .81
5    Use classroom equipment                             .68
1    Use technological terms                             .67
6    Use Office programs                                 .62
9    Learn how to use software                           .61
7    Create multimedia                                   .48
8    Use collaboration tools                             .47

CK: alpha = .88
10   Speaking                                            .90
13   Understanding                                       .78
11   Writing                                             .76
12   Pronouncing                                         .74
14   Listening                                           .42

PK: alpha = .92
18   Collaborate with stakeholders                       .91
17   Deal with students' diversity                       .66
19   Develop professionally                              .65
20   Support out-of-class learning                       .46
16   Design a learning environment                       .45
15   Use pedagogical strategies                          .44

PCK: alpha = .91
24   Prepare curricular activities                       .93
23   Support language development                        .78
25   Adapt a lesson plan                                 .77
21   Manage classroom                                    .50
22   Assess learning                                     .38

TCK: alpha = .81
26   Benefit from multimedia to express ideas            .26
28   Cooperate with foreign persons distantly            .84
27   Contribute to multilingual communities distantly    .81

TPK: alpha = .91
30   Guide students for ethical technology usage         .63
31   Use technology to develop higher order skills       .62
32   Manage technology-integrated classroom              .55
29   Use technology to meet individualized needs         .51

TPACK: alpha = .86
38   Use Web 2.0 to develop students' language           .69
36   Use collaboration tools for language learning       .52
39   Use technology for professional development         .50
37   Use technology for students' self-development       .47
34   Use technology to design learning materials         .47
33   Decide to use technology in which standards         .43
35   Use multimedia to support language learning         .42
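The eigenvalue-greater-than-one rule applied above is mechanical to compute; a minimal sketch (the eigenvalues below are invented for illustration and are not the study's actual values):

```python
def kaiser_guttman(eigenvalues):
    """Count the factors retained under the Kaiser-Guttman rule:
    keep each factor whose eigenvalue exceeds 1."""
    return sum(1 for ev in eigenvalues if ev > 1.0)

# Hypothetical eigenvalues from a correlation matrix. The rule keeps six
# factors here, while a scree-plot elbow or theoretical fit (as in this
# study) might justify retaining a seventh with an eigenvalue just below 1.
eigs = [12.4, 4.1, 3.2, 2.5, 1.9, 1.3, 0.97, 0.8, 0.6]
print(kaiser_guttman(eigs))  # 6
```

This illustrates why the rule and the scree plot can disagree, as they did here, and why the final choice rested on additional criteria.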
The seven factors were labeled in accordance with the TPACK framework (i.e. TK,
CK, PK, PCK, TCK, TPK, and TPACK). Items with loading coefficients at or below .30
were dropped. The final TPACK-EFL survey included a total of 39 items: 9 TK items, 5
CK items, 6 PK items, 5 PCK items, 3 TCK items, 7 TPK items, and 4 TPACK items.
The items are presented in Table 2 by the constructs under which they loaded. All of the
items loaded on a single factor. However, three items written as TPACK items instead loaded onto the TPK factor: "I can decide when technology would benefit my teaching of specific English curricular standards," "I can design learning materials by using technology that supports students' language learning," and "I can use multimedia such as videos and Web sites to support students' language learning."
Evidence for the internal consistency of the developed TPACK instrument was established through Cronbach's alpha. When the items for each factor were analyzed separately, the reliability coefficients for the TPACK factors ranged from .81 to .92 (see
Table 2). These scores indicate a high level of reliability associated with the items in each
construct.
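Cronbach's alpha can be computed directly from item responses; a self-contained sketch using sample variances (the toy response matrix is invented for illustration, not drawn from the study's data):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: list of per-item response lists (same respondents, same order).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]       # each respondent's total
    item_var = sum(variance(col) for col in items)     # sum of item variances
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three hypothetical 9-point items answered by five respondents; because the
# items rise and fall together across respondents, alpha is high.
items = [
    [9, 7, 5, 8, 3],
    [8, 7, 4, 9, 2],
    [9, 6, 5, 8, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.98
```

Values above the .70 threshold cited from Fraenkel and Wallen (2008) indicate acceptable internal consistency, as was the case for every TPACK-EFL construct.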

4. Discussion
In this research paper, the TPACK-EFL survey was developed to provide preservice foreign language teachers with a way to assess their TPACK. The results of
our EFA supported a seven-factor structure. In the past, several researchers have strug-
gled to develop a TPACK survey that exhibits an underlying seven-factor structure. It is
possible that the seven-factor structure in this study was due to the use of the instrument
development model proposed by Creswell and Plano Clark (2007). The Creswell and
Plano Clark model provides a rigorous and robust process for instrument development
through the use of multiple methods. It is possible that drawing from in-depth qualitative
data to develop TPACK items associated with EFL led to a survey that was designed well
conceptually. Others have similarly reported positive results from developing survey
items from qualitative data (Bilici Canbazoglu, Yamak, Kavak, & Guzey, 2013; Karaca,
Can, & Yildirim, 2013; Schmidt et al., 2009).
The level of specificity of the items in the TPACK-EFL survey may also have contrib-
uted to the underlying seven-factor structure in the survey. The EFL subject-matter area
has specific practices and skills that are valued and recognized among its practitioners. In
particular, the focus of language teaching is often on the process of using content to learn
rather than on learning the content alone (Borg, 2006). The unique nature of foreign lan-
guage instruction may have played a role in how EFL teachers perceived the TPACK con-
structs. By applying rich qualitative methods (e.g. expert interviews, analysis of existing
standards and surveys) when developing the instrument, we were able to include multiple
items within each TPACK construct from an EFL point of view rather than writing items
broadly or writing one item for each subject-matter area within each construct (e.g. Kaya
& Dag, 2013; Schmidt et al., 2009). This likely brought more stability and consistency to
the survey and its underlying structure. Others have suggested that improving the specificity of items within TPACK surveys could help participants interpret the items properly (Angeli & Valanides, 2009; Koh et al., 2010).
Despite the level of specificity included in the TPACK-EFL survey and survey items,
the TCK items in the first round of the validation phase either failed to load or were not
perceived as separate from the other TPACK constructs. The final TCK items were
revised in a way that focused on the use of technologies to develop preservice teachers'
own language skills rather than focusing on how to represent EFL more broadly. This
revision was based on the idea that, when an EFL teacher can use specific technologies
for his/her own learning, he/she can then determine strategies for using those technologies
to support the learning of others (Lin, 2010). This different point of view may have made
it easier for preservice teachers to perceive the TCK items as being independent from ped-
agogy and educational context. It is important to note, however, that only two of the final
TCK items had clear loadings, with the third item exhibiting a weaker loading. It may be
that foreign language preservice teachers still struggled to perceive TCK as a separate
and distinct factor despite our efforts to improve construct clarity. This issue is consistent
with reports from other researchers (Archambault & Barnett, 2010; Koh et al., 2010; Zel-
kowski, Gleason, Cox, & Bismarck, 2013) who found it difficult to establish TCK as a
distinct construct through factor analysis.
In addition to the issues we faced with TCK items, several items that were conceptual-
ized initially as TPACK items loaded distinctly on the TPK factor. The most likely reason
for this result is that terms like "decide to use technology" (Item 33) and "design learning materials" (Item 34) better represent a use of technology that supports instructional practices alone (TPK) rather than the integration of instructional practices and content
(TPACK). It is important to note, however, that these items were generated from multiple
data sources with expertise in the use of technology to teach EFL, including interviews with experts and a variety of national standards. The content validity of the items was further established through expert reviews and a cognitive interview with an EFL preservice
teacher. Despite these efforts, we encountered several difficulties with generating clear
examples of TCK, TPK, and TPACK in this study. This suggests that even for experts,
the boundaries among some of the TPACK constructs may be difficult to distinguish in a
practical sense. Additional attention from the scholarly community may be needed to
clarify these constructs in practice and develop effective survey items for each TPACK
construct (Angeli & Valanides, 2009; Brantley-Dias & Ertmer, 2013).
760 D. Baser et al.

5. Implications
The results of this study suggest that using an instrument development model like Creswell and Plano Clark's (2007) might be a good way to develop future subject-matter specific TPACK surveys. Despite some difficulties with establishing TCK and some TPACK
items, a seven-factor structure was found. This suggests that conducting a qualitative
examination of TPACK in a subject-specific manner may, in turn, improve the clarity and
validity of survey items that emerge from that examination. The TPACK-EFL survey presented in this paper is among the first developed and validated specifically for the teaching of EFL. Translating the survey into other languages would give international audiences an opportunity to rapidly assess EFL preservice teachers' TPACK in a valid and reliable manner.

6. Limitations
One limitation of this study is the use of EFA to validate the survey. While the results are
promising, confirmatory factor analysis would provide additional evidence of the validity
of the survey. Additionally, the focus of the final TCK items should be considered when
interpreting the results of this and future uses of this survey. As written, the items specifically assess one's own ability to use technology to learn content. Since this reflects a specific aspect of TCK, it may be beneficial to interpret the TCK items in conjunction with
the related construct of TPACK. This would provide a broader understanding of a preservice teacher's knowledge of both the technologies and pedagogical strategies that support
learning specific content.
That being said, the study has several strengths that suggest the survey is both valid
and reliable. Rather than conducting EFA on a construct-by-construct basis (e.g. Schmidt
et al., 2009), this study explored the factor structure among all of the developed items.
The fact that the items loaded distinctly onto the factors in a manner that was consistent
with the underlying theory supports the validity of the survey. Additionally, several
TPACK surveys were developed using orthogonal rotation (see Koh et al., 2010; Lee &
Tsai, 2010), which assumes that there is no relationship among the factors. The use of
oblique rotation in this study improves our confidence in the underlying factor structure
because it accounts for the relationships among the factors (Costello & Osborne, 2005).
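The case for oblique rotation can be seen in a small simulation. The sketch below is purely illustrative and not part of the study: it assumes two hypothetical latent constructs (say, TPK and TPACK) that correlate at 0.5, generates three noisy survey items for each, and shows that the resulting scale scores are themselves substantially correlated, which is precisely the relationship an orthogonal rotation assumes away.

```python
import numpy as np

# Hypothetical illustration (not data from the study): two latent
# factors correlated at 0.5, as related TPACK constructs are expected
# to be under the underlying theory.
rng = np.random.default_rng(42)
n = 5000
factor_corr = 0.5
cov = np.array([[1.0, factor_corr],
                [factor_corr, 1.0]])
factors = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Each latent factor drives three survey items plus measurement noise.
items_a = 0.8 * factors[:, [0]] + 0.4 * rng.standard_normal((n, 3))
items_b = 0.8 * factors[:, [1]] + 0.4 * rng.standard_normal((n, 3))

# Correlation between the two scale scores is clearly nonzero, so an
# orthogonal rotation (which constrains factor correlations to zero)
# would misrepresent the structure; an oblique rotation such as
# oblimin can estimate the correlation instead.
r = np.corrcoef(items_a.sum(axis=1), items_b.sum(axis=1))[0, 1]
print(r)
```

Under these assumed loadings the scale-score correlation comes out around 0.45, well above zero, illustrating why an oblique rotation is the safer default when factors are theoretically related.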

7. Conclusion
Preparing teachers to integrate technology into foreign language education is an important goal. This paper offers foreign language teacher educators a TPACK survey that can be used both to improve the way we teach preservice teachers to integrate technology and to assess the quality of our technology integration coursework in EFL.

Disclosure statement
No potential conflict of interest was reported by the authors.

Notes on contributors
Derya Baser is a visiting scholar at UGA, a graduate student and a research assistant at the Middle
East Technical University, and a research assistant at the Abant Izzet Baysal University.
Computer Assisted Language Learning 761

Theodore J. Kopcha is an assistant professor of learning, design, and technology at the University
of Georgia.

M. Yasar Ozden is a professor of computer education and instructional technology at the Eastern
Mediterranean University.

ORCID
Derya Baser http://orcid.org/0000-0002-9562-3707

References
Abbitt, J., Perry, B., & Edwards, T. (2011). Development and validation of a survey to measure TPACK for preservice science educators. Society for Information Technology & Teacher Education International Conference, 2011(1), 4238–4241.
Angeli, C., & Valanides, N. (2009). Epistemological and methodological issues for the conceptualization, development, and assessment of ICT-TPCK: Advances in technological pedagogical content knowledge (TPCK). Computers & Education, 52(1), 154–168.
Archambault, L.M., & Barnett, J.H. (2010). Revisiting technological pedagogical content knowledge: Exploring the TPACK framework. Computers & Education, 55, 1656–1662.
Bernhardt, E.B. (2010). Teaching other languages. Educational Practices Series, 20, 1–29.
Bilici Canbazoglu, S., Yamak, H., Kavak, N., & Guzey, S.S. (2013). Technological pedagogical content knowledge self-efficacy scale (TPACK-SeS) for pre-service science teachers: Construction, validation, and reliability. Eurasian Journal of Educational Research, 52, 37–60.
Borg, S. (2006). The distinctive characteristics of foreign language teachers. Language Teaching Research, 10(1), 3–31.
Brantley-Dias, L., & Ertmer, P. (2013). Goldilocks and TPACK: Is the construct "just right"? Journal of Research on Technology in Education, 46(2), 103–128.
Chai, C.S., Koh, J.H.L., & Tsai, C.-C. (2010). Facilitating preservice teachers' development of technological, pedagogical, and content knowledge (TPACK). Educational Technology & Society, 13(4), 63–73.
Costello, A.B., & Osborne, J.W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10, 1–9.
Cox, S. (2008). A conceptual analysis of technological pedagogical content knowledge (Unpublished doctoral dissertation). Brigham Young University, Utah.
Cox, S., & Graham, C.R. (2009). Diagramming TPACK in practice: Using an elaborated model of the TPACK framework to analyze and depict teacher knowledge. TechTrends, 53(5), 60–69.
Creswell, J.W., & Plano Clark, V.L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: SAGE Publications.
Crystal, D. (2003). English as a global language (2nd ed.). Cambridge, UK: Cambridge University Press.
Desimone, L.M. (2009). Improving impact studies of teachers' professional development: Toward better conceptualizations and measures. Educational Researcher, 38(3), 181–199.
Fraenkel, J.R., & Wallen, N.E. (2008). How to design and evaluate research in education (7th ed.). Boston, MA: McGraw-Hill.
Golonka, E.M., Bowles, A.R., Frank, V.M., Richardson, D.L., & Freynik, S. (2014). Technologies for foreign language learning: A review of technology types and their effectiveness. Computer Assisted Language Learning, 27(1), 70–105.
Graham, C.R. (2011). Theoretical considerations for understanding technological pedagogical content knowledge (TPACK). Computers & Education, 57(3), 1953–1960.
Graham, C.R., Burgoyne, N., Cantrell, P., Smith, L., St. Clair, L., & Harris, R. (2009). TPACK development in science teaching: Measuring the TPACK confidence of inservice science teachers. TechTrends, 53(5), 70–79.
Hair, J.F., Anderson, R.E., Tatham, R.L., & Black, W.C. (1995). Multivariate data analysis with readings (4th ed.). New Jersey: Prentice-Hall.
Hair, J.F., Black, W.C., Babin, B.J., & Anderson, R.E. (2010). Multivariate data analysis: A global perspective (7th ed.). Upper Saddle River, NJ: Pearson.
Handal, B., Campbell, C., Cavanagh, M., Petocz, P., & Kelly, N. (2012). Integrating technology, pedagogy and content in mathematics education. Journal of Computers in Mathematics and Science Teaching, 31(4), 387–413.
Jamieson-Proctor, R., Finger, G., & Albion, P. (2010). Auditing the TK and TPACK confidence of preservice teachers: Are they ready for the profession? Australian Educational Computing, 25(1), 8–17.
Jonassen, D., Howland, J., Marra, R., & Crismond, D. (2008). Meaningful learning with technology. Upper Saddle River, NJ: Pearson Education.
Karaca, F., Can, G., & Yildirim, S. (2013). A path model for technology integration into elementary school settings in Turkey. Computers & Education, 68, 353–365.
Kaya, S., & Dag, F. (2013). Turkish adaptation of technological pedagogical content knowledge survey for elementary teachers. Educational Sciences: Theory & Practice, 13(1), 302–306.
Koehler, M.J., & Mishra, P. (2008). Introducing TPCK. In AACTE Committee on Innovation & Technology (Eds.), Handbook of technological pedagogical content knowledge (TPCK) for educators (pp. 3–29). New York: Routledge.
Koehler, M.J., Shin, T.S., & Mishra, P. (2012). How do we measure TPACK? Let me count the ways. In R.N. Ronau, C.R. Rakes, & M.L. Niess (Eds.), Educational technology, teacher knowledge, and classroom impact: A research handbook on frameworks and approaches. Hershey, PA: IGI Global.
Koh, J.H.L., Chai, C.S., & Tsai, C.C. (2010). Examining the technological pedagogical content knowledge of Singapore pre-service teachers with a large-scale survey. Journal of Computer Assisted Learning, 26(6), 563–573.
Landry, G.A. (2010). Creating and validating an instrument to measure middle school mathematics teachers' technological pedagogical content knowledge (TPACK) (Unpublished doctoral dissertation). University of Tennessee, Knoxville, TN.
Lee, M.-H., & Tsai, C.-C. (2010). Exploring teachers' perceived self-efficacy and technological pedagogical content knowledge with respect to educational use of the world wide web. Instructional Science: An International Journal of the Learning Sciences, 38(1), 1–21.
Lin, F.-A. (2010). Those who entered through the back door: Characterizing adult ESL teachers and their knowledge (Unpublished doctoral dissertation). The University of Texas, Austin, TX.
Meyers, L.S., Gamst, G., & Guarino, A.J. (2006). Applied multivariate research: Design and interpretation. Newbury Park, CA: Sage.
Mishra, P., & Koehler, M.J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
Patton, M.Q. (1990). Qualitative evaluation and research methods. Newbury Park, CA: Sage.
Sahin, I. (2011). Development of survey of technological pedagogical and content knowledge (TPACK). Turkish Online Journal of Educational Technology (TOJET), 10(1), 97–105.
Schmidt, D.A., Baran, E., Thompson, A.D., Mishra, P., Koehler, M.J., & Shin, T.S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149.
Stevens, J. (2002). Applied multivariate statistics for the social sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Tabachnick, B.G., & Fidell, L.S. (2001). Using multivariate statistics. Boston, MA: Allyn and Bacon.
Tschannen-Moran, M., & Hoy, A.W. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17(7), 783–805.
Yurdakul, I.K., Odabasi, H.F., Kilicer, K., Coklar, A.N., Birinci, G., & Kurt, A.A. (2012). The development, validity and reliability of TPACK-deep: A technological pedagogical content knowledge scale. Computers & Education, 58(3), 964–977.
Zelkowski, J., Gleason, J., Cox, D.C., & Bismarck, S. (2013). Developing and validating a reliable TPACK instrument for secondary mathematics preservice teachers. Journal of Research on Technology in Education, 46(2), 173–206.

Appendix 1. TPACK-EFL survey items

Technological knowledge (TK)
(1) I can use basic technological terms (e.g. operating system, wireless connection, virtual memory, etc.) appropriately.
(2) I can adjust computer settings such as installing software and establishing an Internet connection.
(3) I can use computer peripherals such as a printer, a headphone, and a scanner.
(4) I can troubleshoot common computer problems (e.g. printer problems, Internet connection problems, etc.) independently.
(5) I can use digital classroom equipment such as projectors and smart boards.
(6) I can use Office programs (i.e. Word, PowerPoint, etc.) with a high level of proficiency.
(7) I can create multimedia (e.g. video, web pages, etc.) using text, pictures, sound, video, and animation.
(8) I can use collaboration tools (wiki, Edmodo, 3D virtual environments, etc.) in accordance with my objectives.
(9) I can learn software that helps me complete a variety of tasks more efficiently.

Content knowledge (CK)
(10) I can express my ideas and feelings by speaking in English.
(11) I can express my ideas and feelings by writing in English.
(12) I can read texts written in English with the correct pronunciation.
(13) I can understand texts written in English.
(14) I can understand the speech of a native English speaker easily.

Pedagogical knowledge (PK)
(15) I can use teaching methods and techniques that are appropriate for a learning environment.
(16) I can design a learning experience that is appropriate for the level of students.
(17) I can support students' learning in accordance with their physical, mental, emotional, social, and cultural differences.
(18) I can collaborate with school stakeholders (students, parents, teachers, etc.) to support students' learning.
(19) I can reflect the experiences that I gain from professional development programs to my teaching process.
(20) I can support students' out-of-class work to facilitate their self-regulated learning.

Pedagogical content knowledge (PCK)
(21) I can manage a classroom learning environment.
(22) I can evaluate students' learning processes.
(23) I can use appropriate teaching methods and techniques to support students in developing their language skills.
(24) I can prepare curricular activities that develop students' language skills.
(25) I can adapt a lesson plan in accordance with students' language skill levels.

Technological content knowledge (TCK)
(26) I can take advantage of multimedia (e.g. video, slideshow, etc.) to express my ideas about various topics in English.
(27) I can benefit from using technology (e.g. web conferencing and discussion forums) to contribute at a distance to multilingual communities.
(28) I can use collaboration tools to work collaboratively with foreign persons (e.g. Second Life, wiki, etc.).

Technological pedagogical knowledge (TPK)
(29) I can meet students' individualized needs by using information technologies.
(30) I can lead students to use information technologies legally, ethically, safely, and with respect to copyrights.
(31) I can support students as they use technology such as virtual discussion platforms to develop their higher order thinking abilities.
(32) I can manage the classroom learning environment while using technology in the class.
(33) I can decide when technology would benefit my teaching of specific English curricular standards.
(34) I can design learning materials by using technology that supports students' language learning.
(35) I can use multimedia such as videos and websites to support students' language learning.

Technological pedagogical content knowledge (TPACK)
(36) I can use collaboration tools (e.g. wiki, 3D virtual environments, etc.) to support students' language learning.
(37) I can support students as they use technology to support their development of language skills in an independent manner.
(38) I can use Web 2.0 tools (animation tools, digital story tools, etc.) to develop students' language skills.
(39) I can support my professional development by using technological tools and resources to continuously improve the language teaching process.