
Computers & Education 55 (2010) 733–741


Learner outcomes and satisfaction: A comparison of live video-streamed instruction, satellite broadcast instruction, and face-to-face instruction
Mhammed Abdous a,*, Miki Yoshimura b

a Center for Learning Technologies, Old Dominion University, Norfolk, VA 23529, USA
b College of Education, Old Dominion University, Norfolk, VA 23529, USA

Article info

Article history: Received 13 December 2009; received in revised form 10 March 2010; accepted 11 March 2010

Keywords: Distance education; Live video-streaming; Student satisfaction; Learning effectiveness

Abstract

This study examined the final grade and satisfaction level differences among students taking specific courses using three different methods: face-to-face in class, via satellite broadcasting at remote sites, and via live video-streaming at home or at work. In each case, the same course was taught by the same instructor in all three delivery methods, and an attempt was made to survey students taking the course via the three different delivery methods. MANOVA results indicated no grade or satisfaction level differences among the three populations. Self-reported computer literacy skills revealed a slight fit between the chosen delivery mode and the reported computer literacy skills. These results provide additional evidence to support both the "no significant difference" phenomenon and the use of distance education as a viable, convenient, and flexible alternative delivery mode capable of extending learning opportunities to non-traditional students. Published by Elsevier Ltd.

1. Introduction

In light of the rapid expansion of distance education (DE) offerings, a recent survey by the National Center for Education Statistics indicates that two-thirds (66 percent) of US postsecondary institutions offered some type of DE courses during the academic year 2006–07 (Parsad & Lewis, 2008). Due to the pressure exerted by technology innovations, student demand for convenient and flexible access, and both student and institutional financial constraints, this DE growth momentum is likely to increase during the coming years. Indeed, current budgetary constraints are already forcing universities to reexamine traditional delivery system modalities and to explore alternative and cost-effective ways of delivering education. In this context, DE has long been viewed as the default alternative mode of teaching and learning (Tallent-Runnels et al., 2006), sometimes even driving organizational and pedagogical changes within institutions of higher education (de Freitas & Oliver, 2005; Johnson & Aragon, 2003). In spite of the definitional ambiguity surrounding the meaning of DE, it is justifiable (given the current trend toward technology convergence) to presume that DE subsumes several related concepts, among them distance learning, distributed learning, online learning, e-learning, virtual education, web-based learning, computer-based training, and blended or hybrid learning (Abdous, 2009). Attempting a comprehensive definition capturing the time, space, and technology variables associated with DE, we concur with the description of DE provided by Carnes, Awang, and Marlow (2003) as: "education or training courses delivered to remote (off-campus) sites via audio, video (live or prerecorded), or computer technologies, including both synchronous (i.e., simultaneous) and asynchronous (i.e., not simultaneous) instruction" (p. 162).
However, we should point out that DE delivery modes (DM) are being challenged and reconfigured by the convergence of recent hardware, software, and telecommunications innovations. In addition to their ability to expand the realm of DM capabilities, these innovations are reshaping learning, interaction, and collaboration opportunities, especially in light of recent advances in IP Video (Internet Protocol Video) and IPTV (Internet Protocol Television). In fact, recent developments in the sophistication of video compression algorithms, wireless bandwidth, and computational horsepower are strengthening traditional video conferencing applications, allowing for real-time collaborative application sharing. Telepresence video conferencing technologies (with full-scale ultra-high-definition video and CD-quality audio) are clearly moving traditional video

* Corresponding author. Tel.: +1 757 683 6378. E-mail address: mabdous@odu.edu (Mhammed Abdous).
0360-1315/$ – see front matter. Published by Elsevier Ltd. doi:10.1016/j.compedu.2010.03.006



conferencing from semi-static spaces to collaborative spaces capable of encouraging meaningful cognitive engagement (Bernard et al., 2009; Burbules & Lambeir, 2003). By taking advantage of the current expansion in high-bandwidth availability, live video-streaming enables instructors to deliver high-quality audio and video presentations while enabling students to view, interact, and connect with their instructors and classmates. This addresses one of the main disadvantages associated with distance education: the lack of interaction and human contact between students and instructors (Bernard et al., 2009; Muilenburg & Berge, 2001; Mullins-Dove, 2006). Amid the long tradition of comparative studies aimed at proving the equivalence (Bernard et al., 2009; Mullins-Dove, 2006) of mediated instruction to traditional methods of education (Bernard et al., 2004; Lou, Bernard, & Abrami, 2006; Maushak, Chen, Martin, Shaw, & Unfred, 2001), several studies have tackled the question of distance education's effectiveness in comparison to traditional classroom instruction. These studies are framed into the broader media-versus-method debate, or perhaps into the Russell (Russell, 1999) versus Clark (Clark, 1983) debate. In this regard, the "no significant difference" studies are often criticized for their design and methodological flaws (Joy & Garcia, 2000; Surry & Ensminger, 2001), including their inability to disentangle the effects of the DM, instructor and learner characteristics, instructional method, and media attributes (Howsen & Lile, 2008). Without underestimating the design and methodological challenges of comparative studies, such as weak methodology, compared population, treatment, analysis, validity, reliability, and the generalizability of findings (Bullock & Ory, 2000), we should be reminded, with Ringstaff and Kelley (2002), that classrooms are not experimental laboratories.
Hence, creating randomly assigned treatment and control groups is rather difficult in an open environment such as online learning (Collins & Pascarella, 2003; Mandinach, 2005). In spite of these criticisms, we believe that the proliferation of comparative studies describing DMs as conveyor belts for content is outmoded, especially considering the recent profusion of both new technologies and social networks into the teaching and learning landscape. Furthermore, hinting that comparative studies are used to legitimize distance learning investments (Lockee, Burton, & Cross, 1999) is no longer accurate, particularly as newer DM tools are gaining a firm foothold in the higher education landscape, and as they blur the distinction between traditional on-campus and off-campus course offerings (Burbules & Callister, 2000; Zhao, Lei, Yan, Lai, & Tan, 2005). Alternatively, we believe, with Kozma (1994), that the debate should be refocused on comprehending the ways in which these various tools mediate the cognitive, affective, or social processes of learning. More precisely, the debate should move toward thinking creatively about ways to use these DMs to design and blend teaching and learning strategies capable of harnessing effective learning experiences and capable of reaching and satisfying various student styles, needs, and interests (Bernard et al., 2004, 2009). As universities expand and diversify access options, understanding the uses for and the effectiveness of these various DMs in terms of learning outcomes and student satisfaction becomes more and more critical. This is particularly true as students become more immersed in the rich, technology-enhanced learning environment used to support all of the aspects of their learning: directed study, resource discovery, preparation and completion of assignments, communication and collaboration, and presentation and reflection (Conole, de Laat, Dillon, & Darby, 2008).
Additionally, understanding the effectiveness of various DMs has significant policy implications for practitioners developing, adopting, and deploying technologies and programs (Locatis, 2007). With these considerations in mind, the purpose of this study is to examine the outcomes of the in-class face-to-face (F2F) DM in comparison with two other distance education delivery methods, satellite broadcasting (SB) and live video-streaming (LVS), among a population taking the same courses in the three different delivery modes. As a traditional DE delivery mode, satellite broadcasting (or interactive television) enables live audio/video broadcasting to remote sites where students are able to view and interact, in real time, with their instructors and classmates. In contrast, live video-streaming enables audio/video broadcasting to personal computers, which allows students to view and interact, in real time, with their instructors and classmates and/or to view class archives if they have missed a class (or for an exam review). By allowing students to attend class remotely, LVS expands classroom walls in real time while providing students with opportunities to interact with the content through easy and convenient on-demand access to class archives. In order to identify differences in the outcomes of these three modes of course delivery (as revealed by final grades and student satisfaction), the following research questions guided this study:

1. Is there a statistically significant difference in final grades among students who took the same course via each of three different delivery methods: face-to-face in class, via satellite broadcasting at a remote site, or via live video-streaming at home or at work?

2. Is there a statistically significant difference in the satisfaction level among students who took the same course via each of three different delivery methods: face-to-face in class, via satellite broadcasting at a remote site, or via live video-streaming at home or at work?
As we ask these two questions, we are mindful that we face some of the same methodological pitfalls discussed earlier; however, we believe with others that the use of final grades (Kochman & Maddux, 2001; Larson & Chung-Hsien, 2009; Shachar & Neumann, 2003; Summers, Waigandt, & Whittaker, 2005) and student satisfaction (Allen, Bourhis, Burrell, & Mabry, 2002; Johnson, 2002; Larson & Chung-Hsien, 2009; Phillips & Peters, 1999; Skylar, Higgins, Boone, & Jones, 2005) as criteria is still relevant and hence has been widely used in comparative studies. Yet we must concede with Donavant (2009) that, when examining data regarding learning effectiveness largely based on student satisfaction, student response will often be colored by the convenience associated with the DM itself.

2. Review of the literature

As stated previously, a number of studies have compared the effectiveness of distance education courses in contrast to traditional classroom courses (Jennings & Bayless, 2003; Kochman & Maddux, 2001; Leasure, Davis, & Thievon, 2000; O'Neal, Jones, Miller, Campbell, & Pierce, 2007). Looking at various meta-analyses which reviewed hundreds of comparative studies in the distance education literature, the prevailing conclusion seems to be that there is no significant difference in DMs when comparing the outcomes of distance education to those of traditional classroom instruction (Bernard et al., 2004, 2009; Lou et al., 2006; Means, Toyama, Murphy, Bakia, & Jones, 2009; O'Neal et al., 2007; Schmid et al., 2009; Shachar & Neumann, 2003; Sitzmann, Kraiger, Stewart, & Wisher, 2006; Zhao et al., 2005). A consensus is articulated around three key points: (1) DE students seem to learn equally well regardless of their physical location (Lou et al., 2006), (2)



classroom instruction and distance instruction are comparable (Bernard et al., 2004), and (3) both methods of delivery are adequate (Tallent-Runnels et al., 2006). Nonetheless, in spite of this consensus, most meta-analysis studies point to the great variability surrounding the perception and the measurement of the outcome, achievement, and effectiveness of DE. Kekkonen-Moneta and Moneta (2002) compared the effectiveness of classroom instruction to instruction via e-learning modules by measuring both factual learning and applied conceptual learning. They found no difference between the two modes of delivery regarding factual learning, but they discovered that online students outperformed traditional lecture students in applied conceptual learning. Similarly, Buckley (2003) compared the effectiveness of traditional classroom, web-enhanced, and web-based courses by evaluating midterm and final examination scores and course grades, as well as students' self-reports regarding their courses, and found no difference among the delivery methods in student learning outcomes. Iverson, Colky, and Cyboran (2005) compared the outcomes of online and traditional course deliveries. Online learners were found to have significantly more positive levels of enjoyment and utility and a significantly stronger intent to transfer their learning. Anderson, Banks, and Leary (2002) studied differences between traditional on-campus classes and interactive, televised, distance learning classes. Results indicated that remote-site students expressed significantly lower satisfaction on every examined item on the rating scale. Similarly, Kearns, Shoaf, and Summey (2004) used student course survey data to compare the satisfaction levels of students in a web-based program with the satisfaction levels of students in a traditional classroom setting and determined that students in the traditional classroom were more satisfied than those in the web-based classes.
In contrast, Roach and Lemasters (2006) analyzed student satisfaction in seven online courses and two campus-based courses. Their comparison suggested that students in a graduate educational administration program could be equally satisfied with either course delivery method. Recently, Larson and Chung-Hsien (2009) conducted a comparison of three DMs (face-to-face, hybrid, and online) using student exams and final grades. Their analysis supported the "no significant difference" finding, despite DM, regarding student satisfaction, learning effectiveness, and faculty satisfaction. Likewise, Skylar et al. (2005) concluded that online and CD-ROM DMs were equally as effective as the traditional face-to-face delivery method when presenting instructional content. Nevertheless, it is important to point out that simple comparisons often fail to accurately capture and compare the outcomes of different DMs. In this regard, Lim and Morris (2009) suggested that the quality of the online instructor and the students' learning motivation and learning involvement were the significant variables influencing the course outcomes of any online learning program. Klein, Noe, and Wang (2006) examined the ways in which the learning goal orientation, the delivery method (classroom vs. blended learning), and the perception of barriers and enablers related both to students' motivation to learn and to their learning outcomes. Their study concluded that motivation to learn was significantly related to course outcome, and explained the effects of learner characteristics, instructional characteristics, and perceived environmental barriers/enablers on learning outcomes. Sankaran and Bui (2001) studied the ways in which learning strategies and motivation affect learning performance, both in web-based instruction and in a traditional lecture setting.
Their study confirmed that learning strategy and motivation are correlated to performance and found that motivation was associated with the use of deep learning strategy just as low motivation was associated with undirected strategy. In a distance education setting, motivation is a driving factor; as Sankaran and Bui (2001) noted, students "undergo many sacrifices to get an education and motivation is a driving factor that influences their performance" (p. 196). Despite the overwhelming evidence which supports the "no significant difference" view, it is important to recognize the existing counter-evidence literature. For example, Urtel (2008) reported that DWF (a grade of D, Withdrawal, or Fail) grades were significantly higher among freshmen taking distance education courses in comparison to the grades of face-to-face students. Similarly, Howsen and Lile (2008) reported lower average test scores for students taking a Principles of Macroeconomics course online, in comparison to their classmates taking the course in a traditional face-to-face format. For their part, Summers et al. (2005) concluded that students enrolled in online statistics courses were less satisfied than students taking those courses in a traditional classroom setting. In summary, despite some inadequacies in research design, measurement, and analysis, the literature provides overwhelming evidence to support the effectiveness of DE delivery modes in comparison with traditional face-to-face delivery. Tallent-Runnels et al. (2006) found a clear consensus that students taking courses in a distance learning format have the same learning experience as their counterparts taking courses in a traditional face-to-face (F2F) setting. In spite of the large number of comparative studies available, none of the studies we reviewed compared the effectiveness of face-to-face delivery to delivery via satellite broadcasting at remote sites and to delivery via live video-streaming at home or at work.
Our study bridged this gap by conducting a three-way comparison of a large number of courses delivered simultaneously in three different formats, by examining the final grades and overall satisfaction among students using these three different DMs.

3. Method

3.1. Background

This study was conducted in a public research university in the mid-Atlantic region which serves 17,000 undergraduate and 6000 graduate students and offers more than 70 bachelor's degree programs, 60 master's degree programs, and 35 doctoral degree programs in a variety of fields. Located in a major maritime, military, and commerce hub, this institution offers strong emphases in science, engineering, and technology, especially in the maritime and aerospace sciences. The university is also known as a national leader in technology-mediated distance learning, having served students in over 50 sites in Virginia, Arizona, and Washington for more than twenty-five years. This extended distance learning capability provides the university with multiple DM options by which a course can be offered simultaneously in three delivery formats: face-to-face, via satellite broadcasting, and via live video-streaming:

a) At the site for the face-to-face section of the course, students and faculty meet on-campus in a classroom equipped with a two-way video system. From this classroom, the instructor is able to interact with all of his or her students in all three DMs: face-to-face, via two-way audio and video, or via a chat window on the students' personal computers.



Table 1
List of courses examined and their survey response rate.

Course       Course name                                                                        Response rate
CEE 452      Air quality                                                                        12%
CEE 552      Air quality                                                                        36%
CEE 582      Intro to coastal engineering                                                       22%
CET 305      Principles of surveying                                                            13%
CET 340      Soils and foundations                                                              14%
CET 360      Plans and specifications                                                           5%
CET 410      Reinforced concrete design                                                         12%
CET 445      Construction planning/scheduling                                                   4%
COMM 323     Leadership & events management                                                     20%
COMM 412W    Interpersonal communication theory/research                                        0%
CS 312       Internet concepts                                                                  23%
CS 334       Computer architecture fundamentals                                                 5%
CS 451       Software engineering survey                                                        26%
DSCI 306     Statistical data analysis – management science                                     19%
ECI 432      Developing instructional strategies Pk6 – language arts                            10%
ECI 433      Developing instructional strategies Pk6 – math                                     20%
ECI 435      Developing instructional strategies Pk6 – social studies                           16%
ECI 436      Classroom management/practice Prek-6                                               10%
ECI 640      Management of learning & instruction                                               14%
ECI 680      Reading to learn across curriculum                                                 18%
ECON 301     Managerial economics                                                               14%
EET 300      Advanced circuit analysis                                                          33%
EET 485      Electrical power systems                                                           16%
ENGL 336     The short story                                                                    25%
ENGL 350     Aspects of English language                                                        15%
ENMA 713     Integrating ethics & engineering management                                        19%
ESSE 400     Foundations special education: legal aspects                                       14%
ESSE 621     Alternative strategies – management challenging behavior                           14%
ESSE 679     Advanced classroom management/practicum Prek-6                                     12%
ESSE 714     Alternative strategies – secondary students                                        15%
GEOG 412     Cities of the world                                                                13%
HMSV 444     Psycho-educational groups                                                          31%
MATH 335     Number systems & discrete math                                                     11%
MGMT 350     Employee relations                                                                 15%
MGMT 360     Labor management relations                                                         15%
MGMT 485W    Business strategy and policy                                                       9%
MKTG 311     Marketing principles & problems                                                    12%
MPHO 613     Environmental science for public health practitioners                              25%
MPHO 614     Epidemiology for public health practitioners                                       14%
NURS 402     Career pathway development                                                         24%
NURS 490W    Nursing leadership                                                                 0%
OTED 635     Research methods in occupational and technical studies                             31%
OTED 788     Instructional strategies and innovations in training and occupational education    18%
OTS 351      Communication technology                                                           18%
OTS 402      Instructional methods in occupational and technical studies                        16%
PSYC 303     Industrial/organizational psychology                                               10%

b) In the satellite broadcasting section of the same course, students participate from a remote site at which they watch a live video feed from the main campus. At each of the sites, student desks are equipped with microphones, enabling an audio connection by which students can ask questions and can interact with their instructor and their distant classmates.

c) In the live video-streaming section of the same course, students participate in the class, in real time, via personal computer, over which they view a live feed of the class lecture. To interact with their instructor, LVS students send text messages through the LVS course interface. Additionally, using the same interface, LVS students are able to chat with their LVS classmates.

To reduce the "teacher effect" (which is often mentioned as a weakness in the design of comparative studies), we identified courses taught by instructors using all three of these DMs simultaneously to reach all of their students, regardless of delivery mode. These courses spanned a variety of colleges and disciplines, from Economics and Nursing to Engineering and Psychology, and included both undergraduate and graduate courses. Table 1 shows the list of courses sampled in this study along with the response rate of the survey conducted for this study.

3.2. Sample

With prior approval from the university's Institutional Review Board, an online survey was sent, during Spring 2009, to 3258 students who were enrolled in either the face-to-face, satellite broadcasting, or live video-streaming sections of the courses listed in Table 1. 513 students responded to the survey. However, 17 responses were deemed invalid, as the respondents either specified other delivery formats or listed more than one delivery format. As a result, the remaining 496 survey responses (15.22%) were used for the analyses. Although this rate is relatively low, the total number of respondents (513) is reasonably representative of the overall population. The non-involvement of the faculty teaching the courses and survey fatigue among the students may explain the low participation rate.
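The usable response rate reported above follows directly from the figures in this section; a short calculation makes the arithmetic explicit:

```python
# Reproduce the usable response rate from the figures in Section 3.2.
surveyed = 3258                    # students invited to take the survey
responded = 513                    # total responses received
invalid = 17                       # responses excluded as invalid
usable = responded - invalid       # usable responses
rate = 100 * usable / surveyed     # usable response rate in percent
print(usable, round(rate, 2))      # 496 15.22
```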



Of the respondents, 21% (104/496) were enrolled in the face-to-face sections, 58% (290/496) were enrolled in the satellite broadcasting (SB) sections, and 21% (102/496) were enrolled in the live video-streaming sections. The characteristics of each group (age group, gender, enrollment status, employment status, and number of distance learning courses taken) are shown in Tables 2–6.
Table 2
Participants' age distribution (percentage using each DM).

Age              F2F    SB     LVS
20s or younger   72.1   50.0   40.2
30s              13.5   24.1   33.3
40s              11.5   16.9   21.6
50s              2.9    8.6    4.9
60s or older     0.0    0.3    0.0

Table 3
Participants' gender distribution (percentage using each DM).

Gender   F2F    SB     LVS
Male     38.5   19.7   45.1
Female   61.5   80.3   54.9

Table 4
Participants' enrollment status distribution (percentage using each DM).

Enrollment status   F2F    SB     LVS
Full-time           77.9   55.5   30.4
Part-time           22.1   44.5   69.6

Table 5
Participants' employment status distribution (percentage using each DM).

Employment status   F2F    SB     LVS
Full-time           34.6   48.3   70.6
Part-time           38.5   26.9   15.7
Unemployed          26.9   24.8   13.7

Table 6
Participants' number of DL courses previously taken (percentage using each DM).

Number of courses taken   F2F    SB     LVS
0                         41.3   15.9   16.7
1                         11.5   7.6    8.8
2                         16.3   11.0   9.8
3                         10.6   7.9    8.8
4 or more                 20.2   57.6   55.9

As shown in the tables above, the face-to-face sections consisted primarily of full-time students working part-time jobs; these students were more likely to be female (61.5%) and in their 20s or younger (72.1%). In contrast, the satellite broadcasting sections encompassed the largest proportion of older students among the three groups, and those sections were predominantly female (80%). The live video-streaming sections were chosen mostly by part-time students in their 20s and 30s, most of whom (70%) work full-time jobs. The live video-streaming sections had an approximately equal gender proportion (45% male and 55% female). This gender distribution is reflected in the total DL student population as well, with 68% of female students enrolled in satellite broadcast courses, 49.7% in video-streamed courses, and 71.9% in online courses.

3.3. Instrument

In this study, the outcomes were assessed in two domains: by students' final course grades and by their reported course satisfaction (based on survey results). The survey instrument was used to collect the satisfaction data as well as the demographic data of the participants. The survey results were then sent to the registrar's office, where the students' final grades and Grade Point Averages (GPAs) were added. In order to measure student satisfaction, a survey instrument was adopted from Shin and Chan (2004). Intended to measure students' satisfaction with their overall learning experience, the survey included several items (see Table 7) measured on a five-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree). As previously noted, the instrument also collected student demographic data such as gender, age group, current enrollment and employment status, prior distance learning experience, and computer skills.
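The paper does not state how the Likert responses were converted to a satisfaction score, but a minimal sketch, assuming the common convention of coding the five response options as 1–5 and summing over the eight items (which yields a scale of 8–40, consistent with the group means of roughly 31–33 reported later), would look like this:

```python
# Hypothetical scoring of the 8-item satisfaction scale (Table 7).
# Assumption: the usual 1-5 coding of a five-point Likert scale, summed.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def satisfaction_score(responses):
    """Sum the coded responses to the eight satisfaction items (range 8-40)."""
    if len(responses) != 8:
        raise ValueError("expected one response per survey item (8 items)")
    return sum(LIKERT[r] for r in responses)

one_student = ["agree"] * 5 + ["strongly agree"] * 3
print(satisfaction_score(one_student))  # 35
```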

Table 7
Survey items used to measure satisfaction.


1. Taking this course has been a valuable experience for me.
2. I have been able to learn a lot from this course.
3. I like the fact that I am taking this course.
4. The course has enhanced my thinking skills.
5. The course has helped my intellectual growth.
6. The course has helped me to look at things in different ways.
7. The course has provided me with knowledge to work more effectively.
8. The course has enabled me to enhance my learning ability.

Fig. 1 visually summarizes the relationships among the variables under study.

Fig. 1. Summary of variables under study.

3.4. Data analysis

Before the primary research questions were addressed, a preliminary factor analysis of the satisfaction survey items was performed. The dimensionality of the eight items was analyzed using a maximum likelihood factor analysis. Three criteria were used to determine the number of factors to rotate: the a priori hypothesis that the measure was unidimensional, the scree test, and the interpretability of the factor solution. The scree plot indicated that our initial hypothesis of unidimensionality was correct. The factor "satisfaction" accounted for 81.5% of the item variance. An internal consistency estimate of reliability was computed for the satisfaction scale using coefficient alpha. Coefficient alpha was .97, indicating the satisfactory reliability of the scale. As for the distribution of final course grades for each delivery method, participants who received a W (Withdrawal), WF (Unofficial withdrawal from course), I (Incomplete), or Z (Missing/unreported grade) were excluded from the analysis. There were 14 such participants (two in face-to-face sections, nine in satellite broadcasting sections, and three in live video-streaming sections). Another three participants (from the satellite broadcasting section) who did not complete the satisfaction portion of the survey were excluded from the analysis. The multivariate analyses therefore included 479 cases in total (see Table 8). Following the internal consistency and data cleaning steps, one-way analyses of variance (ANOVA) were conducted on the survey participants' Grade Point Averages and reported computer skills. This analysis was intended to verify the existence of statistically significant differences in the participants' characteristics. As indicated in Table 9, no statistically significant differences were observed in the participants' Grade Point Averages among the three groups, F(2, 493) = 1.25, p = .286.
Similarly, no statistically significant differences were observed in participants' reported levels of computer skills among the three groups, F(2, 493) = 2.90, p = .056. The ANOVA results indicated relative similarity in the characteristics of the survey respondents in terms of their GPAs and computer skills. However, the p-value of .056 for computer skills deserves further discussion, even though it demonstrated no statistically significant differences in computer skills among survey participants. The mean scores for computer skills for each delivery method were 3.95 (F2F), 4.02 (SB), and 4.21 (LVS), respectively. These scores indicate that the students using the delivery method that requires the most computer skills (LVS) rated their computer skills the highest. Likewise, students using the delivery method that requires the least amount of computer skill (F2F) rated their computer skills the lowest. The difference in the scores may be related to the delivery method, suggesting a relatively good fit between the students' computer literacy and their chosen method of course delivery.
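The reliability and ANOVA steps described above can be sketched in a few lines. The raw survey responses are not available, so the snippet below uses synthetic data of the same shape (eight Likert items coded 1–5, and three DM groups matching the study's group sizes); the coefficient-alpha formula and `scipy.stats.f_oneway` are standard, but the numbers produced are illustrative only:

```python
# Sketch of the reliability check and one-way ANOVA from Section 3.4,
# on synthetic data shaped like the study's.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))       # shared "satisfaction" level
noise = rng.integers(-1, 2, size=(100, 8))     # per-item wobble
items = np.clip(base + noise, 1, 5)            # 8 Likert items, coded 1-5
print(f"alpha = {cronbach_alpha(items):.2f}")  # high, since the items covary

# One-way ANOVA on a numeric outcome (e.g. GPA) across the three DM groups,
# with the study's group sizes (104, 290, 102), so df = (2, 493).
f2f = rng.normal(3.2, 0.6, 104)
sb = rng.normal(3.2, 0.6, 290)
lvs = rng.normal(3.2, 0.6, 102)
F, p = stats.f_oneway(f2f, sb, lvs)
print(f"F(2, {104 + 290 + 102 - 3}) = {F:.2f}, p = {p:.3f}")
```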

Table 8
Distribution of final course grades per delivery method.

Final grade   F2F N   F2F %   SB N   SB %   LVS N   LVS %   Total N   Total %
A             53      51      141    49     39      37      233       47
A-            7       7       16     5      9       9       32        6
B+            15      14      38     13     18      18      71        14
B             3       3       11     4      3       3       17        3
B-            3       3       20     7      7       7       30        6
C+            10      9       20     7      5       5       35        7
C             1       1       8      3      3       3       12        2
C-            7       7       13     4      8       8       28        6
D+            3       3       9      3      2       2       14        3
D             0       0       3      1      1       1       4         1
D-            0       0       1      0.3    2       2       3         1
F             0       0       1      0.3    2       2       3         1
I             0       0       3      1      1       1       4         1
W             2       2       5      2      1       1       8         2
WF            0       0       1      0.3    0       0       1         0
Z             0       0       0      0      1       1       1         0
Total         104     100     290    100    102     100     496       100

Table 9
Descriptive statistics and ANOVA results for GPA and computer skills.

                  N     Mean   SD     SS     df   MS     F      Sig.
GPA
  F2F             104   3.22   0.62
  SB              290   3.17   0.68
  LVS             102   3.29   0.63   1.08   2    0.54   1.25   .286
Computer skills
  F2F             104   3.95   0.81
  SB              290   4.02   0.81
  LVS             102   4.21   0.63   3.71   2    1.86   2.90   .056

Table 10
Descriptive statistics and MANOVA results for grades and satisfaction.

                 N     Mean    SD
Final grades
  F2F            102   3.35    0.84
  SB             278   3.30    0.91
  LVS            99    3.16    0.97
Satisfaction
  F2F            102   32.53   6.03
  SB             278   31.46   7.58
  LVS            99    32.51   7.12

Delivery format: Wilks' lambda = .99, F(4, 950) = 1.39, p > .23, partial eta squared (PES) = .006.

A one-way multivariate analysis of variance (MANOVA) was conducted to determine the effect of the three class delivery methods (F2F, SB, and LVS) on the two dependent variables (students' final course grades and level of satisfaction). As reported in Table 10, no significant difference was found among the three delivery methods on the dependent measures, Wilks' Lambda = .99, F(4, 950) = 1.39, p = .23, Partial Eta Squared = .006.

4. Discussion

The results of the data analysis in this study indicate that there were no statistically significant differences in the final course grades or in the satisfaction levels among learners enrolled in the face-to-face, satellite broadcasting, and live video-streaming sections of the examined courses. These findings echo much of the previous research that reports no significant difference in the outcomes of distance learning DMs in comparison with face-to-face or hybrid education (Larson & Chung-Hsien, 2009; Roach & Lemasters, 2006; Skylar et al., 2005; Tallent-Runnels et al., 2006). This study adds to the existing literature in that it compares traditional face-to-face instruction with two different distance education DMs (satellite broadcasting and live video-streaming) used simultaneously in the same courses.

As the portfolio of distance education offerings expands, it becomes necessary to ensure that all learners, regardless of delivery method, achieve equivalent outcomes. Our study shows that satellite broadcasting and video-streaming technologies can bridge the (sometimes thousands of) miles between instructor and students in such a way that the students achieve equivalent grades and satisfaction levels. Although the delivery methods were quite different, all three sections of the surveyed courses were taught by the same instructor at the same time using the same textbooks. Students in the various sections participated, in real time, in the same discussion sessions and received the same assignments and



exams, just as though they were all in one physical classroom. The only difference was the delivery method by which they participated in the class.

The fact that students in all of the sections of one course, participating via three different DMs, achieved equivalent final course grades and satisfaction levels may reflect a good fit between the course delivery methods and the expectations and needs of the students. The majority of the students in the satellite broadcasting and live video-streaming sections had previously taken four or more distance learning courses, making them relatively experienced users of distance education DMs. More than 70% of the students in the live video-streaming sections, attending class via PC, had full-time jobs, which may have put them on a tight schedule and made them more likely to appreciate the convenience offered by the video-streaming delivery method. Satellite broadcasting, for its part, provides distance education within a classroom setting, which may appeal to students from generations more accustomed to learning in a classroom. This appreciation of flexibility and convenience is reported in numerous studies (Luppicini, 2007; Mullins-Dove, 2006; Tallent-Runnels et al., 2006; Wuensch, Aziz, Ozan, Kishore, & Tabrizi, 2008) as a key advantage of DE, in addition to students' ability to access educational opportunities from remote locations (Donavant, 2009).

Although the final course grade comparison did not reveal statistically significant differences among the three groups as a whole, the minority of students who performed very poorly in the DL sections deserve special attention. Table 8 shows a clear distinction between the final grades of those in the face-to-face section and the final grades of those in the other two DL sections.
The lowest grade that face-to-face students received was a D, whereas students in the two DL sections received D-s and Fs, along with other grades such as WF (unofficial withdrawal from course), I (incomplete), and Z (missing/unreported grade). We believe that this may suggest a tendency of some students to withdraw from DL sections owing to a lack of proper advising and a lack of formal and informal interaction with teachers and classmates.

5. Limitations

Like most comparative studies, this study has several limitations. First, the study was quasi-experimental, since random selection and assignment of participants to groups was not possible. The sample was selected on the basis of survey responses, which creates a self-selection bias that may have affected the results; for example, delivering the survey exclusively by electronic means may have biased the sample toward tech-savvy students. In addition, the overall student response rate was rather low (less than 33% per course), and, as this was a one-institution study, the results' generalizability is limited. Second, as noted earlier, the authors are aware of the limitations inherent in using final grades and satisfaction levels to measure the outcomes of courses delivered by various DMs. Although final grades and satisfaction levels are the most relevant and widely used measures of educational assessment today, the authors suggest that future studies consider measures of the cognitive, affective, and social processes of learning.

6. Conclusion

The purpose of this study was to examine the relative reported student outcomes across three educational DMs: face-to-face, satellite broadcasting, and live video-streaming. The results of the MANOVA suggested that, using final course grades and level of satisfaction as criteria, all three modes achieved similar results.
This finding is in line with much of the existing literature reporting no significant difference in student outcomes among various distance and face-to-face course delivery methods (Kekkonen-Moneta & Moneta, 2002; Larson & Chung-Hsien, 2009; Skylar et al., 2005). Despite these encouraging results, the "no significant difference" finding must be interpreted with caution. Sankaran and Bui (2001) indicated that matching course formats with students' attitudes and learning strategies can enhance learning performance. Future studies should therefore consider students' attitudes and learning strategies in relation to their performance in the respective course formats, in order to further understand the effect of the right fit (i.e., the right delivery method(s) for any specific course).

Regarding implications, our findings support the idea that diversifying DMs can meet the increasing demand for distance education by expanding learning opportunities in remote locations. Further research should examine more closely the teaching and learning strategies associated with each DM, in order to strengthen both faculty and student use of the various DMs. It should also consider ways to enable faculty to manage various audiences, regardless of delivery method, so as to ensure a successful learning experience for all students. In addition, further research should focus on assisting students by developing self-diagnostic tools capable of guiding them toward the DM that matches their learning styles and preferences (although the authors recognize that student choices in this arena are often driven by convenience and flexibility). Finally, a thorough examination of the dynamics of the interaction patterns occurring in these different DMs is likely to provide insights that will engender more effective course design and enhance the student learning experience.

References
Abdous, M. (2009). E-learning quality assurance: a process-oriented lifecycle model. Quality Assurance in Education, 17, 281–295.
Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: a meta-analysis. American Journal of Distance Education, 16(2), 83–97.
Anderson, L. P., Banks, S. P., & Leary, P. A. (2002). The effect of interactive television courses on student satisfaction. Journal of Education for Business, 77(3), 164–168.
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., et al. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79(3), 1243–1289.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., et al. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.
Buckley, K. M. (2003). Evaluation of classroom-based, web-enhanced, and web-based distance learning nutrition courses for undergraduate nursing. The Journal of Nursing Education, 42(8), 367–370.
Bullock, C., & Ory, J. (2000). Evaluating instructional technology implementation in a higher education environment. The American Journal of Evaluation, 21(3), 315–328.
Burbules, N. C., & Callister, T. A., Jr. (2000). Universities in transition: the promise and the challenge of new technologies. Teachers College Record, 102(2), 271–293.
Burbules, N. C., & Lambeir, B. (2003). The importance of new technologies in promoting collaborative educational research. In P. Smeyers, & M. Depaepe (Eds.), Beyond empiricism: On criteria for educational research (pp. 41–52). Leuven, Belgium: University Press.
Carnes, L. W., Awang, F., & Marlow, J. (2003). Can instructors ensure the integrity and quality of online courses? Delta Pi Epsilon Journal, 45(3), 162–172.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445–459.
Collins, J., & Pascarella, E. T. (2003). Learning on campus and learning at a distance: a randomized instructional experiment. Research in Higher Education, 44(3), 315–326.
Conole, G., de Laat, M., Dillon, T., & Darby, J. (2008). Disruptive technologies, pedagogical innovation: what's new? Findings from an in-depth study of students' use and perception of technology. Computers & Education, 50(2), 511–524.
Donavant, B. W. (2009). The new, modern practice of adult education: online instruction in a continuing professional education setting. Adult Education Quarterly, 59(3), 227–245.
de Freitas, S., & Oliver, M. (2005). Does e-learning policy drive change in higher education? A case study relating models of organisational change to e-learning implementation. Journal of Higher Education Policy and Management, 27(1), 81–96.
Howsen, R. M., & Lile, S. (2008). A comparison of course delivery methods: an exercise in experimental economics. Journal of Economics and Finance Education, 7(1), 21–28.
Iverson, K. M., Colky, D. L., & Cyboran, V. L. (2005). E-learning takes the lead: an empirical investigation of learner differences in online and classroom delivery. Performance Improvement Quarterly, 18(4), 5–18.
Jennings, S., & Bayless, M. (2003). Online vs. traditional instruction: a comparison of student success. Delta Pi Epsilon Journal, 45(3), 183–190.
Johnson, M. (2002). Introductory biology online: assessing outcomes of two student populations. Journal of College Science Teaching, 31(5), 312–317.
Johnson, S., & Aragon, S. (2003). An instructional strategy framework for online learning environments. New Directions for Adult and Continuing Education, 2003(100), 31–43.
Joy, E. H., & Garcia, F. E. (2000). Measuring learning effectiveness: a new look at no-significant-difference findings. Journal of Asynchronous Learning Networks, 4(1), 33–39.
Kearns, L. E., Shoaf, J. R., & Summey, M. B. (2004). Performance and satisfaction of second-degree BSN students in web-based and traditional course delivery environments. Journal of Nursing Education, 43(6), 280–284.
Kekkonen-Moneta, S., & Moneta, G. B. (2002). Learning in Hong Kong: comparing learning outcomes in online multimedia and lecture versions of an introductory computing course. British Journal of Educational Technology, 33(4), 423–433.
Klein, H. J., Noe, R. A., & Wang, C. (2006). Motivation to learn and course outcomes: the impact of delivery mode, learning goal orientation, and perceived barriers and enablers. Personnel Psychology, 59(3), 665–702.
Kochman, A., & Maddux, C. D. (2001). Interactive televised distance learning versus on-campus instruction: a comparison of final grades. Journal of Research on Technology in Education, 34(1), 87–91.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology, Research and Development, 42(2), 7–19.
Larson, D., & Chung-Hsien, S. (2009). Comparing student performance: online versus blended versus face-to-face. Journal of Asynchronous Learning Networks, 13(1), 31–42.
Leasure, A. R., Davis, L., & Thievon, S. L. (2000). Comparison of student outcomes and preferences in a traditional vs. World Wide Web-based baccalaureate nursing research course. Journal of Nursing Education, 39(4), 149–154.
Lim, D. H., & Morris, M. L. (2009). Learner and instructional factors influencing learning outcomes within a blended learning environment. Educational Technology & Society, 12(4), 282–293.
Locatis, C. (2007). Why media matter. Performance Improvement Quarterly, 20(1), 9–22.
Lockee, B. B., Burton, J. K., & Cross, L. H. (1999). No comparison: distance education finds a new use for "no significant difference". In Proceedings of selected research and development papers presented at the National Convention of the Association for Educational Communications and Technology [AECT], Houston, TX, February 10–14. Retrieved on November 20, 2009 from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/15/f6/97.pdf.
Lou, Y., Bernard, R., & Abrami, P. (2006). Media and pedagogy in undergraduate distance education: a theory-based meta-analysis of empirical literature. Educational Technology Research and Development, 54(2), 141–176.
Luppicini, R. (2007). Review of computer mediated communication research for education. Instructional Science, 35(2), 141–185.
Mandinach, E. B. (2005). The development of effective evaluation methods for e-learning: a concept paper and action plan. Teachers College Record, 107(8), 1814–1835.
Maushak, N. J., Chen, H.-H., Martin, L., Shaw, B. C., Jr., & Unfred, D. (2001). Distance education: looking beyond "no significant difference". Quarterly Review of Distance Education, 2(2), 119–140.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. US Department of Education. Retrieved on November 20, 2009 from http://www.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf.
Muilenburg, L., & Berge, Z. L. (2001). Barriers to distance education: a factor-analytic study. American Journal of Distance Education, 15(2), 7–22.
Mullins-Dove, T. G. (2006). Streaming video and distance education. Distance Learning, 3(4), 63–71.
O'Neal, K., Jones, W. P., Miller, S. P., Campbell, P., & Pierce, T. (2007). Comparing web-based to traditional instruction for teaching special education content. Teacher Education and Special Education, 30(1), 34–41.
Parsad, B., & Lewis, L. (2008). Distance education at degree-granting postsecondary institutions: 2006–07. First Look. NCES 2009-044. National Center for Education Statistics. Retrieved on November 20, 2009 from http://nces.ed.gov/pubSearch/pubsinfo.asp?pubid=2009044.
Phillips, M. R., & Peters, M. J. (1999). Targeting rural students with distance learning courses: a comparative study of determinant attributes and satisfaction levels. Journal of Education for Business, 74(6), 351–356.
Ringstaff, C., & Kelley, L. (2002). The learning return on our educational technology investment: A review of findings from research. San Francisco: WestEd RTEC. Retrieved on November 20, 2009 from http://www.wested.org/online_pubs/learning_return.pdf.
Roach, V., & Lemasters, L. (2006). Satisfaction with online learning: a comparative descriptive study. Journal of Interactive Online Learning, 5(3), 317–332.
Russell, T. L. (1999). The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education. Montgomery, AL: International Distance Education Certification Center.
Sankaran, S. R., & Bui, T. (2001). Impact of learning strategies and motivation on performance: a study in web-based instruction. Journal of Instructional Psychology, 28(3), 191–198.
Schmid, R., Bernard, R., Borokhovski, E., Tamim, R., Abrami, P., Wade, C., et al. (2009). Technology's effect on achievement in higher education: a stage I meta-analysis of classroom applications. Journal of Computing in Higher Education, 21(2), 95–109.
Shachar, M., & Neumann, Y. (2003). Differences between traditional and distance education academic performances: a meta-analytic approach. The International Review of Research in Open and Distance Learning, 4(2). Retrieved on November 20, 2009 from http://www.irrodl.org/index.php/irrodl/article/view/153/234.
Shin, N., & Chan, J. K. Y. (2004). Direct and indirect effects of online learning on distance education. British Journal of Educational Technology, 35(3), 275–288.
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: a meta-analysis. Personnel Psychology, 59(3), 623–664.
Skylar, A. A., Higgins, K., Boone, R., & Jones, P. (2005). Distance education: an exploration of alternative methods and types of instructional media in teacher education. Journal of Special Education Technology, 20(3), 25–33.
Summers, J. J., Waigandt, A., & Whittaker, T. A. (2005). A comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. Innovative Higher Education, 29(3), 233–250.
Surry, D. W., & Ensminger, D. (2001). What's wrong with media comparison studies? Educational Technology, 41(4), 32–35.
Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., et al. (2006). Teaching courses online: a review of the research. Review of Educational Research, 76(1), 93–135.
Urtel, M. G. (2008). Assessing academic performance between traditional and distance education course formats. Educational Technology & Society, 11(1), 322–330.
Wuensch, K., Aziz, S., Ozan, E., Kishore, M., & Tabrizi, M. H. N. (2008). Pedagogical characteristics of online and face-to-face classes. International Journal on E-Learning, 7(3), 523–532.
Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record, 107(8), 1836–1884.