Journal of Technology and Teacher Education (2007) 15(2), 233-246
Two Peas in a Pod? A Comparison of Face-to-Face and
Web-Based Classrooms
GALE A. MENTZER, JOHN ROBERT CRYAN, AND BERHANE
TECLEHAIMANOT
University of Toledo
Toledo, OH USA
gmentze@utnet.utoledo.edu
jcryan@utnet.utoledo.edu
btecleh@utnet.utoledo.edu
This study compared student learning outcomes and student
perceptions of and satisfaction with the learning process between two
sections of the same class, a web-based section and a traditional
face-to-face (f2f) section. Using a quasi-experimental design,
students were randomly assigned to the
two course sections. Group equivalency was established us-
ing an instrument designed to determine learning preferences
and both versions of the course were delivered by the same
instructor. Student learning outcomes compared student test
grades and overall grades (included all assignments). To
measure student perceptions of student-teacher interactions
as well as satisfaction with the course as a whole, identical,
end-of-semester evaluations were completed and compared.
Finally, to provide an unbiased measure of student-teacher
interaction, a modified interaction analysis instrument based
upon the work of N. Flanders was used. Findings revealed
that student performance on tests was equivalent; however
student final grades were lower in the web-based course due
to incomplete assignments. Classroom interaction analysis
found differences due to delivery methods. Finally, while all
student perceptions of the course and the instructor were
above average, the f2f group rated both variables statistically
significantly higher. Conclusions suggest that the f2f en-
counter motivates students to a higher degree and also pro-
vides students with another layer of information concerning
the instructor that is absent in the web-based course.
Recent research in online education has focused upon whether web-
based courses provide students with the same degree of personalized learn-
ing and content mastery that students experience in face-to-face (f2f) classes
(Parkinson, Greene, Kim, & Marioni, 2003). While the trend is moving toward
more rigorous design, few studies to date have used experimental designs
across several variables, including student learning as well as
satisfaction with the learning experience (Meyer, 2003). The purpose of this
study was to compare student learning outcomes and student perceptions of
and satisfaction with the learning process between two sections of the same
undergraduate class (Early Childhood Education: Philosophy and Practice):
a web-based section and a traditional f2f section. The f2f sections of
the course are typically offered on weekdays, with one section in the
late afternoon. The web-based section is offered "anytime, anywhere."
Background
Progress in and innovative use of technology in education have greatly im-
proved the quality of web-based courses (Schott, Chernish,
Dooley, & Lindar, 2003). To determine whether web-based courses indeed
provide students with a comparable, if not superior, learning experience,
researchers over the past five years have conducted a plethora of studies
comparing aspects of traditionally delivered instruction with online
instruction (Rivera, McAlister, & Rice, 2002). Findings from this body of
research are mixed, but the general consensus is that students learn just as
well using web-based instruction, but are less satisfied with the learning ex-
perience. Miller, Rainer, and Corley (2003) noted that the more negative as-
pects experienced by students of web-based instruction include procrastina-
tion, poor attendance, and a sense of isolation. Another study noted that
online courses are more effective with particular personality types
(Daughenbaugh, Daughenbaugh, Surry, & Islam, 2002), and the Office of
Institutional Planning and Research at Sinclair Community College (2000) found
that distance learning students achieve lower grades than those who attend
f2f classes. Their study suggested that the distance learning students are typ-
ically of a different ilk or from a different population than the traditional stu-
dent and have other obligations to juggle along with attending school.
Because the majority of studies noted compare existing online courses
with existing f2f courses, selection may threaten the internal validity of the
findings. When existing courses are used, the students themselves enroll in
either the online or the f2f class thereby selecting to which group they will
belong. This may result in a comparison of nonequivalent groups. Using ran-
dom assignment can minimize this threat thereby producing results that are
more indicative of treatment effects rather than group differences. To deter-
mine whether the "typical" student might fare just as well in an online
course as in an f2f course, this study randomly assigned students to one of
the two sections in order to compare equivalent groups thereby controlling
for predispositions towards one type of learning style over another.
The course in this study, Early Childhood Education: Philosophy and
Practice, is an entry level survey course required for early childhood educa-
tion (ECE) majors who just entered their preprofessional program (first year
students). The host university is a medium-sized campus (20,000+ students)
and the College of Education enrolls 2000+ students. Currently there are
more than 900 students enrolled in the Bachelor of Education in Early
Childhood Education program, which prepares students to teach children
ages 3-8 with a variety of learning needs, including those who are at risk,
typically developing, mild to moderately disabled, and gifted. The f2f sections of the
course are scheduled to meet twice weekly in seminar fashion while the on-
line courses can be accessed at any time. Content covered in all sections of
the course ranges from ECE history, theorists, curriculum, inclusive learning
environments, designing and planning themes, evaluation, and parent in-
volvement. Central to the course is the development of reflective thinking
and application to reflective practice.
In an attempt to make both sections of the course "equivalent" in terms
of the teaching-learning process, the instructor used duplicate syllabi includ-
ing duplicate assignment requirements. For purposes of equating attendance
rates, students in the web-based section were required to attend at least two
"Live Chat" sessions per week. These Live Chats (1 hour each) served to re-
place the class discussion in the f2f section. Students in both sections were
given equal credit for attendance and the final weight for attendance in both
sections was the same. Further, all students in both sections were assigned to
small groups for in-class assignments (four to six students). Whereas f2f stu-
dents met regularly in class, those in the web-based class were required to
meet together online in group chat rooms to discuss the same small group
assignments that were given to the f2f students during lecture. Small groups
in both sections were required to submit summaries of the small group dis-
cussions. Again, the attempt was made to control for similar experiences in
both sections. Additionally, all assignment due dates were the same and car-
ried the same weight in the overall grading scheme. Discussion topics and
reading assignments for each f2f meeting were the same as those for the
scheduled Live Chats in the web-based section. The instructor's lecture
notes were reproduced for each week and shared with both sections on the
first day of the week. Printed versions were given out in the f2f section;
posted versions were available for the web-based section. Lastly, the
instructor attempted to use the same instructional strategies throughout
the entire semester: strategies drawn from the effective-teaching
literature that correlate with increased student learning, such as seeking
to encourage and maximize student questions and student-to-student dialogue
while minimizing extended lecture. Wherever possible the instructor praised
student answers and encouraged students to do the same toward their peers.
METHOD
Often students who enroll in web-based courses have a predisposition
toward this mode of learning (Diaz & Cartnal, 1999). In other
words, they choose online courses because they feel comfortable learning
online. In addition, students who are comfortable with their level of comput-
er competency are more likely to enroll in a web-based course than those
who are insecure with either the use of the computer or with a more genera-
tive learning environment (Parkinson et al., 2003). These issues threaten the
internal validity of findings based upon comparisons between web-based
and f2f courses. The groups, by nature of learning preference and computer
comfort levels, are not equivalent and therefore findings cannot be general-
ized beyond the restrictions of the studies. To address this weakness, this
study used a quasi-experimental design that combined nonrandom selection
with random assignment to the control (f2f) and experimental (web-based)
groups. To accomplish this, all ECE students wishing to enroll in the course
were required to contact the Department office prior to receiving access to
registration. Upon contacting the office, they were asked whether they
would be amenable to allowing the department to assign them to either the
f2f or the web-based section of the course. While students volunteered to
participate in the study, random assignment to the groups strengthened the
internal validity of the study and enhanced group equivalency. Because there
were two additional sections of the course offered during the same semester,
students who declined to participate were free to enroll in either of the other
sections.
To validate group equivalency, all students completed the Visual, Aural,
Read/write, Kinesthetic (VARK) questionnaire, a diagnostic instrument
designed to determine learning preferences (Fleming & Bonwell, 2002). VARK
reliability and validity indices are currently under research. Using the
VARK, students can be classified with mild, strong, or very strong preferences in any
of the four learning styles. In addition, students can show multimodal ten-
dencies (more than one style appears to be preferred). For the purposes of
this study, students were classified in one of five categories: visual,
aural, read/write, kinesthetic, and multimodal learners. The frequencies of VARK
learning style preferences in each group were then compared using a chi
square goodness-of-fit test (using the control group frequencies as the
expected distribution) to determine whether group differences were
statistically significant (α = 0.05) rather than the result of sampling error.
To control other confounding variables that might result from the deliv-
ery methods of two sections of the course, the same instructor taught both
sections during the same semester. The instructor took care to compare the
design and delivery of both sections of the course to ensure that topics cov-
ered, work required, testing, and the classroom experience were as closely
matched as possible. The syllabi of both courses were also compared by a
colleague to provide content validity. Students enrolled in the web-based
section, while local, did not have f2f course-related contact with the instruc-
tor during the semester nor were any of these students enrolled in other
courses taught by the faculty member.
To provide an unbiased measure and comparison of student-teacher in-
teraction between groups, a modified interaction analysis instrument (IA)
based upon the work of Flanders (1970) was used. Flanders' IA is a system-
atic method of coding spontaneous verbal communication that has been and
is still currently used in classroom observation studies to examine teaching
interaction styles. The IA instrument consists of 10 categories listed in Table
1 (seven used when the teacher is talking, two when students talk, plus silence/confusion):
Table 1
Flanders' Interaction Analysis Categories
Activity Category
Teacher talks Accepts feelings
Praises or encourages
Accepts or uses ideas of pupils
Asks questions
Explains
Gives directions
Criticizes
Student talks Responds
Initiates
Silence/Confusion
In addition to the original Flanders IA categories, several categories
were added to measure student interaction more subtly. The original instru-
ment categorized student interaction as simply responding, initiating a topic,
or silence/confusion. The purpose of IA was to examine teaching styles. The
purpose of this study was not to determine teaching styles, but rather to de-
termine whether student participation, student-student interactions, and stu-
dent-teacher interactions were similar in both groups. To this end, four cate-
gories were added to the original student categories of "response," "ini-
tiates" and "confusion/silence." The new categories included "validation of
others' ideas," "praise or courtesy remarks," "questions or asks for clarifica-
tion," and "silence due to 'down time'." This last category was designed to
earmark extra time needed in a live chat online. Lengthy contributions in the
chat room require both longer time for typing as well as for reading. In this
case, "silence/confusion" is not an appropriate label for what is occurring.
The "down time" category was used only for the web-based course and was
not a function of comparison between groups. Down time was calculated by
determining the amount of time it took to read a response and then doubling
that amount of time to account for composing it. It was verified by rereading
logs of the live chats. This time was then subtracted from the full amount of
time spent in inactivity or silence to determine the amount of time to be at-
tributed to "silence/confusion."
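The down-time adjustment described above is simple arithmetic; a minimal sketch, with all numbers hypothetical rather than taken from the study's chat logs:

```python
# Sketch of the "down time" adjustment: reading time for each chat
# contribution is estimated, doubled to account for composing, and the
# total is subtracted from the observed silence. Remaining silence is
# what gets coded as "silence/confusion." Numbers are hypothetical.
def silence_after_down_time(total_silence_s, read_times_s):
    """Return silence attributable to 'silence/confusion' after removing
    estimated down time (read time doubled to cover typing)."""
    down_time = sum(2 * t for t in read_times_s)
    return max(total_silence_s - down_time, 0)

# e.g. 300 s of inactivity; three contributions taking 20, 35, 15 s to read
remaining = silence_after_down_time(300, [20, 35, 15])
print(remaining)  # 300 - 2*(20+35+15) = 160
```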
IA scoring relies on an observer who listens to the classroom
interaction and notes the type of interaction taking place from the list
of categories. Ordinarily, the observer marks a category every three seconds.
For this study, frequencies of categories were then tabulated to determine
trends by comparing categories within a session as well as the sequence be-
tween categories. For example, one category sequence comparison explored
the frequency with which teacher questions were followed by student re-
sponses as opposed to being followed by silence. In this study, comparisons
were made between f2f and web-based discussions to determine whether the
general interaction experience between the groups varied. If it did vary, that
might indicate that the two discussion experiences were different.
To conduct the IA, two 20-minute sessions were randomly selected from
all possible f2f classroom discussions and videotaped. Two corresponding
web-based chat room discussions were also monitored in real time for 20
minutes (observer sat in on the chat). The resulting frequencies were then
compared using a chi-square test of homogeneity to observe differences be-
tween multiple variables with multiple categories. To compensate for the
sometimes unwieldy nature of large chat rooms, the experimental class chat
rooms were limited to 10 students per session. Two sessions on the same
topic were offered per week to accommodate this limit.
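The homogeneity comparison works by placing each observed session's interaction-category frequencies in the rows of a contingency table and asking whether the category distribution differs across sessions. A sketch with hypothetical frequencies (the study's raw tallies are not reported):

```python
# Sketch of the chi-square test of homogeneity across observed sessions.
# Rows are sessions, columns are interaction categories; frequencies
# below are hypothetical, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# rows: WB1, WB2, F2F1, F2F2; columns: e.g. explains, questions, responds, silence
freqs = np.array([
    [40, 25, 60, 75],
    [45, 20, 55, 80],
    [90, 30, 50, 30],
    [95, 15, 35, 55],
])

stat, p, df, expected = chi2_contingency(freqs)
print(f"chi2 = {stat:.1f}, df = {df}, p = {p:.4g}")
```

With four sessions and (here) four categories, df = (4-1)(4-1) = 9, matching the degrees of freedom the study reports for its overall comparison.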
Finally, the examination of student learning outcomes compared group
means of student test grades and overall grades using an independent t-test.
Test scores (as opposed to letter grades) were used to allow for more subtle
measurement. To measure student perceptions of student-teacher interac-
tions as well as satisfaction with the course as a whole, an identical
end-of-semester evaluation was completed, and an independent-samples
t-test was calculated to compare mean evaluation scores for the groups.
FINDINGS
Sample Information
Of the total (100+) students who enrolled in all four sections of the
ECE: Philosophy and Practice course, 36 agreed to participate in the
random assignment to either the control or experimental group. Both sections
had 18 students: 1 male and 17 females. All of the students in both sections
were considered traditional students in that they enrolled in college right out
of high school. All students were enrolled in the college's Early Childhood
Education teacher licensure program.
Group Equivalency
The VARK survey of learning preferences was completed by 18 stu-
dents in the f2f group and 15 students in the web-based group. The f2f stu-
dents completed the VARK in class while the web-based students were
asked to take the survey online. Three students in the web-based course did
not complete the VARK. The distribution of learning preferences for each
group is displayed in Table 2. A chi square goodness of fit test was adminis-
tered using the control group as expected frequencies and the experimental
group as the observed frequencies. Because the chi square test examines
proportions, unequal sample sizes are acceptable. Results showed no
statistically significant difference between group learning preferences
(χ² = 3.36; df = 4; p > 0.05). Therefore it is assumed that the group learning styles were
equivalent and that any differences in learning style preferences were due to
sampling error.
Table 2
Distribution of Learning Style Preferences (V = visual, A = aural,
R = read/write, K = kinesthetic, MM = multimodal)
Group V A R K MM
F2F 1 0 2 6 9
WB 1 0 1 7 6
Interaction Analysis
Results of the chi square test of homogeneity revealed that a statistically
significant difference did indeed exist overall between the nature of teacher/
student interaction during class discussions in the two groups
(χ² = 900.035; df = 9; p < 0.001). An examination of the standardized residuals revealed
which of the individual interaction categories contributed to the rejection of
the null hypothesis which stated that the two groups were equal. Table 3 il-
lustrates the categories/sessions that contributed to the significant χ² value.
The letters in the chart indicate whether the observed frequency was signifi-
cantly higher (H) or lower (L) than expected. So, for example, during web-
based session 1, teacher explaining occurred less frequently than expected
based upon session averages. (Note: WB represents the web-based sessions
and F2F the corresponding face-to-face sessions.)
Table 3
Significant Differences in Classroom Interaction
Teacher Categories WB 1 WB 2 F2F 1 F2F 2
Accepts feelings H
Explains L L H
Student Categories
Responds H H L
Asks questions H
Initiates an idea H
Supports others in class H
Silence or confusion H H L
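The H and L flags of this kind come from standardized residuals: residual = (observed - expected) / sqrt(expected), with |residual| > 2 a common cutoff for a cell sitting notably above or below its expected frequency. A sketch with a hypothetical table:

```python
# Sketch of flagging which cells drive a significant chi-square:
# standardized residual = (observed - expected) / sqrt(expected).
# The frequency table is hypothetical, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

freqs = np.array([
    [40, 25, 60, 75],
    [45, 20, 55, 80],
    [90, 30, 50, 30],
    [95, 15, 35, 55],
])
_, _, _, expected = chi2_contingency(freqs)
residuals = (freqs - expected) / np.sqrt(expected)

# Flag cells whose standardized residual exceeds the conventional cutoff
for (i, j), r in np.ndenumerate(residuals):
    if abs(r) > 2:
        print(f"session {i}, category {j}: {'H' if r > 0 else 'L'} ({r:+.2f})")
```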
In general, the instructor tended to spend less time explaining in the
chat room than in the classroom. In a web-based course, explanations often
take the form of web pages and are not a typical use of the chat room. Be-
cause only two samples from each group were observed, it is possible that
other f2f sessions may have resulted in less time devoted to the instructor
providing explanations. The general trend, however, is that while the teacher
tended to explain more in the f2f section, explanations did not dominate the
web-based course discussions.
The instructor also allowed for more and longer periods of silence in
the chat room than in the classroom. This was most likely due to the expect-
ant nature of chat room discussions. The instructor, without the aid of visual
contact with the students, was unable to determine whether students were
simply thinking and formulating questions and answers or whether they in-
deed had nothing to add. Students also may have been waiting for another
student to contribute or for the instructor to continue. It was observed, as of-
ten happens in chat room discussions, that a period of silence was followed
by several contributions from students popping on the screen almost simul-
taneously. In a f2f setting, students and the instructor can tell exactly when a
member of the class begins speaking (hopefully). The chat room discussions
smudge this demarcation into fluctuations of silence and activity.
Student responses to the instructor were higher than expected in the first
web-based and its corresponding f2f session. It is probable that the topic that
week generated more student interest or that the discussions were designed
to elicit student responses. The second f2f session resulted in student re-
sponses being much lower than expected, which is not surprising consider-
ing that the teacher explaining category was higher than expected that day.
The first f2f session also experienced higher student-generated questions
and ideas supporting the suggestion that that particular sample of class dis-
cussion was more spirited than the norm.
An unexpected difference between the two groups occurred in the first
web-based session where students showed support for one another to a high-
er degree than expected (see Table 3). Students showed support by validat-
ing other students' comments. Online chats, as opposed to speaking in front
of a class, may make students feel more comfortable thereby encouraging
students to not only support one another more openly but also to take on a
more empowered role in the class discussion.
Student Evaluations
Students in both classes completed identical course evaluations before
their final exam. The evaluation was one used by the department and includ-
ed items that explored student perceptions of both the instructor and the
course. Instructor items focused upon perceived teacher effectiveness (abili-
ty to motivate students, to encourage students, a degree of fairness in student
treatment, availability for consultation, and a personal interest in the stu-
dents). Course items included those dealing with the general organization,
the value of the course as it related to their major area of study, the text-
books, exams, and general assignment workload. Web-based students took
the evaluation online and f2f students completed it in class with the instruc-
tor absent. All evaluations were anonymous.
The results of the t-test showed that students in the f2f class rated the in-
structor and the course significantly higher than those students in the web-
based course (with p < 0.001). Mean evaluation scores for the f2f and web-
based classes were 1.22 and 1.82, respectively, on a 5-point scale where a "1"
indicated the highest ranking (outstanding) and a "5" the lowest (poor). So,
in both cases the instructor received very good scores; yet the students in the
f2f course believed the quality of the instructor and the course to be better
than those in the web-based course. T-tests were then conducted on individ-
ual questions to locate where the classes differed significantly. The alpha
level was lowered to 0.002 to control for Type I comparison error rate (al-
pha, 0.05, divided by 22 items) and the analysis revealed statistically signifi-
cant differences on each of the 22 questions. Mean scores for evaluation
items in the f2f course ranged from an outstanding rating of 1.04 ("demon-
strated a sincere interest in the subject") to a very high rating of 1.50
("promptness in returning graded assignments"). The web-based mean
scores ranged from the best rating of 1.47 ("demonstrated comprehensive
knowledge of the subject") to a low rating on two items of 2.40 ("prompt-
ness in returning graded assignments" and "offered assistance to students
with problems connected to the course"). All of this hints at extra informa-
tion students might collect and process concerning an instructor based upon
direct observation occurring in the f2f setting but absent in the web-based
venue. For example, in the web-based course, students have limited access
to instructor interaction with other students. A student in the web-based
class will not ask a question about personal difficulties with the course in the
chat room but rather will use e-mail. However, it is common for students to
ask questions of this type before, during, and after an f2f class where other
students can observe the exchange. It is logical, therefore, that an instructor
might receive a lower rating on an item such as offering assistance to stu-
dents with problems connected to the course in a web-based course where
this quality is less evident.
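The alpha adjustment used for the item-level t-tests is a Bonferroni correction: dividing the familywise alpha by the number of comparisons. A minimal sketch, with hypothetical p-values:

```python
# Sketch of the multiple-comparison adjustment: with 22 item-level
# t-tests, the per-test alpha is lowered so the familywise Type I error
# rate stays near 0.05 (Bonferroni). The p-values are hypothetical.
n_items = 22
alpha_family = 0.05
alpha_per_test = alpha_family / n_items
print(f"per-item alpha = {alpha_per_test:.5f}")  # 0.00227..., reported as 0.002

hypothetical_p_values = [0.0001, 0.0015, 0.004, 0.03]
significant = [p < alpha_per_test for p in hypothetical_p_values]
print(significant)  # [True, True, False, False]
```

Bonferroni is conservative; that all 22 items still cleared the lowered threshold underscores how consistent the group difference was.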
To examine the differences further, effect sizes were scrutinized and
those that exceeded 0.75 were considered to indicate a large difference be-
tween the groups based upon categories established by Cohen (1962). Table
4 provides a list of the items from the evaluation that exhibited large effect
sizes. The effect size was calculated by subtracting the web-based mean
score from the f2f mean score and dividing the result by the pooled standard
deviation.
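The effect-size formula just described (difference in means over the pooled standard deviation, i.e., Cohen's d) can be sketched as follows; the evaluation scores are hypothetical, not the study's data:

```python
# Sketch of Cohen's d: difference between group means divided by the
# pooled standard deviation. Scores are hypothetical (1 = outstanding,
# 5 = poor, matching the evaluation scale described in the text).
import math

def cohens_d(group_a, group_b):
    """Cohen's d: (mean_a - mean_b) / pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

f2f = [1, 1, 2, 1, 1, 2]   # hypothetical f2f item scores
web = [2, 2, 3, 1, 2, 3]   # hypothetical web-based item scores
d = cohens_d(web, f2f)     # positive d: web-based rated worse (higher scores)
print(f"d = {d:.2f}")
```

By Cohen's benchmarks, d around 0.8 or above is conventionally read as a large difference, which is the spirit of the 0.75 cutoff used here.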
Table 4
Student Evaluation Items with Large Effect Sizes
Item Effect Size
Offered assistance to students with problems connected with course 1.34
What grade would you assign the instructor (a, b, c, d, or f) 1.14
Demonstrated promptness in returning graded assignments and exams 1.08
Meaningful class preparation 1.01
Demonstrated sincere interest in the subject 0.98
Expected grade in this course 0.96
Personal interest and sensitivity to student problems 0.94
Availability for consultation 0.92
Demonstrated respect for students 0.91
Demonstrated fairness and reasonableness in evaluating students 0.86
Demonstrated ability to explain course material 0.76
Encouraged independent thought by students 0.75
It is important to analyze the results in the correct light. Overall, the
web-based students gave the instructor a high rating and the f2f students
gave him a stellar rating. In neither case did the students indicate a negative
experience but rather a slightly less positive experience. Interesting compari-
sons indicated that the students in the f2f course expected an average grade
of A- while those in the web-based course expected a B-. As far as grading
the instructor, f2f students assigned an average grade of A and the web-
based students assigned a grade of B+. There have been many studies con-
ducted showing the high correlation between student expected grade and
student evaluation of the instructor. To determine whether students in one
section of the course actually did perform better than those in the other,
exam grades and overall grades were compared.
Three indicators of student success were examined: (a) midterm
examination, (b) final examination, and (c) overall points earned for the
semester (including other assignments). Before making comparisons, an F
test for homogeneity of variance was calculated, and results showed that the variances
were unequal for the midterm and the overall grade. Appropriate tests of
mean scores based upon variance issues were then performed. Of the three
comparisons, only the mean score for overall grade differed at a statistically
significant level (p = 0.02). Students in the f2f course averaged an A- and
those in the web-based course averaged a B. It is interesting that students
appeared to predict their final grade with accuracy indicating that the grad-
ing process for both sections was clear-cut in the minds of the students. The
main difference between the tests considered in the comparison and the
overall points earned for the semester was the other assignments required
throughout the semester. A closer look at student records for the two sec-
tions revealed that students in the web-based course did not earn lower
grades on these assignments but merely failed to submit some of them,
suggesting that learning outcomes were similar but that the personal contact
of an f2f course positively motivated students to turn in assignments.
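The procedure just described, checking homogeneity of variance with an F test and then choosing the appropriate comparison of means, can be sketched as follows; the point totals are hypothetical, and scipy's ttest_ind with equal_var=False (Welch's t-test) stands in for the "appropriate test based upon variance issues":

```python
# Sketch of the outcome comparison: a variance-ratio F test checks
# homogeneity of variance, and the result selects between the pooled
# t-test and Welch's t-test. Point totals are hypothetical.
import numpy as np
from scipy import stats

f2f_totals = np.array([92, 88, 95, 90, 85, 93, 89, 91])
web_totals = np.array([85, 70, 92, 60, 88, 75, 94, 66])

# F test: ratio of sample variances against the F distribution (two-sided)
f_stat = np.var(web_totals, ddof=1) / np.var(f2f_totals, ddof=1)
df1, df2 = len(web_totals) - 1, len(f2f_totals) - 1
p_var = 2 * min(stats.f.sf(f_stat, df1, df2), stats.f.cdf(f_stat, df1, df2))

# Unequal variances -> Welch's t-test (equal_var=False); otherwise pooled
equal_var = p_var > 0.05
t_stat, p_mean = stats.ttest_ind(f2f_totals, web_totals, equal_var=equal_var)
print(f"F p = {p_var:.4f}, equal_var = {equal_var}, t p = {p_mean:.4f}")
```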
CONCLUSIONS/RECOMMENDATIONS
General findings of this study showed that two equivalent groups, ran-
domly assigned to either an f2f or web-based course, do not have equal ex-
periences in the area of student perceptions. Learning outcomes can be con-
sidered to be equal based upon test scores. Because the instructor was the
same for both courses, it can be concluded that the course delivery may have
some effect on the variables examined. The interaction analysis showed that
the instructor tended to explain less in group discussions in the web-based
course. Because only two pairs of discussion sessions were scrutinized, find-
ings in other areas of interaction, and especially student interaction, may not
generalize. Student evaluations of the course and the instructor also differed.
Students in the web-based course tended to rate both the course and the
instructor lower than students in the f2f course, although ratings for both
groups were considered to be above average. Finally, student achievement
differed only in the area of completing course assignments. Test scores
showed no statistically significant difference indicating that student mastery
levels were essentially the same; yet students in the web-based course were
more likely to omit submitting one or more assignments. Students in the
web-based course may be less conscientious or less motivated to complete
assignments.
Limitations of this study include a small sample size and a restricted
population. What occurs in an Early Childhood Education course may be
different from what occurs in other content areas. It is recommended that fu-
ture research apply this model to other content areas and that more research
be used to explore the specific differences in course delivery methods that
account for student perceptions. As noted earlier, many studies have shown
web-based courses to be as effective as the traditionally delivered course.
However, the majority of these studies were nonexperimental, using existing
groups as the control and experimental conditions. It is suggested that some
of the differences found between the f2f and web-based groups in this study were in fact
due to the random assignment of students to the groups. Students who may
not be familiar or comfortable with web-based courses were in the experi-
mental group, which often does not occur when existing sections are used.
Their perceptions and experiences, therefore, were more indicative of that of
the "average" student as opposed to those students who generally enroll in
web-based courses.
References
Cohen, J. (1962). The statistical power of abnormal-social psychological
research: A review. Journal of Abnormal and Social Psychology, 65,
145-153.
Daughenbaugh, R., Daughenbaugh, L., Surry, D., & Islam, M. (2002). Per-
sonality type and online versus in-class course satisfaction. Educause
Quarterly, 25(3), 71-72.
Diaz, D., & Cartnal, R. (1999). Students' learning styles in two classes:
Online distance learning and equivalent on-campus. College Teaching,
47(4), 130-135.
Flanders, N.A. (1970). Analyzing Teaching Behavior. Reading, MA:
Addison-Wesley.
Fleming, N.D., & Bonwell, C.C. (2002). VARK: A guide to learning styles.
Retrieved September 9, 2003 from http://www.vark-learn.com/english/index.asp
Meyer, K. (2003). The web's impact on student learning. T.H.E. Journal,
30(10), 14-24.
Miller, M.D., Rainer Jr., R.K., & Corley, J.K. (2003). Predictors of
engagement and participation in an on-line course. Online Journal of
Distance Learning Administration, 6(1). Retrieved January 3, 2007 from
http://www.westga.edu/~distance/ojdla/spring61/miller61.htm
Office of Institutional Planning and Research (2000). Does distance learn-
ing make a difference? A matched pairs study of persistence and per-
formance between students using traditional and nontraditional course
delivery study modes. Sinclair Community College, Dayton, OH. (ERIC
Document Reproduction Service No. ED477199)
Parkinson, D., Greene, W., Kim, Y., & Marioni, J. (2003). Emerging
themes of student satisfaction in a traditional course and a blended dis-
tance course. TechTrends, 47(4), 22-28.
Rivera, J.C., McAlister, M.K., & Rice, M.L. (2002). Comparison of student
outcomes & satisfaction between traditional & web-based course offer-
ings. Online Journal of Distance Learning Administration, 5(3).
Retrieved December 3, 2003 from http://www.westga.edu/%7Edistance/
ojdla/fall53/fall53.html
Schott, M., Chernish, W., Dooley, K.E., & Lindar, J.R. (2003). Innovations
in distance learning program development and delivery. Online Journal
of Distance Learning Administration, 5(2). Retrieved September 9, 2003
from http://www.westga.edu/%7Edistance/ojdla/summer62/schott62.html