BEEMER AND GREGG: DYNAMIC INTERACTION IN DECISION SUPPORT
The purpose of this study is to evaluate how dynamic inter-
action [9] is used in the context of eCommerce mashups. The
remainder of this paper is structured as follows. First, a review
of eCommerce decision support mashups and end-user mashup
literature are presented, and the theoretical background sup-
porting dynamic interaction is discussed. Next, five hypothe-
ses are developed involving dynamic interaction, diagnosticity,
confidence, and intention. Then, an experiment is designed and
conducted to evaluate the research model. The results of the
experiment and a post hoc analysis of decision quality are then presented. This paper concludes with a discussion of the study's contributions and implications for future research.

II. THEORETICAL BACKGROUND

Prior researchers have found that, when decision tools are applied to unstructured domains, their inability to justify solutions can result in low confidence in the decision recommendations [3], [23], [24]. There are two main schools of thought on how to overcome this lack of confidence. The first focuses on developing more robust explanation facilities to justify the system's solution in unstructured domains [3]. The second declares that "the need for interaction between the system and the user has increased, mainly, to enhance the acceptability of the reasoning process and of the solutions proposed by the system" [24, p. 1]. Through dynamic interaction with the user, the system is able to track with the user's iterative cognition process in solving unstructured decisions and to involve the user's opinion in the system's logic, which gives users a sense of ownership (and ultimately confidence) in the solution [42], [54].

A. Dynamic Interaction and Unstructured Decision Support

Dynamic interaction's underpinnings are found in system control theory, which spans many academic disciplines ranging from engineering to economics and is primarily focused on influencing the behavior of dynamic systems [47]. Specifically stated, "Control theory is the area of application-oriented mathematics that deals with the basic principles underlying the analysis and design of control systems. To control an object means to influence its behavior so as to achieve a desired goal" [72, p. 1]. The majority of control theory applications incorporate some variation of a feedback loop. Control feedback loops have three general phases: 1) inputting values; 2) processing input and calculating output; and 3) evaluating output and, if necessary, iterating back to step 1) and adjusting input values [65].

Researchers have found that KBS can be effective in supporting unstructured decisions when they are designed with feedback loops, which allow the user to influence the behavior of the system so as to achieve the desired solution by evaluating alternative solutions [72]. One example is a traditional KBS that incorporated an iterative interface designed to help logistics professionals achieve more efficient shipment processes [38], [48]. In the eCommerce domain, interactive decision aids have produced strong positive effects on both the quality and efficiency of the decision-making process [29].

Fig. 1. Substrata of dynamic interaction [9].

Using control theory, Beemer and Gregg [9] developed a measurement scale to quantify the support of iterative decision making in decision tools that operate in unstructured domains. That work defined dynamic interaction as a formative second-order construct [36] with the following three substrata: 1) inclusive; 2) incremental; and 3) iterative [6]. Inclusive refers to the system's ability to include user input in the KBS's cognition process. Incremental refers to the ability of the system to break larger problems into smaller, more manageable pieces, which are incrementally updated and then aggregated [49]. Finally, iterative refers to the decision tool's support for an iterative decision-making process [9]. Fig. 1 shows a conceptualization of dynamic interaction.

The inclusion of dynamic interaction in a decision-making tool fundamentally changes the way users evaluate information and influences their perceived reliability and perceived usefulness of the system [9]. However, it also has the potential to change the user's perceptions of the information provided by the tool, as well as their evaluation of the decisions made as a result of using a more interactive decision support tool.

B. End-User Programming in Mashups

Mashups are ideal for investigating dynamic interaction in eCommerce domains because they support iterative user interfaces that track with the user's iterative decision process [6]. As with the iterative nature of DSS designed for unstructured domains, researchers have postulated that, when developing mashups for unstructured domains, end-users "actually work iteratively on data, switching from aligning and cleaning up the data to using the data and back, as they get to know the data better over time" [34, p. 13]. However, previous mashup research has focused on extending the capabilities and functionality of this new technology [2], [8], [44] as opposed to evaluating the dynamic decision-making processes that mashups support. As mashups have begun to mature, academic researchers have recently identified the need to evaluate mashups in business domains such as decision support (e.g., [77]). Walczak et al. [77] conducted an eCommerce decision experiment comparing a traditional search engine to a mashup in terms of decision confidence, time, ease of finding information, and knowledge acquisition; however, they did not explicitly address the iterative processes inherent in mashup decision making.

A major challenge in creating mashups is to package mashup technologies so seamlessly that nontechnical users can easily and effectively create mashup applications. Two different approaches are commonly taken to enabling end-user mashup development.
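The three-phase control feedback loop described in Section II-A (input values, process and calculate output, evaluate and iterate) can be sketched in a few lines. This is a minimal illustration only; the process model, acceptance test, and adjustment rule below are hypothetical placeholders, not part of the study.

```python
# Minimal sketch of a three-phase control feedback loop:
# 1) input values; 2) process input and calculate output;
# 3) evaluate output and, if needed, iterate back to step 1.

def feedback_loop(initial_input, process, acceptable, adjust, max_iterations=10):
    """Iterate input -> process -> evaluate until the output is acceptable."""
    value = initial_input
    for _ in range(max_iterations):
        output = process(value)           # phase 2: calculate output
        if acceptable(output):            # phase 3: evaluate output
            return output
        value = adjust(value, output)     # back to phase 1: adjust input values
    return output                         # stop after max_iterations regardless

# Toy usage: drive a value toward a target of 100 by halving the error.
result = feedback_loop(
    initial_input=0.0,
    process=lambda v: v,                        # identity "system" for this sketch
    acceptable=lambda out: abs(out - 100) < 1,  # desired goal reached?
    adjust=lambda v, out: v + (100 - out) / 2,  # reduce the remaining error
)
print(round(result, 2))  # prints 99.22
```

The loop terminates either when the evaluation phase accepts the output or when the iteration budget is exhausted, mirroring how a user of an iterative decision tool stops refining once a solution is good enough.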
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 43, NO. 1, JANUARY 2013
The first approach is passive by nature and focuses on designing plugins that work with the user's current browser to observe what the user is viewing and to suggest related sources for potential mashing (e.g., [19], [20], and [67]). A second approach to end-user mashup development is proactive by nature and is necessary when the mashup process becomes more complicated (e.g., process modeling or advanced interface integration) [56]. Tuchinda et al. [74] present a tool that enables users to develop mashups in complicated integration domains by first providing examples of what the end mashup should look like; the tool then aims to mimic the format of the end result, allowing mashups to be developed by end-users who do not have programming experience. Tatemura et al. [73] take a similar "by example" approach in developing a tool that allows users to mash up disparate data sources by creating abstracted target schemas that are populated based on examples provided by the user. Mashroom is another end-user mashing application; it is based on the nested relational model and allows users to iteratively construct mashups through continuous refinement [78].

This study focuses on proactive mashups, namely, websites that are devoted to the mashup process. The advantage of using these preexisting mashup websites is that hundreds of different mashups are available at these sites, supporting differing levels of dynamic interaction with the user.

C. Diagnosticity

Diagnosticity is a well-documented research stream in the consumer behavior literature. This research examines users' beliefs about the usefulness of particular information for making a decision [53]. The principles of diagnosticity [22], [50] suggest that the likelihood that information is used for decision making is: 1) positively associated with the accessibility of the information in memory; 2) positively associated with the diagnosticity/usefulness of the information in memory; and 3) negatively associated with the diagnosticity/usefulness and accessibility of alternative information [22]. Understanding diagnosticity in DSS domains is important because, without a significant level of diagnosticity, a DSS is less likely to effectively support the decision-making process.

Jiang and Benbasat [35] examined diagnosticity in an eCommerce context. They defined diagnosticity as a consumer's perception of how helpful a website is in fostering understanding of the products being listed. They found that presentation format influences perceived diagnosticity. For example, a virtual product experience (VPE), in which consumers are empowered with visual and functional control [35], was shown to increase perceived diagnosticity. In addition, product presentation in a video-with-narration format was also found to increase diagnosticity as compared to video-without-narration or a static-picture format [35]. Researchers have also examined the impact of diagnosticity on user confidence in the decision through postpurchase product evaluations. Kempf and Smith [39] indicate that consumers are more likely to be confident in their decisions if their product experience is more diagnostic. More recently, researchers have found that perceived diagnosticity of the website positively influences users' confidence calibration and users' intention to purchase [31].

D. Consumer Confidence

Confidence is another important construct essential for understanding the impact of a DSS on the decision-making process. Confidence is generally described as a state of being certain either that a decision is correct or that a chosen course of action is the best or most effective. Consumer confidence has substantial practical implications, as an individual's confidence in a belief or decision has been shown to influence one's decision process [18]. Researchers have found that confidence is positively related to decision satisfaction [41].

Koriat and Goldsmith [43] found that confidence affects an individual's memory processes, as their subjective confidence determines whether they are willing to report information from memory. Similarly, Russo and Shoemaker [66] indicate that one's confidence in the quality of a particular decision can affect both the selection and implementation stages of the decision-making process. Both Koriat and Goldsmith [43] and Russo and Shoemaker [66] describe situations of underconfidence, which can significantly decrease a decision maker's willingness to act on a decision.

Another problem in consumer decision situations is overconfidence. The overconfidence effect is a well-established bias from psychology in which someone's subjective confidence in their judgments is reliably greater than their objective accuracy, particularly when confidence is relatively high [12], [60]. Researchers have found that people tend to exhibit greater overconfidence when the task is difficult [40], [68], [82]. Thus, overconfidence can be a significant problem in an unstructured decision domain such as eCommerce. Both overconfidence and underconfidence are problems in DSS design [37].

III. HYPOTHESIS DEVELOPMENT

Dynamic interaction research suggests that the ability to use a decision tool to iteratively develop and evaluate solutions for unstructured problems can impact a wide variety of decision outcomes [9]. This is similar to the phase theorem from decision science, which identifies the distinct phases that individuals go through when solving complicated or unstructured problems: 1) problem identification; 2) assimilating necessary information; 3) developing possible solutions; 4) solution evaluation; and 5) solution selection [13], [79]. Individuals iteratively repeat these phases and compare new information to their current knowledge of the decision domain [7], [54].

Researchers have suggested that, when developing mashups, users work iteratively on the data as they get to know the data better over time [34]. This process maps directly to phases 2, 3, and 4 of the phase theorem. Prior research has found that the ability to iteratively use a decision tool to evaluate alternatives has a significant influence on perceived reliability and perceived usefulness (e.g., [9]). An antecedent construct similar to perceived reliability is diagnosticity, which refers to the degree to which retrieved information is useful to decision makers in developing reliable judgments [53]. In the domain of eCommerce, diagnosticity can be further defined as "the extent to which a consumer believes that the shopping experience is helpful to evaluate a product" [35, p. 111]. In general,
diagnosticity is high whenever the consumer feels that the
information allows him or her to categorize the product/service
clearly into one group (e.g., high quality or low quality) [11].
As such, it is hypothesized that the ability to inclusively,
incrementally, and iteratively evaluate mashup information
through dynamic interaction will positively influence perceived
diagnosticity.
H1: The user’s ability to iteratively develop the mashup
through dynamic interaction will have a positive influ-
ence on perceived diagnosticity.
The influence of diagnosticity on confidence and satisfaction is well documented. For example, Lynch et al. [50] found that the extent to which a decision maker can evaluate a product will determine their confidence in their evaluation. Similarly, Kempf and Smith [39] found that the more diagnostic (in terms of the amount of information available) an individual's product evaluation is, the more confident they are in their decision. As such, these higher perceptions of diagnosticity are believed to strengthen a decision maker's beliefs in their decision [35]. Researchers have observed that an interface with greater diagnosticity provides the user with more information cues and thus a better understanding of product information, enhancing the user's cognitive evaluation of the product [35]. Research also suggests that confidence is positively related to the quality of the decision [41]. Koriat and Goldsmith [43] found that participants' willingness to report answers was based upon their confidence in the accuracy of those answers and thus influenced the participants' overall accuracy. In eCommerce, confidence in item attributes [27] and item choice [32] is related to consumer product satisfaction. In other words, if the consumer is able to thoroughly evaluate a product, satisfaction with the product should be inherently present in the user's level of confidence in their selection of the product.

H2: The perceived diagnosticity of the mashup will have a positive influence on the user's decision confidence.

Initially, researchers postulated that decision makers execute the steps of the phase theorem in a sequential, linear fashion [79]. Later, it was discovered that this is only true in certain decision domains. In structured decision domains that have a definable "right" solution, decision makers do execute the phase theorem linearly, much like a decision tree [13], [79]. However, in unstructured domains that contain outcome uncertainty, the decision maker iterates through steps 2, 3, and 4 of the phase theorem by assimilating new information, developing alternative solutions, and comparing the alternatives [54]. This process repeats until one of the following occurs: 1) the decision maker experiences information overload and cannot assimilate any more information, or 2) a time constraint on the decision is reached, requiring the decision to be made [54]. In the domain of eCommerce, the decision maker experiences uncertainty with purchasing decisions because there is a risk that the seller is being untruthful about the quality of the product [28]. As such, it is hypothesized that the mashup's ability to track with the user's iterative decision-making process through dynamic interaction will have a positive influence on the user's intention to use the mashup.

H3: The user's ability to iteratively develop the mashup through dynamic interaction will have a positive influence on their intention to use the mashup.

Additionally, diagnosticity (the amount of information a decision maker possesses) is believed to influence a user's intention to use the information. Accessibility–diagnosticity research states that the probability that information is used for decision making is influenced by the following: 1) the accessibility of the information in memory; 2) the accessibility of alternative information in memory; and 3) the diagnosticity of the information compared to alternative information [22]. If the information is more diagnostic, it is more likely to be used in decision making. In eCommerce domains, a website's diagnosticity has been conceptualized as a cognitive belief in the website, alongside other beliefs such as compatibility and enjoyment [35]. Therefore, if a website has higher perceived diagnosticity and is thus more helpful to the consumer (by providing more information) when evaluating a product, the consumer is more likely to use the website.

H4: The diagnosticity of the mashup will have a positive influence on the user's intention to use the mashup.

A participant's willingness to use a tool is influenced by their confidence in the accuracy of the information provided by the tool and, thus, their confidence in their decision when using the tool [43]. In purchasing decisions, confidence in item attributes [27] and item choice [32] is related to consumer product satisfaction. When a consumer is able to thoroughly evaluate a product, they are more likely to experience confidence in their decision and thus are more likely to use the decision tool that improves their decision confidence [27], [32], [43]. Therefore, it is hypothesized that increased confidence will have a positive influence on the user's intention to use the mashup. The proposed research model for this study is shown in Fig. 2.

H5: Increased confidence in the user's decision will have a positive influence on their intention to use the mashup.

Fig. 2. Research model.
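The iterative traversal of phase-theorem steps 2 through 4 described above (assimilate new information, develop alternatives, compare them, and stop on information overload or a time constraint [54]) can be sketched as a simple loop. The capacity limit, time budget, information sources, and scoring function below are hypothetical illustrations, not elements of the study.

```python
# Sketch of the phase-theorem iteration for unstructured decisions:
# repeat steps 2-4 until a stopping rule (overload or time limit) fires,
# then select the best-ranked alternative (step 5).

def iterate_phases(sources, score, capacity=5, time_budget=3):
    """Return the top alternative found before a stopping rule fires."""
    known, alternatives = [], []
    for round_no, batch in enumerate(sources, start=1):
        known.extend(batch)                                    # step 2: assimilate
        alternatives = sorted(known, key=score, reverse=True)  # steps 3-4: develop and compare
        if len(known) >= capacity:      # stopping rule 1: information overload
            break
        if round_no >= time_budget:     # stopping rule 2: time constraint reached
            break
    return alternatives[0]              # step 5: solution selection

# Toy usage: each round reveals candidate laptops with a preference score.
rounds = [[("laptop_a", 0.4)], [("laptop_b", 0.9), ("laptop_c", 0.6)]]
best = iterate_phases(rounds, score=lambda item: item[1])
print(best[0])  # prints laptop_b
```

The two stopping rules correspond directly to the overload and time-constraint conditions the text attributes to [54]; everything else is illustrative scaffolding.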
IV. EXPERIMENT DESIGN

TABLE I
EXPERIMENT DESIGN
To evaluate the research model shown in Fig. 2, a 4 × 1
online research experiment was designed. In the experiment,
subjects made a consumer-purchasing decision using an online
decision support mashup and then answered survey questions
about their perceptions of the decision process.
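The 4 x 1 design's random routing of subjects to conditions can be sketched as follows. The stratum labels and tool names below are illustrative placeholders, not the actual sites used in the experiment.

```python
# Sketch of random assignment: each participant is routed to one of eight
# mashup tools, two per dynamic-interaction stratum (see Table I).
import random

STRATA = {
    "single_seller_store": ["tool_1", "tool_2"],
    "multiple_sellers": ["tool_3", "tool_4"],
    "multiple_sellers_iterative": ["tool_5", "tool_6"],
    "multiple_sellers_iterative_incremental": ["tool_7", "tool_8"],
}
TOOLS = [tool for pair in STRATA.values() for tool in pair]

def assign(participant_id, seed=42):
    """Deterministically (per seed) pick one of the eight tools."""
    rng = random.Random(seed * 100_000 + participant_id)
    return rng.choice(TOOLS)

assignments = [assign(pid) for pid in range(114)]  # 114 respondents, as in the study
print(len(assignments))  # prints 114
```

Seeding per participant keeps the assignment reproducible, which is convenient when the survey platform must route a returning respondent to the same condition.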
A. eCommerce Mashups
Web 2.0 is a new trend for Web applications that emphasizes
services, participation, scalability, remixability, and collective
intelligence. Since Web 2.0 applications facilitate user involve-
ment in contributing to information sources, there has been
a vast increase in the amount of information and knowledge
sources on the Web. The amount of information now available
on the Web can lead to information overload and an inability to apply the best information available to a particular decision-making task. eCommerce is a good example of where this problem is prevalent, as consumers desire information access while making decisions, and such information access produces a more satisfied consumer [45].

Developers have begun building eCommerce decision support mashups to address information overload and to make relevant information available to the consumer. eCommerce mashups can be categorized by the decision they are designed to support. The first decision is "what to buy?" 109things.com is a mashup interface for Amazon.com designed to support this decision; it allows users to select numerous items and then compare them to one another. The second decision is "where to buy?" Ugux.com/shopping is a mashup that allows users to select items and then compare Amazon.com and EBay.com in terms of price, warranty, and shipping. Recently, developers have begun designing mashups that address both of these decisions. Earlymisser.com is one such example and enables the user to evaluate multiple products from multiple sellers. In addition to classifying eCommerce mashups by the question they address, they can also be classified by the mashup functionality they provide.

A review of 568 eCommerce mashups, obtained from programmableweb.com/mashups, revealed three different "mashability" traits prevalent in eCommerce mashups: including multiple sellers, incremental comparison, and the ability to iteratively incorporate different decision attributes. Depending on the context and what the mashup is designed for, mashups can contain one, two, or all three of these traits. Bestsportdeals.com is an example of a mashup that includes multiple sellers. A more complicated mashup is Pricegrabber.com, which includes both multiple sellers and iterative remashability. Finally, Mysimon.com and Shopper.com are two of the most complex mashups, including multiple sellers, iterative remashability, and incremental comparison mashup functionality. Shopper.com's comparison mashup functionality allows the user to view the similarities and differences between multiple products. Based on the review of the 568 eCommerce mashups at programmableweb.com, eight different mashups were selected (two mashups for each of the four substrata of the experiment), as illustrated in Table I.

The first set of mashup tools consisted of mashuplike store interfaces that lack dynamic interaction; they are not true mashups because only single sellers are included in the user interface. The second grouping consisted of mashups that include multiple sellers. The next grouping consisted of mashups that include multiple sellers and have iterative functionality. Finally, the last grouping consisted of mashups that include multiple sellers, have iterative remashability, and provide incremental comparison mashup functionality. Two real-world mashup sites were included in each of the experiment's strata. Table I lists the mashups and the corresponding dynamic interaction functionality of each substratum.

B. Measurement Scales

The measures for diagnosticity were taken from Jiang and Benbasat's [35] study on VPE. Items on confidence were derived from the work of Hess et al. [31] on calibration and confidence in online shopping environments. Items for intention were taken from the well-established intention scale that is part of the technology acceptance model [13]. The actual measurement items for these constructs are listed in Table II.

TABLE II
DERIVED MEASUREMENT ITEMS FOR DIAGNOSTICITY, CONFIDENCE, SATISFACTION, AND INTENTION

Table III contains the measurement items that were derived for dynamic interaction from Beemer and Gregg [9]. Since dynamic interaction is a relatively new construct in the IS literature, and because this study applies the construct to a new domain (eCommerce mashups), two pretests were conducted to refine and validate the derived measurement items. The pretest participants consisted of ten IT professionals.

TABLE III
DERIVED DYNAMIC INTERACTION SCALE ITEMS

The purpose of the first pretest was to evaluate internal validity by having participants perform a substratum clustering procedure to group like items. Participants were given three envelopes, each containing the name and definition of one of dynamic interaction's three substrata. Next, they were given 15 index cards (each containing one of the scale items) and were instructed to match each item with one of the substrata definitions. A fourth envelope, labeled "does not fit anywhere," was provided so that participants could discard items that they felt did not match anywhere. The categorization data were then cluster analyzed by placing in the same cluster the items that six or more respondents placed in the same category [10]. "The clusters are considered to be a reflection of the domain substrata for each construct and serve as a basis of assessing coverage, or representativeness, of the item pools" [17, p. 325]. Items 3, 4, 6, 11, and 15 did not belong to any cluster and were thus removed. This left the inclusive, incremental, and iterative substrata with three, four, and three items, respectively.

Researchers suggest that, when modeling second-order factor models (as dynamic interaction is), each first-order construct should have an equal number of indicators [14], [16]. Therefore, the incremental cluster was refined by dropping item seven because it was the lowest loading indicator for this substratum.
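The clustering rule from the first pretest (an item joins a substratum cluster only when six or more of the ten raters sorted it into the same category [10]) can be sketched directly. The rating data below are fabricated for illustration and do not reproduce the study's actual card sorts.

```python
# Sketch of the pretest clustering rule: keep an item in a substratum
# cluster only if at least `threshold` of the raters agreed on it;
# otherwise treat it like the "does not fit anywhere" envelope.
from collections import Counter

def cluster_items(sorts, threshold=6):
    """Map each item to its majority category, or None below threshold."""
    clusters = {}
    for item, categories in sorts.items():  # categories: one label per rater
        label, votes = Counter(categories).most_common(1)[0]
        clusters[item] = label if votes >= threshold else None
    return clusters

# Toy usage: item_1 is consistently sorted; item_2 is ambiguous and dropped.
sorts = {
    "item_1": ["inclusive"] * 8 + ["iterative"] * 2,
    "item_2": ["inclusive"] * 4 + ["incremental"] * 3 + ["iterative"] * 3,
}
print(cluster_items(sorts))  # prints {'item_1': 'inclusive', 'item_2': None}
```

With ten raters, a threshold of six is a simple majority-agreement criterion; the study's item removals (items 3, 4, 6, 11, and 15) correspond to the `None` case here.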
TABLE IV
RESPONSE BIAS ANALYSIS AND DESCRIPTIVE STATISTICS

The purpose of the second pretest was to evaluate the substrata coverage of the selected mashup tools. To lighten their workload, the pretest participants were split into two equal groups, each asked to evaluate four of the eight mashups. The first group was asked to evaluate mashups 1-4 from Table I, using the same definitions of inclusive, incremental, and iterative from the first pretest, to identify the functionality that the mashups possessed. The second group was asked to perform the same task but was given mashups 5-8 instead. All of the classifications were aggregated; overall, the pretest participants classified the mashups with 80% accuracy in accordance with Table I, which suggests that the experiment substrata classifications were properly assigned in the experiment design.

C. Data Collection Procedures

To conduct the experiment, a hypothetical scenario was created for purchasing a laptop. This decision domain was selected for the following reasons: 1) it is a complex domain that contains several different specifications; 2) it contains uncertainty, as with any online purchasing decision; and 3) it includes many mashup sites for purchasing computers. An invitation was sent to 450 undergraduate and graduate students. Because of the widespread use of computers in college curricula, it can be assumed that the students are potential laptop consumers. Of the 450 invitations sent, there were 114 respondents, yielding a 25.3% response rate. There were 73 male respondents and 41 female respondents, and the average age of all respondents was 27. Participants were randomly sent to one of the eight mashup tools in Table I and were given a scenario in which they were asked to use the mashup tool to find a laptop for under $650, with 4 GB of RAM and a 2-GHz processor. After using the mashup decision tool, the participants were given a survey composed of the revised measurement scale items from Tables II and III. Each item was measured on a seven-point Likert scale.

Three steps were taken to first prevent and then evaluate the existence of common method bias. First, the online experiment was designed to guarantee response anonymity, and the measurements of predictor and criterion variables were separated [63]. Second, at the suggestion of Podsakoff and Organ [64], a post hoc factor analysis, also known as Harman's single-factor test, was performed. If common method bias were present, we would expect a single factor to emerge from the factor analysis, accounting for most of the covariance in the independent and criterion variables [4]. The factor analysis extracted four factors with eigenvalues greater than one, which align with the four constructs in the hypothesis model (dynamic interaction, diagnosticity, confidence, and intention). Additionally, no general factor was apparent in the unrotated factor structure, with the first factor accounting for less than 33% of the variance. Last, to determine whether response bias was present among early and late responders, the means for each construct were calculated for the first 30 and the last 30 respondents. As illustrated in Table IV, there is no significant difference between early and late responders. Furthermore, three control variables were collected (age, gender, and graduate versus undergraduate status), but none showed significant differences between groupings. Thus, the proactive design of the questionnaire, the results of the post hoc factor analysis, and the analysis of the early-late responders all suggest that common method bias is not a great concern in this study.

V. DATA ANALYSIS

Visual PLS, version 1.04b, a partial least squares (PLS) structural equation modeling (SEM) software package, was used to evaluate the hypothesis model. The decision to use PLS was based upon several considerations. PLS can be used to estimate models that use both reflective and formative indicators [14], allows for modeling latent constructs under conditions of nonnormality [36], and is appropriate for small-to-medium sample sizes [15]. A common heuristic for determining the appropriate sample size for a PLS model is to take the dependent construct with the largest number of constructs impacting it and then multiply the number of impacting paths by ten [15]. Using this heuristic, with intention being influenced by the three substrata of dynamic interaction (inclusive, incremental, and iterative), diagnosticity, and confidence, the minimum sample size to evaluate the hypothesis model for this study would be 50. Therefore, the sample size of 114 that was collected is adequate for this study.

The psychometric properties of the research model were evaluated by examining item loadings, internal consistency, and discriminant validity. Researchers suggest that item loadings and internal consistencies greater than 0.70 are considered acceptable [1]. As can be seen from the shaded cells in Table V, all item loadings surpass this threshold. Internal consistency is evaluated by a construct's composite reliability score. The composite reliability scores are located in the leftmost column of Table VI and are more than adequate for each construct. There are two parts to evaluating discriminant validity. First, each item should load higher on its respective construct than on the other constructs in the model.
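The reliability checks described above can be sketched from standardized item loadings. The loadings below are fabricated for illustration (not the study's values), and the formulas are the conventional composite-reliability and average-variance-extracted definitions, which may differ in minor detail from what Visual PLS reports.

```python
# Sketch of construct-level reliability checks from standardized loadings:
#   CR  = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
#   AVE = mean of squared loadings
# Loadings here are hypothetical placeholders.

def composite_reliability(loadings):
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)  # indicator error variances
    return squared_sum / (squared_sum + error)

def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

confidence_loadings = [0.85, 0.88, 0.82]  # hypothetical standardized loadings
cr = composite_reliability(confidence_loadings)
print(all(l > 0.70 for l in confidence_loadings), round(cr, 2), round(ave(confidence_loadings), 2))
# prints: True 0.89 0.72
```

The 0.70 cutoffs for loadings and internal consistency [1] then reduce to simple comparisons against these computed values, construct by construct.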
TABLE V
LOADINGS AND CROSS LOADINGS

TABLE VI
INTERNAL CONSISTENCY AND DISCRIMINANT VALIDITY

Fig. 3. PLS SEM results.

TABLE VII
SUMMARY OF HYPOTHESIS TESTS

Second, the average variance extracted (AVE) for each construct should be higher than the interconstruct correlations [1]. In Table V, comparing the shaded cells to the nonshaded cells shows that all items load higher on their respective constructs than on the other constructs in the research model. Likewise, in Table VI, comparing the shaded cells to the nonshaded cells shows that the AVE for each construct is higher than the interconstruct correlations without exception. Overall, these two comparisons suggest that the model has sufficient discriminant validity.

The results of the PLS SEM analysis are shown in Fig. 3. As prescribed by Beemer and Gregg [9], dynamic interaction was modeled as a formative second-order construct using the hierarchical component model and thus has an R2 of 1.0 because of the repeated indicators used in this approach [14]. Diagnosticity, confidence, and intention had R2 values of 0.51, 0.33, and 0.61, respectively. This means that 51% of the variation in diagnosticity is explained by dynamic interaction; 33% of the variation in confidence is explained by dynamic interaction and diagnosticity; and 61% of the variation in intention is explained by dynamic interaction, diagnosticity, and confidence. As Fig. 3 and Table VII show, four of the five hypotheses were supported. Hypotheses 1, 2, and 4 are significant at 0.01, and hypothesis 5 is significant at 0.05. The relationship between dynamic interaction and intention was not directly supported. Instead, the

VI. POST HOC ANALYSIS OF DECISION QUALITY AND CALIBRATION

A shortcoming of the current study and of Beemer and Gregg [7] is that the consequential construct "intention" is used in both research models to evaluate the impact of dynamic interaction. Intention is merely an opinion that the user holds as to whether they intend to use the system. Furthermore, even if intention were captured in terms of actual use, there would still remain the question of dynamic interaction's actual effectiveness in terms of improving decision quality. The alignment between an individual's decision confidence and the quality of their decision is referred to as calibration [5], [40], which has received limited attention in eCommerce research [31]. To address the shortcoming of using an opinion-oriented consequential construct in "intention," and to evaluate the existence of calibration, a post hoc analysis was performed to evaluate the significance of the relationship between confidence and decision quality.

When completing the survey, the respondents were required to report the make, model, and price of the laptop that they selected during the decision process. Google Products (www.google.com/products) was used to retrieve the amount of RAM and processor speed for each laptop; then, Amazon (www.amazon.com) was used to retrieve consumer product reviews for each laptop. On Amazon, consumers can rate each product from one to five stars, and Amazon reports the average stars for each product. The average stars for each laptop were recorded and then aggregated for each mashup category. As a result, the following data points were collected for each survey response: memory (RAM), processor speed (in gigahertz), cost, and consumer review, which were aggregated to represent decision quality, as shown in Fig. 4.

To evaluate calibration (the alignment between an individual's decision confidence and the quality of their decision), a post hoc analysis was performed. PLS (Visual PLS, version
results of this study suggest that the influence of dynamic 1.04b) was used to evaluate the relationship between the four
interaction on intention is fully mediated by diagnosticity and confidence scale items from Table II and the four decision
confidence. quality data points shown in Fig. 4. The relationship between
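The logic of this post hoc calibration check can be illustrated with a small sketch. The study itself used Visual PLS on the four confidence items and four quality indicators; the sketch below substitutes made-up respondent data and a plain Pearson correlation between a mean confidence score and an aggregate (z-scored) quality score, so every number in it is hypothetical:

```python
# Hypothetical data: (four 7-point confidence items, RAM GB, GHz, avg stars, price $).
# None of these values come from the study; they only illustrate the computation.
respondents = [
    ((6, 7, 6, 6), 8, 2.6, 4.5, 900),
    ((5, 5, 6, 5), 6, 2.4, 4.0, 750),
    ((3, 4, 3, 3), 4, 2.0, 3.5, 650),
    ((2, 2, 3, 2), 2, 1.6, 3.0, 700),
]

def zscores(xs):
    """Standardize a sequence of values (population standard deviation)."""
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

def pearson(xs, ys):
    """Pearson correlation as the mean product of paired z-scores."""
    zx, zy = zscores(xs), zscores(ys)
    return sum(a * b for a, b in zip(zx, zy)) / len(zx)

# Mean of the four confidence items per respondent.
confidence = [sum(items) / len(items) for items, *_ in respondents]

# Aggregate decision quality: average of z-scored RAM, speed, and stars,
# with price negated so that a cheaper laptop counts as higher quality.
ram, ghz, stars, price = zip(*(r[1:] for r in respondents))
columns = [zscores(ram), zscores(ghz), zscores(stars),
           zscores([-p for p in price])]
quality = [sum(col[i] for col in columns) / len(columns)
           for i in range(len(respondents))]

calibration_r = pearson(confidence, quality)
print(round(calibration_r, 2))  # positive r = confidence tracks decision quality
```

On real data, a strongly positive correlation would indicate good calibration, i.e., respondents who were more confident also tended to choose objectively better laptops.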
82 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 43, NO. 1, JANUARY 2013

Fig. 4. Post hoc decision quality measure.

The relationship between confidence and decision quality was significant at the 95% confidence level. With dynamic interaction (indirectly) influencing confidence through diagnosticity, and with a significant relationship between confidence and decision quality, this suggests that incorporating dynamic interaction in eCommerce mashups increases decision quality.

VII. DISCUSSION AND LIMITATIONS

The purpose of this study was to evaluate the relationships between dynamic interaction, diagnosticity, confidence, and intention in the context of eCommerce mashups. Four of the five hypotheses were supported, providing support for dynamic interaction's nomological validity. The relationship between dynamic interaction and diagnosticity was supported. This provides evidence supporting the notion that the user's ability to combine information from multiple sources, and iteratively organize it, improves their ability to use the information in the decision process. The hypothesized relationships between diagnosticity, confidence, and intention were also supported, suggesting that improving diagnosticity improves the user's perception of their ability to make reliable judgments using the tool. The only hypothesis that was not supported was the relationship between dynamic interaction and intention. This suggests that the diagnosticity and confidence resulting from the dynamic interaction serve to increase intention, but not the dynamic interaction itself.

This study answered some important questions but raised some interesting ones as well. To date, only one other study is known to have evaluated dynamic interaction. Beemer and Gregg [9] developed the measurement scale for dynamic interaction and then evaluated its nomological validity by testing hypothesized relationships between perceived usefulness, perceived reliability, and intention. There are two significant limitations to the measurement scale of Beemer and Gregg [9], both of which are addressed by this study. The first limitation is that the decision domain used was not a business domain. This study showed that dynamic interaction is relevant for (eCommerce) business decisions. The second limitation is that only three of the potential constructs in dynamic interaction's nomological net were evaluated. This study expanded dynamic interaction's nomological net by evaluating consequential constructs from psychology (diagnosticity), decision science (confidence), and the IS literature (intention).

Both Beemer and Gregg [9] and this study found intention to be a significant indirect consequential construct of dynamic interaction. Future research could benefit from combining the research models from these two studies to evaluate the simultaneous relationships between dynamic interaction → perceived reliability, perceived usefulness, and diagnosticity → intention.

From an academic perspective, this study provides several significant contributions. To our knowledge, two of the five hypotheses that were evaluated have never been evaluated before: dynamic interaction → diagnosticity and dynamic interaction → intention. Another important contribution of this study was to further evaluate the role that dynamic interaction plays in decision support tools. Dynamic interaction plays a significant role in explaining variation in diagnosticity, confidence, and intention, which suggests that it may be an important IS construct with implications in other system domains.

To date, the majority of mashup literature has been practitioner oriented and focuses on extending the capabilities and functionality of this new technology [8]. However, the current body of mashup literature lacks insight into the underlying cognitive constructs that affect how the user assimilates and evaluates the information provided by the tool. The results of this study provide an initial insight into these underlying cognitive factors and suggest that the incorporation of an iterative use case (via dynamic interaction) indirectly fosters confidence and intention and improves decision quality. This provides insight for sellers on how to create mashup tools that are more likely to be used, given that the relationship between confidence and intention was supported. In the current eCommerce advertising model, website use can generate revenue even if no purchase is actually made when third-party sponsored links are deployed.

One limitation of this study was the assumption that the undergraduate and graduate student subjects had familiarity with the product domain of laptop computers. If a substantial number of the student subjects were not familiar with laptop computers, this could introduce underlying factors influencing confidence that were not controlled for in this study. For example, Herr et al. [30] and Park and Lee [61] found that customers with low product knowledge may be more inclined to use a comparison tool. However, the survey instrument contained a text box for comments or questions, and none of the respondents asked for clarification on the product domain, which suggests that this limitation did not have a substantial impact on the results of the study. Future research may overcome this limitation by examining dynamic interaction and mashups in a real-world setting.

A second limitation of the study is that users' perceptions of dynamic interaction were not corroborated with evidence from how they actually used the mashup tool. The purpose of this study was to determine how user perceptions of dynamic interaction influenced their perceptions of how helpful a mashup tool was in understanding and evaluating the products listed on the website and, in turn, how this influenced their confidence in their decisions and their intention to buy. As such, it was decided to use subjective measures to evaluate user perceptions of the dynamic interaction capabilities of the two tools they were using. However, it is also possible to evaluate how users actually interact with a mashup tool to see if their perceptions of dynamic interaction match the way they actually used the mashup tool. Future studies could have subjects use the mashup tool in a laboratory setting where their actual inclusion of data, incremental solutions, and iterative decision making could be observed and analyzed in comparison to their perceptions of the dynamic interaction capabilities of the mashup tools.
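The fully mediated pattern reported in the results (dynamic interaction reaching intention only through diagnosticity and confidence) can be sketched numerically. The path coefficients below are hypothetical placeholders, not the study's estimates; the point is only how an indirect effect is computed as the product of the coefficients along each mediating chain:

```python
# Hypothetical standardized path coefficients for the mediation structure
# suggested by the results (values are illustrative, not the paper's):
#   DI -> DIAG -> INT  and  DI -> DIAG -> CONF -> INT.
paths = {
    ("DI", "DIAG"): 0.71,    # dynamic interaction -> diagnosticity
    ("DIAG", "CONF"): 0.57,  # diagnosticity -> confidence
    ("DIAG", "INT"): 0.32,   # diagnosticity -> intention
    ("CONF", "INT"): 0.41,   # confidence -> intention
}

def indirect_effect(chain):
    """Multiply the coefficients along one mediated chain of constructs."""
    effect = 1.0
    for src, dst in zip(chain, chain[1:]):
        effect *= paths[(src, dst)]
    return effect

# With no supported direct DI -> INT path, DI's total effect on intention
# is the sum of its indirect effects over all mediating chains.
chains = [("DI", "DIAG", "INT"), ("DI", "DIAG", "CONF", "INT")]
total_indirect = sum(indirect_effect(c) for c in chains)
print(round(total_indirect, 3))
```

With these placeholder coefficients, dynamic interaction still moves intention noticeably even though its direct path is absent, which is the substantive claim behind full mediation.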
The post hoc analysis in this study observed a significant relationship between confidence and decision quality. This raises some interesting questions that could benefit from future research. It would be interesting to evaluate the role of dynamic interaction in the decision process from an objective perspective while observing the direct influences on decision quality.

VIII. CONCLUSION

Over the past few years, as eCommerce has grown into over a $100 billion industry [21], [51], a phenomenon described as Web 2.0 has emerged. Web 2.0 is a new trend for Web applications that emphasizes services, participation, scalability, remixability, and collective intelligence [58]. One important Web 2.0 technology being used in the eCommerce domain is mashups, and thus, it is important that researchers understand the factors that impact the use of these systems. The majority of mashup literature to date is practitioner oriented [8], which further motivates the need for academic evaluation of these tools, a need addressed by this study.

This study evaluated the consumer's use of mashups in an eCommerce decision domain. Existing measurement scales for dynamic interaction, diagnosticity, confidence, and intention were used to evaluate the role that dynamic interaction plays in the decision-making process. The measurement scale for dynamic interaction was then applied to the domain of eCommerce mashups, and the results of this study provide strong support for the development of inclusive, incremental, and iterative functionality in eCommerce decision support tools. The empirical evaluations conducted in this paper revealed that, for eCommerce decision support, dynamic interaction has a significant impact on diagnosticity, confidence, and intention. A post hoc decision quality analysis was performed to detect whether over- or underconfidence was prevalent in the decision domain when using the mashup tools. The results were consistent with the evaluation of the hypothesis model, in that as dynamic functionality increased, so did decision quality. As such, this study also provides evidence that dynamic interaction is a legitimate IS construct that can be used by future researchers working on unstructured decision support domains.

REFERENCES

[1] R. Agarwal and E. Karahanna, "Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage," MIS Quart., vol. 24, no. 4, pp. 665–695, Dec. 2000.
[2] M. Albinola, L. Baresi, M. Carcano, and S. Guinea, "Mashlight: A lightweight mashup framework for everyone," in Proc. Int. World Wide Web Conf., Madrid, Spain, 2009.
[3] V. Arnold, N. Clark, P. A. Collier, S. A. Leech, and S. G. Sutton, "The differential use and effect of knowledge-based system explanations in novice and expert judgment decisions," MIS Quart., vol. 30, no. 1, pp. 79–97, Mar. 2006.
[4] P. S. Aulakh and E. F. Gencturk, "International principal–agent relationships—Control, governance, and performance," Ind. Market. Manage., vol. 29, no. 6, pp. 521–538, Nov. 2000.
[5] J. V. Baranski and W. M. Petrusic, "The calibration and resolution of confidence in perceptual judgments," Perception Psychophys., vol. 55, no. 4, pp. 412–428, Apr. 1994.
[6] B. A. Beemer, "Dynamic interaction: A measurement development and empirical evaluation of knowledge based systems and Web 2.0 decision support mashups," Ph.D. dissertation, Univ. Colorado, Denver, CO, May 2010.
[7] B. A. Beemer and D. G. Gregg, "Advisory systems to support decision making," contributed chapter in International Handbook on Decision Support Systems, 2008.
[8] B. A. Beemer and D. G. Gregg, "Mashups: A literature review and classification framework," Future Internet J., vol. 1, no. 1, pp. 59–87, Dec. 2009.
[9] B. A. Beemer and D. G. Gregg, "Dynamic interaction in knowledge based systems: An exploratory investigation and empirical evaluation," Decision Support Syst., vol. 49, no. 4, pp. 386–395, Nov. 2010.
[10] A. Bhattacherjee, "Individual trust in online firms: Scale development and initial test," J. Manage. Inf. Syst., vol. 19, no. 1, pp. 211–241, 2002.
[11] P. F. Bone, "Word-of-mouth effects on short-term and long-term product judgments," J. Bus. Res., vol. 32, no. 3, pp. 213–223, Mar. 1995.
[12] N. Brewer, A. Keast, and A. Rishworth, "The confidence-accuracy relationship in eyewitness identification: The effects of reflection and disconfirmation on correlation and confidence," J. Exp. Psychol. Appl., vol. 8, no. 1, pp. 44–56, Mar. 2002.
[13] O. Brim, G. C. David, C. Glass, D. E. Lavin, and N. Goodman, Personality and Decision Processes. Stanford, CA: Stanford Univ. Press, 1962.
[14] W. Chin, "Partial least squares for researchers: An overview and presentation of recent advances using the PLS approach," in Proc. Int. Conf. Inf. Syst., Brisbane, Australia, 2000, Lecture Slides.
[15] W. Chin and P. Newsted, "Structural equation modeling analysis with small samples using partial least squares," in Statistical Strategies for Small Sample Research. Thousand Oaks, CA: Sage, 1999, pp. 307–341.
[16] W. Chin, B. Marcolin, and P. Newsted, "A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and voice mail emotion/adoption study," in Proc. Int. Conf. Inf. Syst., Cleveland, OH, 1996.
[17] F. D. Davis, "Perceived usefulness, perceived ease of use, and user acceptance of information technology," MIS Quart., vol. 13, no. 3, pp. 319–340, Sep. 1989.
[18] R. M. Dawes, "Confidence in intellectual vs. confidence in perceptual judgments," in Similarity and Choice: Papers in Honor of Clyde Coombs. Bern, Switzerland: Hans Huber, 1980, pp. 327–345.
[19] R. Ennals and M. Garofalakis, "MashMaker: Mashups for the masses," in Proc. Int. Conf. Manage. Data, 2007, pp. 1116–1118.
[20] R. Ennals and D. Gay, "User-friendly functional programming for mashups," in Proc. Int. Conf. Funct. Program., 2007, pp. 223–234.
[21] D. C. Fain and J. O. Pedersen, "Sponsored search: A brief history," in Proc. 2nd Workshop Sponsored Search Auctions, Ann Arbor, MI, 2006.
[22] J. M. Feldman and J. G. Lynch, "Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior," J. Appl. Psychol., vol. 73, no. 3, pp. 421–435, 1988.
[23] G. Forslund, "Toward cooperative advice-giving systems: A case study in knowledge based decision support," IEEE Expert, vol. 10, no. 4, pp. 56–62, Aug. 1995.
[24] V. Furtado, Developing Interaction Capabilities in Knowledge-Based Systems via Design Patterns, 2004.
[25] D. Gefen, E. Karahanna, and D. W. Straub, "Trust and TAM in online shopping: An integrated model," MIS Quart., vol. 27, no. 1, pp. 51–90, Mar. 2003.
[26] D. Gefen, V. Rao, and N. Tractinsky, "The conceptualization of trust, risk and their relationship in electronic commerce: The need for clarifications," in Proc. Hawaii Int. Conf. Syst. Sci., 2003, p. 10.
[27] M. M. Goode, "Predicting consumer satisfaction from CD players," J. Consum. Behav., vol. 1, no. 4, pp. 323–335, Jun. 2002.
[28] D. Gregg and S. Walczak, "Auction Advisor: Online auction recommendation and bidding decision support system," Decision Support Syst., vol. 41, no. 2, pp. 449–471, Jan. 2006.
[29] G. Häubl and V. Trifts, "Consumer decision making in online shopping environments: The effects of interactive decision aids," Market. Sci., vol. 19, no. 1, pp. 4–21, Jan. 2000.
[30] P. Herr, F. Kardes, and J. Kim, "Effects of word-of-mouth and product attribute information on persuasion: An accessibility–diagnosticity perspective," J. Consum. Res., vol. 17, no. 4, pp. 454–462, Mar. 1991.
[31] T. J. Hess, F. Tang, and J. D. Wells, "Confidence and confidence with online shopping," in Proc. 9th SIG IS Cognit. Res. Exchange Workshop, Saint Louis, MO, 2009.
[32] M. Heitmann, D. R. Lehmann, and A. Herrmann, "Choice goal attainment and decision and consumption satisfaction," J. Market. Res., vol. 44, no. 2, pp. 234–250, May 2007.
[33] T. Hornung, K. Simon, and G. Lausen, "Mashing up the deep Web," in Proc. Int. Conf. Web Inf. Syst. Technol., 2008, pp. 58–66.
[34] D. Huynh, R. Miller, and D. Karger, "Potluck: Data mash-up tool for casual users," in Proc. Int. Conf. World Wide Web, 2007, pp. 737–746.
[35] Z. Jiang and I. Benbasat, "Virtual product experience: Effects of visual and functional control on perceived diagnosticity and flow in electronic shopping," J. Manage. Inf. Syst., vol. 21, no. 3, pp. 111–147, 2004.
[36] J. Karimi, T. Somers, and A. Bhattacherjee, "The impact of ERP implementations on business process outcomes: A factor-based study," J. Manage. Inf. Syst., vol. 24, no. 1, pp. 101–134, 2007.
[37] G. M. Kasper, "A theory of decision support system design for user calibration," Inf. Syst. Res., vol. 7, no. 2, pp. 215–232, Jun. 1996.
[38] G. Kauffman, "A theory of symbolic representation in problem solving," J. Mental Imag., vol. 9, no. 2, pp. 51–69, 1985.
[39] D. Kempf and R. E. Smith, "Consumer processing of product trial and the influence of prior advertising: A structural modeling approach," J. Market. Res., vol. 35, no. 3, pp. 325–338, Aug. 1998.
[40] G. Keren, "Calibration and probability judgments: Conceptual and methodological issues," Acta Psychol., vol. 77, no. 3, pp. 217–273, Oct. 1991.
[41] B. Kidwell, D. Hardesty, and T. Childers, "Emotional calibration effects on consumer choice," J. Consum. Res., vol. 35, no. 4, pp. 611–621, Dec. 2008.
[42] C. N. Kim, K. H. Yang, and J. Kim, "Human decision-making behavior and modeling effects," Decision Support Syst., vol. 45, no. 3, pp. 517–527, Jun. 2008.
[43] A. Koriat and M. Goldsmith, "Monitoring and control processes in the strategic regulation of memory accuracy," Psychol. Rev., vol. 103, no. 3, pp. 490–517, Jul. 1996.
[44] A. Koschmider, V. Torres, and V. Pelechano, "Elucidating the mashup hype: Definitions, challenges, methodical guide and tools for mashups," in Proc. Int. World Wide Web Conf., New York, 2009.
[45] O. Kwon, "Multi-agent system approach to context-aware coordinated Web services under general market mechanism," Decision Support Syst., vol. 41, no. 2, pp. 380–399, Jan. 2006.
[46] R. J. Lewicki, D. J. McAllister, and R. J. Bies, "Trust and distrust: New relationships and realities," Acad. Manage. Rev., vol. 23, no. 3, pp. 438–458, 1998.
[47] F. L. Lewis, Applied Optimal Control and Estimation. Upper Saddle River, NJ: Prentice-Hall, 1992.
[48] H. C. Lau and W. T. Tsui, "An iterative heuristics expert system for enhancing consolidation shipment process in logistics operations," Intell. Inf. Process., vol. 228, pp. 279–289, 2006.
[49] M. Lin and S. Lee, "Incremental update on sequential patterns in large databases by implicit merging and efficient counting," Inf. Syst., vol. 29, no. 5, pp. 385–404, Jul. 2004.
[50] J. Lynch, H. Marmorstein, and M. Weigold, "Choices from sets including remembered brands: Use of recalled attributes and prior overall evaluations," J. Consum. Res., vol. 15, no. 2, pp. 169–184, Sep. 1988.
[51] K. Matzler, A. Wurtele, and B. Renzl, "Dimensions of price satisfaction: A study in the retail banking industry," Int. J. Bank Market., vol. 24, no. 4, pp. 216–231, 2006.
[52] D. Merrill, "Mashups: The new breed of Web app," IBM Web Architecture Technical Library, 2006.
[53] G. Menon, P. Raghubir, and N. Schwarz, "Behavioral frequency judgments: An accessibility–diagnosticity framework," J. Consum. Res., vol. 22, no. 2, pp. 212–228, Sep. 1995.
[54] H. Mintzberg, D. Raisinghani, and A. Theoret, "The structure of 'unstructured' decision processes," Admin. Sci. Quart., vol. 21, no. 2, pp. 246–275, Jun. 1976.
[55] S. Murugesan, "Understanding Web 2.0," IT Prof., vol. 9, no. 4, pp. 34–41, Jul./Aug. 2007.
[56] T. Nestler, "Towards a mashup-driven end-user programming of SOA-based applications," in Proc. Int. Conf. Inf. Integr. Web-Based Appl. Serv., 2008, pp. 551–554.
[57] E. Obadia, Web 2.0 Marketing, 2007. [Online]. Available: http://marketing20.blogspot.com/2007/01/eCommerce-revenue-over-100-billion.html
[58] T. O'Reilly, What Is Web 2.0, 2005. [Online]. Available: http://oreilly.com/web2/archive/what-is-web-20.html
[59] T. O'Reilly and J. Musser, "Web 2.0 principles and best practices," O'Reilly Radar, 2006.
[60] G. Pallier, "The role of individual differences in the accuracy of confidence judgments," J. Gen. Psychol., vol. 129, no. 3, Jul. 2002.
[61] C. Park and T. Lee, "Information direction, website reputation and eWOM effect: A moderating role of product type," J. Bus. Res., vol. 62, no. 1, pp. 61–67, Jan. 2009.
[62] N. Pollock, "Knowledge management: Next step to competitive advantage—Organizational excellence," Program Manager, 2001.
[63] P. M. Podsakoff, S. B. MacKenzie, and J. Y. Lee, "Common method biases in behavioral research: A critical review of the literature and recommended remedies," J. Appl. Psychol., vol. 88, no. 5, pp. 879–903, Oct. 2003.
[64] P. M. Podsakoff and D. W. Organ, "Self-reports in organizational research: Problems and prospects," J. Manage., vol. 12, pp. 69–82, 1986.
[65] M. Raudsepp, Ideal Feedback Loop, 2007. [Online]. Available: http://en.wikipedia.org/wiki/File:Ideal_feedback_model.svg#filelinks
[66] J. Russo and P. Schoemaker, Confident Decision Making. London, U.K.: Piatkus, 1992.
[67] M. Sabbouh, J. Higginson, S. Semy, and D. Gagne, "Mashup scripting language," in Proc. Int. World Wide Web Conf., 2007, pp. 1305–1306.
[68] R. Shiller, Irrational Exuberance. Princeton, NJ: Princeton Univ. Press, 2000.
[69] R. E. Smith, "Integrating information from advertising and trial: Processes and effects on consumer response to product information," J. Market. Res., vol. 30, no. 2, pp. 204–219, May 1993.
[70] P. W. Smith, R. A. Feinberg, and D. J. Burns, "An examination of classical conditioning principles in an ecologically valid advertising context," J. Market. Theory Pract., vol. 6, no. 1, pp. 63–72, 1998.
[71] H. Sneed, "Integrating legacy software into a service oriented architecture," in Proc. Conf. Softw. Maintenance Reeng., 2006, pp. 3–14.
[72] E. D. Sontag, Mathematical Control Theory. New York: Springer-Verlag, 1998.
[73] J. Tatemura, A. Sawires, O. Po, S. Chen, K. Candan, D. Argrawal, and M. Goveas, "Mashup feeds: Continuous queries over Web services," in Proc. Int. Conf. Manage. Data, 2007, pp. 1128–1130.
[74] R. Tuchinda, P. Szekely, and C. A. Knoblock, "Building mashups by example," in Proc. Int. Conf. Intell. User Interfaces, 2008, pp. 139–148.
[75] A. Vance, C. Elie-Dit-Cosaque, and D. Straub, "Examining trust in information technology artifacts: The effects of system quality and culture," J. Manage. Inf. Syst., vol. 24, no. 4, pp. 73–100, 2008.
[76] A. Vancea, M. Grossniklaus, and M. Norrie, "Database-driven mashups," in Proc. Int. Conf. Web Eng., 2008, pp. 162–174.
[77] S. Walczak, D. L. Kellog, and D. G. Gregg, "An innovative service to support decision-making in multi-criteria environments," Int. J. Inf. Syst. Serv. Sect., vol. 2, no. 4, pp. 39–56, Oct. 2010.
[78] G. Wang, S. Yang, and Y. Han, "Mashroom: End-user mashup programming using nested tables," in Proc. Int. World Wide Web Conf., 2009, pp. 861–870.
[79] E. Witte, "Field research on complex decision-making processes—The phase theorem," Int. Studies Manage. Org., vol. 2, no. 2, pp. 156–182, 1972.
[80] J. Wong and J. Hong, "Making mashups with Marmite: Towards end-user programming for the Web," in Proc. Human Factors Comput. Syst. Conf., 2007, pp. 1435–1444.
[81] L. Xiong and L. Liu, "A reputation-based trust model for peer-to-peer eCommerce communities," in Proc. ACM Conf. Elect. Commerce, 2003, pp. 228–229.
[82] J. F. Yates, Judgment and Decision Making. Englewood Cliffs, NJ: Prentice-Hall, 1990.

Brandon A. Beemer received the M.S. and Ph.D. degrees in computer science and information systems from the University of Colorado, Denver. He is an Enterprise Technical Engineer with McKesson Provider Technologies, San Francisco, CA. His current research is focused on decision support in unstructured domains through dynamic interaction. His work has been published in journals including Future Internet and Decision Support Systems.

Dawn G. Gregg received the B.S. degree in mechanical engineering from the University of California, Irvine, the M.B.A. degree from Arizona State University, Phoenix, and the M.S. degree in information management and the Ph.D. degree in computer information systems from Arizona State University, Tempe. She is an Associate Professor of information systems and entrepreneurship with the University of Colorado, Denver. Her current research seeks to improve the quality and usability of Web-based information. Her work has been published in journals including MIS Quarterly, International Journal of Electronic Commerce, IEEE Transactions on Systems, Man, and Cybernetics, Communications of the ACM, and Decision Support Systems.