Cameron K. Tuai
such as the ICs. Unfortunately, a review of the literature shows that many of
the issues that concerned librarians in the past have yet to be resolved. More
specifically, although an extensive body of literature exists on the topic, the
majority of it is merely ‘‘surveys of practice, speculation about practice, and
recommendations regarding suitable organizational and management
strategies’’ (Lynch, 1990, p. 218). This literature may be ideal for identifying
and describing administrative issues, but the absence of methodological
rigor limits its generalizability and value in the design and operation of an
integrated IC. Kirk (2008) neatly summarizes the approach this researcher
has taken to resolve these issues regarding the integration of libraries and
computer centers:
LITERATURE REVIEW
This literature review establishes the boundaries of the research area,
defining which areas of concern are included in and excluded from it. Drawing from
structural contingency theory literature, the empirical literature, and the
library literature, the conceptual framework will describe the concepts and
variables concerned with the integration of collaborative workflows within
an IC. In particular, the conceptual framework will focus on the variables
and relationships of workflow interdependence, coordination, behavioral
differentiation, and performance.
Structural contingency theory, or contingency theory for short, defines
organizations as ‘‘collectivities oriented to the pursuit of relatively specific
goals and exhibiting relatively highly formalized social structures’’ (Scott,
1992, p. 23). Within this definition, contingency theorists describe organi-
zations in terms of four structural features: centralization, formalization,
division of labor, and coordination. These organizational structures are
dependent upon three contexts or contingencies: size, technology, and
interdependence. Given the relationships among the independent contin-
gencies and the dependent structures, researchers generally use contingency
theory within an intraorganizational unit of analysis. This includes both the
structures internal to a particular unit and the structures external to it.
Contingency theory normally does not examine the individual in isolation,
nor an organization’s interaction with its environment or other organiza-
tions. Therefore, researchers will generally not apply contingency theory to
study the social or psychological levels of the organization’s effects on
individuals, nor will they apply it to investigate the ecological level of
organizations or classes of organizations interacting with their environments
(Scott, 1992).
The underlying premise of contingency theory is that no one best way
exists to organize, but not all ways of organizing are equally effective
(Galbraith, 1973). Given this supposition, contingency theorists endeavor to
identify the optimal organizational structure for a given organizational
contingency or context. Within a collaborative information service context,
numerous contingencies exist; the area of concern for the proposed research
is the integration of librarians and technologists within an IC. Contingency
theory defines integration as ‘‘the process of achieving unity of effort among
Interdependence
organization but does not require any mutual interactions. Thus a failure
within one unit’s work processes does not directly affect other units’ work
processes. For example, a failure in a library’s circulation unit does not
directly affect the serials cataloging unit. The next higher degree of
interdependence is sequential, which occurs when unit A’s outputs inform
unit B’s inputs. Thus a failure of A directly affects B, but not vice versa. For
example, a failure in acquisitions will affect cataloging, but a failure in
cataloging will not affect acquisitions. The highest degree of interdepen-
dence is reciprocal, which occurs when inputs and outputs move back and
forth between operations. Failure of either operation results in the failure of
the other. Within the library, this relationship can be seen between
circulation and shelving. Circulation’s receipt of books forms the input for
shelving; conversely, shelving’s proper placement of books in the stacks
forms the input for circulation. Thus, failure of either partner will result in
problems for the other.
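These three failure modes can be made concrete with a toy model. The sketch below is illustrative only: the two-unit setup and function name are not from the source, and real workflows involve more than two units.

```python
# Toy model of Thompson's three categories of interdependence,
# showing how a failure in one of two units, A or B, propagates.
# In the sequential case, A's outputs are assumed to feed B's inputs.

def affected_by_failure(kind, failed_unit):
    """Return the set of units directly disrupted when `failed_unit`
    ('A' or 'B') fails, under the given kind of interdependence."""
    if kind == "pooled":
        # Units share resources but have no workflow link: only the
        # failing unit is directly affected.
        return {failed_unit}
    if kind == "sequential":
        # A feeds B, so A's failure reaches B, but not vice versa.
        return {"A", "B"} if failed_unit == "A" else {"B"}
    if kind == "reciprocal":
        # Inputs and outputs move both ways: either failure hits both.
        return {"A", "B"}
    raise ValueError(f"unknown interdependence kind: {kind}")

# Acquisitions (A) feeding cataloging (B) is the sequential example
# from the text: a failure in A disrupts both units.
print(sorted(affected_by_failure("sequential", "A")))  # -> ['A', 'B']
```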
The challenge in examining the predictor variable of interdependence
within the professional information services literature is that it is rarely, if
ever, explicitly mentioned. This is likely because interdependence, as
defined by the relationship between workflow actions (Thompson, 1967;
Van de Ven, Delbecq, & Koenig, 1976), requires a level of detail that is rare
in the library case study literature. Although explicit mentions of
interdependence are infrequent, Thompson's observation of increasing degrees of
interdependence within units at lower hierarchical levels suggests that some
degree of interdependence must exist between librarians and technologists
within a collaborative information service unit. Thompson's theoretical
categories can be illustrated in terms of an IC's workflow interdependence
by extrapolating definitions and examples from the library literature.
The professional library literature on integrated library and computing
centers describes interdependence theoretically and practically in various
ways. Bailey and Tierney’s (2008) handbook on the learning commons
summarizes some of the common conceptions of interdependence within an
IC setting, such as seamless integration of technology into the construction
of individual and shared knowledge, integration of a continuum of library
services and technology, and integration of a facility that focuses on complete
service to users. One challenge in analyzing the professional literature is to
move beyond broad descriptions of services or strategies to the more specific
level of service workflows. Beagle (1999) and Lippincott (2009) present a
number of good examples describing levels of interdependent workflows and
coordination that correspond to Thompson’s (1967) categories.
other partner. For instance, reference librarians will more likely conduct
reference interviews for technology questions if they have an understanding of
the synergies offered by delivering information in a digitally integrated
environment. The case study literature describes these types of services in
terms of informed referrals (Crockett, McDaniel, & Remy, 2002); cross-
functional assistance (Church, 2005; Cowgill, Beam, & Wess, 2001; Spencer,
2007); and tiered service (Fitzpatrick et al., 2008). In comparison, inter-
dependence is at only a pooled level if IC staff members break questions into
either library or technology issues without offering additional assistance.
The last level of interdependence, reciprocal, is captured by Beagle’s
(1999) idea of case management and Lippincott’s (2009) definition of
collaborative services. Beagle defines case management as librarians and
computing center staff teaming together to resolve patron information
needs. Lippincott defines reciprocal interdependence as a collaborative level
of service found when technologists and librarians develop common goals
and programs. In both cases the authors speculate that the IC has yet to
reach this level of service. A review of the case literature largely supports this
speculation, finding little evidence of services requiring reciprocal levels of
interdependence. One of the few examples of services that require a
reciprocal level of interdependence comes from Earlham College which
offers services such as ‘‘training users to find and utilize podcasts, customize
course management systems, create web sites, develop multimedia
presentations, and other software tasks’’ (Baker & Kirk, 2007, p. 385).
Empirical research on interdependence within this context comes
primarily from Weng (1997a) who measures Thompson’s (1967) three
categories of interdependence using Van de Ven et al.’s (1976) instrument.
Two other studies of interest are those of Lynch (1974) and Tushman (1979),
who define interdependence as a subscale of technology, similar to Lawrence
and Lorsch's (1967a) approach, rather than as the measure of workflow
interdependence separate from technology that Thompson proposed. Both
Lynch and Tushman measure intradepartmental dependence using an
instrument similar to Mohr's (1971), in that interdepartmental relations are
nonspecific and composed of a single measure, in contrast to Thompson's
three coordination measures. Support for this researcher's use of Thomp-
son's definition of interdependence comes from Lynch's conclusion that
interdependence is likely not a technology variable but rather a structural
variable. She supports this conclusion through her factor analysis, which
loads task interdependence with the coordinative variable of rules. This
loading supports Thompson’s idea of a relationship between interdepen-
dence and coordination.
Coordination
Granath, & Pengelly, 2000), cross-functional teams (Foley, 1997; Fox et al.,
2001), and cross-training (Kent & McLennan, 2007; Metzger, 2006; Wolske,
Larkee, Lyons, & Bridgewater, 2006). In reviewing the IC literature
concerning coordination by mutual adjustment, it is difficult to ascertain
whether these coordinative structures influence workflows to the degree
Thompson implied. The integrative effects of lateral communication in
information services seem to fit the degree of uncertainty linked to
sequential interdependence better than that of reciprocal interdependence.
For instance, Baker and Kirk (2007) describe the benefits of face-to-face
communication in terms of the opportunity to ‘‘gain a finer understanding
of their colleagues’ work’’ (p. 382); Greenwell (2007) says it ‘‘allows service
desk personnel to make better referral’’ (p. 40); and Dallis and Walters
(2006) report that ‘‘the key factor in helping students successfully navigate
the suite of IC services is the atmosphere of cooperation’’ (p. 253). These
statements better describe the sequential level of interdependence
associated with Beagle’s (1999) definition of IC services at a referral
consultancy level, although they also fit somewhat with Lippincott’s
(2009) broader idea of cooperative services as the development
of mutual goals.
Empirically, Weng (1997a) measures coordination using the degree of
hierarchical authority present within the library unit. For instance, higher
levels of hierarchical authority would be similar to coordination through
policies and procedures; and lower levels of hierarchical authority would be
similar to coordination through mutual adjustment. A closer examination of
Weng’s measure presents an example of the challenges in measuring
coordination, especially in terms of maintaining consistency in the unit of
analysis. Weng (1997a) identifies four items in decreasing hierarchy of
authority: (a) upper management outside your department, (b) department
heads, (c) individuals within the unit, and (d) groups within the unit. To
measure centralization/decentralization, she defines upper management and
department heads as representing centralization, while individuals and
groups represent decentralization. This definition is problematic because
hierarchy of authority is dependent upon the unit of analysis. For example,
‘‘upper management outside your department’’ will always represent a
centralized authority; however, the ‘‘department head’’ is a centralized
authority at an intradepartmental level of analysis but not at the
interdepartment level. Another way to see this is that staff within the
department (intradepartmental) will view the department head as a
centralizing force; however, other department heads (interdepartmental)
view themselves as a decentralized force relative to upper management.
Structural Contingency Theory Model of Library and Technology Partnerships
Differentiation
Performance
A review of the information science literature finds both support for and
opposition to applications of contingency theory, with the primary critique
relating to contingency theory’s assumption of organizational rationality
(Moran, 1978; Rayward, 1969; Weng, 1997a). For many organizational
studies researchers, organizations are subject both to the subjective and
unpredictable humanistic tendencies of their members and to the rational
forces of efficiency. Scott (1992) labels the focus on the role of human nature
within organizations as a natural systems perspective. He defines the natural
systems perspective of organizations as ‘‘collectivities whose participants
share a common interest in the survival of the system and who engage in
collective activities, informally structured, to secure this end’’ (p. 25). This
definition of organization emphasizes the role of the participants’ goals and
informal structures, in comparison to the rational perspective’s focus on the
role of the organization’s goals and formalized structures. The natural
systems’ critique of contingency theory thus highlights the degree to which
collectivities are oriented to the pursuit of personal goals that may or may
not align with the rational goals of the organization (Lynch, 1990; Weng,
1997a). Thus, the central criticism of contingency theory is its positivist
definition of rational forces such as technology, size, or workflow inter-
dependence, as determinants of organizational structures. Within the field of
information science, social informatics provides an alternative approach to
the rational assumptions of contingency theory.
Social informatics is the ‘‘interdisciplinary study of the design, uses, and
consequences of information technologies that take into account their
interaction with institutional and cultural contexts’’ (Kling, 2007, p. 205).
It aligns with this research project in its focus on information and
communication technologies (ICTs) within organizational structures. Social
informatics defines ICTs as ‘‘the artifacts and practices for recording,
organizing, storing, manipulating, and communicating information’’ (Kling,
Rosenbaum, & Sawyer, 2005, p. 11). Social informatics’ critique of structural
contingency theory is similar to Scott’s (1992) in that it takes issue with
contingency theory’s assumption of technological determinism (Henfridsson,
Holmström, & Söderholm, 1997; Kling, 1980, 2007). It proposes that any
study of the role of ICTs in organizations should take into account the social,
technical, and institutional contexts in which the ICT is employed. The
advantage of this approach is that it leads ‘‘to a broader understanding of
how computerization is engaged and what its effects are’’ (Kling et al., 2005,
p. 15). The social informatics literature encompasses a wide range of
disciplines and offers a number of perspectives on understanding ICTs within
organizations. There are two critiques of contingency theory which illustrate
some of the issues presented within social informatics. The first critique
comes from strategic contingency theory which examines the role of power or
other social forces in influencing the contingency theory relationship between
technology and structure. The second critique concerns the duality of
technology, arguing that the relationship between technology and structure
is recursive and therefore researchers cannot assign causality exclusively to
either variable.
Child (1972) builds strategic contingency theory on the premise that,
because individuals interpret organizational goals in terms of their own self
interests, they will not pursue the organization’s goals to the extent assumed
RESEARCH METHODS
Interdependence     Independent
Differentiation     Independent    Independent
Coordination        Dependent      Dependent
Performance         Dependent
Fit(a)              Dependent      Independent

(a) f(Fit) = [Interdependence, Coordination].
feel that high levels of horizontal coordination coupled with complex forms
of interdependence should lead to higher levels of performance. This
expectation could taint their responses concerning their perception of
performance. To minimize the chances of respondents reporting what they
believe should occur rather than what is actually occurring, line-level
employees responded to the survey concerning operational-level actions, and
managers responded to questions concerning performance.
The majority of the items on the instrument were drawn from the
organizational assessment instrument (OAI) (Van de Ven & Ferry, 1980).
Three broad modifications were made in order to fit the instrument into the
context of an IC. The first change was to unify, when appropriate, Likert-
type answer categories around the anchors of ‘‘To No Extent’’ and ‘‘To a
Great Extent.’’ Babbie (2004, pp. 253–254) notes the advantage of single-
response categories, in that respondents can compare the strength of earlier
answers against later answers, thus providing the respondent and researcher
with greater comparability among questions. The disadvantage is that
respondents may develop a pattern in answering questions based upon
previous answers, creating a common method bias (Podsakoff et al., 2003).
Because the questionnaire is relatively short, it was felt that this bias would
be marginal. The second change to the original questions was to increase the
number of choices available from five to seven. This change increases the
sensitivity of the instrument while keeping the number of choices within a
reasonable range (Spector, 1992). The last change to the instrument was to
use context-specific nouns instead of pronouns. For example, ‘‘to what
extent has this unit carried out its responsibilities …’’ was replaced with
‘‘to what extent have the computing consultants carried out their
responsibilities ….’’ This use of context-specific nouns is similar to the approach
taken within the study of nursing (Leatt & Schneck, 1981; Velasquez, 2007;
Zinn, Brannon, Mor, & Barry, 2003).
The instrument was pretested in two ways. First, library and technology
managers at the Indiana University ICs completed the survey, and follow-up
interviews were conducted to gather suggestions for improvement. Second,
six librarians and six technologists completed the survey with follow-up
interviews. The revised instrument was included in the final application
for approval from the Indiana University Institutional Review Board
(Appendix B). Approval was received in March 2009. Subjects for the study
were recruited using purposive sampling. The researcher identified eligible
ICs as those formed through a partnership between the libraries and
university computing, through a review of the literature, web searches, and
posting an invitation to participate in the listserv INFOCOMMON-L
Manager Survey
Staff Survey
Q3. Coordination (Van de Ven et al., 1976): The measure for coordination
is based upon work by Van de Ven et al. (1976). Using pre-test
interviews and a review of the literature (Daft, 2001; Galbraith, 1973;
Thompson, 1967), the researcher modified the instrument to customize
it to the information service unit context. The modifications included
dropping four of the nine categories in the original instrument and
adding one category. Unless otherwise noted, the survey adopts the
remaining categories verbatim. Modifications to the original instru-
ment are as follows:
Dropped the response that described coordination through ‘‘work
plans’’ or ‘‘work schedules.’’ This type of coordination is generally
associated with a manufacturing context where sequential inter-
dependence requires specification of quantities and delivery dates in
order for one step in the work process to coordinate with the next.
Pre-test interviews revealed that respondents interpreted this form of
coordination to mean shift schedules, which is better associated with
rules and procedures – a lower form of coordinative structure.
Dropped assistant supervisors as coordinators. The survey merged
this category with coordination through senior administrators. Pre-
test interviews revealed confusion as to the difference between
assistant and senior administrators because there were often no
assistant supervisors within the unit.
Added Galbraith’s coordinative structure of ‘‘service statements
or goals’’ as a form of Thompson’s (1967) coordination through
The data were collected using an online survey administered from April
2010 to November 2010. The initial population of 112 organizations
produced n = 62 unique institutional responses. Participants provided 435
surveys, of which n = 315 were usable (Table 4); responses that contained at
least one completed survey section were deemed usable.
To describe the characteristics of the participating institutions,
the researcher downloaded demographic data for 2007–2008
(Department of Education [DoE], 2008) from the National Center for
Educational Statistics (NCES). Because NCES collects data only on U.S.
schools, Table 4 does not reflect Canadian institutions. The ‘‘# of Years IC
opened’’ was gathered from the survey (Table 5).
The analysis uses the mean responses of each IC’s members as proxies for
the IC characteristics of interest. To test the applicability of this approach,
an analysis of variance (ANOVA) will check for within-group homogeneity
and between-group heterogeneity. If within-group variance is significantly
different from between-group variance at α < .05, then the analysis will
accept the mean of an IC's members' answers as a proxy for the unit
characteristics in question.
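The aggregation check described above can be sketched as a hand-computed one-way ANOVA, comparing between-IC variance to within-IC variance before member means are accepted as unit proxies. The groups and response values below are invented for illustration; this is a minimal sketch, not the study's code.

```python
# Minimal one-way ANOVA sketch for the aggregation check: a large F
# (between-group variance >> within-group variance) supports using
# each IC's member mean as a proxy for the unit. Data are illustrative.

def one_way_anova_f(groups):
    """groups: list of lists, one inner list of member responses per IC.
    Returns (F, df_between, df_within)."""
    k = len(groups)                          # number of ICs
    n = sum(len(g) for g in groups)          # total respondents
    grand_mean = sum(sum(g) for g in groups) / n

    # Variation of IC means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Variation of members around their own IC's mean
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Three hypothetical ICs, responses on the survey's 1-7 scale
f_stat, dfb, dfw = one_way_anova_f([[5, 6, 5, 6], [2, 3, 2, 2], [4, 4, 5, 4]])
print(round(f_stat, 2), dfb, dfw)  # -> 38.7 2 9
```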
Measurement Development
Because theoretical expectations may not hold when extended to this new
area of research, principal component analysis (PCA) was used to factor
instrument items into overall measures. To maximize correlation between
items after the initial extraction, the analysis employed a varimax ortho-
gonal rotation, the most common and recommended rotation approach
(Tabachnick & Fidell, 2007). The factorability of items was determined by
confirming an inter-item correlation of r > .30, and that the Kaiser–Meyer–
Olkin measure of sampling adequacy was greater than .60. To test each
measure's reliability, the analysis looked for a Cronbach's alpha of α > .70.
The value for each measure is the mean of its items. This approach
allows the inclusion of surveys in which some items within the measure
are missing. It also makes the analysis easier to interpret, because most
items in the survey use a 1–7 scale, which the resulting measures mirror.
Interdependence
Coordination
The survey measured coordination using six items and confirmed the
applicability of factor analysis by finding that all coordinative measures
correlated at r(178) > 0.30, p < .01, with at least one other measure, and a
Kaiser–Meyer–Olkin (KMO) measure > 0.60, the accepted minimum for
this measure (Tabachnick & Fidell, 2007). Exploratory PCA of the
coordination variables using an unspecified solution (Eigenvalue > 1)
extracted two factors accounting for 58% of the variance across variables.
Communalities, the amount of variance in each variable accounted for by
the solution, were all > .30, which supports the applicability of factor
analysis (Neill, 2008). Orthogonal varimax rotation with Kaiser
normalization of the initial factor matrix found two factors (Table 7), with
Factor 1 loading on policy, goals, manager, meetings, cross-training, and
teams at > .60, and Factor 2 loading on observation and direct contact,
also at > .60, which suggests very good representation (Tabachnick &
Fidell, 2007). A challenge to finding a simple solution was that cross-training
and teams cross-loaded onto Factor 2 at > .40. The factoring of the items
largely conforms to the theoretical expectation with policies, goals,
manager, meetings, cross-training, and teams relating to formal methods
of coordination, and observation and direct contact relating to informal
methods of coordination (Van de Ven et al., 1976). This logical grouping
lends support to the construct validity of the measure. The analysis finds
further support for this measure’s construct validity in a forced three-factor
solution where the factors load in increasing order of coordinative
complexity: Factor 1 loading with policies, goals, and manager; Factor 2
loading with meetings, cross-training, and teams; and Factor 3 loading with
observation and direct contact. Interpreting these factors would suggest
                              Factor 1   Factor 2
Policy                          .773
Goals                           .804
Manager                         .701
Meetings                        .606
Observation                                .715
Direct contact                             .814
Cross-training                  .604       .531
Teams                           .680       .400
Eigenvalue                     3.60       1.06
% Total variance explained    45.04      58.28
Cronbach's alpha                .83        .49
Behavioral Orientation
through the importance placed upon market and technoeconomic goals. The
LPC score is the sum of 18 behavioral adjectives with positive anchors being
reverse scored: higher scores indicating a greater level of relationship
orientation and lower values a greater level of task orientation. The analysis
calculated the LPC by taking the average score for each respondent, with
the unit score being the average of the respondents within each IC. Because
the items that compose the LPC are not multiple measures, factor analysis
was not necessary. Cronbach's alpha was calculated at α = .91 (n = 110),
indicating a good level of reliability (Table 9).
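The LPC scoring described above can be sketched as follows. Which adjectives carry a positive anchor (and are therefore reverse-scored) is an assumption here, as are the adjectives themselves; the instrument's actual 18 items are not listed in this section.

```python
# Hedged sketch of LPC scoring: ratings on a 1-8 scale, with
# positively anchored adjectives reverse-scored (r -> 9 - r) so that
# higher scores indicate stronger relationship orientation.

def lpc_score(ratings, positive_anchored):
    """ratings: adjective -> rating (1-8).
    positive_anchored: adjectives to reverse-score."""
    scored = [9 - r if adj in positive_anchored else r
              for adj, r in ratings.items()]
    return sum(scored) / len(scored)   # respondent mean, per the text

def unit_lpc(member_scores):
    """Unit score = mean of the members' LPC scores within the IC."""
    return sum(member_scores) / len(member_scores)

# Toy three-adjective example (the real instrument uses 18 adjectives)
score = lpc_score({"pleasant": 2, "tense": 6, "supportive": 3},
                  positive_anchored={"pleasant", "supportive"})
print(round(score, 2))  # -> 6.33
```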
Analysis of within- and between-group variance for behavioral differentiation
assumes that the LPC will be influenced both by librarians' and
technologists' cultural differences and by the ICs themselves. Analysis of within and
between group variance did not find a reliable effect of a unit’s LPC
characteristics on its members’ perceptions of the LPC, for either librarians
or technologists. Analysis of within and between group variance of the staff
regardless of their profession also did not find a reliable effect of the unit’s
LPC characteristic on its members' perceptions of the LPC. Only analysis
of LPC differences between librarians and technologists regardless of the
unit in which they worked found a reliable difference on LPC (Table 10).
This suggests that librarians and technologists differ on the LPC because
of individual characteristics and not the IC in which they work (Klein et al.,
1994). The measurement of LPC differs from Vorwerk's (1970) findings, in
that he is largely unable to differentiate between public services and systems,
reporting public services at μ = 4.9 and systems at μ = 4.5, on a scale
of 1–8. His analysis does not conduct an ANOVA or report standard
deviations. The other difference is that he finds public services staff to have a
higher level of relationship orientation than systems staff, whereas this
analysis found technologists to have a higher level of relationship
orientation than librarians. This last finding is admittedly a bit surprising, given the
stereotype of technologists being more task oriented. One possible expla-
nation could be that IC technology staff are the equivalent of technology
help desk staff and primarily composed of students who may differ from
full-time systems technology staff; this could explain the higher than
expected relationship score.
The second behavioral measure was the sum of the eight questions
concerning goal orientation. The applicability of factor analysis was
supported by seven of the eight items correlating with at least one other item
at r(149) > 0.30, p < .01, and KMO = 0.71; self-service was the exception.
Exploratory PCA using orthogonal varimax rotation with Kaiser
normalization and an unspecified solution (Eigenvalue > 1) extracted three factors
accounting for 61% of the variance across items. Examination of Factor 3
found that two of its three items cross-loaded at > .30 and that the third
factor was self-service. Dropping self-service and re-running the factor
analysis found two factors, with Factor 1 loading onto service, satisfaction,
instruction, usage, and Factor 2 loading onto security and cost. The analysis
dropped integration because it cross-loaded onto each factor at > .40. The
resulting rotated solution found two factors accounting for 56% of the
variance, with no cross-loadings. Internal consistency of the two factors as
measured through Cronbach's alpha was only moderate for Factor 1
(α = .66) and low for Factor 2 (α = .53). Elimination of items did not
increase reliability. The factors largely conform to the theoretical expecta-
tion with service, satisfaction, instruction, and usage relating to marketing-
type goals, and security and cost to operational-type goals (Lawrence &
Lorsch, 1967b). The analysis dropped Factor 2 because it showed a low level
of reliability. The analysis calculated behavioral orientation from Factor 1,
which closely resembles Lawrence and Lorsch's (1967a) Market Goals, by
taking the mean of service, satisfaction, instruction, and usage (Table 11).
The Market Behavioral Orientation score is the sum of service,
satisfaction, instruction, and usage. The analysis calculated the Market
Behavioral score by taking the average score for each respondent, with the
                              Factor 1   Factor 2
Service                         .607
Satisfaction                    .729
Instruction                     .763
Usage                           .663
Security                                   .838
Cost                                       .756
Eigenvalue                     2.26       1.11
% Total variance explained    37.70      56.34
Cronbach's alpha                .66        .53
unit score being the average of the respondents within each IC. The scale for
Marketing Goal Orientation is from 1 (least important) to 7 (most important)
(Table 12).
Analysis of within and between group variance for behavioral differentia-
tion assumes that both the IC staff’s profession and the ICs to which they
belong will influence their goal orientation. Analysis of within and between
group variance of librarians did find a reliable effect of a unit’s marketing
orientation on its members, suggesting that the unit does influence the
librarians’ perceptions of the unit’s market orientation. Similarly, analysis of
within and between group variance of technologists also found a reliable
effect of a unit’s marketing orientation on its members, suggesting that the
unit does influence the technologists’ perceptions of the unit’s market
orientation. Lastly, analysis by both unit and profession also found
a reliable effect of a marketing orientation, suggesting that both the unit and
the profession of the staff influence their perception of their goals regarding
a marketing orientation (Table 13).
Perceptions of Performance
Factor 1   Factor 2
Reputation .785
Goals .829
Efficiency .766
Morale .792
Patron satisfaction .655
Productive .842
Effort .751
Satisfaction .794
Relative payoff .780
Goal .843
Service .800
Relate .827
Spend .747
Eigenvalue 1.74 7.15
% Total variance explained 68.44 41.65
Cronbach’s alpha .85 .93
Factor 1
Day-to-day .839
Current .883
New .668
Eigenvalue 2.39
% Total variance explained 79.66
Cronbach’s alpha .87
Funding
Inferential Statistics
Predictors of Coordination
                               1      2      3      4      5      6
1. Interdependence           1.00
                             (49)
2. Coordination               .39   1.00
                             (49)   (51)
3. LPC difference             .08    .11   1.00
                             (26)   (26)   (26)
4. Market goal difference     .47    .15    .33   1.00
                             (24)   (24)   (24)   (24)
5. Years opened               .11    .08    .14    .28   1.00
                             (47)   (49)   (26)   (24)   (58)
6. # of Students enrolled     .23    .32    .08    .27    .20   1.00
                             (46)   (48)   (26)   (24)   (47)   (48)

Note: N in parentheses. *p < .05; **p < .01.
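The varying Ns in the matrix above suggest pairwise deletion: each correlation uses only the cases with both values present. A minimal Pearson r with pairwise deletion might look like this (data illustrative):

```python
def pearson_r(xs, ys):
    """Pearson correlation using only pairs where both values are
    present (pairwise deletion), returning (r, n)."""
    pairs = [(x, y) for x, y in zip(xs, ys)
             if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
    return cov / (sx * sy), n

# One missing x drops that pair, so n = 4 rather than 5
r, n = pearson_r([1, 2, 3, 4, None], [2, 4, 6, 8, 5])
print(round(r, 2), n)  # -> 1.0 4
```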
                         B      SE     β
Step 1
  Constant             3.21   0.22
  SFTE                 0.00   0.00   0.32
Step 2
  Constant             2.19   0.44
  SFTE                 0.00   0.00   2.36
  Interdependence      0.36   0.13   0.36

                         B      SE     β
Step 1
  Constant             3.84   0.38
  Time open            0.03   0.05   0.11
Step 2
  Constant             2.65   0.53
  Time open            0.05   0.04   0.16
  Interdependence      0.39   0.13   0.42
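As background on tables like the one above: in a two-predictor step, the standardized betas follow directly from the pairwise correlations. This is a general textbook identity shown for orientation; the values below are illustrative, not the study's.

```python
# Standardized regression coefficients for y on two standardized
# predictors, from pairwise correlations alone (textbook identity):
#   beta1 = (r_y1 - r_y2 * r_12) / (1 - r_12**2), and symmetrically.

def std_betas_two_predictors(r_y1, r_y2, r_12):
    """Return (beta1, beta2, R^2) for a two-predictor regression."""
    denom = 1 - r_12 ** 2
    b1 = (r_y1 - r_y2 * r_12) / denom
    b2 = (r_y2 - r_y1 * r_12) / denom
    r_squared = b1 * r_y1 + b2 * r_y2   # variance explained
    return b1, b2, r_squared

# With uncorrelated predictors, the betas equal the raw correlations
b1, b2, r2 = std_betas_two_predictors(0.5, 0.3, 0.0)
print(b1, b2, round(r2, 2))  # -> 0.5 0.3 0.34
```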
Step one for testing the first research question duplicates step one of H1
by regressing the control variable SFTE against coordination and found a
significant relationship. Step two introduced the predictor variable of
behavioral differentiation, as measured through the LPC, and found no
influence of LPC on coordination. Introduction of the control variables
found no significant relationship with coordination.
Turning to the second behavioral differentiation measure, Difference in
Market Orientation, examination of the control variable SFTE found
significant influence of this variable on coordination. Introduction of
Market Orientation into the model found no significant relationship with
coordination. Using the control variable Time Open found no significant
influence of this variable on coordination, or in the second model, which
includes the measure Marketing Orientation.
These findings do not support H2, which suggests that there will be a positive
relationship between differentiation and coordination.
Step one of testing the fourth hypothesis regressed the dependent variable
Perceptions of Performance on the control measure Perceptions of Funding
and found no significant influence of the control on performance. To test
the influence of congruency, or fit, in the expected relationship between
interdependence and coordination, the analysis standardized both measures
as z scores, z = (x − μ)/σ, and then took the absolute value of
interdependence minus coordination. This actually yields a ‘‘misfit score,’’
because a higher number indicates misfit rather than fit. To transform it
into a ‘‘fit score,’’ the analysis multiplied the misfit score by negative
one (Tables 21 and 22).
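The fit-score construction just described can be sketched directly. The function name and the unit scores below are hypothetical, and sample standard deviations are assumed:

```python
import numpy as np

def fit_score(interdependence, coordination):
    """Congruency ('fit') between two measures: standardize each as a
    z score, z = (x - mean) / sd, take the absolute difference (a
    'misfit' score), and negate it so higher values mean better fit."""
    x = np.asarray(interdependence, dtype=float)
    y = np.asarray(coordination, dtype=float)
    zx = (x - x.mean()) / x.std(ddof=1)   # sample standard deviation
    zy = (y - y.mean()) / y.std(ddof=1)
    misfit = np.abs(zx - zy)              # higher = worse fit
    return -misfit                        # multiply by -1: misfit -> fit

# Hypothetical unit scores:
interdep = [2.1, 3.4, 4.0, 2.8, 3.9]
coord = [2.5, 3.1, 4.2, 2.2, 3.8]
print(fit_score(interdep, coord).round(2))
```

A unit whose coordination z score tracks its interdependence z score exactly scores 0, the best possible fit; all other units score below 0.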
Introduction of the predictor variable of Fit between interdependence and
coordination found a statistically significant relationship that predicted
R² = 20% of the variance in performance, a 13% increase over the control
variables. Examination of the Durbin–Watson statistic (d = 1.7) supports
the assumption of independence of errors. Subtracting the Adjusted R² from
R² shows that 4.1% of the variance would be lost if the model were derived
from the population rather than the sample.

Regression of Perceptions of Performance on Funding and Fit

                B     SE B    β
Step 1
  Constant    4.68   0.55
  Funding     0.18   0.11   0.25
Step 2
  Constant    5.03   5.31
  Funding     0.16   0.10   0.23
  Fit         0.34   0.13   0.37

Note. N in parentheses. *p < .05; **p < .01.

Examination of the ANOVA table finds that the model significantly improves
the ability to predict Perceptions of Performance, F(2, 40) = 4.89, p < .01.
As indicated in Table 20, congruency significantly predicts Perceptions of
Performance: B = 0.34 (β = 0.37), indicating 0.37 standard deviations of
change in Perceptions of Performance for every standard deviation of change
in the fit score. This finding is somewhat supported by Weng's (1997b)
report that
at low and medium levels of intradepartmental dependence, there is a
significant positive correlation between formalization and performance. In
other words, at low and medium levels of interdependence, higher levels of
formalization led to higher levels of performance. A further review of her
findings reveals no other significant correlation to performance in her
interdependence and coordination fit measures, and none of her regression
coefficients regarding fit and performance was significant.
These findings support H4, which suggests that there will be a positive
relationship between performance and the degree to which coordination fits
interdependence (Table 23).
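The roughly 4% shrinkage reported above follows from the standard adjusted R² formula. With R² = .20, k = 2 predictors, and n = 43 cases (implied by the F(2, 40) degrees of freedom), the expected loss is about four points of variance explained:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for n observations and k predictors:
    1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

r2 = 0.20
adj = adjusted_r2(r2, n=43, k=2)
# Shrinkage r2 - adj comes to about .04, close to the 4.1% reported.
print(adj, r2 - adj)
```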
CONCLUSION
The aim of the work presented here has been to use structural contingency
theory to explore empirically the
integration of librarians and technologists. By testing the contingency
theory expectations regarding workflow interdependence, coordination, and
performance, this work makes two broad contributions to library manage-
ment and organizational theory. The first contribution is the empirical
testing and confirmations of Thompson’s (1967) proposed relationship
between workflow interdependence and coordination.
Although the literature reveals a number of studies that empirically test
Thompson’s proposed relationship between interdependence and coordina-
tion, they all fail to measure these items as originally defined. The work
presented here is one of the first empirical studies to duplicate Thompson’s
propositions and confirm his findings. This contribution was unanticipated
because the literature generally presents the relationship between workflow
interdependence and coordination in a largely factual manner (Daft, 2001;
Donaldson, 2001; Scott, 1992). Yet a careful review finds consistent
deviation from Thompson’s definition of either workflow interdependence
or coordination. For instance, Van de Ven et al. (1976) point out in their
seminal article on workflow interdependence and coordination that their
measure of interdependence does not allow them to describe this item in
terms of a Guttman scale (p. 335), a central characteristic of Thompson’s
definition of interdependence. Thus, Van de Ven and colleagues measure
interdependence as a single item, instead of an aggregate of three. This
deviation from Thompson’s definition of interdependence as a Guttman
scale of three items is also found in both Tushman’s (1979) and Lynch’s
(1974) studies, which purposefully duplicate Mohr’s (1971) single-item
definition of interdependence. Even the studies that do measure inter-
dependence as defined by Thompson deviate from the author’s definition of
coordination, confounding these studies’ efforts to duplicate the original
proposition. For instance, Cheng (1983) uses a completely different
definition of coordination, and Lynch (1974)and Weng (1997b) define
coordination at the organizational rather than the workflow level as her unit
of analysis as Thompson had originally intended to do.
By empirically testing workflow interdependence and coordination, this
study makes other contributions regarding the effect of behavioral
differentiation. The failure to confirm behavioral differentiation as a
moderating force in the relationship between workflow interdependence
and coordination contributes to IC practitioners’ understanding; it refutes a
common perception that IC managers can improve performance by
minimizing the behavioral differences between librarians and technologists.
This conclusion comes from two findings: (a) that differences between
Background
Research Methods
The researcher sought to establish the reliability and validity of the measures
by customizing existing organizational science instruments to the context of
the IC. Contingency theory generally categorizes interdependence and
coordination into three categories of increasing complexity. Using these
categories, the researcher measured, in order of increasing complexity,
interdependence as pooled, sequential, and reciprocal; and coordination as
standardization, planning, and mutual adjustment. To customize these
categories to an IC context, the researcher analyzed approximately 80 IC
case studies to identify common practices that matched the level of
complexity associated with each category. The work associated pooled
interdependence with the IC service of collocation of library and technology
services within a common space. At this level of interdependence, librarians
and technologists are largely independent with little workflow occurring
between partners. Managers coordinate this level of interdependence
through the standardization of policies and procedures. Less evident within
the literature is sequential interdependence, which the researcher associated
with the service of informed referrals. Informed referrals involve a higher
Analysis
Interpretation of Findings
Limitations
Threats. Some writers suggest that weaknesses and threats are really just
opportunities to create new strengths. Similarly, one perspective on a
dissertation’s limitations is that they are simply suggestions for future
research. Adopting this perspective reveals a number of potential research
questions. One area is the expansion of the context beyond library
partnerships with technologists.
Research into how libraries work with other units presents an excellent
research opportunity because there is currently a strong trend within the
academy to use partnerships to leverage individual unit capabilities. Examples
of other library partnerships of interest include those with vendors, teaching
and research units, and other university service units. Alternatively, future
researchers could expand beyond the dissertation’s context of the IC to
include other multiunit collaborations, for example, with digital repositories,
cyber-infrastructures, or digitization programs. A second research opportu-
nity is the expansion of predictor variables of workflow interdependence to
include technology. Similarly, one could expand upon the criterion variable of
coordination to include other structural measures, such as centralization,
formalization, and division of labor. Necessarily, analysis of these alternative
structural variables would also imply shifting the unit of analysis from a
group to an organizational level of analysis. Another potential area for future
research opportunities is to extend the rational assumptions into social factors
such as power or politics.
The dissertation was naturally constrained in its investigation of
coordination by its use of structural contingency theory, which is largely
rationally based. Future research could develop a broader understanding of
coordination by adopting a constructivist approach. This shift in
epistemological assumptions would allow researchers to account more
completely for human factors. Further, the shift from a rational to a
constructivist perspective would allow researchers to use qualitative
methods more easily, thus improving the ability to understand the complex
social interactions that influence the success and failure of library partner-
ships. Qualitative methods also represent a significant opportunity to
improve upon the dissertation’s measures of behavioral differentiation.
Beyond the methodological opportunities, a constructivist approach would
allow researchers to connect more readily with the theories and ideas
generally adopted in information science. Social informatics’ success with
the Socio-Technical Interaction Network model is one example of an
information science approach that could expand upon the findings reported
here. Because many researchers within this discipline draw upon organiza-
tional science to build upon and advance their ideas, information science
offers the opportunity to expand libraries’ understanding of partnerships
REFERENCES
ACRL. (1998). Task force on academic library outcomes assessment report. Retrieved from
http://www.ala.org/ala/mgrps/divs/acrl/publications/whitepapers/taskforceacademic.
cfm. Accessed on December 14, 2009.
Alexander, D. E., Lassalle, C. C., & Steib, L. C. (2005). Manning the boat with a diverse (non-
traditional) crew. Paper presented at the Proceedings of the 33rd Annual ACM
SIGUCCS Conference on User Services, Monterey, CA.
Alexander, J. W., & Randolph, W. A. (1985). The fit between technology and structure as a
predictor of performance in nursing subunits. Academy of Management Journal, 28(4),
844–859.
Allen, T. J., & Cohen, S. I. (1969). Information flow in research and development laboratories
(technical communication patterns in R&D laboratories, discussing effects of work
structure, social relations, etc). Administrative Science Quarterly, 14(1), 12–19.
Argote, L. (1982). Input uncertainty and organizational coordination in hospital emergency
units. Administrative Science Quarterly, 27(3), 420–434.
Arnold, H. J. (1982). Moderator variables: A clarification of conceptual, analytic, and
psychometric issues. Organizational Behavior and Human Performance, 29(2), 143–174.
Babbie, E. (2004). The practice of social research. Belmont, CA: Wadsworth/Thomson Learning.
Bailey, D. R., & Tierney, B. (2008). Transforming library service through information commons:
Case studies for the digital age. Chicago, IL: American Library Association.
Baker, N., & Kirk, T. G. (2007). Merged service outcomes at Earlham College. Reference
Services Review, 35(3), 379–387.
Barley, S. R. (1986). Technology as an occasion for structuring: Evidence from observations of
CT scanners and the social order of radiology departments. Administrative Science
Quarterly, 31(1), 78–108.
Barton, E., & Weismantel, A. (2007). Creating collaborative technology-rich workspaces in an
academic library. Reference Services Review, 35(3), 395–404.
Beagle, D. R. (1999). Conceptualizing an information commons. Journal of Academic
Librarianship, 25(2), 82–89.
Blain, A. (2000). A partnership for future information technology support at a community
college. In L. L. Hardesty (Ed.), Books, bytes, and bridges: Libraries and computer centers
in academic institutions (pp. 189–198). Chicago, IL: American Library Association.
Blandy, S. G. (1996). Introduction. In S. G. Blandy, L. Martin & M. Strife (Eds.), Assessment
and accountability in reference work (pp. 1–3). New York, NY: The Haworth Press.
Blau, P. M. (1972). Interdependence and hierarchy in organizations. Social Science Research,
1(1), 1–24.
Buckland, M. K. (2003). Five grand challenges for library research. Library Trends, 51(4),
675–686.
Channing, R. K., & Dominick, J. L. (2000). Wake Forest University. In L. L. Hardesty (Ed.), Books,
bytes, and bridges: Libraries and computer centers in academic institutions (pp. 137–141).
Chicago, IL: American Library Association.
Cheng, J. L. C. (1983). Interdependence and coordination in organizations: A role-system
analysis. Academy of Management Journal, 26(1), 156–162.
Child, J. (1972). Organizational structure, environment and performance: The role of strategic
choice. Sociology, 6(1), 1–22.
Church, J. (2005). The evolving information commons. Library Hi Tech, 23(1), 75–81.
Comstock, D. E., & Scott, W. R. (1977). Technology and the structure of subunits:
Distinguishing individual and workgroup effects. Administrative Science Quarterly,
22(2), 177–202.
Cowgill, A., Beam, J., & Wess, L. (2001). Implementing an information commons in a
university library. Journal of Academic Librarianship, 27(6), 432–439.
Crawford, G. A. (1997). Information as a strategic contingency: Applying the strategic
contingencies theory of intraorganizational power to academic libraries. College &
Research Libraries, 58(2), 145–155.
Crockett, C., McDaniel, S., & Remy, M. (2002). Integrating services in the information
commons: Toward a holistic library and computing environment. Library Administration
and Management, 16(4), 181–186.
Daft, R. L. (1978). System influence on organizational decision-making: Case of resource-
allocation. Academy of Management Journal, 21(1), 6–22.
Daft, R. L. (2001). Organizational theory and design (7th ed). Cincinnati, OH: Thomson Learning.
Daft, R. L., & Macintosh, N. B. (1981). A tentative exploration into the amount and
equivocality of information processing in organizational work units. Administrative
Science Quarterly, 26(2), 207–224.
Dallis, D., & Walters, C. (2006). Reference services in the common environment. Reference
Services Review, 34(2), 248–260.
David, F. R., Pearce, J. A., & Randolph, W. A. (1989). Linking technology and structure to
enhance group performance. Journal of Applied Psychology, 74(2), 233–241.
de Jager, K. (2002). Successful students: Does the library make a difference? Performance
Measurement and Metrics, 3(3), 140–144.
Dearborn, D. C., & Simon, H. A. (1967). Selective perception: A note on the departmental
identifications of executives. Sociometry, 21(2), 140–144.
Department of Education, Institute of Educational Science. (2008). Library statistics program:
Academic libraries, 2008 [Data File]. Retrieved from National Center for Educational
Statistics Website, http://nces.ed.gov
Donaldson, L. (2001). The contingency theory of organizations. Thousand Oaks, CA: Sage
Publications Inc.
Dugan, R. E., & Hernon, P. (2002). Outcomes assessment: Not synonymous with inputs and
outputs. The Journal of Academic Librarianship, 28(6), 376–380.
Duncan, J. M. (1998). The information commons: A model for (physical) digital resource
centers. Bulletin of the Medical Library Association, 86(4), 576–582.
Edgar, W. B. (2006). Questioning LibQUAL+™. portal: Libraries and the Academy, 6(4), 445–465.
Fiedler, F. E. (1964). A contingency model of leadership effectiveness. Advances in Experimental
Social Psychology, 1, 149–190.
Fitzpatrick, E. B., Moore, A. C., & Lang, B. W. (2008). Reference librarians at the reference
desk in a learning commons: A mixed methods evaluation. The Journal of Academic
Librarianship, 34(3), 231–238.
Flowers, K., & Martin, A. (1994). Enhancing user services through collaboration at Rice
University. CAUSE/EFFECT, 17(3), 19–25.
Foley, T. J. (1997). Combining libraries, computing, and telecommunications: A work in
progress. Paper presented at the Proceedings of the 25th Annual ACM SIGUCCS
Conference on User Services: Are You Ready?, Monterey, CA.
Fox, D., Fritz, L., Kichuk, D., & Nussbaumer, A. (2001). University of Saskatchewan
information commons: Reconfiguring the learning environment. Retrieved from http://
library2.usask.ca/Bfox/ic.pdf. Accessed on July 21, 2009.
Frand, J., & Bellanti, R. (2000). Collaborative convergence: Merging computing and library
services at the Anderson graduate school of management at UCLA. Journal of Business
& Finance Librarianship, 6(2), 3–25.
Franks, J. A., & Tosko, M. P. (2007). Reference librarians speak for users: A learning commons
concept that meets the needs of a diverse student body. The Reference Librarian, 47(97),
105–118.
Fry, L. W., & Slocum, J. W., Jr. (1984). Technology, structure, and workgroup effectiveness:
A test of a contingency model. Academy of Management Journal, 27(2), 221–246.
Galbraith, J. R. (1973). Designing complex organizations. Boston, MA: Addison-Wesley.
Ghoshal, S., & Nohria, N. (1989). Internal differentiation within multinational corporations.
Strategic Management Journal, 10(4), 323–337.
Greenwell, S. (2007). Around the world to the technology at the Hub@WT's, the University
of Kentucky's information commons. Library Hi Tech News, 24(6), 40–42.
Gresov, C. (1990). Effects of dependence and tasks on unit design and efficiency. Organization
Studies, 11(4), 503–529.
Griffin, R. (2000). Technology planning: Oregon State University's information commons. OLA
Quarterly, 6(3), 12–13.
Hage, J., Aiken, M., & Marrett, C. B. (1971). Organization structure and communications.
American Sociological Review, 36(5), 860–871.
Hall, R. H., & Tolbert, P. S. (2005). Organizations: Structures, processes, and outcomes. Upper
Saddle River, NJ: Prentice Hall.
Heath, F., Cook, C., Kyrillidou, M., & Thompson, B. (2002). ARL index and other validity
correlates of LibQUAL+ scores. portal: Libraries and the Academy, 2(1), 27–42.
Heath, F., Cook, C., & Thompson, R. (2002). Reliability and structure of LibQUAL+ scores:
Measuring perceived library service quality. portal: Libraries and the Academy, 2(1),
3–12.
Henfridsson, O., Holmström, J., & Söderholm, A. (1997). Beyond the common-sense of
practice: A case for organizational informatics. Scandinavian Journal of Information
Systems, 9(1), 47–56.
Hernon, P., & Dugan, R. E. (2006). Institutional mission-centered student learning. In
P. Hernon, R. E. Dugan & C. Schwartz (Eds.), Revisiting outcomes assessment in higher
education (pp. 1–12). Westport, CT: Libraries Unlimited Inc.
Hernon, P., & Whitman, J. R. (2001). Delivering satisfaction and service quality: A customer-
based approach for libraries. Chicago, IL: American Library Association.
Hickson, D. J., Pugh, D. S., & Pheysey, D. C. (1969). Operations technology and organization
structure: An empirical reappraisal. Administrative Science Quarterly, 14(3), 378–397.
Hook, R. D. (1980). A comparative study of three medium-sized academic libraries using a
contingency theory of management. Unpublished Ph.D. thesis, University of Southern
California, Los Angeles, CA.
Ito, J. K., & Peterson, R. B. (1986). Effects of task difficulty and interunit interdependence on
information processing systems. Academy of Management Journal, 29(1), 139–149.
Jääskeläinen, A., & Lönnqvist, A. (2009). Designing operative productivity measures in public
services. VINE, 39(1), 55–67.
Jones, C. L. (1984a). Academic libraries and computing: A time of change. EDUCOM Bulletin,
20(1), 9–12.
Jones, K. H. (1984b). Conflict and change in library organizations: People, power, and service.
London: Clive Bingley Ltd.
Kauser, S., & Shaw, V. (2004). The influence of behavioural and organisational characteristics on
the success of international strategic alliances. International Marketing Review, 21(1), 17.
Kayongo, J., & Jones, S. (2008). Faculty perception of information control using
LibQUAL+™ indicators. The Journal of Academic Librarianship, 34(2), 130–138.
Kent, P. G., & McLennan, B. (2007). Developing a sustainable staffing model for the learning
commons: The Victoria university experience. Paper presented at the International
Conference on Information and Learning Commons: Enhancing its Role in Academic
Learning and Collaboration.
Khandwalla, P. N. (1973). Viable and effective organizational designs of firms. The Academy of
Management Journal, 16(3), 481–495.
Kim, K. K., & Umanath, N. S. (1992). Structure and perceived effectiveness of software
development subunits: A task contingency analysis. Journal of Management Information
Systems, 9(3), 157–181.
Kirk, T. (2004). The role of management theory in day-to-day management practices of a
college library director. Library Administration and Management, 18(1), 35–38.
Kirk, T. (2008). The merged organization: Confronting the service overlap between libraries
and computer centers. Library Issues: Briefings for Faculty and Administrators, 28(5), 1–4.
Klein, K. J., Dansereau, F., & Hall, R. J. (1994). Levels issues in theory development, data
collection, and analysis. The Academy of Management Review, 19(2), 195–229.
Metzger, M. C. (2006). Enhancing library staff training and patron service through a cross-
departmental exchange. Technical Services Quarterly, 24(2), 1–7.
Miller, J. (2008). Quick and easy reference evaluation: Gathering users’ and providers’
perspectives. Reference & User Services Quarterly, 47(3), 218–222.
Mohr, L. B. (1971). Organizational technology and organizational structure. Administrative
Science Quarterly, 16(4), 444–459.
Molholt, P. (1985). On converging paths: The computing center and the library. Journal of
Academic Librarianship, 11(5), 284–288.
Morales, S., & Sparks, T. (2006). Creating synergy to make it happen. Paper presented at the
Proceedings of the 34th Annual ACM SIGUCCS Conference on User Services,
Edmonton, Alberta, Canada.
Moran, R. F. (1978). Contingency theory and its implications for the structure of an academic
library. ERIC Document Reproduction Service No. ED163949, East Lansing, MI.
Neff, R. K. (1986). Merging libraries and computer centers: Manifest destiny or manifestly
deranged? Information Reports and Bibliographies, 15(3), 17–20.
Neill, J. (2008). Writing up a factor analysis. University of Canberra, Centre for Applied
Psychology.
Nielsen, B., Steffen, S. S., & Dougherty, M. C. (1995). Computing center/library cooperation in
the development of a major university service: Northwestern’s electronic reserve system.
Paper presented at the Realizing the Potential of Information Resources: Information,
Technology, and Services – Proceedings of the 1995 CAUSE Annual Conference,
New Orleans, LA.
Nikkel, T. (2003). Implementing the Dalhousie learning commons. Feliciter, 49(4), 212–214.
Orlikowski, W. (1992). The duality of technology: Rethinking the concept of technology in
organizations. Organization Science, 3(3), 398–426.
Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for
studying technology in organizations. Organization Science, 11(4), 404–429.
Orlikowski, W. J., & Robey, D. (1991). Information technology and the structuring of
organizations. Information Systems Research, 2(2), 143–169.
Oulton, A. J. (1991). Strategies in action: Public library management and public expenditure
constraints. London, UK: Library Association Publishing.
Overton, P., Schneck, R., & Hazlett, C. B. (1977). An empirical study of the technology of
nursing subunits. Administrative Science Quarterly, 22(2), 203–219.
Perrow, C. (1967). A framework for the comparative analysis of organizations. American
Sociological Review, 32(2), 194–208.
Podsakoff, P. M., MacKenzie, S. B., Podsakoff, N. P., & Lee, J.-Y. (2003). Common method
biases in behavioural research: A critical review of the literature and recommended
remedies. Journal of Applied Psychology, 88(5), 879–903.
Price, J. L. (1972). Handbook of organizational measurement. Lexington, MA: D.C. Heath and
Company.
Rayward, W. B. (1969). Libraries as organizations. College and Research Libraries, 30(4),
312–326.
Roszkowski, M. J., Baky, J. S., & Jones, D. B. (2005). So which score on the LibQUAL+™ tells
me if library users are satisfied? Library & Information Science Research, 27(4), 424–439.
Samson, S., Granath, K., & Pengelly, V. (2000). Service and instruction: A strategic focus. In
L. L. Hardesty (Ed.), Books, bytes, and bridges: Libraries and computer centers in
academic institutions (pp. 153–163). Chicago, IL: American Library Association.
Scott, R. (1992). Organizations: Rational, natural, and open systems (3rd ed). Upper Saddle
River, NJ: Prentice-Hall.
Scott, R., & Davis, G. F. (2007). Organizations and organizing: Rational, natural, and open
systems perspectives. Upper Saddle River, NJ: Pearson Prentice Hall.
Sharrow, M. J. (1995). Library and IT collaboration projects: Nine challenges. CAUSE/
EFFECT, Winter, 55–56.
Shi, X., & Levy, S. (2005). A theory-guided approach to library services assessment. College and
Research Libraries, 66(3), 266–277.
Spector, P. E. (1992). Summated rating scale construction: An introduction (Vol. 82). Newbury
Park, CA: Sage University Paper Series.
Spencer, M. E. (2007). The state-of-the-art: NCSU libraries learning commons. Reference
Services Review, 35(2), 310–321.
Stemmer, J. K. (2007). The perceptions of effectiveness in merged information services
organizations: Combining library and information technology services at liberal arts
institutions. Unpublished Ph.D. thesis, Ohio University.
Stueart, R. D., & Moran, B. B. (2007). Library and information center management (7th ed).
Westport, CT: Libraries Unlimited.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed). Boston, MA:
Pearson.
Telatnik, G. M., & Cohen, J. A. (1993). Working together: The library and the computer center.
Paper presented at the Proceedings of the 21st Annual ACM SIGUCCS Conference on
User Services, San Diego, CA.
Thompson, B., Cook, C., & Kyrillidou, M. (2005). Concurrent validity of LibQUAL+™ scores:
What do LibQUAL+™ scores measure? The Journal of Academic Librarianship, 31(6),
517–522.
Thompson, B., Kyrillidou, M., & Cook, C. (2008). Library users' service desires: A LibQUAL+
study. The Library Quarterly, 78(1), 1–18.
Thompson, J. (1967). Organizations in action. New York, NY: McGraw Hill.
Todd, K., Mardis, L., & Wyatt, P. (2005). Synergy in action: When information systems
and library services collaborate to create successful client-centered computing labs.
Paper presented at the Proceedings of the 33rd Annual ACM SIGUCCS Conference on
User Services, Monterey, CA.
Trawick, T. A., & Hart, J. (2000). The computing center and the library at a teaching university:
Application of management theories in the restructuring of information technology. In
L. L. Hardesty (Ed.), Books, bytes, and bridges: Libraries and computer centers in
academic institutions (pp. 178–188). Chicago, IL: American Library Association.
Tucker, J. M., & McCallon, M. (2008). Abilene Christian university: Margaret and Herman
Brown library. In D. R. Bailey & B. Tierney (Eds.), Transforming library service through
information commons: Case studies for the digital age (pp. 99–103). Chicago, IL:
American Library Association.
Tushman, M. L. (1977). Special boundary roles in the innovation process. Administrative
Science Quarterly, 22(4), 587–605.
Tushman, M. L. (1978). Technical communication in R&D laboratories: The impact of project
work characteristics. The Academy of Management Journal, 21(4), 624–645.
Tushman, M. L. (1979). Work characteristics and subunit communication structure: A
contingency analysis. Administrative Science Quarterly, 24(1), 82–98.
Tushman, M. L., & Scanlan, T. J. (1981). Boundary spanning individuals: Their role in
information transfer and their antecedents. Academy of Management Journal, 24(2),
289–305.
Van de Ven, A. H., Delbecq, A. L., & Koenig, R. (1976). Determinants of coordination modes
within organizations. American Sociological Review, 41(2), 322–338.
Van de Ven, A. H., & Ferry, D. L. (1980). Measuring and assessing organizations. New York,
NY: Wiley.
Velasquez, D. (2007). The development and testing of a questionnaire to measure complexity of
nursing work performed in nursing homes. Geriatric Nursing, 28(2), 90–98.
Vorwerk, R. J. (1970). The environmental demands and organizational states of two academic
libraries. Unpublished Ph.D. thesis, Indiana University, Bloomington, IN.
Vose, D. S. (2008). Binghamton University, State University of New York: Glenn G. Bartle
Library. In D. R. Bailey & B. Tierney (Eds.), Transforming library service through
information commons: Case studies for the digital age (pp. 29–34). Chicago, IL: American
Library Association.
Weiner, S. (2009). The contribution of the library to the reputation of a university. The Journal
of Academic Librarianship, 35(1), 3–13.
Weiner, S. G. (2003). Resistance to change in libraries: Application of communication theories.
portal: Libraries and the Academy, 3(1), 69–78.
Weng, H. (1997a). A contingency approach to explore the relationship among structure,
technology, and performance in academic library departments. In D. E. Williams &
E. D. Garten (Eds.), Advances in library administration and organization (Vol. 15,
pp. 249–317). Greenwich, CT: JAI Press.
Weng, H. (1997b). A contingency approach to explore the relationships among structure,
technology, and performance in academic library departments. Unpublished Ph.D. thesis,
Rutgers University.
Wolske, M., Larkee, B., Lyons, K., & Bridgewater, K. (2006). Lessons learned from the library:
Building partnerships between campus and departmental it support. Paper presented at
the Proceedings of the 34th Annual ACM SIGUCCS Conference on User Services,
Edmonton, Alberta, Canada.
Woodsworth, A. (1988). Computing centers and libraries as cohorts: Exploiting mutual
strengths. Journal of Library Administration, 9(4), 21–34.
Yohe, M., & AmRhein, R. (2005). It's not your parents' library: No box required. Paper
presented at the Proceedings of the 33rd Annual ACM SIGUCCS Conference on User
Services Monterey, CA.
Zinn, J. S., Brannon, D., Mor, V., & Barry, T. (2003). A structure-technology contingency analysis
of caregiving in nursing facilities. Health Care Management Review, 28(4), 293–306.