
Gary Jechorek
EIDT-6130-2
Program Evaluation, 2015 Summer Sem 05/04-08/23-PT
Table of Contents

00. Table of Contents .........................................................................File Wk00
I. Wk1a Evaluation Project Description ........................................... File Wk1a
II. Wk1b Examine a Program Evaluation ..........................................File Wk1b
III. Wk2 Concept Map: Analyze Contextual ........................................File Wk2
IV. Wk3 Select an Evaluation Criteria ..................................................File Wk3
V. Wk4 Develop a Logic Model ..........................................................File Wk4
VI. Wk5 Develop Evaluation Model ....................................................File Wk5
VII. Wk6 Collection Strategies ..............................................................File Wk6
VIII. Wk7 Evaluation Report Strategy ....................................................File Wk7
IX. Wk7 Audio Presentation .................................................................File Wk7

Wk1 Project Description


Gary Jechorek

Evaluation Project Description


I work for the University of Wisconsin-Milwaukee College of Nursing, where I administer the collection and return of evaluation data for statistical processing and comment compilation.
I would like to evaluate our teaching and course evaluation process to identify improvements needed in the collection of course and teaching data.
I want to focus my evaluation on third- and fourth-year, 6-credit courses that are taught in three parts: seminar, lab, and clinical site (clinical observation and practice location).
The accuracy of the data collected has been muddied by a change in collection procedures.
Question: Should an evaluation be completed to determine the accuracy of the data collected for statistical analysis by specific location and instructor?
The objectives need to be clarified.
Current data collection observations:
An instructor may teach a seminar, a lab, and a clinical location group (all students the same).
An instructor may teach a seminar but a different lab and a different clinical group (different students in all three sections).
An instructor may teach a seminar and a lab but a different clinical group (seminar and lab students the same, clinical students different).
An instructor may teach a seminar and a clinical group but a different lab.
The combinations continue.
What I will be asking for is clarification and a means to carry out a possible change in the collection process.
The current data collection method does not allow clean collection of data by location.
Does the current collection system produce sufficiently accurate information as collected?
Would reviewing each area (Seminar, Lab, & Clinical) as a separate teaching area and section be a more advantageous method?
I need to investigate, by surveying the faculty and adjunct staff members, what statistical information should be collected. Can combinations remain as they are today, or, with a redesign, do we need clean, section-specific data (seminar, lab, & clinical location) to get a better read on student and instructor performance issues and on site-location efficiency in completing objectives?
The current collection methods do not allow data collection to clearly separate seminar results, lab results, and clinical practice site results without overlapping, mixed data:
24 instructors teach in any combination of up to three locations:
Seminar, lab & clinical
Seminar alone
Seminar & lab
Lab alone
Seminar & clinical
Clinical alone
(Lab & clinical completes the seven possible combinations.)
Across 24 instructors, the combinations of results multiply much further.


As collected, the data cannot be calculated statistically with any accuracy.
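To picture the proposed fix, here is a minimal sketch (Python, with hypothetical field names and made-up ratings, not our actual instrument) of how tagging each evaluation response with both the instructor and the section type would allow clean per-section aggregation, whatever combination of sections an instructor teaches:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical evaluation records: each response carries the instructor
# and the section type (seminar, lab, or clinical) it rates.
responses = [
    {"instructor": "A", "section": "seminar", "rating": 4},
    {"instructor": "A", "section": "lab", "rating": 5},
    {"instructor": "A", "section": "clinical", "rating": 3},
    {"instructor": "B", "section": "seminar", "rating": 4},
    {"instructor": "B", "section": "clinical", "rating": 2},
]

# Group ratings by (instructor, section) so results never mix across
# section types, whatever combination an instructor teaches.
by_key = defaultdict(list)
for r in responses:
    by_key[(r["instructor"], r["section"])].append(r["rating"])

for (instructor, section), ratings in sorted(by_key.items()):
    print(f"Instructor {instructor}, {section}: mean {mean(ratings):.2f}")
```

With this keying, every one of the instructor-section combinations listed above reduces to the same clean per-section report.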

Running Head: EXAMINE A PROGRAM EVALUATION

Week One Application


Program Evaluation
Gary Jechorek
Walden University
Examine a Program Evaluation
(EDUC - 6130 - 2)
June 30, 2015
Dr. Michael Burke


Article Being Reviewed
Evaluation of a Web-Based Master's Degree Program: Lessons Learned From an Online Instructional Design and Technology Program, by Ray Martinez (University of Missouri-St. Louis), Shijuan Liu, William Watson, and Barbara Bichelmeyer (Indiana University) (Martinez, Shijuan, Watson, & Bichelmeyer, 2006a)
An independent external evaluation of the University of Illinois (U of I) Master of Science degree program in Instructional Design and Technology (IDIT) was performed by Primary Research Group. The evaluation was to measure the successes of the U of I IDIT program by evaluating the accountability and quality of the program. The primary intended audience for the results was prospective adult learners interested in an e-learning distance master's degree program. The secondary audience was faculty, adjunct faculty, and potential or prospective student candidates and students currently in the program.
The governance and creation group overseeing the U of I IDIT course program is the full-time, tenure-track faculty. This group created the curriculum and controls the content, including the program management and course content management of the program.
The program objectives are unclear, as they are not stated in this review. I found a general program description of an IDIT program:
This unique online degree program is designed to give instructors and corporate trainers the tools to bridge the gap between traditional learning environments and the ever-expanding realm of technology and media, allowing them to create immersive and interactive instructional tools (The iSchool at Illinois: Graduate School of Library and Information Science, Instructional Technology & Design: Who We Are).

The present evaluation is most closely aligned to the purposes of the participant-oriented and
expertise-oriented evaluation approaches. Proponents of participant-oriented evaluation view
participants as central to the evaluation. Using this approach, evaluators work to portray the
multiple needs, values, and perspectives of the program stakeholders in order to make judgments
about the value or worth of the program (Fitzpatrick, Sanders, & Worthen, 2004).
ISSUES TO BE ADDRESSED
The issues that were identified and that would be surveyed for accountability and quality were:
Student characteristics and practices
Faculty characteristics and practices
Curricula design, technology
Organizational supports

METHODOLOGY

This evaluation involved a mixed-methods approach that incorporated data collected from interviews and an online survey. Participants of the study included administrators, faculty, and students of the DM program. Administrators were interviewed during March 2004, an online survey was administered to students in April 2004, and faculty members were interviewed during April 2004 (Martinez, Shijuan, Watson, & Bichelmeyer, 2006b).
The summary of findings was taken from Table 4 of the report, which identifies the outcomes of the evaluation (Martinez, et al., 2006a, p. 280).
SUMMARY OF THE FINDINGS
Teaching online:
Advantages:
Flexibility for instructors and students
Disadvantages:
More difficult and time-consuming in providing feedback and communication in general
Strengths of the program:
Equivalent to the residential program in terms of quality, admission, and evaluation criteria
Beneficial for the residential programs of the department in many ways
Hiring qualified adjunct faculty
Project-oriented design of the program and emphasis on pedagogy
Technology aspects:
Students showed a high level of technological readiness.
Overall quality reputation of the department and faculty
Convenience and flexibility of the distance option
The technical support was rated as above average.
Technology supporting learning needed to be further improved.
AREAS FOR IMPROVEMENT NEEDED:
Technology still hindering the program in many aspects
Building online community, decreasing students' feelings of isolation in learning online
Helping students have appropriate expectations of the program
Class registration:
Students were relatively satisfied with class registration.
Learning support:
Students were relatively satisfied with learning support and assistance from the faculty, graduate assistants, and administrative staff.
Resources:
The quality and quantity of the resources were agreeable to students, yet the reliability and timing needed to be improved.
Administration:
Adjunct faculty lacking direct control of the courses
Hoping to take into account the special design of online courses and how often the design was utilized for faculty merit reports
Technology was not adequate for providing effective courses.
Needing a better course management system
(Martinez, et al., 2006a, p. 280)
This summative evaluation refers to the assessment of participants where the focus is on the outcome of a program. This evaluation was not concerned with providing information to serve decisions or to assist in making judgments about program adoption, continuation, or expansion.
This summative evaluation was to assist in making judgments about the program's overall worth or merit in relation to important criteria, by any observers or decision makers who need evaluative conclusions for reasons other than development: here, prospective students looking for a degree program and determining its worthiness (Fitzpatrick, Sanders, & Worthen, 2011, p. 21).
But it is apparent that formative evaluation was part of the results. In formative evaluation, programs or projects are typically assessed in early implementation (this is the third cohort in the program) to provide information about how best to revise and modify for improvement. This type of evaluation often is helpful for pilot projects and new programs, but it can be used for progress monitoring of ongoing programs (Fitzpatrick, et al., 2011, p. 22). This evaluation identified findings in the program, still in its developmental phase, that need to be addressed to strengthen and stabilize the U of I IDIT program.
This case study was an excellent example of an overview perspective from an outside source looking at a program for its overall accountability and quality to the consumer: an outside source looking in at a program, identifying the strengths and weaknesses, and reporting an outcome of observations.
It identified the data collection groups related to the program (administration, faculty, adjunct faculty, and students), questioned them through data collection methods drawn from participant-oriented and expertise-oriented evaluation approaches, and delivered outcomes related to each group of participants and to its original intent: prospective students looking for a successful distance learning environment and college in which to complete their MS degree.
This was a good example of a case study as a learning tool.


Resources
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines.
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River: Pearson.
Martinez, R., Shijuan, L., Watson, W., & Bichelmeyer, B. (2006a, Fall 2006). Evaluation of a web-based master's degree program. Quarterly Review of Distance Education, 7(3). Retrieved from https://ezproxy.lib.uwm.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&AuthType=ip,uid&db=tfh&AN=22941929&site=ehost-live&scope=site
Martinez, R., Shijuan, L., Watson, W., & Bichelmeyer, B. (2006b). Evaluation of a web-based master's degree program. Quarterly Review of Distance Education, 7(3), 267-283.
The iSchool at Illinois: Graduate School of Library and Information Science. (n.d.). Instructional Technology & Design: Who we are. Retrieved June 30, 2015, from https://www.lis.illinois.edu/academics/itd


CONCEPT MAP: ANALYZED CONTEXTUAL

Week 2
Concept Map: Analyze Contextual

Gary Jechorek
Walden University
Program Evaluation
(EDUC - 6130 - 2)

July 16, 2015 Resubmission

Dr. Michael Burke



In recent years, the healthcare industry's severe shortage of hospital nursing staff has also begun to negatively impact the development of future nursing students and faculty. In some cases, schools of nursing have been forced to put more emphasis on their clinical nursing programs, while academic tracks that prepare students for teaching careers were neglected or even cancelled altogether (da Cunha Miguel, 2013). The need for evaluating teaching methods and course content is critical for future industry success.
Primary stakeholders will be those who create the evaluation process and those who teach to the objectives of the program. The evaluation will determine whether course-level objective goals have been met within the program.
Secondary stakeholders are those reviewing the results and determining the usefulness of the evaluation results, identifying needed improvements as necessary based on the data collected.
Primary Stakeholders:
Faculty
Adjunct Faculty
Evaluation Design Committee
Secondary Stakeholders:
Program Directors
Program Coordinators
Course Coordinators
Academic Affairs Dean
College Dean
All stakeholders in the diagram are Faculty and Adjunct Faculty looking for improved course objective outcomes.
The evaluation design content and process are created by Faculty and Adjunct Faculty in an open-door committee.
Faculty and Adjunct Faculty receive their individual teaching and course evaluation results.
The Evaluation Design Committee does not receive results directly, but its members hold other supervisory positions.
Course Coordinators receive results only for their supervised course level (a combination of two or three courses).
Program Coordinators (4) receive results by supervised semester: 1st, 2nd, 3rd, and 4th Semester of Program.
Program Directors receive results by program: Undergraduate, Graduate, and Doctorate.
The Academic Dean receives all results by semester.
The Academic Dean communicates anything notable to the College Dean.

"Participation of administrators, faculty, staff ... in decision-making makes all of us take responsibility for the success of our colleges and universities and thereby enhances everyone's commitment to excellence, efficiency, and productivity. ... This participation promotes developing and maintaining high academic quality" (University of Wisconsin-Milwaukee Governance Committee, 2013).


[Concept map: the Nursing Faculty and Adjunct Faculty Evaluation Design Committee (Primary) runs the open-door evaluation process; Faculty and Adjunct Faculty (Primary) interact with the Secondary stakeholders, and evaluation results are distributed to the Secondary stakeholders: Course Coordinators, Program Coordinators, Program Directors, Academic Affairs Dean, and College Dean, with Secondary-to-Secondary distribution among the coordinators, directors, and deans.]


Reference
da Cunha Miguel. (2013). Those who can teach. Minority Nursing (March 30, 2013 ed.).
University of Wisconsin-Milwaukee Governance Committee. (2013). University of Wisconsin-Milwaukee Faculty Document No. 2934, November 21, 2013. Retrieved July 16, 2015, from http://www4.uwm.edu/secu/docs/faculty/2934_SharedGov_Statement.pdf


EVALUATION MODEL CHOICE WITH CRITERIA

Week 3: Select an Evaluation Model


Week 4: Develop Evaluative Criteria for Your Program Evaluation

Gary Jechorek
Walden University
Program Evaluation

(EDUC - 6130 - 2)

August 1, 2015

Dr. Michael Burke

Course Assignment: Select an Evaluation Model

Gary Jechorek

DECISION-ORIENTED EVALUATION APPROACHES - MIXED-METHODS MODEL

Evaluation Model: The CIPP Evaluation Model

Advantages:
- Helps administrators make good decisions through "the process of delineating, obtaining, reporting and applying descriptive and judgmental information about some object's merit, worth, probity, and significance to guide decision making, support accountability, disseminate effective practices, and increase understanding of the involved phenomena" (Fitzpatrick, Sanders, & Worthen, 2011, p. 173).
- A redefined definition is more succinct: "the process of delineating, obtaining, and providing useful information for judging decision alternatives" (Fitzpatrick, et al., 2011, p. 178).
- It encourages participation of many stakeholders, who may not have explicit decision-making concerns (Fitzpatrick, et al., 2011, p. 178).
- The evaluation focuses on the stage of the program, and different questions arise at different stages. It encourages managers and evaluators to think cyclically rather than by project (Fitzpatrick, et al., 2011, p. 179).

Disadvantages:
- The focus is typically on managers (Fitzpatrick, et al., 2011, p. 178).
- Stakeholders who do not have explicit decision-making concerns will necessarily receive less attention in defining the purposes of the evaluation (Fitzpatrick, et al., 2011, p. 178).
- Management is the initiator of data collection and the interpretation of results (Fitzpatrick, et al., 2011, p. 178).

Evaluation Model: Utilization-Focused Model

Advantages:
- The primary purpose of evaluation is to inform decisions (and decision makers).
- The evaluation's use is most likely to occur if the evaluator identifies one or more stakeholders who care about the evaluation and are in a position to use it.
- The presence of an individual or group who care about the evaluation and its results; the model focuses on working with key stakeholders or groups.
- The intended users help consider decisions by the type of data or evidence and the feasibility of affecting them (Fitzpatrick, et al., 2011, p. 180).

Disadvantages:
- Staffing changes or turnover of the primary intended users; a task force of primary users is needed instead.
- Decision making is done by a few primary users, assuming an unchanging context and decisions (Fitzpatrick, et al., 2011, p. 181).
- Users may be too close to the program being evaluated to become very involved (Fitzpatrick, et al., 2011, p. 181).

Explain your choice of model for your current evaluation process:

The CIPP model will produce a summative result, as we are required to do by policy; but since CIPP is defined as "the process of delineating, obtaining, reporting and applying descriptive and judgmental information about some object's merit, worth, probity, and significance to guide decision making, support accountability, disseminate effective practices, and increase understanding of the involved phenomena" (Fitzpatrick, et al., 2011, p. 173), the process also accomplishes a formative review of the program, the course objectives, and the instructors' teaching accomplishments.



Our process follows the CIPP model's four categories of evaluation: context, input, process, and product. The process follows the logic structure created by Stufflebeam:

CIPP Model Logic Structure:
Focusing the evaluation
Collection of information
Organizing of information
Analysis of information
Reporting of information
Administration of the evaluation
(Fitzpatrick, et al., 2011, pp. 175-176)


The Utilization Evaluation method is part of the review process of the evaluation. It follows two assumptions:
A. The primary purpose of the evaluation is to inform decisions; and
B. Use is most likely to occur if the evaluator identifies one or more stakeholders who care about the evaluation and are in a position to use it (Fitzpatrick, et al., 2011, p. 179).
The distribution of results is extensive in the review process. The following stakeholders review the nature of the findings each semester. The committees that discuss the findings, interpret results, and make appropriate recommendations as needed are the Undergraduate Program Committee, the Graduate Program Committee, and their subcommittees, the UPC Annual Review Committee and the GPC Annual Review Committee. All stakeholders below also receive and review results and add their input as recommendations (University of Wisconsin-Milwaukee College of Nursing Undergraduate Program Committee, 2008).
In recent years, the healthcare industry's severe shortage of hospital nursing staff has also begun to negatively impact the development of future nursing students and faculty. In some cases, schools of nursing have been forced to put more emphasis on their clinical nursing programs, while academic tracks that prepare students for teaching careers were neglected or even cancelled altogether (da Cunha Miguel, 2013). The need for evaluating teaching methods and course content is critical for future industry success.
Primary stakeholders will be those who create the evaluation process and those who teach to the objectives of the program. The evaluation will determine whether course-level objective goals have been met within the program.
Secondary stakeholders are those reviewing the results and determining the usefulness of the evaluation results, identifying needed improvements as necessary based on the data collected.
Primary Stakeholders:
Faculty
Adjunct Faculty
Evaluation Design Committee
Secondary Stakeholders:
Program Directors
Program Coordinators
Course Coordinators
Academic Affairs Dean
College Dean

All stakeholders are Faculty and Adjunct Faculty looking for course objective outcomes.
The evaluation design content and process are created by Faculty and Adjunct Faculty in an open-door committee.
Faculty and Adjunct Faculty receive their individual teaching and course evaluation results.
The Evaluation Design Committee does not receive results directly, but its members hold other supervisory positions.
Course Coordinators receive results only for their supervised course level (a combination of two or three courses).
Program Coordinators (4) receive results by supervised semester: 1st, 2nd, 3rd, and 4th Semester of Program.
Program Directors receive results by program: Undergraduate, Graduate, and Doctorate.
The Academic Dean receives all results by semester.
The Academic Dean communicates anything notable to the College Dean. (A sketch of these routing rules follows this list.)
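To make the distribution rules above concrete, here is a sketch only (Python, with hypothetical tags on each result record; the actual routing is an administrative process, not software) of how a single result would fan out to its recipients:

```python
# Hypothetical result record: the course, program semester, and program
# to which a teaching/course evaluation result belongs.
result = {"course": "FCPI", "semester": 2, "program": "Undergraduate"}

def recipients(r):
    """Mirror the distribution list above for one result record."""
    return [
        "Instructor (individual results)",
        f"Course Coordinator ({r['course']})",
        f"Program Coordinator (Semester {r['semester']})",
        f"Program Director ({r['program']})",
        "Academic Affairs Dean (receives all results)",
    ]

print(recipients(result))
```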

"Participation of administrators, faculty, staff ... in decision-making makes all of us take responsibility for the success of our colleges and universities and thereby enhances everyone's commitment to excellence, efficiency, and productivity. ... This participation promotes developing and maintaining high academic quality" (University of Wisconsin-Milwaukee Governance Committee, 2013). This is a policy that must be followed under campus governance policies.
The review of the evaluation process by stakeholders is undertaken to inform decisions, clarify options, identify improvements, and provide information about programs and policies within the contextual boundaries of time, place, values, and politics (Fitzpatrick, et al., 2011, p. 180).
Using a mixed-methods evaluation model works well in an educational environment, since so much of what is completed increases ownership in the evaluation and in the use of the results.
The two models I have written about for my situation work in an educational environment where summative and formative evaluations are both necessary in nursing, due to ever-changing best-practice methods that are continually being updated.
The evaluation questions on the current evaluation survey form are not a consideration in this formative evaluation. We will be looking at processes.
Using a case-study-design qualitative survey method, there is a query to be answered: Is a change in process needed? The case study design will describe the current process method and, if necessary, execute a change in the collection, statistical analysis, and distribution-of-results methods so that each section (Seminar, Lab, & Clinical Site) is reported as a unique instructional type.


The standards reflected in the choice of formative evaluation review questions are guided by the descriptive case study design, which "is particularly useful when the purpose of the evaluation is to describe something -- a case -- in depth" and is concerned with exploring the hows and whys of a program (Fitzpatrick, et al., 2011, p. 390).
The role of the stakeholders should be to clarify the evaluation process so that teaching and course objectives are adequately represented.
Question for Formative Evaluation:
Is our end-of-semester evaluation process for *FCPI and *FCPII Level evaluations providing an adequate representation of the teaching and course objectives?
*Foundation of Clinical Practice I & II courses (Seminar, Lab, & Clinical Site types)

Using the models and criteria questions should assist administrators in making good decisions through "the process of delineating, obtaining, reporting and applying descriptive and judgmental information about some object's merit, worth, probity, and significance to guide decision making, support accountability, disseminate effective practices, and increase understanding of the involved phenomena" (Fitzpatrick, et al., 2011, p. 173).
The formative evaluation will determine whether new process methods will be implemented in the current evaluation process program.



Start Here
Question for Formative Evaluation:
Is our end-of-semester evaluation process for FCPI and FCPII Level evaluations providing an adequate representation of the teaching and course objectives?
Description: Starting Point
This is what is happening:
FCPI & FCPII Level Coordinators have identified a problem with Teaching and Course Evaluation results for Seminar, Lab, and Clinical Sites.
This is the result of:
The current evaluation process distributes evaluation forms during the seminar for FCPI & FCPII. When completing the evaluations, the students do not indicate which section (Seminar, Lab, or Clinical Site) they are evaluating.
This makes happen:
An interview process of Committee Members to examine, develop, and redesign evaluation processes.
Question: Do the results from the evaluation present an accurate assessment of teaching and course objectives?

Yes: The results are adequate for our needs in assessing the teaching and course results.
This makes happen:
No further step needs to be taken.
This is the result of:
Question: Are the results in FCPI and FCPII giving us an accurate assessment of Seminar, Lab, and Clinical Site teaching and course evaluation outcome results?
Answer: Yes. End process.

No: Continue to an interview process.
There is a need to identify a system to report each section as a unique instructional type.
This makes happen:
The results are incomplete, and a system correction is needed identifying Seminar, Lab, and Clinical Site in teaching and course evaluation outcomes.



Formative interview evaluation review.
Interviews: FCPI & FCPII Stakeholders to Identify Need
Continue interviews:
Evaluation Process Planning Committee Research of Needs
Question: What is the problem with the evaluation outcomes/reporting?
This makes happen: Interview questions.
a. How do we separate data from Seminar, Lab and Clinical sites?
i. What methods need changing?
b. Are there new data collection methods for improvement?
i. What methods need changing?
ii. How do we implement methods to collect evaluation data that is separate
for Seminar, Lab and Clinical Sites?
c. Are there changes needed in the reporting of evaluation data?
i. Are new reporting methods needed as a result of new data collection?
ii. What methods need changing?
d. Are changes needed in statistical analysis?
i. Are alternative statistics needed as a result of new data collection
methods?
ii. What statistics need to be added or changed?
e. Are there distribution changes needed?
i. How do we change distribution methods to provide separate data for
Seminar, Lab and Clinical Sites?
ii. What methods need changing?
f. What stakeholders are affected by these proposed changes in data collection,
statistical analysis and reporting?
i. How do we communicate the changes?
g. Is there additional stakeholder training needed to implement the changes?
i. How do we train the stakeholders?
h. Are other changes needed that have not been addressed?
i. What needs changing?
Results of Interviews: FCPI & FCPII stakeholders identified the system design changes needed to correct the inadequacies in the summative evaluation process.
Go to: Start Here and repeat the process.


References

da Cunha Miguel. (2013). Those who can teach. Minority Nursing (March 30, 2013 ed.).
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River: Pearson.
University of Wisconsin-Milwaukee College of Nursing Undergraduate Program Committee. (2008). Course Evaluation Policy, Faculty Document # (01-12) 117A. Policy and Procedure. University of Wisconsin-Milwaukee College of Nursing, Milwaukee.
University of Wisconsin-Milwaukee Governance Committee. (2013). University of Wisconsin-Milwaukee Faculty Document No. 2934, November 21, 2013. Retrieved July 16, 2015, from http://www4.uwm.edu/secu/docs/faculty/2934_SharedGov_Statement.pdf


APPLICATION: DEVELOP A LOGIC MODEL

Week 4
Application: Develop a Logic Model

Gary Jechorek
Walden University
Program Evaluation
(EDUC - 6130 - 2)

July 16, 2015 Submission

Dr. Michael Burke



In recent years, the healthcare industry's severe shortage of hospital nursing staff has also begun to negatively impact the development of future nursing students and faculty. In some cases, schools of nursing have been forced to put more emphasis on their clinical nursing programs, while academic tracks that prepare students for teaching careers were neglected or even cancelled altogether (da Cunha Miguel, 2013). The need for evaluating teaching methods and course content is critical for future industry success.
Primary stakeholders will be those who create the evaluation process and those who teach to the objectives of the program. The evaluation will determine whether course-level objective goals have been met within the program.
Secondary stakeholders are those reviewing the results and determining the usefulness of the evaluation results, identifying needed improvements as necessary based on the data collected.
Primary Stakeholders:
Faculty
Adjunct Faculty
Evaluation Design Committee
Secondary Stakeholders:
Program Directors
Program Coordinators
Course Coordinators
Academic Affairs Dean
College Dean
All stakeholders in the diagram are Faculty and Adjunct Faculty looking for improved course objective outcomes.
The evaluation design content and process are created by Faculty and Adjunct Faculty in an open-door committee.
Faculty and Adjunct Faculty receive their individual teaching and course evaluation results.
The Evaluation Design Committee does not receive results directly, but its members hold other supervisory positions.
Course Coordinators receive results only for their supervised course level (a combination of two or three courses).
Program Coordinators (4) receive results by supervised semester: 1st, 2nd, 3rd, and 4th Semester of Program.
Program Directors receive results by program: Undergraduate, Graduate, and Doctorate.
The Academic Dean receives all results by semester.
The Academic Dean communicates anything notable to the College Dean.

"Participation of administrators, faculty, staff ... in decision-making makes all of us take responsibility for the success of our colleges and universities and thereby enhances everyone's commitment to excellence, efficiency, and productivity. ... This participation promotes developing and maintaining high academic quality" (University of Wisconsin-Milwaukee Governance Committee, 2013).

[Logic model diagram (two pages in the original); its steps and details follow.]

Details
Steps
Are alternative statistical calculation methods needed as a result of new data collection
methods?
This makes happen:
Are new Reporting methods needed as a result of new data collection forms?
New Evaluation Procedures Improvement Developed and Implemented
This is the result of:
Review current Evaluation Process
Are changes needed in statistical calculation methods?
This makes happen:
Review current Evaluation Process
Are there changes needed in reporting methods?
This is the result of:
Question? What is missing in the evaluation outcomes?
The results are incomplete and a process correction is needed identifying
Seminar, Lab and Clinical site in teaching and course evaluation outcomes
Are new Reporting methods needed as a result of new data collection forms?
This makes happen:
New Evaluation Procedures Improvement Developed and Implemented
How do we design collection forms to separate Seminar, Clinical Site, and Lab?
This is the result of:
Are alternative statistical calculation methods needed as a result of new data collection
methods?
Review current Evaluation Process
Are there changes needed in reporting methods?
This makes happen:
Review current Evaluation Process
This is the result of:
Question? What is missing in the evaluation outcomes?
The results are incomplete and a process correction is needed identifying
Seminar, Lab and Clinical site in teaching and course evaluation outcomes
Are changes needed in statistical calculation methods?
What collection methods could be developed in improvement?
Evaluation Process Planning Committee
This makes happen:
No: there is a need to identify each section as a unique evaluation location.
This is the result of:



FCPI & FCPII Level Coordinators are questioning Teaching and Course Evaluation Results for improvement.
This makes happen:
Evaluation Process Planning Committee
This is the result of:
Is our evaluation process for end-of-semester Teaching and Course Evaluations for FCPI and FCPII Level producing complete and adequate results?
How do we design collection forms to separate Seminar, Clinical Site, and Lab?
This makes happen:
New Evaluation Procedures Improvement Developed and Implemented
This is the result of:
How do we improve methods to separate data for Seminar, Lab and Clinical?
Are new Reporting methods needed as a result of new data collection forms?
Review current Evaluation Process
How do we improve methods to separate data for Seminar, Lab and Clinical?
This makes happen:
New Evaluation Procedures Improvement Developed and Implemented
How do we design collection forms to separate Seminar, Clinical Site, and Lab?
This is the result of:
Review current Evaluation Process
How do we separate outcomes for Seminar, Lab and Clinical sites?
This makes happen:
Review current Evaluation Process
What collection methods could be developed in improvement?
This is the result of:
The results are incomplete and a process correction is needed identifying Seminar, Lab, and Clinical Site in teaching and course evaluation outcomes.
Question? What is missing in the evaluation outcomes?
Interview FCPI & FCPII Stakeholders to identify Need
This makes happen:
Is our evaluation process for end-of-semester Teaching and Course Evaluations for FCPI and FCPII Level producing complete and adequate results?
Question? What is missing in the evaluation outcomes?
This is the result of:
Question? Are the results in FCPI and FCPII giving us an accurate assessment,
of Seminar, Lab and Clinical site teaching and course evaluation outcome
results?


Is our evaluation process for end-of-semester Teaching and Course Evaluations for FCPI and FCPII Level producing complete and adequate results?
This makes happen:
FCPI & FCPII Level Coordinators are questioning Teaching and Course Evaluation Results for
improvement
This is the result of:
Question? Are the results in FCPI and FCPII giving us an accurate assessment,
of Seminar, Lab and Clinical site teaching and course evaluation outcome
results?
Interview FCPI & FCPII Stakeholders to identify Need
No: there is a need to identify each section as a unique evaluation location.
This makes happen:
The results are incomplete and a process correction is needed identifying
Seminar, Lab and Clinical site in teaching and course evaluation outcomes
This is the result of:
Question? Are the results in FCPI and FCPII giving us an accurate assessment,
of Seminar, Lab and Clinical site teaching and course evaluation outcome
results?
Evaluation Process Planning Committee
New Evaluation Procedures Improvement Developed and Implemented
This makes happen:
Were improved outcomes obtained by the redesign of evaluation results by location (Seminar, Lab, and Clinical Site)?
This is the result of:
How do we design collection forms to separate Seminar, Clinical Site, and Lab?
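Because the exported steps above read as node-and-edge pairs ("this makes happen" / "this is the result of"), a small sketch (Python, with shortened, hypothetical node names) shows how the same logic model can be held as a directed graph and walked from its starting node:

```python
# A simplified slice of the logic model as a directed graph: each node
# maps to the node(s) it "makes happen".
makes_happen = {
    "Coordinators question evaluation results": ["Interview stakeholders"],
    "Interview stakeholders": ["Evaluation Process Planning Committee"],
    "Evaluation Process Planning Committee": ["Review current evaluation process"],
    "Review current evaluation process": ["New procedures developed and implemented"],
    "New procedures developed and implemented": [],
}

# Walk the chain from the starting node, printing each step in order.
node = "Coordinators question evaluation results"
while node:
    print(node)
    successors = makes_happen.get(node, [])
    node = successors[0] if successors else None
```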


Reference
da Cunha Miguel. (2013). Those who can teach. Minority Nursing (March 30, 2013 ed.).
University of Wisconsin-Milwaukee Governance Committee. (2013). University of Wisconsin-Milwaukee Faculty Document No. 2934, November 21, 2013. Retrieved July 16, 2015, from http://www4.uwm.edu/secu/docs/faculty/2934_SharedGov_Statement.pdf


APPLICATION: CHOOSING DATA COLLECTION STRATEGIES

Week 6: Application: Choosing Data Collection Strategies

Gary Jechorek
Walden University
Program Evaluation
(EDUC - 6130 - 2)
August 9, 2015

Dr. Michael Burke

Application: Choosing Data Collection Strategies

The objective of this assignment was to practice a method: interview two current Master of Science in Instructional Design and Technology students to find out what they consider the most appropriate collection strategies for evaluating the program they are completing.
The chosen interview focus:
Have MS IDT students successfully acquired the knowledge and skills taught in the IDT program?
In both interviews there was agreement that Walden's stated course objectives should be evaluated to identify whether students had been prepared for real-world tasks in instructional design.
The objectives of the evaluation would be a review and measurement of the 13 courses to understand student perceptions of program quality, to improve both student satisfaction and retention to degree completion, and to plan for the future, measured against these stated program objectives:
"You'll learn to apply theory, research, creativity, and problem-solving skills to a variety of technology applications in order to improve learning. You will also develop the skills to assess, create, and manage training materials. The combination of these skills will help you to support technology-supported training in educational institutions and corporate training classrooms. Through your coursework, you will gain the experience needed to efficiently and effectively use technology and multimedia tools." (Walden University, 2015)
Data Collection Method
A pretest-posttest design will be used. The pretest was completed at the beginning of the program: a pretest measure of the outcome of interest is obtained before the program of learning, followed by a posttest on the same measure after learning occurs, both administered to the same group, our cohort.
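As a concrete illustration of how the pretest-posttest comparison could be analyzed, here is a minimal sketch using a paired t-test on hypothetical Likert-style self-ratings. The scores and variable names are illustrative assumptions, not data from the actual cohort.

```python
# Minimal sketch of a pretest-posttest comparison on hypothetical
# Likert-scale (1-5) self-ratings; all data values are illustrative.
from scipy import stats

# One pretest and one posttest score per student in the cohort
pretest  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
posttest = [4, 4, 3, 3, 5, 4, 3, 4, 3, 4]

# Paired t-test: did mean self-rated skill change from pre to post?
t_stat, p_value = stats.ttest_rel(posttest, pretest)

mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
print(f"Mean gain: {mean_gain:.2f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is appropriate here because each student supplies both a pretest and a posttest score, so the two samples are not independent.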
Strategy
Interview consensus: because this is an online program, an online survey tool would be the most appropriate collection medium. The survey would combine quantitative questions using a Likert scale with qualitative questions for program review. All students in the cohort who are finishing the program would be offered participation; there would be no sampling. Student anonymity and confidentiality would be maintained at all times.
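To show what the quantitative side of such a survey might produce, here is a minimal sketch that tallies one hypothetical Likert item; the question wording and the responses are invented for illustration.

```python
# Minimal sketch: tally one hypothetical Likert-scale survey item.
from collections import Counter

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]  # 1 = strongly disagree ... 5 = strongly agree
counts = Counter(responses)

print("Q: 'The program prepared me for real-world instructional design tasks.'")
for rating in range(1, 6):
    print(f"  {rating}: {'#' * counts.get(rating, 0)} ({counts.get(rating, 0)})")
print(f"Mean rating: {sum(responses) / len(responses):.2f}")
```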
Suggested questions, discussed during the interviews for inclusion in the final program evaluation, measuring student perceptions of their successful completion of the program:
1. What skills did you learn in this program that you can use?
2. How will you apply the skills you learned in this program?
3. What changes will you make in your situation based on the program?
4. How do today's program goals meet your needs?
5. In what way was this program useful to you?
6. How will this program help you set goals?
7. What goals have you set based on this program?
8. What changes did you make in your operation/situation as a result of courses you attended in the past year?
9. Why did you participate in this program?
10. What are you doing today in your operation that you did not do prior to this educational program? (Specify the program/context.)
11. What results do you expect from using information gained from this program?
12. What problems will be addressed by your involvement in this program?
13. What practices that you currently use will be discontinued as a result of this program?
14. What new practice(s) will you implement as a result of this program?
15. In what way has decision-making been made easier by participation in this program?
16. What is the best thing that can happen if you use the information from this program?
17. What immediate steps/actions will you take as a result of this program?
18. What specific assistance would be helpful to you in implementing the new practices presented in this program?
19. What will it take for you to implement the new practices/information provided in this program?
20. What result(s)/impact(s) do you expect from participation in this program?
21. What was the result/impact of your participation?
(Martin, 2003)

Whom do the interviewees consider to be the stakeholders in this program evaluation, and what would their interests be?
Interview consensus on the key players:
Students - Meeting the instructional needs of students is the cornerstone of every effective distance education program, and the test by which all efforts in the field are judged.
Faculty - The success of any distance education effort rests squarely on the shoulders of the faculty. Special challenges confront those teaching at a distance. For example, the instructor must:
• Develop an understanding of the characteristics and needs of distant students.
• Adapt teaching styles, taking into consideration the needs and expectations of multiple, often diverse, audiences.
• Develop a working understanding of delivery technology, while remaining focused on their teaching role.
• Function effectively as a skilled facilitator as well as content provider.
Facilitators - The instructor often finds it beneficial to rely on a BB or D2L site facilitator to act as a bridge between the students and the instructor. To be effective, a facilitator must understand the students being served and the instructor's expectations.
Support Staff - These individuals are the silent heroes of the distance education enterprise and ensure that the myriad details required for program success are dealt with effectively. Most successful distance education programs consolidate support service functions to include student registration, materials duplication and distribution, textbook ordering, securing of copyright clearances, facilities scheduling, processing grade reports, managing technical resources, etc. Support personnel are truly the glue that keeps the distance education effort together and on track.
Administrators - Although administrators are typically influential in planning an institution's distance education program, they often lose contact or relinquish control to technical managers once the program is operational. Most importantly, they maintain an academic focus, realizing that meeting the instructional needs of distant students is their ultimate responsibility (Milivoj, 2012).
How did the experience of conducting each interview differ?
My expectation for the interview process was to conduct a phone or Skype interview and ask clarifying questions. After I supplied my cohort peers with my contact information, they wrote that they preferred an e-mail interview. Both explained they had limited time, that a verbal format was not available to them, and that we needed to complete the interview via e-mail. E-mail interviews can be distinguished from e-mail used for making contact with prospective participants and arranging face-to-face interviews, now commonplace in qualitative research; e-mail lists are also increasingly used as the basis for survey research (Gunter et al., 2002). I stated plainly that this was not my preferred method, but since no other alternative was available to me, I complied with the e-mail format.
Questions and responses proceeded in written form via e-mail over three days. I printed the answers and responses and filed the communications in e-mail folders for future reference. Written communications in both directions prompted requests for clarification and consensus on a given topic and response. The process completed the interview, but I am old school: a forty-five-minute oral, face-to-face interview would still be my preferred method.
My expectation in an interview is to clearly communicate to the person at the other end that they are important. I still believe this can be achieved only over the telephone or Skype, through tone of voice and, when available, visual observation. I would still prefer an oral interaction in my interviews; I am more confident that I am getting complete information from the intonation I hear while asking probing questions.
Overall, we completed the interview process, but I believe there would have been more to gain from the oral method.
References
Gunter, B., Nicholas, D., Huntington, P., & Williams, P. (2002). Online versus offline research: Implications for evaluating digital media. Aslib Proceedings, 54(4), 229-239. doi:10.1108/00012530210443339
Martin, R. A. (2003). Potential program evaluation questions. Retrieved August 8, 2015, from https://www.extension.iastate.edu/ag/staff/info/evalquestions.html
Milivoj, K. (2012). What is distance education and key players. Agriculture.extension. Retrieved from http://www.agroextension.net/default.asp?ids=0&ch=176&typ=1&val=164
Walden University. (2015). This instructional design and technology master's degree program will teach you to build quality education courses and training experiences. Retrieved August 8, 2015, from http://www.waldenu.edu/masters/ms-in-instructional-design-and-technology

EVALUATION REPORTING STRATEGY

Week 6: Evaluation Reporting Strategy

Gary Jechorek
Walden University
Program Evaluation (EDUC-6130-2)
August 16, 2015
Dr. Michael Burke

Evaluation Reporting Strategy

Stakeholder: Evaluation Design Committee (Faculty, Adjunct Faculty, Statistician / Academic Support Staff)
Reporting Strategy: Interviews and oral/written reports producing the program's evaluation methods.
Implications: "Participation of administrators, faculty, staff ... in decision-making makes all of us take responsibility for the success of our colleges and universities and thereby enhances everyone's commitment to excellence, efficiency, and productivity. ... This participation promotes developing and maintaining high academic quality" (University of Wisconsin-Milwaukee Governance Committee, 2013).
Stakeholder Involvement: Committee member representatives create the evaluation program, to include quantitative and qualitative questions, measurements, forms design, distribution, collection, results reporting, and results distribution; create methods to review results-reporting methods with stakeholders; evaluate the methods of the process.

Stakeholder: Faculty / Adjunct Faculty
Reporting Strategy: Written reporting of results by course/section taught.
Implications: Results reported against course objectives and reviewed for teaching effectiveness; data and comments used for annual review.
Stakeholder Involvement: Distribution and collection of teaching and course evaluation data from students; implications for the effectiveness of personal teaching styles and of teaching to meet course objectives.

Stakeholder: Program Directors
Reporting Strategy: Written reporting of results by program: UG, Grad, or Doctoral.
Implications: Results reported against course objectives and reviewed for teaching effectiveness; data and comments used for annual review.
Stakeholder Involvement: Review of reported results against course objectives and teaching results, used for review of all teaching staff.

Stakeholder: Program Coordinators
Reporting Strategy: Written reporting of results by program: UG, Grad, or Doctoral.
Implications: Results reported against course objectives; review of effectiveness in meeting program objectives.
Stakeholder Involvement: Review of reported results against course objectives and teaching results, used for review of program effectiveness.
Stakeholder: Course Coordinators
Reporting Strategy: Written reporting of results by program level: UG 1st, 2nd, 3rd, and 4th semester, plus the MS and Doctoral programs.
Implications: Results reported against course objectives; review of effectiveness in meeting level objectives.
Stakeholder Involvement: Review of reported results against course objectives and teaching results, used in course-level review.

Stakeholder: Administrators: Statistician / Academic Support Staff
Reporting Strategy: Interviews and oral/written reports of the program's evaluation methods and processes.
Implications: Possible implications include confidentiality of stakeholders in the data collection process; incorrect forms creation; forms distribution problems; incomplete instructions for forms completion; incomplete forms completion by stakeholders; data collection not completed at all; missing forms packets not delivered to the return area; scanning problems; difficulty reading qualitative comments for typing; statistical calculation problems; improper collation of results, comments, and statistics; and improper distribution of results.
Stakeholder Involvement: Preparation of data collection forms; distribution of data collection forms by course and section to instructors; collection of teaching and course evaluation data from faculty; qualitative comments typed by course and section; forms scanned for statistical calculation by course and section; statistical reports created by course and section number; qualitative and statistical results collated by course, section, and instructor; distribution of results to Faculty, Adjunct Faculty, Program Directors, Program Coordinators, Course Coordinators, and the Academic Dean.
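The collation step above, statistical results by course, section, and instructor, lends itself to a simple tabular aggregation. A minimal sketch follows; the course numbers, section labels, and scores are hypothetical, and pandas is one tool choice among many, not the system actually in use.

```python
# Minimal sketch: collate scanned evaluation scores by course, section,
# and instructor; all column names and rows are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "course":     ["N361", "N361", "N361", "N362"],
    "section":    ["Lab 1", "Lab 1", "Sem 2", "Clin 3"],
    "instructor": ["A", "A", "B", "A"],
    "score":      [4, 5, 3, 4],   # one Likert rating per row
})

# Mean rating and response count per course/section/instructor
report = (df.groupby(["course", "section", "instructor"])["score"]
            .agg(mean_score="mean", responses="count")
            .round(2)
            .reset_index())
print(report)
```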
Stakeholder: Academic Affairs Dean
Reporting Strategy: Written reporting of results for all programs.
Implications: Results reported to the Nursing program(s), comparing results to objectives and to the Vision Statement and Mission Statement objectives.
Stakeholder Involvement: Review of reported results against course objectives and teaching results, used for review of all Nursing programs.

Stakeholder: College Dean
Reporting Strategy: As requested, the Academic Dean reports overall results to the College Dean.
Implications: Results reported to the Nursing program(s), comparing results to objectives and to the Vision Statement and Mission Statement objectives.
Stakeholder Involvement: Review of reported results against course objectives and teaching results, used for review of all Nursing programs.

Values, Standards, and Criteria:
Reporting Strategy: "The process of discussing and crafting evaluation systems focuses attention on the practice of good teaching and helps to create a culture in which teaching is highly valued. Consideration can be given to changes in emphasis and interest that will naturally occur in an academic career" (University of Michigan, 2014).
Implications: Attaining the Core Values: Accountability, Collaboration, Creativity, Diversity, Excellence, Integrity, Human Dignity, and Social Justice (University of Wisconsin-Milwaukee College of Nursing, 2015).
Stakeholder Involvement: "To ensure that the evaluation system adopted is credible and acceptable, faculty members must have a strong hand in its development" (University of Michigan, 2014).
Potential ethical issues:
"Today higher education and nursing education are poised on the brink of sweeping changes. The forces driving these changes are numerous and difficult to isolate: the increasing multiculturalism of society, decreasing financial resources in education and health care, changes in the delivery of health care through health care reforms, the integration of evidence-based practice, and the need for more nurses with higher degrees ... the need for nurse educators ... due to a critical shortage ... nursing programs must increase their graduation rates ... specifically for higher rates of graduation completions" (Billings & Halstead, 2013, p. 1).
A strong evaluation of educational programs is essential to the future success of the nursing profession and to ensuring health care for all future recipients.
References
Billings, D. M., & Halstead, J. A. (2013). Teaching in nursing: A guide for faculty. Elsevier Health Sciences.
University of Michigan. (2014). Guidelines for evaluating teaching. Retrieved August 13, 2015, from http://www.crlt.umich.edu/tstrategies/guidelines
University of Wisconsin-Milwaukee College of Nursing. (2015). Fast facts. Retrieved August 14, 2015, from http://www4.uwm.edu/nursing/about/fast-facts.cfm
University of Wisconsin-Milwaukee Governance Committee. (2013). University of Wisconsin-Milwaukee Faculty Document No. 2934, November 21, 2013. Retrieved July 16, 2015, from http://www4.uwm.edu/secu/docs/faculty/2934_SharedGov_Statement.pdf