Foundations for Evaluation and Research
1.0 Introduction to Criteria
According to Henderson and Bialeschki (2010), "The goal of systematic evaluation and rigorous research is to make reliable, valid, useful, and enlightened decisions and interpretations" (p.1).
Basic Concepts
Determining criteria is the process of developing the research question by reviewing issues and considerations to establish the purpose of the inquiry (research).
Evaluation is making decisions based on specific questions, issues, or criteria and supporting evidence.
Evaluation research is the process used to collect and analyze data or evidence.
Recreation services are the human service organizations and enterprises related to:
1.1 The Basic Question: What is Systematic Inquiry?
Recreation professionals use the process and techniques of systematic inquiry (evaluation and research) to help them make enlightened decisions and improve what they do.
Evaluation: the systematic collection and analysis of data to address criteria for judgments about the worth or improvement of something.
Goal of evaluation is to determine "what is" compared to "what should be."
Research: the systematic investigation within some discipline undertaken to establish facts and principles to contribute to a body of knowledge. The goal of research is not necessarily to assist in practical decision making.
According to Mitra & Lankford (1999), "The ultimate goal of leisure research is to produce an accumulating body of reliable knowledge that will enable us to explain, predict, and understand leisure phenomena that interest us." This knowledge would be used to promote positive human growth and development through recreation and leisure services.
Therefore, it is imperative for park, recreation, and leisure professionals to acquire the research skills necessary to read current professional research and to design and conduct their own research projects.
Systematic (Formal) Evaluations
A recreation director who uses observation, listening and discussions with participants and staff to evaluate programs is using an "informal evaluative" process.
A formal or systematic evaluation provides rigor when outcomes are complex, decisions are important, and evidence is needed to make enlightened or informed decisions or interpretations.
1) criteria (hypothesis, research questions, guiding questions, working hypothesis, purposes, measures, or objectives)
2) evidence (data collected and analyzed using appropriate designs and methods)
3) judgment (interpretations expressed in conclusions and recommendations)
Criteria + Evidence + Judgment = Evaluation
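The formula above can be sketched in code. This is a minimal illustration, not from the text: the criterion is a hypothetical target mean rating, the evidence is a set of hypothetical participant ratings, and the judgment is the comparison of the two.

```python
# Minimal sketch of Criteria + Evidence + Judgment = Evaluation.
# The target, ratings, and 1-5 scale are all hypothetical examples.

def evaluate(criterion: float, evidence: list[float]) -> str:
    """Judge whether the evidence meets the criterion."""
    mean_score = sum(evidence) / len(evidence)  # analyze the evidence
    if mean_score >= criterion:                 # judgment: compare to criterion
        return f"criterion met (mean {mean_score:.2f} >= target {criterion})"
    return f"criterion not met (mean {mean_score:.2f} < target {criterion})"

# Hypothetical participant ratings; target mean is 4.0.
print(evaluate(4.0, [5, 4, 3, 4, 5, 4]))  # → criterion met (mean 4.17 >= target 4.0)
```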
Continuum of Evaluation - Everyday Living to Systematic
Everyday living (feelings/thoughts) → descriptive designs → experiments → systematic inquiry
Conducting formal evaluations is the basis for more efficient and effective operations, staff and programs.
Effectiveness: relates to the changes or results of a program or intervention.
Efficiency: relates to how the changes or results happen.
Important Characteristics of Evaluation
Evaluation is a process.
The goal of evaluation is to make decisions by determining value or worth.
The most common way to evaluate is to measure and judge how well objectives are met.
The results of evaluation should lead to decision making about and for a specific situation or context.
Evaluation may be formal or informal.
Evaluation within an organization should be ongoing, with evaluation systems in place.
Evaluation is continuous and does not necessarily occur only at the end of an event or activity.
Responsive evaluation is based on the premise that evaluation should respond to issues and concerns within an organization.
No magic formulas for evaluation exist.
1.2 Evaluation and Research: Viva la Difference
Evaluation: the systematic collection and analysis of data to address criteria for judgments about the worth or improvement of something.
Research: the systematic investigation within some discipline undertaken to establish facts and principles to contribute to a body of knowledge. The goal of research is not necessarily to assist in practical decision making.
Both evaluation and research are characterized by clearly delineated protocol in collecting, processing, and analyzing data. Evaluation is a specific form of applied research that results in the application of information for decision making.
Basic and Applied Research
Applied Research: studies conducted to provide answers to immediate problems or issues, such as program evaluations.
Basic Research: studies conducted toward long-range questions or advancing scientific knowledge.
Differences in Objectives and Purposes of Research and Evaluation
Research | Evaluation
Tries to prove or disprove hypotheses. | Focuses on improvement in areas related to programs, personnel, policies, places, and facilities.
Focuses on increasing understanding or scientific truth. | Focuses on problem solving and decision making in a specific situation.
Applies scientific techniques to test hypotheses or research questions related to theory. | Generally compares results with organizational goals to see how well they have been met.
Results, through theory and sampling techniques, should be generalizable to other situations. | Not interested in generalizing results to other situations.
Conducted to develop new knowledge. | Undertaken when a decision needs to be made or the value or worth of something is unknown.
Results are published to add to the body of knowledge about a specific topic or theory. | Results are not usually shared publicly.
Examples of Journals in Park, Recreation and Leisure Services
Sharing of Common Methods
Research includes elements of evaluation and evaluation requires the use of research techniques. (See page 12 in the text.)
Methods are applied differently because research relies on theory, whereas evaluation relies on application and decision making.
The same protocols and rules of methodology that apply to research apply to evaluation.
Reasons and applications are the major differences between evaluation and research.
Theory
Primary aims of theory are to "fit data to a theory" or "generate a theory from data."
Researcher's Dictionary
Hypothesis: a tentative statement about relationships between two or more variables.
Theory: An explanation about the cause of a specific phenomenon by describing a relationship between variables or constructs.
Construct: A concept used to integrate diverse data in an orderly way.
The Research Process
There are seven steps recognized in a quality research project. They are:
Identify the Problem or Issue, and State the Possible Relationships of Variables
Review and Analysis of Relevant Literature and other Studies.
Specify the Hypothesis or Research Question(s)
Develop a Research Plan and Study Design, and Decide on Data Collection Method(s)
Choose Subjects, Conduct the Study, and Collect the Data
Conduct Data Analysis, and Report Findings and Results
Discuss Implications of the Findings, Make Recommendations, and Generalize Results
1.3 The Trilogy of Evaluation and Research: Criteria, Evidence and Judgment
Three components of evaluation
Criteria: the standards or ideals on which something is evaluated or studied.
Criteria are the basic organizing framework for evaluations, similar to a hypothesis or research question.
Criteria must be stated specifically enough that they can be measured.
Evidence: is data, and data are pieces of information that are collected and analyzed to determine whether criteria are met.
Important aspects of gathering data include the timing, type of data, sample size and composition, and techniques for handling data.
Two major types of data are:
Quantitative: refers to numbers from measurements and results in some type of statistics.
Qualitative: refers to words used to describe or explain what is happening.
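A small sketch contrasting the two data types, using hypothetical data: quantitative evidence is summarized with a statistic, while qualitative evidence (here, comments already coded into themes) is summarized by describing and counting themes.

```python
from collections import Counter

# Quantitative: numeric measurements summarized with a statistic.
ratings = [4, 5, 3, 4, 4]                  # hypothetical 1-5 ratings
mean_rating = sum(ratings) / len(ratings)  # mean = 4.0

# Qualitative: open-ended comments coded into themes, then described.
coded_themes = ["staff", "facilities", "staff", "scheduling", "staff"]
theme_counts = Counter(coded_themes)       # "staff" is the dominant theme
```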
Judgment: the interpretation of the value of something based on evidence collected against predetermined criteria.
Judgment is the final step in the evaluation process and includes a number of conclusions and recommendations.
Trilogy Summary: Criteria → Evidence → Judgment
1.4 Why Evaluate: You Don't Count if You Don't Count
New Concepts in the 21st Century
Best Practice: an aspect of an agency, process, or system that is considered excellent.
Benchmarking: is a way to identify best or promising practices because it is a standard of operation that enables an organization to compare itself to others' performance or to some standard or average.
Evidence-based Practice: refers to a decision-making process that integrates the best available research, professional expertise, and participant characteristics. It is an approach to assure that the programs conducted in recreation have the potential to make a difference in people's lives.
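Benchmarking as described above amounts to comparing an agency's figure to a peer average or standard. A hedged sketch with entirely hypothetical agency and peer numbers:

```python
# Compare one agency's metric against the average of peer agencies.
# The agency figure and peer figures below are hypothetical.

def benchmark(own_value: float, peer_values: list[float]) -> dict:
    """Return the peer benchmark and how the agency compares to it."""
    average = sum(peer_values) / len(peer_values)
    return {
        "benchmark": average,
        "difference": own_value - average,
        "meets_benchmark": own_value >= average,
    }

# Hypothetical cost-per-participant figures for four peer agencies.
result = benchmark(10.50, [12.50, 10.00, 11.25, 13.75])
```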
Major Reasons for Evaluation
Determine accountability - describes the capability of a leisure-service delivery system to justify or explain the activities and services provided. It reflects the extent expenditures, activities, and processes effectively and efficiently accomplish the purpose of an organization or program.
Assess or establish a baseline - an assessment is the gathering of data that is put into an understandable form to compare results with objectives.
Assess the attainment of goals and objectives - determine if stated objectives are operating and/or whether other objectives are more appropriate.
Ascertain outcomes and impacts - attempts to determine what differences a program has made.
To determine the keys to successes and failures - used to document the processes followed to reach certain objectives. It is similar to evidence-based practice and helps determine what contributes to a successful program or what creates problems or failures.
To improve programs - related to quality control as a key practical reason for evaluation. Professionals evaluate staff, programs, policies, and participants to make revisions in their existing programs.
To set future directions - all evaluations should result in changes for the future.
To comply with external standards - which may be the government, a funding agency, or professional body.
Other Reasons to Evaluate
There are many reasons for evaluation. Seldom would a professional only have one specific reason for evaluation.
Fear of Evaluation: evaluation is difficult without goals and objectives; fear of negative results; lack of education about the evaluation process; cost; and time required.
When Not to Evaluate: do not evaluate
unless you are committed to making decisions to improve your program.
if your agency has serious organizational problems.
if you do not have goals and objectives that can be measured.
if you already know the outcome.
if you know the disadvantages outweigh the advantages.
Knowing How to Evaluate: many professionals do not know how to develop an effective evaluation process, how to analyze the data, and/or how to interpret the data in a useful way to assist in making decisions.
1.5 Approaches to Evaluation: Models and More
Six Approaches/Models to Evaluation
A Pseudo-Model: Intuitive Judgment: relates to day-to-day observations made that provide information for decision making. Intuitive judgment is useful, but a systematic approach to determine criteria, collect evidence, and make enlightened decisions is also necessary.
Professional Expert Judgment: a form of evaluation using professional judgment or expert opinion. It may be either hiring an external evaluator or consultant or using a set of external standards. A standard is a statement of desirable practice or performance. A standard is an indirect measure of effectiveness. Many park and recreation professional associations have created standards for their specialty. These are often in the form of accreditation standards. These standards may be criterion referenced or norm referenced.
Goal-Attainment Model: the predominant evaluation/management model because it uses pre-established goals and objectives to measure outcomes. A goal is a clear, general statement about how the organization meets its purpose or mission. An objective is defined as a written intention about an outcome (see page 37).
Logic Model: a form of the Goal-Attainment Model that helps a programmer see where the program is going. It provides a framework for thinking about program evaluations and assessment of participant outcomes, and a means for integrating planned work with the intended results of that work (see page 40).
Goal-Free (Black Box) Model: based on examining an organization, group of participants, or program regardless of the goals. The point is to discover and judge actual effects, outcomes, or impacts without considering what the effects were supposed to be. The purpose is to determine what is really happening. Data used may be either qualitative or quantitative, but the model seems to work best with qualitative methods.
Process or Systems Approach: this model is process oriented and is used to create an understanding of an organization and whether it is capable of achieving agency and program outcomes (products). The approach is often used in management planning, such as the Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM). Using the systems approach, an entire organization or only its components can be evaluated.
1.6 Those Who Fail to Plan, Plan to Fail: The Five P's of Evaluation
Evaluation requires PLANNING!
Five P's
Program quality and improvement
Personnel
Places
Policies/Administration
Participant Outcomes
Few recreation agencies are committed to continuous and systematic program evaluation.
Evaluating Systems: A systematic plan would include: establishing goals and objectives; establishing conclusions from previous evaluations; examining strategic or long-range plans; and creating a schedule (see page 56).
Personnel: Staffing is the largest expense in most recreation agencies. A professional and productive staff has a direct impact on the efficiency and effectiveness of the organization. The benefits of staff evaluations include improved job performance and feedback for the personal development of staff. Staff evaluations may be conducted mid-year (formative) or at the end of the year (summative).
Policies/Administration: Evaluation is also used to analyze policies, procedures, and management issues, including public opinion, cost-benefit analysis, performance-based programs, economic impacts, and planning.
Places (Areas and Facilities): Evaluations include the number of users and safety and legal aspects. Pre-established standards are often used in evaluating provisions for parks based on population (carrying capacity), levels of service, and risk management. Geographic Information Systems (GIS) now offer unique ways to monitor many types of information related to parks and recreation areas and facilities.
1.7 From Good to Great: Evaluating Program Quality and Participants
Benefit: anything good for a person or a thing. It also relates to a desired condition that is maintained or changed. A benefit also equals an outcome or end result.
Four Areas of Benefits
Individual
Communal
Economic
Environmental
Programs are not just a bunch of activities that are planned for people. Programs should have a clear purpose and should have identifiable goals. A quality program results in activities that are designed and implemented to meet certain outcomes that address specific community needs.
Four Basic Levels of Program Evaluation
Participation
1. inputs,
2. activities,
3. people involvement
Reactions
4. reactions - responses from participants
KASA Outcomes
5. KASA = Knowledge (awareness, understanding, problem solving); Attitudes (feelings, change of interest, ideas, beliefs); Skills (verbal or physical abilities, new skills, improved performance); Aspirations (desires, courses of action, new decisions)
Actions
6. practice change outcomes,
7. long term impact on quality of life outcomes
The value of designing outcomes and quality programs lies in using systematic ways to improve the probability that desired outcomes are achieved.
Eight Action Steps - Designing a Quality Program
Ask (assess) participants
Ask staff
Assess current practice
Brainstorm
Choose strategies
Take action
Share your plan
Evaluate
An important premise of program quality is to use intentional and purposeful actions to create positive change through an on-going cycle of improvement (Henderson, Bialeschki & Browne, 2017, p. 73).
Impact Research: the proof of outcomes from recreation programs/activities.
Effective Measurement of Participant Outcomes
determine criteria (what you want to measure)
determine what data are needed to measure outcomes
collect and analyze data
compare results to the expected outcomes
apply the findings in conclusions and recommendations
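The measurement steps above can be sketched as one small function. The criterion and data below are hypothetical, not from the text:

```python
# Steps 1-2 set a criterion and decide what data are needed; steps 3-5
# analyze, compare, and apply the findings. All values are hypothetical.

def measure_outcomes(criterion: float, data: list[float]) -> dict:
    """Analyze collected data, compare to the criterion, and recommend."""
    observed = sum(data) / len(data)        # step 3: analyze the data
    met = observed >= criterion             # step 4: compare to expected outcome
    recommendation = ("continue the program as designed" if met
                      else "revise the program to close the gap")
    return {"observed": observed, "criterion_met": met,
            "recommendation": recommendation}  # step 5: apply the findings

# Criterion: at least 70% of participants report a skill gain; data are
# hypothetical per-session proportions.
result = measure_outcomes(0.70, [0.65, 0.72, 0.80, 0.68])
```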
1.8 A Time for Evaluation
Timing of evaluations can profoundly affect the process, as the temporal sequence changes the evaluator's approach.
Evaluation may be conducted at the beginning (assessment), during the process (formative), or at the end of a program (summative).
Assessment examines the type of need and is used for additional planning. It is a process of determining and specifically defining a program, facility, staff member, participant's behavior or administrative process.
Formative evaluation uses an examination of the progress or process.
Summative evaluation measures the product or outcome or overall efficiency.
Needs Assessments - are conducted in a community recreation program and are used to determine the differences between "what is" and "what should be." Assessment evaluation determines where you want to begin.
Formative and summative evaluations may not measure different aspects of recreation, but their results are used in different ways. Formative evaluation will address organizational objectives (efficiency and effectiveness) and summative evaluation will address overall performance objectives, outcomes and products.
1.9 Designing Evaluation and Research Projects: Doing What You Gotta Do
Planning a research project
choose a model to guide you
determine the timing and the area (the P's) you want to evaluate
select specific methods to use
Design: is a plan for allocating resources of time and energy.
Design constraints = financial; time; and human resources
Developing Plans for a Specific Evaluation Project
Why - what is the purpose of the project
What - which aspects of the P's will be evaluated
Who - who wants the information and in what form & who will conduct the research
When - timing and time-line
Where - sample size and composition
How - how to collect and analyze the data, methods, techniques and ethics
1.10 To Be or Not to Be: Competencies and the Art of Systematic Inquiry
Systematic formal evaluation requires education, training, and practical experience.
Internal vs External Evaluations:
Advantages of an internal evaluator: knows the organization; requires less time to become familiar with the organization; more accessible to other staff and less intrusive; easier to make changes from the inside.
Advantages of an external evaluator: more objective; professional commitment to the field rather than the organization; credibility based on professional experience and competency; more resources and data from other organizations (see table on page 98).
Developing Competencies
knowledge about the topical area to be evaluated
knowledge of how to design evaluation systems, develop planning frameworks, and write goals and objectives
have a comprehensive knowledge of all the possible evaluation research methods
able to interpret data and relate the results to the criteria
know what to look for in analyzing qualitative and quantitative data using appropriate strategies and statistics
understand how to use results for decision making
able to address the political, legal, moral and ethical issues encountered in conducting an evaluation
appropriate personal qualities (professional, trustworthy, objective, responsive, good people skills)
1.11 Doing the Right Thing: Political, Legal, Ethical, and Moral Issues
Politics is the practical wisdom related to the beliefs and biases that individuals and groups hold.
understand the group or organization before the project is started
provide evidence to support any claims or conclusions
make sure everyone understands the purpose of the evaluation before starting the project
Legal issues may arise in evaluations. Make sure your responses are coded and anonymous.
Ethical issues deal with issues of right and wrong and professional standards of the profession. Ethics involve:
be realistic about the results and a project's value and limitations
privacy: assure confidentiality and anonymity in the evaluation
coercion is not allowed; no one should be forced to participate
written consent may not be required but may be useful in the evaluation
do no harm: make sure no harm comes to anyone for their participation
participants' right to know the results: people who contribute data have a right to know the results
Moral issues relate to what (right or wrong) the evaluator may do while conducting a study.
inappropriate or inadequate samples
cultural or procedural biases
must report all results (positive and negative)
extended delay in publishing results
ensuring quality control throughout the project
[Unit 1]
Copyright 2011. Northern Arizona University, ALL RIGHTS RESERVED