Unit One: Criteria: Foundations for Evaluation and Research 

On-line Lesson

1.0 Introduction to Criteria

According to Henderson and Bialeschki (2010), "The goal of systematic evaluation and rigorous research is to make reliable, valid, useful, and enlightened decisions and interpretations" (p.1).

Basic Concepts

Determining criteria is the process of developing the research question by reviewing issues and considerations to establish the purpose of the inquiry (research).

Evaluation is making decisions based on specific questions, issues, or criteria and supporting evidence.

Evaluation research is the process used to collect and analyze data or evidence.

Recreation services are the human service organizations and enterprises related to:

  • parks
  • recreation
  • tourism
  • commercial recreation
  • outdoor recreation
  • education
  • sports
  • therapeutic recreation

1.1 The Basic Question: What is Systematic Inquiry?

Recreation professionals use the process and techniques of systematic inquiry (evaluation and research) to help them make enlightened decisions and improve what they do.

Evaluation: the systematic collection and analysis of data to address criteria for making judgments about the worth or improvement of something.

The goal of evaluation is to determine "what is" compared to "what should be."

Research: the systematic investigation within some discipline undertaken to establish facts and principles to contribute to a body of knowledge. The goal of research is not necessarily to assist in practical decision making.

According to Mitra and Lankford (1999), "The ultimate goal of leisure research is to produce an accumulating body of reliable knowledge that will enable us to explain, predict, and understand leisure phenomena that interest us." This knowledge can then be used to promote positive human growth and development through recreation and leisure services.

Therefore, it is imperative for park, recreation, and leisure professionals to acquire the research skills necessary to read current professional research and to design and conduct their own research projects.

Systematic (Formal) Evaluations

A recreation director who uses observation, listening and discussions with participants and staff to evaluate programs is using an "informal evaluative" process.

A formal or systematic evaluation provides rigor when outcomes are complex, decisions are important, and evidence is needed to make enlightened or informed decisions or interpretations. Systematic evaluation has three components:

1) criteria (hypothesis, research questions, guiding questions, working hypothesis, purposes, measures, or objectives)

2) evidence (data collected and analyzed using appropriate designs and methods)

3) judgment (interpretations expressed in conclusions and recommendations)

Criteria + Evidence + Judgment = Evaluation

Continuum of Evaluation - Everyday Living to Systematic

Everyday Living

  • feelings/thoughts
  • descriptive designs
  • experiments

Systematic

Conducting formal evaluations is the basis for more efficient and effective operations, staff and programs.

Effectiveness: relates to the changes or results of a program or intervention.

Efficiency: relates to how those changes or results happen.

Important Characteristics of Evaluation

  • Evaluation is a process.
  • The goal of evaluation is to make decisions by determining value or worth.
  • The most common way to evaluate is to measure and judge how well objectives are met.
  • The results of evaluation should lead to decision making about and for a specific situation or context.
  • Evaluation may be formal or informal.
  • Evaluation within an organization should be ongoing, with evaluation systems in place.
  • Evaluation is continuous and does not necessarily occur only at the end of an event or activity.
  • Responsive evaluation is based on the premise that evaluation should respond to issues and concerns within an organization.
  • No magic formulas for evaluation exist.

1.2 Evaluation and Research: Viva la Difference

Evaluation: the systematic collection and analysis of data to address criteria for making judgments about the worth or improvement of something.

Research: the systematic investigation within some discipline undertaken to establish facts and principles to contribute to a body of knowledge. The goal of research is not necessarily to assist in practical decision making.

Both evaluation and research are characterized by clearly delineated protocol in collecting, processing, and analyzing data. Evaluation is a specific form of applied research that results in the application of information for decision making.

Basic and Applied Research

Applied Research: studies conducted to provide answers to immediate problems or issues, such as program evaluations.

Basic Research: studies conducted toward long-range questions or advancing scientific knowledge.

Differences in Objectives and Purposes of Research and Evaluation

Research: tries to prove or disprove hypotheses.
Evaluation: focuses on improvement in areas related to programs, personnel, policies, places, and facilities.

Research: focuses on increasing understanding or scientific truth.
Evaluation: focuses on problem solving and decision making in a specific situation.

Research: applies scientific techniques to test hypotheses or research questions related to theory.
Evaluation: generally compares results with organizational goals to see how well they have been met.

Research: using theory and sampling techniques, should produce results generalizable to other situations.
Evaluation: is not interested in generalizing results to other situations.

Research: is conducted to develop new knowledge.
Evaluation: is undertaken when a decision needs to be made or the value or worth of something is unknown.

Research: results are published to add to the body of knowledge about a specific topic or theory.
Evaluation: results are not usually shared publicly.

 

Examples of Journals in Park, Recreation and Leisure Services

  • Journal of Experiential Education
  • Journal of Leisure Research
  • Journal of Park and Recreation Administration
  • Journal of Physical Education, Recreation, and Dance
  • LARnet: The Cyber Journal of Applied Leisure and Recreation Research
  • Leisure/Loisir (formerly Journal of Applied Recreation Research)
  • Leisure Sciences
  • Recreational Sports Journal (published by the National Intramural-Recreational Sports Association, NIRSA)
  • Therapeutic Recreation Journal

Sharing of Common Methods

Research includes elements of evaluation and evaluation requires the use of research techniques. (See page 12 in the text.)

Methods are applied differently because research relies on theory while evaluation relies on application and decision making.

The same protocols and rules of methodology that apply to research apply to evaluation.

Reasons and applications are the major differences between evaluation and research.

Theory

Primary aims of theory are to "fit data to a theory" or "generate a theory from data."

Researcher's Dictionary

Hypothesis: a tentative statement about relationships between two or more variables.

Theory: An explanation about the cause of a specific phenomenon by describing a relationship between variables or constructs.

Construct: A concept used to integrate diverse data in an orderly way.

The Research Process

There are seven steps recognized in a quality research project. They are:

  1. Identify the Problem or Issue, and State the Possible Relationships of Variables

  2. Review and Analyze Relevant Literature and Other Studies

  3. Specify the Hypothesis or Research Question(s)

  4. Develop a Research Plan and Study Design, and Decide on Data Collection Method(s)

  5. Choose Subjects, Conduct the Study, and Collect the Data

  6. Conduct Data Analysis, and Report Findings and Results

  7. Discuss Implications of the Findings, Make Recommendations, and Generalize Results

1.3 The Trilogy of Evaluation and Research: Criteria, Evidence and Judgment

Three components of evaluation

Criteria: the standards or ideals on which something is evaluated or studied.

Criteria form the basic organizing framework for an evaluation, similar to a hypothesis or research question.

Criteria must be stated specifically enough that they can be measured.

Evidence: is data, and data are pieces of information that are collected and analyzed to determine whether criteria are met.

Important aspects of gathering data include the timing, type of data, sample size and composition, and techniques for handling data.

Two major types of data are:

    Quantitative: refers to numbers derived from measurements that result in some type of statistics.

    Qualitative: refers to words used to describe or explain what is happening.
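As a brief illustration of the two data types (all names and numbers below are invented, not drawn from the text), the same program might yield both numeric evidence that becomes statistics and verbal evidence that an evaluator codes into themes:

```python
# Quantitative evidence: numbers that yield statistics.
attendance = [34, 41, 38, 29, 45]                    # weekly head counts
mean_attendance = sum(attendance) / len(attendance)  # a simple statistic

# Qualitative evidence: words, grouped into themes by the evaluator.
comments = [
    "the instructor was friendly",
    "wish the pool were open longer",
    "friendly staff made my kids feel welcome",
]
# Count how many comments touch each (hypothetical) theme.
theme_counts = {
    "staff friendliness": sum("friendly" in c for c in comments),
    "facility hours": sum("open longer" in c for c in comments),
}
```

The point of the sketch is only the contrast: the quantitative data reduce to a number (a mean), while the qualitative data keep their descriptive content and are interpreted by grouping.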

Judgment: the interpretation of the value of something, based on evidence collected against predetermined criteria.

Judgment is the final step in the evaluation process and includes a number of conclusions and recommendations.

Trilogy Summary

Criteria

  • Determination of a problem and a reason for doing evaluation or research
  • Examination of goals and objectives (if they exist)
  • Development of a broad evaluation question or a research problem statement to be addressed
 

Evidence

  • Method selection, including instrument design and pretest
  • Sample selection
  • Actual data collection
  • Data analysis (coding and interpretation)
 

Judgment

  • Presentation of findings
  • Development of conclusions
  • Development of recommendations (for improvement, applications, or further research)
 

1.4 Why Evaluate: You Don't Count if You Don't Count

New Concepts in the 21st Century

Best Practice: an aspect of an agency, process, or system that is considered excellent.

Benchmarking: a way to identify best or promising practices; it is a standard of operation that enables an organization to compare its performance to others' or to some standard or average.

Evidence-based Practice: refers to a decision-making process that integrates the best available research, professional expertise, and participant characteristics. It is an approach to assure that the programs conducted in recreation have the potential to make a difference in people's lives.

Major Reasons for Evaluation

  1. Determine accountability - accountability describes the capability of a leisure-service delivery system to justify or explain its activities and services. It reflects the extent to which expenditures, activities, and processes effectively and efficiently accomplish the purpose of an organization or program.

  2. Assess or establish a baseline - an assessment is the gathering of data that are put into an understandable form so results can be compared with objectives.

  3. Assess the attainment of goals and objectives - determine whether stated objectives are operating and/or whether other objectives are more appropriate.

  4. Ascertain outcomes and impacts - attempts to determine what differences a program has made.

  5. To determine the keys to successes and failures - this documents the processes used to attain certain objectives. It is similar to evidence-based practice and helps determine what contributes to a successful program or what creates problems or failures.

  6. To improve programs - is related to quality control as a key practical reason for evaluation. Professionals evaluate staff, programs, policies and participants to make revisions in their existing programs.

  7. To set future directions - all evaluations should result in changes for the future.

  8. To comply with external standards - which may be the government, a funding agency, or professional body.

Other Reasons to Evaluate

There are many reasons for evaluation. Seldom would a professional have only one specific reason to evaluate.

  • Postponing decisions or avoiding responsibility: evaluation used to "buy time" until some issue can be figured out.
  • Program justification or staff elimination: evaluations are sometimes done to justify a program or eliminate staff, but bias is a threat to the validity of this type of evaluation.
  • Public relations impact: evaluations done explicitly for public relations may not result in an improved program unless the results are applied.
  • Funding requirements: evaluations conducted only to satisfy grant or funding requirements are often not as effective as those that include stakeholders or participants.

Fear of Evaluation: evaluation is difficult without goals and objectives; results may be negative; many professionals are uneducated about the evaluation process; and evaluation is costly and time consuming.

When Not to Evaluate: do not evaluate

  1. unless you are committed to making decisions to improve your program.

  2. if your agency has serious organizational problems.

  3. if you do not have goals and objectives that can be measured.

  4. if you already know the outcome.

  5. if you know the disadvantages outweigh the advantages.

Knowing How to Evaluate: evaluations often fail because professionals do not know how to develop an effective evaluation process, how to analyze the data, and/or how to interpret the data in a useful way to assist in making decisions.

1.5 Approaches to Evaluation: Models and More

Six Approaches/Models to Evaluation

A Pseudo-Model: Intuitive Judgment: relates to day-to-day observations made that provide information for decision making. Intuitive judgment is useful, but a systematic approach to determine criteria, collect evidence, and make enlightened decisions is also necessary.

Professional Expert Judgment: a form of evaluation using professional judgment or expert opinion. It may involve either hiring an external evaluator or consultant or using a set of external standards. A standard is a statement of desirable practice or performance and is an indirect measure of effectiveness. Many park and recreation professional associations have created standards for their specialty, often in the form of accreditation standards. These standards may be criterion referenced or norm referenced.

Goal-Attainment Model: the predominant evaluation/management model because it uses pre-established goals and objectives to measure outcomes. A goal is a clear, general statement about how the organization meets its purpose or mission. An objective is a written intention about an outcome (see page 37).

Logic Model: a form of the goal-attainment model that helps a programmer see where the program is going. It provides a framework for thinking about program evaluations and the assessment of participant outcomes, and a means for integrating planned work with the intended results of that work (see page 40).

Goal-Free (Black Box) Model: based on examining an organization, group of participants, or program regardless of the goals. The point is to discover and judge actual effects, outcomes, or impacts without considering what the effects were supposed to be. The purpose is to determine what is really happening. Data used may be either qualitative or quantitative, but the model seems to work best with qualitative methods.

Process or Systems Approach: this model is process oriented and is used to create an understanding of an organization and whether it is capable of achieving agency and program outcomes (products). The approach is often used in management planning, such as the Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM). Using the systems approach, an entire organization or only its components can be evaluated.

1.6 Those Who Fail to Plan, Plan to Fail: The Five P's of Evaluation

Evaluation requires PLANNING!

Five P's

  • Program quality and improvement
  • Personnel
  • Places
  • Policies/Administration
  • Participant outcomes

Few recreation agencies are committed to continuous and systematic program evaluation.

Evaluating Systems: A systematic plan would include: establishing goals and objectives; establishing conclusions from previous evaluations; examining strategic or long-range plans; and creating a schedule (see page 56).

Personnel: Staffing is the largest expense in most recreation agencies. A professional and productive staff has a direct impact on the efficiency and effectiveness of the organization. The benefits of staff evaluations include improved job performance and feedback for the personal development of staff. Staff evaluations may be conducted mid-year (formative) or at the end of the year (summative).

Policies/Administration: Evaluation is also used to analyze policies, procedures, and management issues, including evaluation of public opinion, cost-benefit analyses, performance-based programs, economic impacts, and planning.

Places (Areas and Facilities): Evaluations include number of users and safety and legal aspects. Pre-established standards are often used in evaluating provisions for parks based on population (carrying capacity), levels of service, and risk management. Geographic Information Systems (GIS) now offer unique ways to monitor many types of information related to parks and recreation areas and facilities.

1.7 From Good to Great: Evaluating Program Quality and Participants

Benefit: anything good for a person or a thing. It also relates to a desired condition that is maintained or changed. A benefit equals an outcome or end result.

Four Areas of Benefits

  • Individual
  • Communal
  • Economic
  • Environmental

Programs are not just a bunch of activities that are planned for people. Programs should have a clear purpose and should have identifiable goals. A quality program results in activities that are designed and implemented to meet certain outcomes that address specific community needs.

Four Basic Levels of Program Evaluation

Participation

1. inputs,

2. activities,

3. people involvement

Reactions

4. reactions - responses from participants

KASA Outcomes

5. KASA = Knowledge (awareness, understanding, problem solving); Attitudes (feelings, changes of interest, ideas, beliefs); Skills (verbal or physical abilities, new skills, improved performance); Aspirations (desires, courses of action, new decisions)

Actions

6. practice change outcomes,

7. long term impact on quality of life outcomes

The value of designing outcomes and quality programs lies in using systematic ways to improve the probability that desired outcomes are achieved.

Eight Action Steps - Designing a Quality Program

  1. Ask (assess) participants

  2. Ask staff

  3. Assess current practice

  4. Brainstorm

  5. Choose strategies

  6. Take action

  7. Share your plan

  8. Evaluate

An important premise of program quality is to use intentional and purposeful actions to create positive change through an on-going cycle of improvement (Henderson, Bialeschki & Browne, 2017, p. 73).

Impact Research: the proof of outcomes from recreation programs/activities.

Effective Measurement of Participant Outcomes

  • determine criteria (what do you want to measure?)
  • determine what data are needed to measure outcomes
  • collect and analyze the data
  • compare results to the expected outcomes
  • apply the findings in conclusions and recommendations
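The measurement steps above can be sketched in code. This is a hypothetical illustration, not a prescribed method: the function name, the 1-5 rating scale, the criterion (mean satisfaction of at least 4.0), and all data values are invented.

```python
def objective_met(ratings, target_mean, min_responses=10):
    """Compare collected ratings against the criterion 'mean rating of at
    least target_mean'; refuse to judge on too small a sample."""
    if len(ratings) < min_responses:
        raise ValueError("sample too small to judge the criterion")
    mean = sum(ratings) / len(ratings)
    return mean, mean >= target_mean

# Criterion and data: 12 post-program ratings on a 1-5 scale (invented)
ratings = [5, 4, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4]
# Analyze the data and compare to the expected outcome
mean, met = objective_met(ratings, target_mean=4.0)
# Apply the finding in a conclusion or recommendation
print(f"mean rating {mean:.2f}; objective met: {met}")
```

Even in a sketch this small, the criterion (target_mean) is fixed before the data are examined, mirroring the sequence of the steps above: criteria first, then evidence, then judgment.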

1.8 A Time for Evaluation

Timing of evaluations can profoundly affect the process, as the temporal sequence changes the evaluator's approach.

Evaluation may be conducted at the beginning (assessment), during the process (formative), or at the end of a program (summative).

Assessment examines the type of need and is used for additional planning. It is a process of determining and specifically defining a program, facility, staff member, participant's behavior or administrative process.

Formative evaluation uses an examination of the progress or process.

Summative evaluation measures the product or outcome or overall efficiency.

Needs Assessments - are conducted in a community recreation program and are used to determine the differences between "what is" and "what should be." Assessment evaluation determines where you want to begin.

Formative and summative evaluations may not measure different aspects of recreation, but their results are used in different ways. Formative evaluation will address organizational objectives (efficiency and effectiveness) and summative evaluation will address overall performance objectives, outcomes and products.

1.9 Designing Evaluation and Research Projects: Doing What You Gotta Do

Planning a research project

  • choose a model to guide you
  • determine the timing and the area (P's) you want to evaluate
  • select the specific methods to use

Design: is a plan for allocating resources of time and energy.

Design constraints = financial; time; and human resources

Developing Plans for a Specific Evaluation Project

  1. Why - what is the purpose of the project

  2. What - which aspects of the P's will be evaluated

  3. Who - who wants the information and in what form & who will conduct the research

  4. When - timing and time-line

  5. Where - sample size and composition

  6. How - how to collect and analyze the data, methods, techniques and ethics

1.10 To Be or Not to Be: Competencies and the Art of Systematic Inquiry

Systematic formal evaluation requires - education, training, and practical experience.

Internal vs External Evaluations:

    Advantages of an internal evaluator: knows the organization; requires less time to become familiar with the organization; is more accessible to other staff and less intrusive; finds it easier to make changes from the inside.

    Advantages of an external evaluator: is more objective; has a professional commitment to the field rather than the organization; brings credibility based on professional experience and competency; has access to more resources and data from other organizations (see table on page 98).

Developing Competencies

  1. knowledge about the topical area to be evaluated

  2. knowledge of how to design evaluation systems, develop planning frameworks, and write goals and objectives

  3. comprehensive knowledge of all the possible evaluation research methods

  4. the ability to interpret data and relate the results to the criteria

  5. knowing what to look for in analyzing qualitative and quantitative data, using appropriate strategies and statistics

  6. an understanding of how to use results for decision making

  7. the ability to address the political, legal, moral, and ethical issues encountered in conducting an evaluation

  8. appropriate personal qualities (professionalism, trustworthiness, objectivity, responsiveness, good people skills)

1.11 Doing the Right Thing: Political, Legal, Ethical, and Moral Issues

Politics involves the practical wisdom related to the beliefs and biases that individuals and groups hold. To navigate politics:

  • understand the group or organization before the project is started
  • provide evidence to support any claims or conclusions
  • make sure everyone understands the purpose of the evaluation before starting the project

Legal issues may arise in evaluations. Make sure responses are coded and kept anonymous.

Ethical issues deal with questions of right and wrong and with the standards of the profession. Ethical practice involves:

  • being realistic about the results and a project's value and limitations
  • privacy: assure confidentiality and anonymity in the evaluation
  • no coercion: no one should be forced to participate
  • written consent: may not be required but may be useful in the evaluation
  • do no harm: make sure no harm comes to anyone for their participation
  • participants' right to know: people who contribute data have a right to know the results

Moral issues relate to what the evaluator does (right or wrong) while conducting a study. Examples of moral lapses include:

  • using inappropriate or inadequate samples
  • allowing cultural or procedural biases
  • failing to report all results (positive and negative)
  • delaying publication of results excessively
  • failing to ensure quality control throughout the project

 


Copyright 2011. Northern Arizona University, ALL RIGHTS RESERVED