Thursday, 20 March 2014

Module 3, Assignment 5, Survey

For Assignment 5 I created a survey about how people use and rely upon cell phones.  I attempted to include a variety of question types: multiple choice, short answer, open-ended, and a rating scale.  I was surprised by how much consideration this short survey took to design, and I learned how much time and effort is required to create a survey.

I administered the survey to five people ranging in age from 16 to 80 to ensure that the questions and response choices were appropriate for cell phone users across those age groups. I was not able to find a participant for the 0-11 age group.  Observing the participants complete the survey revealed where each age group experienced frustration with the length, format, and clarity of the questions.

My original survey is available at the following link:
Cell Phone Survey, Original

The participant feedback was extremely helpful regarding the quality of the survey, both in how I worded the questions and in ensuring each question and response choice was clear to all respondents. Piloting confirmed I was asking the right questions for the information I was seeking and helped me exclude questions that would collect information not needed for my purpose.  I realized that if I gave a survey containing an irrelevant question to 100, 1 000, or 10 000 people, I would collect data I did not require and would then have to spend time and effort processing it.

I received positive feedback from all participants about the ease and brevity of the survey.  One participant commented that they would actually complete this survey if it were given to them because it was so quick and easy, and another said it was fun to do. Participants were able to identify the purposes of the survey after completing it and felt that it included all aspects needed for its intended purposes.

One participant noted the intentional question order and how it helped transition from one purpose of the survey to the next.  Participants felt the survey was culturally appropriate, contained no bias or leading questions, and offered response choices that were very appropriate for them.

It was an enlightening experience to hear participant feedback after they completed the survey.  Their comments caused me to make some changes and to reflect on other aspects of the survey, which sparked further adjustments.  When looking at the survey more closely after the feedback was collected, I noticed that my rating scale did not have numerical values!  It seemed to me that a rating scale should, so I made this adjustment. The changes I made to the original survey came from both participant feedback and my own reflection, and the revised survey can be viewed at the link below.

The following changes were made to the original survey:
-including the choice ‘prefer not to answer’ for the participant's age and gender to make the survey less threatening and more inclusive of all possible participants;
-removing an unnecessary question about other possible purposes for a cell phone if that feature were available;
-changing the rating scale from a vertical list with checkboxes to a horizontal scale with numbers;
-changing the order of response choices for one question;
-adjusting the wording of one response choice to make it less specific so it was an appropriate choice for more participants;
-adding a response choice to clear up confusion some participants had about the use of specific cell phone features;
-redesigning the final question as two separate questions to ensure the data collected would clearly apply to each question; and,
-rewording the two final questions to clarify what I was asking, as the word ‘other’ was confusing for one participant since I had already used it in a different context on the survey.

I found that analysis of the collected data informed my purpose for the survey.  I did not have a preference for qualitative or quantitative data while compiling the results, but I did find that most participants used the open-ended questions.  These questions revealed information that I could not gather with quantitative data. As Brené Brown says, "stories are data with a soul," and the open-ended responses widened my own perspective. Piloting my survey was a very valuable endeavour, and I will not hesitate to include the piloting of data collection tools, as appropriate, in all evaluations I may do.
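To make the compiling step concrete for myself, here is a small sketch of how mixed survey results can be summarized. The field names and responses below are invented for illustration; they are not my actual survey data:

```python
from collections import Counter

# Hypothetical pilot responses: a numeric rating plus an open-ended comment.
responses = [
    {"rating": 4, "comment": "I mostly text my kids."},
    {"rating": 5, "comment": ""},
    {"rating": 3, "comment": "The phone is my alarm clock and camera."},
    {"rating": 4, "comment": ""},
    {"rating": 2, "comment": "I only use it for emergencies."},
]

# Quantitative summary: frequency of each rating value and the mean.
ratings = [r["rating"] for r in responses]
counts = Counter(ratings)
mean_rating = sum(ratings) / len(ratings)

# Qualitative data: keep the non-empty open-ended answers for coding by hand.
stories = [r["comment"] for r in responses if r["comment"]]

print(counts, round(mean_rating, 1))
print(stories)
```

The quantitative side reduces to counts and an average, while the open-ended "stories" stay as text to be read and interpreted, which mirrors how the two kinds of data served different purposes in my analysis.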

I have some lingering questions about survey design that I would like to document with this assignment:
~Do all rating scales have to have numerical values?
~Should a brief description of the survey’s purpose be included with the survey?
~What should I do if a survey is incomplete when submitted?  Use the data that is filled in or eliminate that survey and its data all together?  Would I then include that I did not use that survey data in the evaluation report?
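While mulling over my last question, I sketched the two options I am weighing for incomplete surveys: discard the whole response (listwise deletion) or use whatever was answered, question by question (available-case analysis). The responses below are made up for illustration:

```python
# Made-up responses; None marks an unanswered question.
surveys = [
    {"age": 34, "daily_use": 5, "main_purpose": "texting"},
    {"age": 61, "daily_use": None, "main_purpose": "calls"},
    {"age": 17, "daily_use": 4, "main_purpose": None},
]

# Option 1: listwise deletion -- discard any survey with a blank answer.
complete = [s for s in surveys if all(v is not None for v in s.values())]

# Option 2: available-case analysis -- use whatever was answered, per question.
def answered(field):
    return [s[field] for s in surveys if s[field] is not None]

print(len(complete))          # surveys usable under option 1
print(answered("daily_use"))  # per-question data under option 2
```

Either way, whichever choice is made would need to be disclosed in the evaluation report, since the two options can yield different sample sizes for each question.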

Thanks for taking the time to read my blog!  I welcome your comments, questions, and further suggestions for improvement regarding my Cell Phone Survey.

Sunday, 9 March 2014

Module 2, Assignment 4, Logic Model

The link below will take you to the Logic Model which accompanies the Prairie Spirit Film Festival Program Evaluation Plan for assignment 3 in the previous post. 

Comments and questions are welcomed!

Film Festival Logic Model, Assignment 4

Saturday, 8 March 2014

Module 2, Assignment 3, Program Evaluation Plan

The Prairie Spirit School Division Film Festival is the focus of the evaluation plan I am submitting for assignment 3.  This film festival has been an annual event for students in the school division since 2008.  Students work with the support of their teachers, and of division office personnel if requested.  Students are able to navigate, problem solve, and support each other through many of the creative and technical aspects of pre-production, production, and post-production of their films.  Student film entries are judged on the filmmaker's use of Comprehension Strategies and conveying of meaning rather than on a polished final product.  Included with this Evaluation Plan is a Gantt chart.
My evaluation plan and accompanying Gantt chart can be found at the respective links below.  I welcome your comments and questions!

Film Festival Program Evaluation Plan, Assignment 3

Gantt Chart for Film Festival Program Evaluation Plan, Assignment 3

Friday, 7 February 2014

Module 1 Assignment 2, Choosing a Program Evaluation

In reading Klomp, Dyck, and Sheppard’s 2003 "Description and evaluation of a prenatal exercise program for urban Aboriginal women," published in the Canadian Journal of Diabetes, 27: 231-238, I feel that Scriven’s goal-free model would work well to evaluate the Prenatal Exercise Program for Urban Aboriginal Women. The program goals are not clearly stated in the description referenced above, allowing for an exploration of what the goals and impacts of the program are.  With the goal-free model the evaluator “…searches for all effects of a program regardless of its developers’ objectives” (Stufflebeam and Shinkfield, 2007, p. 374).  I feel that using this model will identify both anticipated and unanticipated outcomes and, by gathering all this information, program developers may more deeply understand the program’s benefits or shortcomings for any future efforts of health professionals and Aboriginal peoples in addressing diabetes and related health issues in the Aboriginal population.

I then wondered who will most directly benefit from this prenatal exercise program and how it might be brought to the larger group.  In applying the consumer-oriented approach (Stufflebeam and Shinkfield, 2007, p. 374) to this evaluation, the participants, their children, and the larger Aboriginal population will see their needs addressed. The evaluation focus would not be solely on the program itself but would value the consumer’s experience and needs.  “Irrespective of the goals the evaluator must identify actual outcomes and assess their value from the perspective of consumer’s needs” (Stufflebeam and Shinkfield, 1985, p. 312).  Participants’ voices will be heard when determining the effects of the program and the reasons for the observed outcomes.

I also feel the goal-free model brings a less prescriptive evaluation approach, allowing the evaluator to be responsive to participants.  There is a larger picture to focus on than just “…a means and an end….”  As a specific demographic was the focus of this exercise program, the evaluation needs to respect the attitudes and values of the participants.  The goal-free model, according to Scriven, will guide an evaluation that is “…less prone to social, perceptual, and cognitive bias; … and more equitable in considering a wide range of values” (Stufflebeam and Shinkfield, 2007, p. 374).  There is significant emphasis on the humanistic side of this program regarding implementation and supports.  I wonder whether this attention was present when the first actions in developing the program were taken, or whether it was the statistical data that initially prompted the program designers’ efforts.  The focus on participants is observed in the many supports provided for their regular attendance and participation.  Using the goal-free, consumer-oriented approach gives the evaluator the chance to authentically understand the participants, as well as the people delivering and supporting the program, to gain better insight into the program’s effects and into perspectives on why they occurred.

Applying the goal-free model allows the evaluator to determine the path of the evaluation and to be responsive to the various factors within the program while gathering qualitative and quantitative data.  I feel that both formative and summative assessment should be applied to this evaluation to effectively identify the program goals and effects.  The two-year program was flexible and adjusted mid-course in the services provided to participants; using formative feedback and assessments can therefore support flexibility and responsiveness to all involved.  Gathering not only the quantitative data on gestational diabetes mellitus among participants and for the long-term study of diabetes developing in their children, but also the qualitative information including the participants' stories, values, and attitudes, will be highly relevant in determining the program goals and effects.  One further evaluation I would suggest incorporating is a cost-benefit analysis, such as from the Provus model, to further show how funding impacts the program.  Many human resources were put in place for the success of the program, along with services and access to other resources that supported participants' successful participation.  Analysing the costs of these resources against the program benefits could be useful as the effects of the program are explored and the program is replicated or expanded.  These costs could also be compared to the anticipated long-term costs of the type 2 diabetes that can develop in the participants and their children.
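The cost-benefit comparison I am suggesting ultimately reduces to simple arithmetic. The dollar figures below are invented placeholders, not numbers from the Klomp, Dyck, and Sheppard study, but they show the shape of the calculation:

```python
# Invented annual program costs (staff, facility, participant supports).
costs = {"staff": 85_000, "facility": 12_000, "participant_supports": 18_000}

# Invented estimate of avoided long-term health spending per prevented
# case of type 2 diabetes, times an assumed number of cases prevented.
avoided_cost_per_case = 60_000
cases_prevented = 3

total_cost = sum(costs.values())
total_benefit = avoided_cost_per_case * cases_prevented

# A benefit-cost ratio above 1.0 suggests the program's benefits
# outweigh its costs.
ratio = total_benefit / total_cost
print(total_cost, total_benefit, round(ratio, 2))
```

The hard part of such an analysis is not the arithmetic but defensibly estimating the avoided costs and the number of cases prevented, which is where the qualitative and long-term data would come in.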

I believe using the goal-free, consumer-oriented approach with a cost-benefit analysis will provide a comprehensive report that includes and respects participants' cultural perspectives, and the resulting summative report will be a culturally responsive evaluation that brings validity to the program.

References:

Stufflebeam, D. & Shinkfield, A. (2007). Evaluation Theory, Models and Applications. San Francisco, CA: Jossey-Bass.

Stufflebeam, D. & Shinkfield, A. (1985). Systematic Evaluation. USA: Kluwer-Nijhoff Publishing.

Saturday, 1 February 2014

Ecur 809 Module 1 Assignment 1, Program Evaluation Commentary

Hoping to locate an evaluation pertaining to my work, I was very happy to find one about a teacher professional development model similar to one I currently participate in.

In March 2004, the Center for Research on Teaching and Learning presented its findings on three Professional Development Laboratory programs implemented in Community School District (CSD) 20 in New York City for the 2002-2003 school year. This document can be found at Evaluation of the Professional Development Lab (PDL) Programs in Community School District 20.

The Professional Development Laboratory, instituted in 1989, focuses on professional development and training programs for teachers and school administrators.  It developed and implemented the following three PDL programs in CSD 20, which are the focus of this evaluation:
  • The PDL for New Teachers Program was implemented over the course of three years and brought teachers with two or fewer years of experience into an induction program to increase their professional competency in four focused teaching areas, provide them with high-quality mentoring, and have them visit Resident Teacher classrooms for observation and learning opportunities to improve their pedagogical skills;
  • The PDL for Middle School Social Studies Program was also three years long and provided training institutes for participating teachers to build professional competency in five focused teaching areas while further developing their leadership and communication skills with principles of mentoring; and,
  • The Learning Through Practice Leadership Program (LTPLP), a seven-day training program, was delivered to develop the leadership skills of selected district teachers and staff developers.
The intended purpose of this evaluation was to document the implementation of the Professional Development Lab programs for middle school teachers in Community School District 20.  It was also designed to determine the impact of the PDL programs upon the mentors, new middle school teachers, and selected middle school teachers involved, as well as the students served by the programs.

I believe this evaluation is a mixed model incorporating Scriven’s goal-based summative approach along with participatory empowerment elements.  There is a clear goal, and data was used to determine whether the goal was achieved by the end of the program.  To collect qualitative and quantitative data, a wide range of tools was used: questionnaires, interviews, surveys, rating scales, teacher observations, teacher assessment logs, and a state achievement test for students, along with statistics on teacher retention rates from New York City's Department of Education. Formative evaluations were completed weekly to guide teachers and their mentors in the ongoing work of supporting teachers’ needs as they worked toward the common goal among the three programs: improving "...teaching and learning by helping to build the professional competency of teachers" (p. 2).

I found the report credible, with disclosures about the context, the limitations, and the inability to report on one area of evaluation, which was restated twice in the report rather than mentioned once and buried.  The evaluation has many strengths, including the detailed information provided about each program, the underlying rationale, and the research supporting this professional development model.  Complete information about the implementation of the programs, together with contextual information, further helps the reader paint a clear picture of the program, implementation, data collection, and evaluation findings.  Many evaluation tools were implemented, and along with quantitative results indicating teacher and student improvement and the submission of completed assessment tools, this comprehensive evaluation included comments and recommendations from the participants while protecting their identities; participants were identified only by their role.  Many of the data collection tools are familiar in the teaching profession, with processes widely accepted in the educational community.  The tools also collected data throughout the school year, not just at its end, to measure the impacts of the various components provided to participants.  Finally, I feel the evaluation includes abundant information so that it is useful not only to stakeholders but also to others interested in this professional development model.

One weakness of this evaluation is that four of the evaluation tools were being piloted in this process, which calls their reliability and validity into question.  This would not have been such a big issue if a second, established tool had covered the same areas the pilot tools were addressing, but it led to inconclusive findings for one of the evaluation queries regarding teacher efficacy. Teachers simply did not return the completed evaluation, which makes me wonder whether it was a theory failure or an implementation failure.  In regard to the conclusions and recommendations, most comments reflected the findings, but recommendations concerning the middle school social studies program were lacking.  Student achievement scores for middle years social studies did not improve as a result of their teachers being in the program, and I would have liked a more detailed comment for the single recommendation to "...pay greater attention to the collection of more complete data" (p. 46). Contributing to this single recommendation was the failure of teachers to collect and submit student work.  Because student work submissions were inconsistent, they could not be used in the evaluation in a significant way.  I do not think more student work submissions would completely address the lack of teacher improvement, and I would like to have seen recommended changes in the areas of focus for selected middle school teacher improvement.

I have learned much from the process of completing this assignment, especially regarding the intensity of completing an evaluation as well as about the lab classroom professional development model.

Thanks for reading!  
I welcome your comments, 
Corinne