Evaluating the Stephanie Alexander Garden Foundation

The Stephanie Alexander Garden Foundation is a movement that began as an effort to educate children on the importance of eating fresh fruits and vegetables. The foundation's philosophy is to engage children so that they develop positive eating habits and positive experiences with food. In 2001, cook and food writer Stephanie Alexander collaborated with a Melbourne, Australia school district to create the program, an approach launched with the assistance of both national and state government funding. The foundation is committed and dedicated to enriching the lives of all children, and to a certain extent all individuals, through skills-based learning in how to grow and cook their own food. The program was created, and continues to be maintained, through the contributions and dedication of the Australian government, schools, organizations, and individuals who understand the importance of educating children about healthy food choices and about enjoying the taste of the food they consume ("About the Program," 2013).

The Purpose of the Evaluation

The main objective of the evaluation will be to assess whether the foundation is achieving the goals and objectives it set at the onset of the program. Some of the questions that will be asked are: Are the children receiving the food education that the many contributors intended? To what degree does the foundation educate? In other words, what educational tools does it use to help children understand the essentials of growing their own fruits and vegetables? To fully understand the scope of the program and evaluate it, an outcome evaluation will be performed to assess how well the foundation is teaching children to grow and cook their own food, along with the fundamentals of food education and healthy eating. The logic model that will be used is a Tiny Tools Results Chain, which assesses the positive and negative effects of an intervention. Logic models "provide a graphic overview of a program" ("Eight Outcome Models," 2005), and Tiny Tools Results Chains provide a critical assessment of a program ("Develop program theory/logic model," n.d.; "NGO-IDEAs Tiny Tools for Impact Assessment," n.d.). In this case, the intervention is educating children on the necessity of eating well, in combination with teaching them to grow fruits and vegetables on their own.

Framing the Evaluation

The rationale for the evaluation is to assess the program's effectiveness in helping children understand the best ways to eat and to grow their own fruits and vegetables. The findings from the evaluation will then be used to determine the best course forward for the program and whether what has been done thus far has been successful. If the findings are positive, the various stakeholders will use them to create additional food education workshops and classes for children and adults alike. The stakeholders (contributors) will also be more inclined to continue contributing to the cause of food education in Melbourne, Australia, which in turn betters the lives of children.

Critical theory factored heavily into the design. Critical theory critiques the dynamics of a situation or plan and how that situation or plan alters society. Because the foundation has, in effect, changed society with regard to food education, it can be examined and analyzed on the basis of critical theory. Other theories were considered, including game theory and social phenomenology.

Evaluation Design

This evaluation will use a quantitative research design. The independent variable is food education itself, since food education is not altered in any way by the evaluation, and the dependent variable is the effects of food education. Quantitative research is used "to quantify data and generalize results from a sample to the population of interest" ("Qualitative vs. Quantitative Research," 2013). To obtain the necessary data, questionnaires will be distributed to a minimum of 120 individuals who work at the foundation in order to determine whether the foundation's mission is working. Two questionnaires will be given: one on food education and the foundation, and the other on whether children become more inclined to grow and cook their own fruits and vegetables once they leave their school in Melbourne. The individuals given the questionnaires will be selected at random to avoid bias in the evaluation. While it is understood that these individuals will hold opinions on the questions at hand, random selection ensures that responses vary rather than slant in one direction or another.
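To make the sampling step concrete, the following is a minimal Python sketch of how the random selection might be carried out. The sample size of 120 comes from the design above; the roster file name, the function name, and the use of a fixed seed are illustrative assumptions only.

    import random

    def select_respondents(roster, n=120, seed=None):
        """Randomly select n individuals from a staff roster.

        Sampling without replacement avoids picking the same person
        twice, and randomization keeps the respondent pool from
        slanting in any one direction, as the design above requires.
        """
        if len(roster) < n:
            raise ValueError("roster is smaller than the required sample size")
        rng = random.Random(seed)  # a fixed seed makes the draw auditable
        return rng.sample(roster, n)

    # Hypothetical usage, assuming one name per line in a roster file:
    # with open("foundation_staff.txt") as f:
    #     roster = [line.strip() for line in f if line.strip()]
    # respondents = select_respondents(roster, n=120, seed=42)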

Key Program Questions

Since there will be two questionnaires, a total of 15 questions is needed for this evaluation: the first questionnaire will have 10 questions and the second will have 5. The first questionnaire will relate solely to food education and the foundation's effectiveness in helping children understand the importance of eating healthily. The second will address how well the community of Melbourne has received the foundation and whether children who have participated in its workshops and classes continue to grow and cook their own fruits and vegetables as a result. Answer choices will be given on a 5-point Likert scale designated as Strongly Yes, Yes, Neutral, No, and Strongly No.
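As an illustration of how the scale can be prepared for analysis, the following sketch codes the five response labels numerically. The labels come from the design above; the 5-to-1 values are an assumed convention for illustration, not part of the questionnaire itself.

    # Numeric coding for the 5-point scale above. The labels come from the
    # questionnaire design; the 5-to-1 values are an assumed convention.
    LIKERT_CODES = {
        "Strongly Yes": 5,
        "Yes": 4,
        "Neutral": 3,
        "No": 2,
        "Strongly No": 1,
    }

    def code_response(label):
        """Translate a respondent's answer into its numeric code."""
        try:
            return LIKERT_CODES[label]
        except KeyError:
            raise ValueError(f"unrecognized response label: {label!r}")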

(Questionnaires omitted.)

Carifio and Perla (2007), Boone and Boone (2012), and Losby and Wetmore (2012) all argued that Likert scales are particularly effective for understanding arguments, problems, and issues through scientific reasoning and methodology. Furthermore, they allow for a critical assessment of the underlying concerns of an environment. A Likert scale provides a statistical and/or empirical understanding of a problem.

There is a rationale for separating the questions: it allows those selected to deliberate carefully on the quality of the foundation. Moreover, the first set of questions pertains to food education as a whole more than to the foundation, while the second set focuses solely on the importance of the foundation and its soundness in Melbourne and, in turn, Australia.

Successful responses will engender discussion of what is and is not working within the organization. The Likert scale is very helpful in data collection because it elicits specific responses without muddying or bias, in that those evaluating the foundation are selected at random and the questions are posed in a way that draws a yes or no answer. Neutral responses will not be counted, but they need to appear on the questionnaires in order to complete the scale and to let individuals indicate when they can neither agree nor disagree with what a question is asking.

Data Synthesis

Two of the primary factors in data collection are reliability and validity: in essence, how reliable and valid are the data that have been collected? Golafshani (2003), citing Kirk and Miller's (1986) delineation of quantitative research, argued that reliability rests on three measurements: 1) the degree to which a measurement remains the same when repeated, 2) the stability of a measurement over time, and 3) the similarity of measurements within a given period. A key indicator of reliable data is consistency: if the data were collected again, the results would be similar or alike (pp. 598-599). Lafaille and Wildeboer (1995) and Drost (2011) stated in their research that validity rests on how significant a particular instrument is to a particular experiment. Drost (2011) took this a step further, noting that data are valid only to the extent that they are not influenced by errors, and that data are valid and reliable only within the scope of what a research study measures and asks (pp. 104-105). Essentially, to Drost's (2011) point, data from experiments are sufficient to the evaluator only to a certain degree, and the evaluator therefore has to take that into account when deciding which data to use.
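To make the second of Kirk and Miller's measurements (stability over time) concrete, the following is a minimal sketch of a test-retest check, in which the same coded questionnaire is given twice to the same respondents and the two waves are correlated. This is one common way to operationalize stability; it is offered as an illustration and is not prescribed by the evaluation design itself.

    from statistics import correlation  # available in Python 3.10+

    def test_retest_reliability(first_wave, second_wave):
        """Pearson correlation between two administrations of the same
        questionnaire to the same respondents (answers coded 1-5).

        A coefficient near 1.0 suggests the measurement is stable over
        time, Kirk and Miller's second sense of reliability.
        """
        if len(first_wave) != len(second_wave):
            raise ValueError("both waves must cover the same respondents")
        return correlation(first_wave, second_wave)

    # Hypothetical coded scores for five respondents at two points in time:
    # test_retest_reliability([5, 4, 2, 4, 1], [5, 4, 3, 4, 1])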

For the purposes of this evaluation, the data will have to be examined thoroughly in the context of the situation. This evaluation assesses the fundamental aspects of food education per the Stephanie Alexander Garden Foundation's objective in Melbourne. Thus, how the randomly selected individuals respond will be crucial to synthesizing the findings and generalizing the conclusions. The Better Evaluation website adds that it is useful for an evaluation to be explicit about the extent to which its findings might appropriately be translated to new sites and situations. Therefore, the findings from this evaluation should be translatable to other organizations and foundations with a similar objective regarding food education.

The data synthesis process will also involve certain evaluative criteria, such as which responses will be used, and whether more yesses than nos will be counted or the counts balanced so as not to skew the results. While the purpose of the evaluation is to capture both positive and negative outlooks on the foundation, it was stated at the outset that neutral replies will be thrown out; thus, the evaluation will factor in only yes and no responses. The 5-point Likert scale will provide more definitive yes and no responses to some questions by way of the Strongly Yes and Strongly No choices. It will be important for the evaluator to place all strong yesses and strong nos into two separate piles so that the conclusions are precise, given the undoubtedly varying opinions on the foundation.
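The synthesis rules above (discard neutral replies, keep the strong responses in their own piles) can be expressed as a short tallying routine. The following Python sketch is illustrative only; the function name and output keys are assumptions.

    from collections import Counter

    def tally_responses(responses):
        """Tally answers for one question per the synthesis rules above:
        neutral replies are discarded, and strong responses are kept in
        their own piles so they can be weighed separately.
        """
        counts = Counter(responses)
        counts.pop("Neutral", None)  # neutral replies are thrown out
        return {
            "strong_yes": counts.get("Strongly Yes", 0),
            "yes": counts.get("Yes", 0),
            "no": counts.get("No", 0),
            "strong_no": counts.get("Strongly No", 0),
        }

    # tally_responses(["Yes", "Neutral", "Strongly No", "Yes"])
    # -> {'strong_yes': 0, 'yes': 2, 'no': 0, 'strong_no': 1}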

Reporting

The data will need to be assessed and typed up in a written report or visualization in order to engage the foundation on what it can improve and what it is doing well. The data will essentially yield three elements that the foundation will need to be aware of. The following funnel diagram represents what the data will signify.

(Funnel diagram omitted.)

The Better Evaluation website states that data visualization needs to be accessible. To ensure this, the data will be organized into a briefing and distributed to the foundation's stakeholders. This will ensure that they understand "how the program can be improved, how the risk of program failure can be reduced [and] whether the program should continue" ("Report And Support Use," n.d.).
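As a sketch of how a tally might be turned into an accessible briefing line for stakeholders, the following function formats per-question counts into plain text. The tally keys are assumed to match the tallying sketch above, and the wording of the briefing is an assumption for illustration only.

    def draft_briefing_line(question, tally):
        """Format one question's tally into a plain-text briefing line.

        Assumes the keys produced by the tallying sketch above; the
        phrasing of the output is illustrative only.
        """
        affirmative = tally["strong_yes"] + tally["yes"]
        negative = tally["strong_no"] + tally["no"]
        total = affirmative + negative
        share = 100 * affirmative / total if total else 0.0
        return (f"{question}: {affirmative} of {total} counted responses "
                f"({share:.0f}%) were affirmative.")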

Managing the Evaluation

The evaluation will be conducted by an external organization that conducts focus groups. It is essential that internal evaluators not be selected, so as to avoid bias. Pannucci and Wilkins (2010) argued that bias needs to be removed as much as possible from all research; it is important, however, that bias be understood before it can be removed. Bias can occur in any phase of research, including study design, data collection, and data analysis. There is some degree of bias in all research, but keeping it to a minimum is the most crucial element of conducting an experiment or any other form of research (p. 619). Minimizing bias also removes, to a certain degree, ethical issues that may otherwise result. If an individual from the foundation were to conduct the evaluation, there would be bias: because they work there, they would want the foundation to be seen in a positive light and the participants to answer Strongly Yes to every question on the questionnaires.

There is, of course, the potential for bias even in using an external service to conduct the evaluation. Collier and Mahoney (1996) noted that research requires some form of selection criteria, even though some level of bias inheres in that as well. Selection bias occurs in a myriad of situations, and even random selection can be seen as biased because of the criteria used to construct the process; it is essential, then, that definitive variables be set at the beginning of the research design (pp. 59-60). To Collier and Mahoney's (1996) point, external bias may arise because the external evaluators know about the foundation and its purpose and hold opinions on whether it helps or hinders the cause of food education.

A timeline will need to be set for conducting the evaluation. The following phase diagram addresses each component of the evaluation.

(Phase diagram omitted.)

Conclusion

The evaluation will hopefully offer insight into the fundamentals of food education and why it is important that children eat properly and, more importantly, why they should grow and cook their own fruits and vegetables. The questionnaires will undoubtedly provoke discussion of the foundation's core mission, and further inquiry may be needed to ascertain how profound its program is. Additionally, the stakeholders who contribute to the foundation will be brought in to analyze the results and draw conclusions from the findings and the presented briefing. There is reason to believe that, despite the bias inherently present in all types of research, the evaluation will be executed smoothly, especially given that the evaluators will be external. It is the hope of this evaluation that the Stephanie Alexander Garden Foundation will gain a sufficient and concise understanding of whether its approach and techniques have worked. The foundation was established in 2001; therefore, there has been time for its initiatives to have enriched the lives of children in Melbourne.

References

About the Program. (2013). Retrieved from The Stephanie Alexander Garden Foundation website: http://www.kitchengardenfoundation.org.au/about-the-program

Boone, Jr., H. N., & Boone, D. A. (2012, April). Analyzing Likert Data. Journal of Extension, 50(2). Retrieved from http://www.joe.org/joe/2012april/pdf/JOE_v50_2tt2.pdf

Carifio, J., & Perla, R. J. (2007). Ten Common Misunderstandings, Misconceptions, Persistent Myths and Urban Legends about Likert Scales and Likert Response Formats and their Antidotes. Journal of Social Sciences, 3(3), 106-116. doi:10.3844/jssp.2007.106.116

Collier, D., & Mahoney, J. (1996, October). Insights and Pitfalls: Selection Bias in Qualitative Research. World Politics, 49(1), 56-91.

Develop program theory/logic model. (n.d.). Retrieved from Better Evaluation website: http://betterevaluation.org/plan/define/develop_logic_model

Drost, E. A. (2011). Validity and Reliability in Social Science Research. Education Research and Perspectives, 38(1). Retrieved from http://www.erpjournal.net/wp-content/uploads/2012/07/ERPV38-1.-Drost-E.-2011.-Validity-and-Reliability-in-Social-Science-Research.pdf

Eight Outcome Models. (2005). Retrieved from Harvard Family Research Project website: http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/evaluation-methodology/eight-outcome-models

Golafshani, N. (2003, December). Understanding Reliability and Validity in Qualitative Research. The Qualitative Report, 8(4), 597-607. Retrieved from http://www.nova.edu/ssss/QR/QR8-4/golafshani.pdf

Lafaille, R., & Wildeboer, H. (1995). Validity and Reliability of Observation and Data Collection in Biographical Research. Antwerp. Retrieved from http://eurotas.org/uploads/pdf/valideng_full.pdf

Losby, J., & Wetmore, A. (2012, February 14). CDC Coffee Break: Using Likert Scales in Evaluation Survey Work [Report]. Retrieved from CDC website: http://www.cdc.gov/dhdsp/pubs/docs/CB_February_14_2012.pdf

NGO-IDEAs Tiny Tools for Impact Assessment. (n.d.). Retrieved from NGO Ideas website: http://www.ngo-ideas.net/tiny_tools/

Pannucci, C. J., & Wilkins, E. G. (2010, August). Identifying and Avoiding Bias in Research. Plastic and Reconstructive Surgery, 126(2), 619-625. doi:10.1097/PRS.0b013e3181de24bc

Qualitative vs. Quantitative Research. (2013). Retrieved from Snap Surveys Ltd. website: http://www.snapsurveys.com/qualitative-quantitative-research/

Report And Support Use. (n.d.). Retrieved from Better Evaluation website: http://betterevaluation.org/plan/reportandsupportuse