I have recently seen the AI chatbot ChatGPT everywhere in my social feeds. ChatGPT uses GPT-3, a large language model developed by OpenAI, to generate natural-sounding responses to user input. It can engage in open-ended conversations on various topics and can even provide information on specific subjects if trained on relevant data.
I saw Kyle Bowen from Museums As Progress use ChatGPT to write an invitation to the professional community he co-facilitates. I couldn’t help but wonder what ChatGPT would produce if asked to write best practices for program evaluation in the style of the 10 Commandments. The results were surprisingly spot on.
AI’s 10 Commandments of Program Evaluation
- Thou shalt clearly define the purpose and goals of the evaluation.
- Thou shalt involve stakeholders in the design and implementation of the evaluation.
- Thou shalt use appropriate and reliable methods for data collection and analysis.
- Thou shalt ensure the evaluation is objective, unbiased, and transparent.
- Thou shalt report the findings and recommendations in a clear and concise manner.
- Thou shalt use the evaluation findings to inform decision-making and improve program performance.
- Thou shalt communicate the results of the evaluation to all relevant parties.
- Thou shalt maintain confidentiality and protect the privacy of those involved in the evaluation.
- Thou shalt use the evaluation to continually assess and improve the program over time.
- Thou shalt involve the program staff and beneficiaries in the interpretation and use of the evaluation findings.
I have a few favorites on this list. Number six stood out to me because it captures why I am passionate about evaluation. Yes, you should have your own goals and objectives (see Commandment 1), but at the heart of those goals is the desire to make better decisions and improve the program, product, or service.
Number seven also spoke to me. As evaluators, we must communicate our newfound understanding to as many stakeholders as possible. We may not always be able to communicate directly with those participating in the evaluation, but this is where the transparency in commandment four comes in. Transparency lets participants know you are responsible and reliable with their time and the information you are collecting.
Nine and ten are powerful because they speak to the timeline of evaluation use. As you design your evaluation, it is essential to think of it not as a one-time exercise you do to check a box but as a method for assessing change continuously over time. When you interpret results, you must involve program staff or product developers in making sense of the data. The input they provide as direct facilitators, or as people very close to the product or process, is immensely valuable and should not be overlooked.
So how did AI do? Aside from some reordering I might suggest, it is a solid list of ten essential pillars of program evaluation. Which do you think are the most important?
Want to know more about Empowered Development Consulting? Reach out to me, Meghan Schiedel, and find out how Empowered Development Consulting can help you.