


Guerrilla (or Agile) Evaluation for Learning

Workplace Learning & Development professionals have a problem -- too often they don't get enough (or any) feedback on the efficacy of their designs. What can we do to fix that?



  1. Guerrilla Evaluation: Closing the Feedback Loop. Julie Dirksen, Usable Learning. (c) Usable Learning 2014
  2. Houston, we have a problem… Learning and development has a problem, but e-learning, in particular, REALLY has a problem.
  3. 10,000 Hour Rule: Deliberate practice requires frequent, often expert, feedback.
  4. My favorite instance...
  5. It’s not just the time…
  6. How many of you regularly get to see people use your learning? So how do you know if it works?
  7. Typical Evaluation Measures: Kirkpatrick's Levels: 1. Reaction (participants' opinions); 2. Learning (pre/post test); 3. Behavior (measurable behavior change); 4. Results/ROI (return on investment)
  8. Issues with Typical Measures • Levels 1 & 2 are not meaningful • Levels 3 & 4 are difficult and costly o Require access to the full target audience o Measuring behaviors requires extensive and costly observation o Difficult to implement without pre-existing organizational performance metrics in place o Difficult to attribute due to confounding variables
  9. The Evaluation Venn Diagram: Enough budget or resources to measure / Enough control over the environment / Good methods to evaluate. All too often, they don’t overlap at all.
  10. ROI Calculation
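The slide title doesn't show the formula, but the standard training ROI calculation (net program benefits divided by fully loaded program costs, expressed as a percentage) can be sketched as follows. The dollar figures here are hypothetical, purely for illustration:

```python
def training_roi(net_benefits: float, program_costs: float) -> float:
    """Training ROI (%): net program benefits over fully loaded program costs."""
    return (net_benefits / program_costs) * 100

# Hypothetical figures: $60,000 in measured benefits against $40,000 in costs.
benefits, costs = 60_000, 40_000
roi = training_roi(benefits - costs, costs)  # net benefits = benefits - costs
print(f"ROI: {roi:.0f}%")  # → ROI: 50%
```

The hard part, as the surrounding slides note, is not the arithmetic but isolating which benefits can credibly be attributed to the training at all.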
  11. We are measuring what we can control • Seat time • # of learning objects • # of people trained • Completion status • Pre/post scores. "Why don’t we just weigh them?" – the inestimable Gloria Gery
  12. Streetlamp Effect: So, there’s this story…
  13. It’s kind of like this…
  14. So, what can we do about this? What is in the intersection of visible, desirable, and feasible?
  15. Guerrilla Evaluation • A quicker and less expensive method to ensure a feedback loop that can be used to assess and improve the training intervention • Not intended to be a full measure of efficacy • Qualitative measures of: o Retention of information o Attitude o Anecdotal or observable behavior change for a small sample size
  16. Based on Nielsen's Guerrilla HCI: In 1994, Jakob Nielsen wrote a highly influential article called "Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier." The article addressed the reasons software development teams rarely did usability research to improve the design of software interfaces. Studies showed that qualitative feedback quickly became repetitive after 5-6 users, and that working with a small sample could provide meaningful design feedback.
  17. Right vs. Better
  18. It’s like Traditional PM vs. Agile (To Do / Doing / Done). "By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work." – Clay Shirky
  19. Keep the cycles short: Why feedback is like weather prediction
  20. Formative - User Testing: The first part of the evaluation process is standard usability testing, which involves watching end users interact with the software, followed by a short interview. Typical evaluation measures such as a pre/post test could be incorporated here.
  21. Summative - Follow-up Interview • Can be used in conjunction with other evaluation measures • 30-45 minute follow-up interviews that occur 4-6 weeks after the training intervention • Small sample group (~6 users per audience) • Structured interview questions
  22. Structured Interview Format: Structured interview questions relating to: ● Learner impressions/feedback ● Most memorable elements ● Small number of retention questions related to key learning objectives ● Anecdotal usage of the material (How have they applied the ideas from the training?)
  23. Brinkerhoff Success Case: "Performance results can’t be achieved by training alone; therefore training should not be the object of evaluation" • Part 1: Survey to determine who was successful and who was not • Part 2: In-depth interviews with a selection of successful and unsuccessful users. Find Out Quickly What’s Working and What’s Not
  24. Cohort Analysis • Follow smaller groups through Level 3 analysis [Chart: example line graph comparing Cohort 1 and Cohort 2 across Weeks 1-4]
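The cohort idea in the slide above can be sketched in a few lines: track the percentage of each small group observed performing the target behavior week by week, and compare trends rather than absolute numbers. The cohort names and weekly percentages here are hypothetical illustrations, not data from the talk:

```python
# Minimal cohort-tracking sketch: percent of each cohort observed using the
# target behavior in each of four weeks after training. Values are made up.
cohorts = {
    "Cohort 1": [30, 55, 70, 75],  # trained group
    "Cohort 2": [28, 30, 29, 31],  # not-yet-trained comparison group
}

for name, weekly_pct in cohorts.items():
    change = weekly_pct[-1] - weekly_pct[0]
    print(f"{name}: weeks 1-4 = {weekly_pct}, change = {change:+d} pts")
```

Even with small groups, a trained cohort diverging from an untrained one is more persuasive than a single pre/post score, because the comparison group absorbs some of the confounding variables the earlier slides warn about.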
  25. Signaling: Ask the magic question: If you woke up tomorrow and it was all perfect, how would you know?
  26. Look for data • xAPI • Google Analytics
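For the xAPI (Experience API) option mentioned above, learning activity is recorded as "statements" with an actor, a verb, and an object, sent to a Learning Record Store. A minimal sketch follows; the field names come from the xAPI specification, while the learner, course URL, and LRS details are placeholders:

```python
import json

# Minimal xAPI statement sketch. Structure (actor/verb/object) follows the
# xAPI spec; the actor and activity values below are placeholders.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/guerrilla-evaluation",
        "objectType": "Activity",
    },
}

# In practice this JSON would be POSTed to an LRS's /statements endpoint.
print(json.dumps(statement, indent=2))
```

Because statements capture specific behaviors ("completed", "attempted", "answered") rather than just seat time, they are a cheap source of the Level 3-style signal the earlier slides say is usually missing.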
  27. What do you think? • With 2-3 people around you, make a list of quick and dirty evaluation options. • As soon as you think of one, come up with another one as quickly as possible.
  28. Questions? • Thanks for coming • Contact: o Julie Dirksen o Twitter: usablelearning
