Dealing with WAGs and SWAGs and Empirical Evidence in the Pursuit of Performance

Warning: Opinions Coming

WAG = Wild Ass Guess

SWAG = Scientifically-based Wild Ass Guess

Empirical Evidence = knowledge by means of direct observation or experience. Empirical evidence (the record of one’s direct observations or experiences) can be analyzed quantitatively or qualitatively. Through quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions which cannot be studied in laboratory settings, particularly in the social sciences and in education. (Wikipedia).

We all come across many claims in our professional pursuits about what works in Instructional Design/Learning – and Performance Improvement.

Many times, unfortunately, the claims are hype from people with something to sell. I know – I sell stuff too. And it bothers me when I read things that are “over-sold” – when what’s being touted is not fact but fiction. Sold with hard numbers.

So I have come to immediately distrust hard-number claims. My BS detector switches on and goes straight to red alert. At my ISPI Charlotte Chapter meeting last week we had a great presentation, “HPT Myth Busters: Separate Fact from Folklore,” presented by Jeanne Farrington, CPT, EdD; Mary Norris Thomas, CPT, PhD; and Clare Carey, CPT, EdD. A great session – which I video recorded for sharing with our members 12 months from now.

And their session – an update of the 90-minute session they gave at the ISPI International Conference, which I recorded back in 2010 – is available here. The handout from last week’s session is available for a short time on our chapter’s home page here.

At the end of this highly interactive session some of our attendees asked about other HPT claims – one of which was the 70-20-10 Learning Model. Our presenters didn’t have any information to tell us where that claim fell within their four categories, which are:

  1. Snake Oil
  2. Intuition
  3. Respected Practice
  4. Evidence-based

So the next day I reached out to one of my professional networks – which happens to include about 80 members of ISPI International, among them our three speakers – a group of mostly Past Presidents, past members of the Board of Directors, and other highly involved ISPI members, many of whom have received the top awards of the Society that is my main professional home. People that I mostly trust. People who are mostly familiar with the research.

I wrote this group:

This question came up at our recent ISPI Charlotte Chapter meeting: Is the 70/20/10 model for learning valid?

In searching the Internet via THE Google I can find lots of posts/etc. saying that there is a lot of research out there – but I find that most are from people/firms with something related to sell.

http://www.hrvoice.org/training-and-the-bottom-line-realigning-efforts/

According to the American Society for Training & Development, there is a 70/20/10 rule when it comes to learning and development.  Research shows employees learn:

  • 70 per cent through real-life and on-the-job experiences;
  • 20 per cent through mentoring or coaching; and
  • 10 per cent through formal training.

But then:

http://www.nickjhowe.com/2010/05/lets-kill-a-few-learning-holy-cows/

Dr Doug Lynch opened my eyes about a few things, but most notably about 70:20:10.

Doug asked a couple of simple questions: (a) is 70:20:10 true, and (b) if so, how do we know? Everyone in the learning space seems to assume (a) is true, but we all get a bit vague about (b). The answer to (b) is almost always “because I read it in ____ (insert your favourite training magazine title here).” Doug therefore set his post-grad students a simple challenge: find the source of the 70:20:10 concept. The results are at best worrying and at worst frightening. The following is taken from information presented by Doug at the event:

  • If you google “70:20:10” you get 2.25m hits. That’s right, 2.25m. Hits are split between the education model and the business resource-management model of the same name
  • “Informal learning” gets you 402,000 hits, as of the time of writing this post.
  • 70:20:10 was the subject of the 2009 ASTD study, “Tapping the Potential of Informal Learning” (exec summary PDF here)
  • There is even a Wikipedia article
  • Informal learning has been covered in just about every training publication and in the mainstream media, including the Harvard Business Review

The problem is that almost no one – including the Wikipedia article and HBR – cites the original research for 70:20:10 as applied to education.

So what does the research have to say on 70:20:10?

  • If you step away from the mainstream, you get 46,800 hits in Google Scholar
  • If you drill down to what might be called ‘authoritative sources’, things get a little narrower.  There are a grand total of 46 EBSCO (Peer reviewed) Articles
  • If you examine the peer reviewed articles, there is not one single empirical study that validates 70:20:10

*** *** *** ***

Let me repeat that last line here: If you examine the peer reviewed articles, there is not one single empirical study that validates 70:20:10

*** *** *** ***

And one of the responses I received back that day from my ISPI Crowd was from Richard E. Clark, PhD, APA Fellow – who also happens to be one of the Keynote Speakers at the next ISPI International Conference in Toronto next April – and who wrote back to the group:

The claims you describe are BS – a fantasy.

We do have evidence about how much someone learns from a well-designed training course in an organization (measured by how much more they know after training than they did before training), and that number is about 20%. The citation for the review is:

Arthur, W. Jr., Bennett, W. Jr., Edens, P. S., & Bell, S. T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88(2), 234–245.

They found an average increase of 20.62% in learning (Kirkpatrick Level Two) in published studies. The report is attached. I used the term “well-designed” because people do not publish training studies that fail to get solid results – or that fail, period. So the published studies are the tip of a big iceberg that hides a significant percentage of failures. If those were averaged into the percentage, we’d be deeply embarrassed at what we’ve accomplished.

No one has reviewed controlled studies of learning from mentoring that I’ve seen and I’ve looked.

No one has studied learning on the job in any systematic way and published the results.

So the only solid evidence we have right now is that the average gain from training in organizations (summarized over the past two decades) is about 20%. And we do not know what it is about those more successful training courses that made a difference. The meta-analysis is not helpful in identifying what works, only how much it works on average.

Dick

Just as I thought – quite frankly.

For when I followed up on this myself before asking my ISPI Crowd for their Wisdom and Insights, the claims of research that I found in many places on THE Internet (which certainly doesn’t mean that I found them all) could all be tracked back to only one original research study – done by asking some 30 people to self-report how they learned stuff.

And then “that” was cited in some of the write-ups I found – and then those write-ups were cited by others – and pretty soon there were claims that a lot of research “proves this model,” that “it’s not disputed,” etc.

Except – you’ll recall – that I found this in my searches: If you examine the peer reviewed articles, there is not one single empirical study that validates 70:20:10

Most of those citing research were people/organizations that just happened to sell stuff related to the 70-20-10 model/framework.

And they claimed that there were/are many organizations who have adopted this – to great success. Some big name companies.

Which isn’t surprising – because it doesn’t take much effort to find that many big-name companies also bought into Designing Learning for Learning Styles, which has been debunked; employ the MBTI to select people, a use that has been pretty much disproven by the National Academy of Sciences and others; work hard to deal with multi-generational differences in Learning, which has also been debunked; and on and on and on….

If we are to be good stewards of shareholder equity – we need to avoid snake oil.

We need to turn on our BS detectors. We need to have a healthy skepticism about such claims with numbers in them. What do these numbers represent? Where did they come from? How were they calculated? Which conditions are they talking about – all conditions, or just a narrow slice of them all?

There is just too much Foo Foo.

And there is an ages old saying: Caveat Emptor – Let the Buyer Beware – because there has been Foo Foo in the marketplace for a long, long time.

Which is why I (and others) have a section of my web site dedicated to capturing what I have found – so that I can share that with others. Mine is here – and it lists several other sources.

Some will tell me that “70-20-10” is a framework. Some will say it’s comparable to findings that Formal Learning accounts for only 20% of learning and Informal accounts for 80%. Others will tie it to the Pareto Principle – that doing 20% of everything possible will give you 80% of the results, so don’t try to do 100% of everything, due to diminishing returns. And to all of that I can only say: huh? Then why use these numbers if they are not valid?

In a college Rhetoric class back in 1976 I was taught to beware of stuff that starts off with certain claims or certain language (typically inflammatory) to predispose the reader to the bent of the speaker/writer. It’s the “set-up.” They give themselves away – I was taught. Look for it!

I make claims myself – but I try to label them as WAGs, SWAGs, or Evidence-based – and cite the source and situation for each of the three types. WAGs relate to intuition – my intuition. SWAGs relate to respected practice – usually my own practice and seeing what works and what doesn’t. And Evidence-based claims usually come from my Crowd – who are doing the research or just know the research. I have not done controlled studies that you too can repeat. I rely on others for that.

Those that read my writings often find this: “As always – it depends.”

The only universal truth I’ve found so far is, ironically: there are no universal truths.

In my work I focus on the terminal measures and objectives of Performance Competence – the ability to Perform Tasks to produce Outputs to Stakeholder Requirements – and then measure that after the intervention for comparison to the baseline numbers. Or I let the client do that, because sometimes/many times they do not want to share that with me and others. But they usually – though not always – will check to see if an improvement in performance results, especially as I have made that easier by specifying the performance and measure in my analysis results – which preceded the Design, Development, and Implementation efforts.

A good related find from before this last ISPI Charlotte Chapter session – a post titled “GAMIFICATION IS BULLSHIT” – is here. I particularly like its definition of what BS is –

We normally think of bullshit as a synonym—albeit a somewhat vulgar one—for lies or deceit. But Frankfurt argues that bullshit has nothing to do with truth.

Rather, bullshit is used to conceal, to impress or to coerce. Unlike liars, bullshitters have no use for the truth. All that matters to them is hiding their ignorance or bringing about their own benefit.

And that’s why Caveat Emptor as a saying has been around since the days of the Roman Empire. Or perhaps longer.

And that’s why good stewards have their Foo Foo Detectors working 24/7/365 – except for leap years when it’s 24/7/366.

The Pursuit of Performance – sustaining it and/or improving it – requires that we not blindly accept claims that are intended to convince and sell – rather than inform or educate.

If you do know of any empirical studies that validate 70:20:10 – please identify them, and provide links if possible, in the comments section below.

# # #
