Learning Blog

Random despatches from places where L&D meets software and systems

Learning on the couch

Fast forward to 2029. In UK businesses classroom-based instruction is now little more than a dim and distant memory. The word “training” is seldom if ever used. Web 4.0 is heralded as the Next Big Thing. People’s personal learning plans transcend home and work, a distinction that many struggle to maintain. 

Not everything is better, but most things are at least shorter, faster, cheaper and available instantaneously. Content is available in multiple formats, has been completely commoditised and can be sliced, diced and mashed up any way you like. Time to competence is the learning measure most valued by businesses. Intellectual capital is routinely quantified on all FTSE 500 balance sheets. Portfolio careers abound and the ability to learn new skills just in time massively separates the men from the boys. Gender-specific language has all but been expunged.

The numbers of consultants, facilitators, coaches and back-office staff have been salami-sliced repeatedly, and a series of efficiency drives has resulted in unparalleled levels of automation, self-service and off-shoring. Endless help and support is available, so long as you’re online. People have forgotten how they ever got by without video conferencing, collaborative tools and portable devices that really, really do let you do anything, anytime, anywhere.

And yet not everything in the distributed home/work place is rosy. Recognising that their loyalty and engagement indices are on a downward slide, bigger businesses are reconciled to having to re-humanise the employee/associate experience. Instinctively, the first thing they turn to is technology. The following is a not unrepresentative excerpt from a dialogue between a member of staff and a Workplace Support Bot (“Roger”) being trialled by one of the bigger private sector Shared Service operations:

Roger:

What can I do for you today?

User:

Don’t know really. I suppose I’m a bit depressed.

Roger:

I’m sorry to hear that. Why do you think that is?

User:

Well, you know, we’re all alone in a Godless universe – that kind of thing. Just the usual, really.

Roger:

Please go on.

User:

Well, one thing I know for certain is that I really miss talking to people face to face. What’s happened to going to see people, eh? I used to moan about people not being bothered to pick up the phone. Now I’m emailing the person the other side of the desk.

Roger:

And is that a problem for you?

User:

It’s not a problem, it’s just a pity.

Roger:

What does that suggest to you?

User:

OK, so we’re getting lost in a world that’s massively over-processed. It’s gone too far. And all this mania around automation, process improvement and eliminating waste, it does my head in. I’m not saying there isn’t any waste to eliminate, of course there is. And most of it is bad waste. But there’s good waste too. You can’t expect to be genuinely creative without making a friend of the waste paper bin. And to top it all off, I’ve got a headache.

Roger:

Tell me about your Mother.

User:

What?

Roger:

I’m sorry. I meant to say “tell me about your place of work”.

User:

You know, the usual. I’m wading through this enormous pile of materials trying to work out how it can be turned into beautifully usable digital format. The people who wrote it are long gone. I’m not allowed to talk to any customers, no one can tell me who the client is, it doesn’t make any sense anyway and it’s probably 20 times too big to pass any Fit For Purpose test. It weighs a ton.

Roger:

Is weight important to you?

User:

Well, you know, we’re supposed to be living in Just-in-Time land. No-one’s got time to wade through piles of stuff. I’m having to have my desk reinforced just to support all the legacy documentation.

Roger:

You could try getting some weight off.

User:

I beg your pardon?

Roger:

Earlier you were telling me you were depressed.

User:

I know. I should be glad really. I’m fine really. At least I get to learn stuff every now and again.

Roger:

I’m not sure I understand you fully.

User:

Learning stuff. You know, it’s important. Feeling like you’re making the most of your time on the planet. Helping others do the same.

Roger:

What does that suggest to you?

User:

Learning. Having a learningful life. Making the most of it. You know.

Roger:

I’m not sure I understand you fully.

User:

No I don’t suppose you do.

Roger:

Please go on.

User:

Serves me right for talking to a machine, I suppose. <ctrl-Q>

Rewind to the present day. So, why the psychotherapy?

The above is of course a parody of a system that was first shown nearly 50 years ago and was itself a parody. Named after the female lead in Bernard Shaw’s “Pygmalion”, who was similarly programmed to pretend to be something she was not, ELIZA (Communications of the ACM, January 1966) was one of the very first attempts to simulate an artificially intelligent co-conversationalist. Its inventor, Joseph Weizenbaum (1923 – 2008), deliberately chose Rogerian, or non-directive, psychotherapy as the prototype conversational genre because it demanded comparatively little background knowledge, learning or understanding of conversational conventions while still sounding plausible. Weizenbaum was so appalled by the reception his program received – there are stories of users becoming so entranced with the system’s mirage of understanding that they asked its inventor to leave the room so that they could be alone with their therapist – that he devoted much of the rest of his career to admonishing people about what should and shouldn’t be expected of technology. Still one of the best books ever written about computers, his “Computer Power and Human Reason: From Judgment to Calculation” (1976) argues that while it may be OK to leave “decisions” to computers, “choices” are a rather different kettle of fish. Choosing is a uniquely human and personal capability, more about judgment than calculation. Choosing necessitates recourse to the emotions, and Weizenbaum reminds us that being “wise” means a lot more than merely being “intelligent”.
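Under the hood, ELIZA was not intelligent at all: it matched the user’s input against a small script of keyword patterns, “reflected” pronouns (my becomes your) and fell back on stock Rogerian prompts when nothing matched. Here is a minimal sketch of that loop in Python; the rules are illustrative, borrowed from Roger’s lines in the dialogue above rather than from Weizenbaum’s actual DOCTOR script:

```python
import re

# First-person words swapped for second-person ones, as in ELIZA's "reflection" step.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

# (pattern, response template) pairs -- a tiny, purely illustrative rule set.
RULES = [
    (re.compile(r"\b(depressed|sad|unhappy)\b", re.I),
     "I'm sorry to hear that. Why do you think that is?"),
    (re.compile(r"\bi (?:really )?miss (.+)", re.I),
     "Why do you miss {0}?"),
    (re.compile(r"\bmy mother\b", re.I),
     "Tell me about your Mother."),
]

# Rogerian fallbacks used when no rule matches -- the illusion of attentiveness.
FALLBACKS = ["Please go on.",
             "What does that suggest to you?",
             "I'm not sure I understand you fully."]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply mirrors the user."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str, fallback_index: int = 0) -> str:
    """Return the first matching rule's response, else one of the fallbacks."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return FALLBACKS[fallback_index % len(FALLBACKS)]
```

Feed it “I suppose I’m a bit depressed” and the first rule fires; feed it existential despair about a Godless universe and it can only reply “Please go on.” – which is, of course, exactly the point.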


Previously in these pages I described a conscious decision – sorry, choice – to leave something in our cellar (Authorware disks, in “Old Dogs and New Tricks”, E-Learning Age, July/August 2010). This month I spent ages hunting through the various boxes trying to get something out, namely Weizenbaum’s book. What prompted the search was the dawning realisation that what I see happening all around me at work is similarly, and worryingly, reductionist. As a businessman I understand what we’re doing and why we’re doing it, and of course much of it is to do with cost reduction. But as a learning professional interested in effectiveness as much as in efficiency, I feel anxious that we are currently occupying an eerie hinterland: learning is happening in far fewer business classrooms (no bad thing, I hear you cry), but social learning hasn’t yet taken sufficient hold to make good on its promise to more than compensate.


The situation we find ourselves in is that – hopefully temporarily – we’re simultaneously expecting too much from content and not enough from context, or, more especially, people, whether synchronous or asynchronous, near or far. It’s not for want of trying, but we are seriously constrained in various ways: of the top ten most commonly used social learning technologies, the intranet where I work bars access to many, and there are problems with the rest. Social learning is starting, but we’ve a long way to go.


Meanwhile, reduction number 1 is the assumption that there’s nothing else for it but to downsize trainer FTE and substitute in piles of e-learning product, expecting to effect the conversion with rapid e-learning tools and technology at no loss of learning effectiveness. Reduction number 2 is to underestimate the time, costs and skills required to do even number 1. And number 3 is doing 1 and 2 without taking any advantage of the wider context – social learning included – in other words, doing it all in a vacuum.

One last analogy from Artificial Intelligence (AI): proponents of “strong AI” have as their goal artificial intelligence that matches or exceeds human intelligence, while “weak AI” is more about getting software to “get clever stuff done” without necessarily expecting to write people out of the equation. It seems to me that we’re now at an interesting place. Strong (e-) learning isn’t going to do it for us. Weak (social) learning is a better way forward.

OK, moan over. Glass half full, where I work is still one of the best L&D jobs imaginable. Glass half empty, it could be so, so much better. Welcome to the real world. As a real Roger might have said, “I’m sorry, we’ve run out of time – same time next week?”
