As AI becomes more of a reality, its role in learning and development will inevitably grow. The questions we need to ask are: What will AI’s role be, and how far should we go with the technology as it becomes available?
Learning and development in the flow of work is all the rage. The benefits are legion, but one of the most important – in my humble opinion, at least – is the ability to control the opportunity cost of learning and development solutions. What I mean is that, even with smooth e- and m-learning solutions in place, learners still commonly have to take time away from their work to do their learning.
This costs time, and therefore money, in lost opportunities. As technology lets us reach ever more learners, the total opportunity cost multiplies. With thousands of learners, even 10 minutes away from work per person adds up to a significant opportunity cost. That’s why any good LX Designer will fight to condense learning into the shortest possible time while still creating the business impact required. It’s a real balancing act, but it’s what our customers are paying for.
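To make the scale concrete, here is a back-of-the-envelope calculation. All figures (headcount, minutes, hourly rate) are purely illustrative assumptions, not data from any real programme:

```python
# Back-of-the-envelope opportunity cost of time spent away from work learning.
# Every number below is an illustrative assumption.

def opportunity_cost(learners, minutes_per_learner, hourly_rate):
    """Total cost of productive work time displaced by a learning activity."""
    hours = learners * minutes_per_learner / 60
    return hours * hourly_rate

# 5,000 learners, 10 minutes each, at an assumed 60 EUR/hour of productive time:
cost = opportunity_cost(learners=5000, minutes_per_learner=10, hourly_rate=60)
print(f"Displaced work: {5000 * 10 / 60:.0f} hours, ~{cost:,.0f} EUR")
# prints: Displaced work: 833 hours, ~50,000 EUR
```

Even at these modest assumptions, “just 10 minutes” of off-task learning costs tens of thousands of euros – which is exactly why condensing learning time matters.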
One way to do this with technology is to adopt a “one click less” philosophy in LX design: we remove user interactions one at a time until we reach the true minimum needed. In any solution, we also aim for “minimum viable content”, taking things like prior knowledge and specific individual needs into account.
This is the stage we’re at now.
What comes next?
Let’s think about where learning really occurs. A large proportion of it comes from random situations. We call this “learning the hard way”, “learning from experience”, “the university of life” and so on. If you think about it, you learned some of your most important life lessons this way.
So, how can we harness this randomness? If we could insert “random” learning situations into an individual learner’s life, we would truly be promoting learning in the flow of work. Learners might not even know they were learning; it would feel natural. We could call these situations learning moments: small, targeted and regular, appearing in learners’ work lives as if by magic.
How are we going to do this in the future?
AI does the analysis; humans design the content
This is where AI comes in. Data collection is growing exponentially, and we may soon be able to monitor everything our learners are doing – including their health and location – automatically and in real time. Whether we should do this is another question (more on that later).
So, if something doesn’t go to plan, we should be able to analyse what went wrong and immediately provide a learning moment for the individual, so that next time they can improve. We could even predict when something is likely to go wrong and provide the learning moment in advance, as a preventative measure.
Here is an example of how this could work in practice. Remember that, in our future world, we can monitor everything:
A salesperson has a pitch with a new customer booked for tomorrow. AI could do some (or all) of the following:
- Look at the customer’s social media profile and try to predict the type of person that they are – e.g., What types of argument styles are they most influenced by? Are they formal or informal in nature?
- Analyse the salesperson’s level of preparation for the pitch – e.g., Have they prepared good material? Have they practiced? How long did they spend preparing? What kind of visuals and examples are they using? What is their argument?
- Analyse the salesperson’s physical state in the build-up to the pitch – e.g., Have they slept enough? Is their blood sugar high enough so that they can perform? What are their stress levels like?
In terms of solutions, some of these call for a programmed response – e.g., “Eat a Snickers” or “You should go to bed at 9pm” – which AI can deliver automatically. Others require a human touch: for example, different pitch styles for different customer types could be co-created by the sales team, and effective visuals and examples that have worked before can be uploaded to the AI and sent to the individual in advance.
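The split between programmed responses and human-designed content can be sketched as a simple rule engine. This is a minimal illustration only – the signal names, thresholds and nudge texts are invented, and a real system would be far richer:

```python
# A minimal sketch of the "programmed response vs. human touch" split.
# Signal names, thresholds and messages are invented for illustration.

AUTOMATIC_RULES = [
    # (signal, trigger condition, automatic nudge)
    ("hours_slept",     lambda v: v < 6,   "You should go to bed at 9pm tonight."),
    ("blood_sugar",     lambda v: v < 4.0, "Eat a snack before the pitch."),
    ("prep_time_hours", lambda v: v < 1,   "Block 30 minutes to rehearse your pitch."),
]

def learning_moments(signals):
    """Return automatic nudges, plus signals that need human-designed content."""
    nudges, for_humans = [], []
    for name, triggered, nudge in AUTOMATIC_RULES:
        if name in signals and triggered(signals[name]):
            nudges.append(nudge)
    # Anything without a programmed rule (e.g. the customer's personality type)
    # is routed to human designers -- here, the sales team's co-created material.
    rule_names = {name for name, _, _ in AUTOMATIC_RULES}
    for name in signals:
        if name not in rule_names:
            for_humans.append(name)
    return nudges, for_humans

nudges, for_humans = learning_moments(
    {"hours_slept": 5, "blood_sugar": 5.2, "customer_profile": "informal"}
)
# nudges contains the bedtime prompt; customer_profile is routed to humans
```

The design point is simply that threshold-style checks can fire automatically, while anything requiring judgement or crafted content gets handed to people.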
Sounds good, what’s the catch?
I alluded to this earlier. For me, and probably for most of you too, the main problem is whether we should do this at all. Is creating business impact more important than privacy? How much right should employers have to, essentially, spy on their employees? It’s a thorny question. Here in Europe, GDPR came into force this year, which suggests that the above solution may be technically possible but not legal. GDPR also exists for our own good – individuals can’t give up their rights, even if they want to.
However, some areas will remain outside legislation where parts of this may be applicable and individuals can choose for themselves. As long as any monitoring is strictly opt-in, and no-one is pressured or forced to opt in, it could work. For now, though, I feel this needs a lot more consideration before it becomes practical and workable. The technology may well arrive before we have thought carefully enough about its application.