3 Symptoms of Whiz-bang E-learning Disease & How to Cure It

Gimmickry rules the day on e-learning authoring platforms. Learners deserve more: evidence-based strategies for real-world outcomes.

The hard-fought victories of the Flash-media e-learning days pale in comparison to the onslaught of today’s tricked-out e-learning modules.

Rapid design tools like Articulate Storyline and Adobe Captivate have not just made e-learning development faster and easier, they have also powerfully shaped the expectations of e-learning designers and users, biasing them toward experiences that include a lot of whiz-bang clicking, dragging and dropping, and a host of other insufferable and ineffective bells and whistles.

Ultimately your audience pays the price for these cheap tricks, which rest on pseudoscientific ideas about the psychology of learning, user engagement, and technology and fail to deliver value to those who must suffer through them.

In an effort to put an end to the madness, here are three popular e-learning gimmicks with little to no evidential basis, each paired with a data-driven principle that will help you design e-learning that actually works.

E-learning Gimmick #1: Drag and drops

Pseudoscience: Drag and drops make content interactive

Real science: Interactivity helps only when it fosters deep learning

The ubiquitous “drag and drop” has become one of the most celebrated features of authoring platforms, so much so that it is now table stakes to even enter the e-learning platform game.

A drag and drop exercise involves using your mouse to drag one object onto another to form a matching pair. It is often deployed as a categorization task to illustrate how one set of things belongs to another set of things.

The classic drag and drop.

Above all, they are heralded as a way to make e-learning ‘interactive.’

Which raises the question: since when did moving your mouse and clicking become the hallmark of interactivity? In fact, clicking and dragging objects around the screen is probably the least interactive thing you can do on a computer.

More importantly, the heart of a drag and drop is really just a series of true-or-false questions, which is the hallmark of surface learning – an approach to learning that focuses our attention on information alone. It doesn’t promote deep learning, which involves exploring the meaning behind the information and its application to our daily lives (see Marton, F., & Saljo, R. (1976). On qualitative differences in learning: I. Outcome and process. British Journal of Educational Psychology, 46(1), 4-11).

Deep learning is what we need for applied skills: the kind of skills we use in the workplace. Any organization, professional association, or education provider worth its salt wants its audience to engage in deep learning, not surface learning, when doing their continuing education.

Patti Shank, an internationally recognized authority on learning design, gives us a breakdown of surface versus deep learning in her book, Practice and Feedback for Deeper Learning (2017):

Table demonstrating the differences between surface and deep learning.

So ditch the drag and drop – the data tells us that interactions are only valuable when they promote deep learning and deep thinking (see Anderson, T. (2003). Modes of interaction in distance education: Recent developments and research questions. In M. Moore (Ed.), Handbook of Distance Education (pp. 129-144). Mahwah, NJ: Erlbaum).

E-learning Gimmick #2: Quizzes

Pseudoscience: Quizzing demonstrates understanding

Real science: Understanding doesn’t matter – transfer of training to the workplace does, and quizzing doesn’t promote transfer of training.

The whole idea of using quizzes to test understanding derives from the “school model” of learning that we were all brought up on, where evaluation looms large. School boards and ministries that set curricula need a way to demonstrate that students have ‘learned’ what they were supposed to learn, and the tools they most often reach for are quizzes and tests.

Because you were brought up on the school model, you may unknowingly be clinging to this idea too, even though it is largely irrelevant to the kind of learning that is required in the workplace.

Here’s the crucial question for people who are dead-set on using quizzes in e-learning: How much of the job in question is devoted to writing quizzes?

I have yet to hear an answer other than ‘none.’ So why are we asking people to practice taking quizzes?

Screenshot of an e-learning quiz.

Simulations – not quizzes – help employees apply new knowledge and skills on the job.

Fortunately, the workplace isn’t school. And the way to evaluate whether someone has learned something isn’t performance on a quiz. It’s performance in the workplace. That is, it is whether they have transferred this new knowledge and skill from the learning environment to the workplace environment.

There is good data and science on what promotes transfer of training, and it doesn’t involve quizzing (see Alexander, A. L., Brunye, T., Sidman, J., & Weil, S. A. (2005). From gaming to training: A review of studies on fidelity, immersion, presence, and buy-in and their effects on transfer in PC-based simulations and games. DARWARS Training Impact Group). The core concept is practice. Creating learning experiences that allow people to practice a new skill – and receive feedback – in high-fidelity simulations enables them to effectively perform that skill in the real world when the time comes.

So stop wasting your precious budget – and people’s time – designing quizzes, and start creating simulations that help them practice what they need to do on the job.

E-learning Gimmick #3: Characters

Pseudoscience: Characters and avatars make e-learning more engaging

Real science: Relevance is what makes e-learning engaging, and point-of-view (POV) design supports relevance

Because we are hooked on the school model, we think that learning has to be delivered by a teacher who is an expert and who tells us what we need to know. This aspect of the school model has been integrated into e-learning by creating a virtual teacher who delivers the information in a module, usually as the narrator.

Example of an e-learning character gallery, a symptom of whiz-bang e-learning.

Engagement is driven by relevance, not the use of characters.

One of the most glittery features of modern e-learning platforms is therefore a vast character library that designers can use to deliver information. This feature leads to boring information dumps, where you leave feeling like you’ve been talked at by a patronizing digital drill sergeant.

Anyone on the receiving end of these kinds of e-learning experiences knows in their heart of hearts that they aren’t engaging.

The scientific data tells us that engagement is a matter of relevance (again, see Shank’s Practice and Feedback for Deeper Learning). It is easier for people to actively engage with e-learning when they see a specific and important purpose for the learning in their daily lives. When people see how they can apply the learning to a real problem that they are facing, they engage. It’s that simple.

Knowing this, it is clear that designing an experience from the user’s point of view, one that maps directly onto the real-world problems they have to solve, is better than having a third-party avatar drone at them. Relevance peaks, and engagement follows.

It is indeed time to consign glitzy, pointless e-learning tactics to the same dustbin as Flash-media and move forward with evidence-based e-learning strategies and techniques that demonstrate measurable outcomes.

Amen.

— Aaron Barth

Aaron Barth, Ph.D., is president and founder of Dialectic. He is a frequent speaker on topics such as unconscious bias in the workplace, and the power of science, design thinking, and technology to help accelerate employee learning, transform organizations and support employee and customer engagement.


Think Strategy First for a Diverse and Inclusive Workplace

Take a minute and look around your workplace. Does everyone look like you? Do some people dominate conversations and situations at the expense of others? When hiring, do you hear: “We need another Glen or Joanne?”

A ‘yes’ to questions like these signals the presence of unconscious bias, circumventing your best efforts to promote diversity and inclusion. As an HR professional, you know it’s your job to work to eliminate biases in order to avoid the pitfalls that even the biggest organizations – including Starbucks and Amazon – have fallen into. Companies everywhere are scrambling for ways to deal with unconscious bias. And you can’t count on artificial intelligence to save you – it can be just as guilty as the rest of us.

Your first instinct when dealing with unconscious bias may be to set up more training. While training is important, I recommend you take a step back and consider the problem from a broader strategic level to identify multiple behaviour-changing levers that can work in tandem to make your workplace more diverse, more inclusive and less biased.

Before we get into the details, let’s start with clear definitions of unconscious bias and unconscious bias training.

What is Unconscious Bias?

Unconscious bias refers to learned, deeply ingrained stereotypes, views and opinions that we are unaware of because they happen outside our conscious control. They surface automatically – triggered by our brain as it makes quick judgements about people and situations. Unconscious biases are influenced by factors such as our background, cultural environment, context and personal experiences and they affect our everyday behaviour and decision-making.

What is Unconscious Bias Training?

Unconscious bias training and interventions aim to increase awareness of unconscious bias and its impact on people who belong to traditionally underrepresented groups in terms of characteristics such as age, race, sex, disability, religion or belief, sexual orientation, gender, marriage and civil partnership status, and other attributes.

Some of the aims of unconscious bias training are:

  • to reduce implicit/unconscious bias towards members of underrepresented groups as identified above
  • to reduce explicit bias towards members of these groups
  • to change behaviour, in the intended direction, towards equality-related outcomes

For more on unconscious bias training, please see the U.K.-based Equality and Human Rights Commission’s report, Unconscious bias training: An assessment of the evidence for effectiveness (March 2018).

Training, while important, is only a single tactic to raise awareness about unconscious bias and promote positive diversity and inclusion outcomes. The challenge is that most people will say they are unbiased – in fact, individuals who take Harvard’s Implicit Association Test (IAT) are most likely to self-report as egalitarian. However, deeper probing continues to reveal systematic biases:

When individuals are asked about their feelings and attitudes towards others, they are most likely to self-report as egalitarian (orange bar), while deeper testing (blue bar) reveals systemic biases.

Unlike self-awareness and beliefs, behaviour is observable. We can measure it scientifically. We can track changes and make adjustments in our programming to make sure it is making a real difference. If measurable changes in behaviour are your goal (moving beyond what people say or believe to what they do), you require a strategic approach or framework that involves both teams and individuals and is embedded at multiple levels throughout the organization.

A strategic framework for diversity and inclusion looks like this:

A behaviour change approach to a diverse and inclusive workplace. Use the strategic framework of purpose, process, practice and people to minimize unconscious bias and create change in your organization.

Here’s how this strategic framework plays out in terms of real activities with tangible outcomes:

Purpose

The shared values, beliefs, and purpose that drive people’s behaviour

Yes, it’s important to state that your organization values diversity and inclusion. But we all know that talk is cheap. Efforts that only extend to “core values” statements on your website may be worse than not doing anything at all. It’s essential to walk your talk by integrating your stated values into activities within the other three organizational levels (process, practice and people). Doing so will set you well on your way to creating a culture that honours and supports diversity and inclusion.

Process

Policies and procedures that guide people’s behaviour

Turning policies on diversity and inclusion into concrete workplace procedures that people can easily follow is critical to creating the ‘ways of working’ that you want to engender throughout your workforce. Hiring is a high priority place to start, including recruitment strategies, resume review and interviewing techniques.

Practice

Tools that nudge and support your people’s behaviour

It’s entirely possible to change people’s behaviour without expensive, productivity-zapping education and training. One way is by augmenting people’s environments – whether physical, cognitive, or social – with tools or “behavioural nudges.”

From a simple checklist all the way to a fully integrated performance support like TurboTax (tax-preparation software that takes individuals through a step-by-step process of answering simple questions to file their tax returns), these tools effectively take the thinking out of an activity and influence the way people perform it through subtle, often unconscious cues. (This point is critical, since it is people’s thinking that is precisely the problem when it comes to diversity, inclusion and unconscious bias.)

For example, framing interview questions in certain ways on your standard interview script can counteract particular biases that might otherwise favour certain candidates. The interview script is an example of a tool that deploys nudges, framing questions to drive egalitarian questioning and responses.

Note that it’s not necessary to change the thinking of interviewers; we need only have them follow a particular script that is nuanced in the right sorts of ways.

For many organizations, these “behavioural nudges” are an untapped method of creating behaviour change. They are easy to implement and tremendously cost-effective.

People

Learning experiences that change people’s behaviour

Of course, we can also change observable behaviour by developing each individual’s personal knowledge and skills. To do it right, however, you need to design training in a way that goes beyond changing minds to improving behaviour-based skills.

I’m a big fan of in-person training that uses scenarios and simulations to facilitate practicing skills and applying knowledge.

I like to flip the traditional way of teaching and begin by challenging people to solve an applied problem, while providing them with coaching and feedback along the way. Any information they need – any knowledge – can be provided as optional resources they can use to solve the problem.

Another avenue to consider is e-learning, which provides a scalable solution to unconscious bias and diversity and inclusion training if you have a large employee base and are looking for cost efficiencies. The research shows that for unconscious bias training, e-learning can provide the same outcomes as in-person training. (For more on this point, see section 3.2 in the Equality and Human Rights Commission’s report, Unconscious bias training: An assessment of the evidence for effectiveness).

A blend of in-person training for leaders and e-learning for employees is a good mix if you have a big company. The leaders get the hands-on experience: they wrestle with the ideas and can help shape the implementation in the organization, while employees get a scalable e-learning version, with no loss in learning outcomes.

A strategy integrated across these four organizational elements – purpose, process, practice and people – is your best path to real, measurable behaviour change and a more diverse and inclusive workplace.

— Aaron Barth

Aaron Barth, Ph.D., is president and founder of Dialectic. He is a frequent speaker on topics such as unconscious bias in the workplace, and the power of science, design thinking, and technology to help accelerate employee learning, transform organizations and support employee and customer engagement.

The intersection of your organizational goals, literature review and applied research helps illuminate a specific audience for your public education campaign.

One Big Mistake That Will Derail Your Public Education Campaign

As the executive director of a not-for-profit, how many times have you seen this happen: Your team designs a public education campaign with utmost care, beautifully produces it, sends it out into the world, and then … nothing. You may get some likes or clicks, but can you say it actually changed anything? Did it succeed in inspiring people to champion your cause, share your news, solve a problem, sign a petition, donate, attend an event, bring others, volunteer or step up to lead?

If the answer is ‘no’, read on.

The fact is that most public education campaigns fail to deliver on the ultimate objective: effecting behaviour change. That’s because most campaigns are designed to reach as many people as possible – an admirable goal, but not an effective strategy. The hard truth is: when you try to appeal to everyone, you generally appeal to no one.

We’ll say it again: When you design a campaign for everyone, you design for no one.

It may seem counterintuitive, but when you identify a single key audience and narrow your focus to specifically serve their needs, your efforts gain momentum and your results grow.

Start with science and let your findings lead you.

The key to creating public education campaigns that deliver behaviour-changing results starts before you develop the campaign, with a scientific discovery process that involves forming a hypothesis that supports your organizational goals. Then you conduct research to pinpoint and define a specific user group or target audience as a focus for your campaign. You want to fully understand who the people in this group are, how they prefer to engage with you and how they learn.

By using a variety of applied social scientific methods like interviews, focus groups and workshops, as well as literature reviews of relevant areas of knowledge, you can gain deep insight into your stakeholders’ motivations, emotional experiences, and learning preferences. This process will illuminate your target audience’s information-seeking behaviour, their aesthetic preferences, preferred access technologies and social media habits. Competitive analyses can be conducted to learn from related programs about what works well.

All these insights are used to precisely tune your outreach efforts to address your target audience’s challenges, goals and educational needs. In this way, the voices of your constituents become the heartbeat of your programs. You are also able to identify unifying principles and ideas for designing an ecosystem of program assets to reach this group and beyond. And when you add in data collection and measurement techniques, you’ll have all the elements of a powerful public education campaign that delivers real, sustained results.

In this way you’ll dramatically increase the probability that your highest-priority audience members will get something valuable from your research, and other people will still see value in it, too.

Also see: Beyond Reports: Moving From Awareness to Behaviour Change.

About Dialectic

Dialectic believes that design is a collaboration that fuses analytical and creative strengths with deep domain knowledge, to produce communications that connect. Contact us for more information on how to create public education campaigns that drive sustained behaviour-change results.


Is Your Workplace Culture Machine Learning Ready?

Machine Learning, AI and Change Management: 3 Ways Leaders Can Prepare Their Organization for Success

For decades, futurists have been predicting that the workplace will one day be radically transformed by artificial intelligence (AI) and machine learning.

But it’s only fairly recently that the technology has begun to demonstrate its potential. AI and machine learning technologies are becoming more and more accessible and affordable thanks to the exponential growth in computing power, the development of more and more sophisticated algorithms, and the massive amounts of data generated by billions of people every day.

“One day” is here.

Parts of the financial sector, such as mortgage approvals, have been automated for years. Chatbots and virtual assistants are taking over many customer service functions traditionally handled by people. Automation in factories has eliminated jobs in some areas while creating employment in others.

While AI and machine learning are benefiting businesses and organizations in every sector, the inevitable disruption also brings major challenges. Too often, the conversation focuses on the extremes: either human workers will become obsolete, or AI will relieve people of repetitive or dangerous tasks and increase profits while freeing humans for more creative work. Organizations need to shift that conversation.

The difference between success and failure will depend on how well leaders prepare their workforces for a world where employees work with — rather than compete against — AI, machine learning, robots and other “smart” technologies.

What are machine learning and AI?

What do we mean by AI and machine learning? People often conflate the two concepts but there are key differences.

In a nutshell, machine learning is the use of mathematical procedures (algorithms) to analyze and find patterns in data, which can then be used to make predictions about the world. For example, retail stores use machine learning to analyze customers’ purchasing history in order to predict which products will be in demand and make appropriate decisions about things like inventory or pricing. The more quality data the algorithm is given, the more reliable its predictions become. The algorithm “learns” without requiring the intervention of a computer programmer to tell it what to do next.
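
To make the idea concrete, here is a minimal sketch of that retail-demand example using scikit-learn. Everything in it is an illustrative assumption – the feature names (units sold, price, month), the synthetic data, and the choice of a random forest – not a recipe from any particular retailer.

```python
# Minimal, illustrative sketch: "learning" a demand forecast from past
# purchasing history. All data below are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical purchase history: units sold last month, average price, calendar month
units_last_month = rng.integers(50, 500, size=200)
avg_price = rng.uniform(5, 50, size=200)
month = rng.integers(1, 13, size=200)
X = np.column_stack([units_last_month, avg_price, month])

# Hypothetical "observed" demand the following month (what the algorithm learns from)
y = 1.1 * units_last_month - 2.0 * avg_price + 80 * (month == 12) + rng.normal(0, 10, size=200)

# The model finds the pattern on its own -- no programmer hand-codes the rules.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Forecast demand for a product that sold 300 units at $20 heading into December.
print(model.predict([[300, 20.0, 12]]))
```

The point of the sketch is simply that the rules are inferred from historical data rather than programmed by hand; feeding the model more (and better) history is what improves its forecasts.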

It’s a lot like how people learn. You observe what goes on around you, drawing conclusions and identifying patterns about how the world works. You then use this knowledge to deal with new situations as they arise. Machine learning’s advantage over human learning is its ability to process much more data, to deploy more sophisticated algorithms to identify patterns, and to do this at great speed. This leads to fast, reliable predictions that can exceed human decision-making capabilities and mitigate unconscious bias (which we will discuss in more detail below).

Machine learning is a subset of AI. AI is the replication of human analytical and decision-making capabilities through the application of sophisticated machine learning. There are AI applications that can perform as well or better than humans at a variety of tasks, from playing chess to assessing creditworthiness to spotting a cancerous tumour on an x-ray. AI is a form of intelligence, but it is not conscious in the way that human minds are.

Machine learning is everywhere

Machine learning has been used in many industries for years. Most people engage with it in some form every day. It’s used for fraud detection and credit scoring, placing real-time ads on mobile devices and websites, web search results, spam filtering, image and pattern recognition, sentiment analysis in texts, and more. Machine learning provides the “brains” behind everything from pizza-making robots to self-driving cars to computer programs that write articles for the sports and finance pages. Sectors that are leading the way include:

  • Banking and finance: loan approvals, automating stock market transactions
  • Retail: sales forecasting, managing inventory and logistics
  • Marketing: customer insights, targeted promotions
  • Health care: diagnostics, predicting ER wait times and allocating staff resources
  • Utilities: developing smart grids to manage electricity supply and demand
  • Manufacturing: automating repetitive and/or dangerous tasks, optimizing quality and efficiency
  • Professional services: accounting, legal research services
  • Recruitment and education: application screening, customized learning
  • Call centres and customer service: automated inbound and outbound calling to support sales and marketing, answering customer queries

How machine learning can improve your business

It’s been estimated that in many fields, predictive models can outperform human decision makers by 20 to 30 per cent. Leveraging machine learning and AI can help make your workforce more efficient and deliver valuable business intelligence to improve your value proposition, customer service, logistics, marketing efforts, and more.

But simply putting more data in the hands of employees will not, by itself, translate into improved business performance. A successful transition will depend on the ability to create a machine learning-ready workplace culture. In practice, this means that in addition to investing in the latest technology, companies will also need to invest in training and enhancing the skillsets of their employees.

That’s because, in the end, machine learning is just math. It is only a tool, and like all tools, algorithms are only as good as the people who design them and use them, and the materials — the data — that they’re given to work with. Ultimately, it will be a human’s job to ensure that the data being collected are relevant and that the insights it shows are being leveraged properly.

The importance of critical thinking

So how do you position your workforce to effectively carry out the crucial role of making decisions and acting on the insights from machine learning algorithms?

You need to build a culture that values critical thinking.

Critical thinking is the ability to reflect upon information in order to form good judgments and actions. To borrow from the algorithm analogy, a good critical thinker takes information as input and outputs good judgments and actions. In this case, the critical thinker takes the insights and predictions produced by machine learning algorithms as inputs, and outputs sound judgments and actions that will move the organization forward.

Leaders need to develop the capacity for critical thinking across their organization so that as machine learning gets more integrated into everyone’s work, the organization as a whole is making the best decisions it can.

Context is everything

Machine learning is really good at making predictions about a very narrow set of things. It is a ‘vertical integrator’ of information — or, it has ‘vertical intelligence.’ Its intelligence is narrow and deep (relative to humans). Humans are really good at integrating a wide set of information. We’re horizontal integrators. Human intelligence is wide and shallow (relative to algorithms).

Since business decisions don’t happen in a vacuum, organizations need to leverage humans’ ability to integrate machine learning insights into the wider organizational and business context to make good decisions — and to avoid bad ones. For example, there are many algorithm-based market intelligence tools that can help businesses understand their customers and develop messages that will resonate with them. However, if the winning formula predicted by the algorithm is, say, already being used by one of your competitors, or is based on a small or suspect data set, you wouldn’t run with these messages because to do so could cause confusion and even damage your brand. Human intelligence and creativity are still required to fine-tune messages that connect with the right audience at the right time.

Understanding the larger context helps us identify when algorithms produce suboptimal results. Recently, Amazon abandoned its AI-powered recruitment engine when it was discovered to have a built-in bias against hiring women. That’s because the computer models used to vet applicants were based on resumes submitted to the company over the previous decade. Most of the job applicants in that period were men — a reflection of male dominance in the tech industry. Amazon’s system taught itself to prefer male over female candidates, but the machine wasn’t biased — the data set was.

So, machine learning isn’t magic — it depends on people asking the right questions to solve the right problems in the first place, and interpreting machine learning insights in a human real-world context. To make all that happen, leaders must equip their employees with three critical thinking abilities: the critical thinking mindset, skill set and tool set.

3 ways to build a workplace culture that’s machine-learning ready

1. The Critical Thinking Mindset

The core of critical thinking is really an attitude or mindset that comes down to a very simple behaviour: the ability to consider an idea without necessarily accepting that idea as true.

When you can entertain a thought without automatically accepting it, you are able to open up a space between understanding something and acting on it. And that’s precisely what you need from your workforce in order to reduce your risk-exposure when integrating more machine learning technologies: your workforce needs to be able to reflect on machine learning algorithms and insights before blindly putting them into practice.

The following behaviours demonstrate when an individual has adopted a critical thinking mindset:

  • Open-mindedness – intellectual curiosity and a willingness to suspend judgment
  • Intellectual responsibility – objectivity, humility, and a commitment to the consequences of thinking critically
  • Independence of thought – intellectual autonomy and the courage to speak one’s mind
  • Respect for others – sensitivity, empathy, and a willingness to consider other points of view.

2. The Critical Thinking Skill Set

We know it is possible for algorithms to be biased, as the Amazon example above makes clear.

The reason they can be biased is an example of the “garbage in, garbage out” principle. Machine learning algorithms learn — without reflection — from data sets. So if a data set is biased in some way — say, by containing far more men than women — then the predictions the algorithm delivers will be biased in the same way.
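
As a toy illustration of that principle (a sketch with synthetic data and hypothetical features, not Amazon’s actual system), a model fit to historical hiring decisions that favoured men will happily reproduce that preference for new candidates:

```python
# Minimal, illustrative sketch: a model trained on skewed historical hiring
# decisions reproduces the skew. All data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0 encodes gender (1 = man, 0 = woman); feature 1 is a qualification score.
gender = rng.integers(0, 2, size=n)
qualification = rng.normal(0, 1, size=n)

# Biased historical outcomes: past decisions rewarded gender far more than qualifications.
hired = (1.5 * gender + 0.2 * qualification + rng.normal(0, 0.5, size=n) > 1.0).astype(int)

X = np.column_stack([gender, qualification])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates who differ only in the gender flag get very
# different predicted probabilities -- the "garbage in" shows up in the output.
candidates = np.array([[1, 0.5], [0, 0.5]])
print(model.predict_proba(candidates)[:, 1])
```

Nothing in the algorithm is malicious; it is faithfully summarizing a biased history, which is exactly why human checks and balances matter.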

That’s why having some checks and balances on machine learning is important. And humans armed with critical thinking skills are well-suited to this task because of our horizontal intelligence.

One way to enable your team’s critical thinking skill sets is through Pearson’s RED Critical Thinking Model. Based on a widely used test to evaluate cognitive ability, the RED model emphasizes that to think critically, we must:

  • Recognize assumptions and demonstrate the ability to separate fact from opinion
  • Evaluate arguments by analyzing information objectively and accurately
  • Draw conclusions based on available evidence.

Workers equipped with critical thinking skills can put machine learning insights to the test to see if they make sense — both in relation to the set of constraints imposed on the algorithm and to the broader context of the business or organization.

3. The Critical Thinking Tool Set

Building employees’ confidence in their critical thinking abilities will empower them to incorporate machine learning insights into their decision-making. In turn, that will drive innovation.

Organizations can help their people build their confidence by providing them with decision-making tools and job-aids such as this simple seven-step process.

Importantly, the tools you select should guide people’s thinking in ways that avoid unconscious biases. For that reason these tools should be located in the workplace environment (so they are external to the individual and therefore not subject to individual biases) and they should provide a common process that everyone can agree upon to deliver results that everyone can trust.

There are three kinds of tools and processes we recommend as a starting point:

    1. Build a decision-making workbook
    2. Provide critical thinking mentorship from already strong critical thinkers in your organization
    3. Ensure that decisions stand up to scrutiny by identifying a team whose job it is to challenge your organization’s ideas, processes and assumptions.

Machine learning and AI can deliver competitive advantages but only if the entire organization is committed and open to change, from the CEO and top IT administrators to mid-level managers and frontline staff.

Companies that invest in equipping employees with critical thinking skills to guide this never-ending process of analyzing data, evaluating insights, making predictions, measuring results and making adjustments will achieve better business results, increase productivity and continue to drive innovation.

About Dialectic

Dialectic provides education and tools to help people transition to a partnership with the AI-enabled robot that may be in their future. We focus on the psychological, social, emotional, and cognitive needs of humans in that partnership, and how companies can help people adapt to that world through training and tool development. Contact us to learn more.

Marcus’s Story graphic – a text-message-based experience to move college administrators towards behaviour change in the fight against AIDS & HIV

Beyond Reports: Moving From Awareness to Behaviour Change

How Not-for-Profits Can Turn Awareness Into Real Change

Most leaders of not-for-profit organizations are well acquainted with the constant conflict between a deep desire to do your utmost to advance your mission and the harsh reality of limited resources.

You’re trying to make every dollar stretch as far as it will go and you’re trying to satisfy the expectations of funders and all those involved in your cause. So you do the research, analyze the information, and create and distribute reports or guides to educate the public.

You hope those reports have the desired impact. But at the same time you know it’s incredibly difficult to make your organization’s voice heard amongst all the competing interests out there. And you know it’s difficult to demonstrate the impact your efforts are having when it comes time to report back to funders or write that next grant application.

It’s even harder – and seems riskier – to go beyond reports and try something different when there is so much at stake.

The benefits of reports

The reality is that research IS essential and reports have their place. There are a number of reasons why reports have long been considered the gold standard in public education:

  • They tell the whole story, serving as a container for a deep level of information.
  • They are a form of communication that people recognize, which signals credibility.
  • They can be validated through primary and secondary resources.
  • They visualize the data.
  • They reach a large audience.
  • They are preferred by policymakers who want the validation of the report in their hands to gain buy-in for issues.

The limitations of reports

So far, so good. But the fact is that traditional reports also have many significant limitations:

  • People don’t read, they scan – at most they might read 20% of the text on a page (Nielsen Norman Group, 2008).
  • There are no useful metrics – with PDF reports we can only measure if people have clicked on the link, shared it or liked it. We don’t know if they’ve actually read it.
  • Information overload: You run the risk of burying the most important research, data and stories in long-form publications.
  • Reports can transfer knowledge but they rarely change behaviour.
  • Reports are untargeted and impersonal: When you design for everyone, you actually design for no one. Trying to reach a broad audience often means missing the very group of people who most need to hear your message.

The keys to behaviour change

We know that change happens when people DO something different, not when they simply KNOW more. In a nutshell, this is the main failing of reports – they don’t enable people to do something new.

Here are ways that people can do something different – steps that are small, incremental and vitally important for building momentum for a campaign or a cause:

So, how can not-for-profits encourage people to take steps that create real behaviour change?

  1. Use your research to pinpoint and define a specific user group or target audience to fully understand who they are and how they learn. When you do this, you significantly raise the probability that your highest priority audience members will get something valuable from your research. Don’t worry, other people will still see value in it, too.
  2. Supplement reports by creating other learning assets that specifically target those people and appeal to their mindset and motivations. Fit the delivery method to the task at hand. For example, if you are trying to make a data-driven argument to compel your target audience to action, use an infographic. Or if you are trying to help them develop a new skill requiring practice and feedback, use a simulation-based e-learning module.
  3. Finally, consider how to measure some of the ways people can change to validate, calibrate, and adjust the tactics of your campaign and the design of your outreach assets based on user feedback. Once data is collected you can design new campaigns and optimize old campaigns based on real information versus simply guessing.

Marcus’s story (currently in beta) is an interactive learning experience to create awareness and reduce the spread of HIV among college students.

Marcus’s Story

For example, we’ve designed a number of reports for the Human Rights Campaign, including Making HIV History: A Pragmatic Guide to Confronting HIV at HBCUs.

One idea we’ve proposed to increase the impact of this important report is Marcus’s story, which is a text chat between several characters.

Marcus’s story targets a specific audience – college administrators, who are in a position to make a significant impact towards the goal of an HIV- and AIDS-free generation. It is designed to create empathy for students living with HIV on college campuses and would be part of a larger public education campaign to create awareness and reduce the spread of HIV among college students.

Immersive learning experiences like Marcus’s story can be used to model best practices for preventative health care, HIV campus policies and more. In this way, administrators could gain the practical tools and resources they need to reduce the risk of HIV on college campuses, increase HIV awareness and prevention, and provide better support for students living with HIV.

Play the beta version of Marcus’s Story.

Vitalogue Thanksgiving scene talking about advance care planning

E-learning Game Helps Families Discuss Advance Care Planning

The Thanksgiving holiday season is the perfect time for families to have important discussions about advance care planning.

Advance care planning involves making decisions about the type of care you would like to receive if you become unable to speak for yourself due to illness or an accident.

“It’s vitally important to talk to loved ones about your wishes and values regarding the health and personal care you would want to receive in the future — because none of us can predict when or if we might become ill and unable to speak for ourselves,” says Audrey Devitt, Waterloo-Wellington Geriatric System Coordinator for the Canadian Mental Health Association Waterloo Wellington (CMHAWW).

Advance care planning is an issue that affects everyone, not just those who find themselves in hospital due to a serious illness or injury or who may be nearing the end of their life.

Vitalogue image of an elderly man talking to his daughter.

“Unfortunately, advance care planning is also a topic that too many people avoid. We need to change that — and Thanksgiving, when families across Canada gather to celebrate their blessings, is a good time to start.”

Devitt is one of the health-care professionals involved in launching Vitalogue, an e-learning game designed to encourage and support important conversations between patients and families about advance care planning.

Vitalogue is a scenario-based game, created through a collaboration involving St. Joseph’s Health Centre Guelph, Conversations Worth Having Waterloo Wellington, Hospice of Waterloo Region, Hospice Wellington, and CMHAWW.  Vitalogue was created by Dialectic, a Guelph-based e-learning solutions provider, and leverages insights from the Game Design and Development Program at Wilfrid Laurier University.

The game puts players in the shoes of the patient to help create empathy and understanding of the decision process from their point of view. Although it was designed for health care professionals, the real-world scenarios are an effective tool to help everyone practice the skills involved in having these difficult conversations.

“We designed the game around real problems, and the decision-making process that people go through when they’re facing these incredibly difficult and highly personal questions,” said Aaron Barth, Founder and President of Dialectic. “Each choice the player makes incrementally impacts the patient’s outcome, ultimately leading to better or worse results for them and their family. The simulation lets players practice the skills they need to help families and their loved ones arrive at decisions that are best for them.”

Play Vitalogue: A game about Advance Care Planning

See The Making of Vitalogue (video)


3 Mistakes HR Leaders Make Trying to Fix Corporate Culture

Toxic corporate cultures can be incredibly difficult to fix, despite the valiant efforts of HR leaders to create programs to improve morale and boost productivity.

You know that a positive workplace, with an innovative, collaborative, self-motivated and high performing team is essential to your company’s success. So why is it so hard to get employees to bring their best to work every day? What does it take for them to be happy on the job? How do you design HR programs that yield the desired effects?

Intuition vs Science

The hard truth is that many well-intentioned HR programs fail to fix corporate culture because those developing them don’t strike a balance between using their intuitions and using science. Please note the word balance: this is not about elevating science above intuition and hard-won experience or vice versa – in fact, that’s the whole point: both aspects must be given equal weight in your thinking to achieve a successful outcome.

Keeping that balance in mind, here are the three common mistakes that we see HR leaders and managers make when creating programs to address company culture challenges, and how to fix them.

  1. They don’t get to the heart of the business problem.
    Correctly identifying and framing the business problem that your culture change solution addresses is essential to achieving your desired outcomes. This means considering your company’s higher-level strategic goals, and drawing as straight a line as possible from those goals to the kind of culture that best enables them. Your culture change initiatives are unlikely to be successful – and probably aren’t worth doing – if they do not support your company’s overarching objectives. To get to the heart of the business problem, you must engage leadership in thoughtful, critical reflection about their strategy and its link to your culture solutions. For the leadership team, we recommend holding both individual and group meetings to achieve agreement and alignment. The main deliverable from these activities is a strong business case in favour of the culture you want to create – and the changes that are necessary to get there – in terms that connect with leaders and with their strategic vision for the company. Defining and framing the business problem saves time, money and resources in the long run.
  2. They don’t delve into the scientific literature to fully understand the real drivers of good company culture.
    Evidence-based problem solving requires that you seek out all pertinent information that can help you design an ideal solution, and this means conducting a thorough review of the scientific literature relevant to your project. This is where you begin to test assumptions and ensure you avoid confirmation bias, which is the tendency to look only for information that confirms what you already believe. (People rarely, if ever, look for disconfirming information.) What the science will tell you is what works and what doesn’t, based on empirical evidence. That’s the kind of confidence you need to create the right programs to create the change you want. People also tend to skim the surface and base their decisions on readily available sources of information such as blogs, magazines and popular books. While these sources can be informative, it’s important that your search for empirical support also includes sources of the highest credibility, including scientific books and peer-reviewed scientific journal articles. The deliverables from your literature review are empirically supported frameworks and models that illuminate the nature of culture and culture change, provide support for a solution to your culture change problem, and contribute to your understanding and ease of implementation.
  3. They don’t constrain the problem and use evidence to test assumptions.
    To find a viable and focused solution, it’s also important to constrain the problem, that is, to clarify how the problem actually arises in your organization and with your people. You do this by conducting applied research that includes employee input about their views on the current company culture and their ideas about how things could be improved. This allows you to gather the evidence you need and further test your assumptions. It also allows you to uncover variables and leverage points unique to your organization that will lead you to design the best culture change solution. What does this look like in practice? One approach is to conduct a gap analysis using social scientific methods. To do this, use the framework you gleaned from your literature review to develop a set of research questions about your company’s culture that you want to answer. Then collect data that answers these questions using surveys, focus groups, interviews, and embedded observation methods like job shadowing or ethnography. Next, analyze the data and compare it against your model. The gap between what your framework predicts and what your evidence tells you is the gap you need to close with your solution.

Scientific HR for organizational success

Following the steps outlined above balances intuitions with scientific evidence – and forms the foundation of an approach that we at Dialectic call Scientific HR. In this way you’ll be better able to design the right solution for your unique challenges en route to creating a great corporate culture with productive, motivated and high performing team members.

Do you have a difficult organizational culture challenge to solve? Learn more about Dialectic’s Illuminate to Elevate methodology, which integrates science and the creative arts to help companies of all sizes break through barriers to new levels of growth and success, or get in touch to get started right away.

HRC LGBTQ Youth Report collage

#HRCYouthReport Addresses LGBTQ Teen Experience

We couldn’t be more proud to see our partners at the Human Rights Campaign Foundation release their groundbreaking report on the status of LGBTQ youth in America.

The 2018 LGBTQ Youth Report – which is based on the largest survey of LGBTQ youth to date – reveals the overwhelming challenges LGBTQ youth face at home, in school, and in their communities, and points the way to effective measures to address these systemic issues.

Our team of researchers and designers supported the production of the report, integrating data-driven insights and evidence-based design to reveal and elevate the stories and experiences of LGBTQ youth. For example, we chose a sketchbook as the report’s fundamental design concept, which gave us a platform to simultaneously display the survey’s boldest statistics, while allowing readers a glimpse into the lives of LGBTQ youth, their private thoughts, feelings and experiences.

More than 12,000 respondents, aged 13 to 17 and representing all 50 states and the District of Columbia, participated in the survey, sharing their personal experiences of family rejection, bullying and harassment.

We’re excited to have worked with HRC on this important report and to have supported their efforts to advance equality for all LGBTQ Americans.

Learn more about the HRC Foundation and University of Connecticut 2018 LGBTQ Youth Report.