Introduction

Aims and overview of the toolkit

The Digital Inclusion Evaluation Toolkit’s aim is to support you, wherever you are on your digital inclusion journey, to create positive change for people experiencing digital exclusion.

This toolkit explores the steps that any community group or organisation could take to evaluate the impact of a digital inclusion project, or pieces of digital inclusion activity. It looks at what evaluation is and why it’s helpful for digital inclusion, what the principles of good evaluation are, and the basic building blocks needed for evaluating digital inclusion work. The toolkit also shares example resources and case studies, alongside tips for analysing your data.

Intended audience

This toolkit is aimed at community groups and organisations delivering, or thinking about delivering, a digital inclusion project or piece of activity.

What is evaluation and why is it helpful for digital inclusion?

Evaluation is the process of measuring the success of something, against defined values or criteria. There are different reasons why you might want to evaluate the work that you do:

  • To understand what’s working well and less well, to help identify how a project or service can be improved.
  • To meet the requirements of funders to demonstrate how funding has been spent and the difference it has made.
  • To assess the difference you’re making over the longer term, to understand what impact you’re having on a wider social problem, such as digital exclusion.

Although designing evaluation can seem complex and time-consuming, there are easy ways to start collecting information to help you assess how you’re delivering a service or project and whether it is having the effect you want. If you are doing this already, hopefully this toolkit will help you think about how to apply it to evaluating the impact of your work on digital inclusion.

In relation to digital inclusion, it is important to evaluate services and projects to help you understand:

  • who is accessing digital inclusion support and who is not, to help identify any groups of people who remain excluded from accessing support to use the internet
  • whether the support provided is meeting people’s needs around digital inclusion. There has been a lot of focus on building people’s basic digital skills and confidence, but we also know that increasing numbers of people are struggling to afford connectivity or appropriate digital devices
  • the wider impact of being digitally included. Evaluation will help generate a deeper understanding of the positive changes to people’s lives from support to get online, and in turn what people miss out on if they are digitally excluded. This understanding can help us make more compelling arguments to funders, policymakers and other stakeholders for the need for funding and infrastructure for digital inclusion

Principles of good evaluation

Evaluating digital inclusion support is no different to evaluating any other kind of project or service, and there are guides and resources available on how to evaluate projects and services. Below are a few principles of good practice to guide your evaluation work.

Work backwards

Think about what change you want to see and work backwards to identify what you can measure that will help you recognise if you’re heading towards that change. You might want to consider creating a Theory of Change, a visual map of the impact, outcomes, outputs and inputs of your digital inclusion work.

Less is more

Focus on asking 2 or 3 questions really well, rather than collecting so much data that it becomes overwhelming. You can build up your evaluation approach over time, adding in more questions or types of data as you build capacity for this work.

Numbers and words tell the best stories

A mix of types of data is best for reporting what you’ve done and the difference you’ve made in terms of digital inclusion. People like to see the headline statistics in terms of the number of people supported and the amount of positive impact, but real stories of how people have experienced change and the difference it has made to their lives can be really compelling too.

Contribution not attribution

Don’t worry about whether or not you can wholly claim you’re the reason for change; it’s enough to be confident that you have contributed in some way to that change, and to be honest about that.

How to use the toolkit

This toolkit is designed to be a starting point for anyone looking to explore how evaluation can help with their digital inclusion work. It is not an exhaustive list, but should provide the basic building blocks for those wishing to explore this area or learn a bit more about different tools and techniques.

The toolkit is broken down into 3 levels, designed to cater to the specific types of evaluation you may wish to do. The basics of what is contained in each level can be found below. Also remember that the levels build on each other and are not exclusive: you may decide to draw elements from each of the levels.

Level 1: capturing who and what

Those at the start of their evaluation journey, or those looking to collect basic data, can start here. Level 1 looks at very basic data and mostly focuses on the people you are supporting and any additional data you might need to record.

Level 2: evaluating experience of support

This section is aimed at those further along their evaluation journey, or those who require more detailed data, e.g. satisfaction information. Building on the data from Level 1, at this level you can examine your data in more depth and start to look at outcomes from your support.

Level 3: evaluating outcomes of support

This section looks at more advanced ways of measuring impact and evaluation. It explores the use of baseline and follow-up surveys, and using qualitative and quantitative data to analyse and assess a wider range of outcomes from your work.

Level 1: capturing who and what

The primary aim at this first level of evaluation is to understand who you are reaching through your digital inclusion project/service and what support they are receiving. This reflects the ‘outputs’ of your work. You can then use this information to assess whether it meets the aims of your project/service, whether there are any gaps in terms of the types of people you are reaching, and whether the support you’re providing is meeting people’s needs to get online. You may already be collecting much of the information you need; the key is organising it and understanding how to use it to evaluate the who and what of your delivery.

What to measure and how

Evaluation question

  • What digital inclusion support are you delivering?

What information to capture

  • Types of support being delivered (e.g. basic digital skills classes, access to mobile data, access to devices, digital champions support etc.).
  • Amount of support being delivered for each type, in a defined period of time, e.g.:
    • Number of hours of skills support delivered.
    • Number of mobile SIMs/data vouchers distributed.
    • Number of devices loaned/distributed.
  • Amount of support received by each person in a defined period (e.g. 2 months).

Sources of information

  • Delivery data, e.g. from your organisation’s calendar; records of data/devices distributed.
  • Data from registration/sign-in sheets; appointment records.

Evaluation question

  • Who are you reaching with your support?

What information to capture

  • Relevant demographic information for digital inclusion, e.g. gender, age, ethnic identity, disability status.
  • Information about the barriers they currently face to digital inclusion e.g.:
    • Lack of access to mobile data/Wi-Fi connection.
    • Lack of access to appropriate digital device.
    • Lack of basic digital skills to get online and use the internet.
    • Lack of confidence to do things online.
    • Difficulties with English language.
    • Difficulties due to a disability or other long-term condition.

Sources of information

  • Short questionnaire used at registration.

Tips for collecting information

On the support you’re delivering

Set up a process to regularly collect this data (e.g. monthly) to enable you to have it all in one place to report on your outputs.

On who is receiving support

  • Collecting information about the people you support when they first engage with your service or programme helps you understand your wider delivery as well as your digital inclusion support. A short questionnaire conducted as part of the registration process could be built into any customer management system you currently use, or administered as a simple paper form, with the data from the forms entered regularly into a spreadsheet or other digital system.
  • When asking about demographics, use categories that align with external data sets: this ensures the categories are up-to-date and allows your data to be compared with other data sources. If you are not required to report on specific demographic characteristics by your funders, we recommend using the ONS Census or National Survey for Wales categories to ensure your data aligns with other sources.
  • Remember not to collect more information than you need, and to give people the option not to respond to these questions if they don’t want to. Remember that storing personal information must be done in line with GDPR.
  • When asking about existing barriers to digital inclusion, use phrases such as barriers to ‘getting online’ and ‘doing more online’ instead of ‘digital inclusion’. Ask people to select any barriers that apply to them.

Tips for analysing the data

  • Begin with simple counts and proportions (percentages) of the type and amount of support you’re delivering in a defined period, and the number and types of people you’re reaching. If you have targets for a service or project, compare this data against set targets to evaluate the amount of support delivered, and whether any changes need to be made.
  • Use tools like MS Excel pivot tables to analyse recipient demographics and explore any differences that are surprising or interesting (see the sketch after this list), e.g. are the people receiving basic digital skills support older on average than those seeking access to data connectivity? Does this tell you something about how to tailor the support you provide to different groups?
  • Use external data sources, e.g. the Lloyds Essential Digital Skills Index, to compare the people you support with the wider population and identify any differences or gaps.
  • As you build up your data you can look at any changes over time in the amount of support being delivered and the types of people you are reaching.
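
The counts, proportions and pivot tables described above can all be produced in MS Excel. If you prefer to work in Python, below is a minimal sketch using the pandas library. The file name (registrations.csv) and column names (support_type, age) are hypothetical examples rather than part of the toolkit, so adapt them to whatever your own records contain.

```python
import pandas as pd

# Illustrative only: assumes a CSV exported from your own records, with one row
# per person supported and hypothetical columns 'support_type' and 'age'.
df = pd.read_csv("registrations.csv")

# Counts and percentages of each type of support delivered
counts = df["support_type"].value_counts()
percentages = df["support_type"].value_counts(normalize=True).mul(100).round(1)
print(pd.DataFrame({"count": counts, "percent": percentages}))

# Pivot-table style view: average age of people receiving each type of support
print(df.pivot_table(values="age", index="support_type", aggfunc="mean").round(1))
```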

Level 2: evaluating experience of support

At the second level of evaluation, the aim is to understand how people have experienced the support delivered and whether this has met their expectations or needs in relation to digital inclusion. By extending data collection from Level 1 (‘who’ and ‘what’), you can gauge support effectiveness across different demographics and types of assistance, pinpointing areas for improvement.

What to measure and how

Evaluation question

  • How have people experienced and valued the digital inclusion support they received?

What information to capture

  • What support has been received (e.g. type, number of hours).
  • Rating of support received, e.g. in terms of:
    • relevance
    • usefulness
    • overall satisfaction
  • Consider using a scale to capture each rating, e.g. a 3-point scale (poor, adequate, good) or a 5-point scale (1 to 5, very poor to excellent). E.g. “How useful has the support you have received been?”:
    1. not at all useful
    2. somewhat useful
    3. very useful
  • Brief comments on support received, e.g. by asking “Do you have any feedback on the support you have received?”.
  • Wider experiences of receiving support and recommendations for improvement, captured in people’s own words.

Sources of information

  • Delivery data or ask in a short questionnaire.
  • Short questionnaire or other rating exercise such as placing a sticker on a scale on the wall.
  • Open text question in short questionnaire, or other method e.g. writing feedback on post-its, to stick on a poster on the wall.
  • Semi-structured conversations (individual or small groups). Can be audio or video recorded, with people’s permission.

Tips for collecting the data

  • Offer diverse feedback methods: paper questionnaires, interactive activities such as asking people to put a sticker on a scale on a poster on the wall, or balls into buckets labelled with the ratings.
  • Provide options for feedback delivery, including paper or online surveys, to accommodate varying degrees of digital exclusion.
  • Find ways to make it easier for people to give honest feedback without worrying about any implications of giving negative feedback. You can do this by:
    • making questionnaires anonymous and providing a box where people can put their completed (paper) questionnaires
    • creating opportunities for people to complete questionnaires privately, e.g. in the corner of a room, or after a staff member has left the room
    • clearly communicating the purpose of the feedback: to enhance services and support
  • Conduct interviews in comfortable, private spaces, allowing time for rapport-building and using visual aids for communication if necessary. When English language skills are more limited or people lack confidence to speak, using cards with pictures (e.g. of a mobile phone or an internet shopping site) can help stimulate conversation.
  • Obtain permission before recording conversations and check afterwards if the person is happy with how you will share their story and any quotations.

Tips for analysing the data

  • Start with calculating simple counts and proportions (percentages) of type and amount of support received.
  • For ratings of support received, calculate the average score for each type of support so you can compare the value of the different digital inclusion support you provide; this will help you identify any priority areas for improvement (see the sketch after this list).
  • Pay attention to the range of scores: if the range is wide (a big difference between the lowest and highest scores), you could explore this in relation to other data you have collected about the people you’re supporting, e.g. explore variations in ratings by factors like age or barriers to digital inclusion to enhance tailored support strategies.
  • Summarise support feedback by theme, distinguishing positive, negative, and recommendations for improvement.
  • Anonymise notes or transcripts from interviews/group discussions unless permission is granted to share personal details in a case study.
  • Create summaries from each interview/discussion to capture details and quotations around:
    • reasons for seeking digital inclusion support
    • barriers faced around accessing/using the internet
    • expectations for and experience of receiving support
    • what support has enabled them to do in terms of accessing/using the internet
    • what changes this has contributed to in their lives
    • any barriers they still face or recommendations for improving support
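
If you hold your questionnaire responses digitally, the average and range of ratings for each type of support can be calculated with a spreadsheet pivot table or a short script. Below is a minimal sketch in Python using pandas; the file name and column names (support_type, usefulness_rating on a 1 to 5 scale) are hypothetical examples.

```python
import pandas as pd

# Illustrative only: assumes one row per completed feedback questionnaire, with
# hypothetical columns 'support_type' and 'usefulness_rating' (scored 1 to 5).
feedback = pd.read_csv("feedback.csv")

# Average, lowest and highest rating for each type of support, plus the range
summary = feedback.groupby("support_type")["usefulness_rating"].agg(
    average="mean", lowest="min", highest="max", responses="count"
)
summary["range"] = summary["highest"] - summary["lowest"]
print(summary.round(2))
```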

Level 3: evaluating outcomes of support

At this third level of evaluation, the focus is on measuring the impact of digital inclusion support and the changes in individuals’ lives. Select a few relevant outcomes aligned to the type of digital inclusion support you provide and what is most important to your organisation. Ideally, gather both baseline and follow-up measurements to track changes over time; this can be challenging, though, so remember it’s OK to start by collecting good follow-up data and adding it to the other data you collect, as described in Levels 1 and 2.

While it’s preferable to measure impact over time, this is difficult to do as it can be hard to keep in touch with the people you support over several months and encourage them to give feedback further down the line. It’s better to start by collecting short-term outcomes and gradually incorporate long-term measurements.

Choosing outcomes to measure

Below are suggested outcomes related to digital inclusion that you might want to use to measure the difference your support is making. Select 2 or 3 short-term outcomes to start with, which reflect the type of support you provide and your priorities as an organisation. You can then build on this at a later date to collect data on longer-term outcomes. Ideally, the outcomes you choose to measure should reflect a theory of change underpinning your digital inclusion support or project.

Short-term outcomes

  • Increased digital confidence.
  • Increased basic digital skills.
  • Increased sense of being able to stay safe online.
  • Increased access to mobile data/Wi-Fi.
  • Increased access to a digital device.
  • Increased motivation to use the internet.

Longer-term outcomes

  • Increased sense of social connection/reduced loneliness or isolation.
  • Increased ability to engage with health and care services online.
  • Increased ability to manage finances or bills online.
  • Increased opportunities for learning.
  • Increased opportunities for employment.
  • Increased sense of wellbeing.
  • Increased sense of independence.

Collecting data to measure outcomes

Follow-up only

If you only have capacity to collect follow-up data to measure impact, consider asking individuals if they’ve experienced the outcomes you’ve chosen to measure. You can utilise a brief questionnaire, expanding on Level 2’s questions, to gauge support experiences and satisfaction.

For measuring long-term outcomes, maintain contact details of the people you have supported and ensure you have their permission for follow-up. If direct contact isn’t feasible, e.g. because the people you’ve supported are no longer coming into your organisation, you may have to rely on sending out an online questionnaire to capture data on longer-term outcomes. This may restrict who feels able to respond and give feedback, and response rates may vary.

Alternatively, conduct interviews or group discussions with a select few after several months to delve into long-term impact, potentially offering incentives (e.g. free lunch or voucher) and potentially generating case studies of the longer-term impact of your digital inclusion support.

Baseline plus follow-up

If you can gather baseline data before individuals receive support, you will be able to measure the change in people’s levels of digital inclusion more accurately. This involves asking identical questions at baseline and follow-up, and comparing responses to analyse the difference between them and, hopefully, the improvement over time. Ideally, integrate baseline data collection into the collection of demographic data at registration, as described in Level 1, potentially replacing the questions on barriers to digital inclusion, as they cover similar ground.

Tips for analysing the data

Analysing follow-up data only

  • Calculate counts and proportions (percentages) for responses to your questions, e.g. number and proportion of people who have ticked ‘agree’ to each of the outcome statements.
  • If you’ve gathered data on digital inclusion barriers during registration, compare it with outcome data. E.g. at registration, 60% of people indicate that a lack of access to a digital device is a barrier to doing more online (40% indicate they have access); at follow-up, 70% of people agree that the support they have received has increased their access to a digital device. This suggests a positive impact on access: a 30 percentage point increase on the 40% who stated they had access at registration (a short sketch of this comparison follows this list).
  • To explore the link between support delivery and digital inclusion outcomes, compare support types and satisfaction levels (from Level 2) with outcome data. Analyse how varying support types or levels correlate with positive outcomes, and investigate whether people who have rated the experience of support more highly have indicated greater change in outcomes.
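
To illustrate the kind of comparison described above, here is a minimal sketch in Python using pandas. It assumes, purely for illustration, that registration and follow-up responses have been joined into one row per person, with hypothetical columns had_device_access and access_increased (recorded as 1/0) plus a 1 to 5 satisfaction rating from Level 2.

```python
import pandas as pd

# Illustrative only: one row per person, joining registration and follow-up data.
# 'had_device_access' and 'access_increased' are recorded as 1 (yes) or 0 (no);
# 'satisfaction' is a 1-5 rating collected as described in Level 2.
data = pd.read_csv("followup.csv")

had_access = data["had_device_access"].mean() * 100
access_up = data["access_increased"].mean() * 100
print(f"Had access to a device at registration: {had_access:.0f}%")
print(f"Agree support increased their access: {access_up:.0f}%")
print(f"Difference: {access_up - had_access:.0f} percentage points")

# Do people who rated the support more highly also report greater change?
# Rows are satisfaction scores; values are the share reporting increased access.
print(pd.crosstab(data["satisfaction"], data["access_increased"], normalize="index"))
```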

Analysing baseline and follow-up data

  • To measure change between baseline and follow-up, a simple approach is to calculate the average score for each rating question at both points. Compare the averages: an increase in the average score between baseline and follow-up indicates a positive impact.
  • A further step, to assess whether the difference between baseline and follow-up is statistically significant (as opposed to the ‘normal’ variance we might expect to see), is to run a t-test comparing the baseline and follow-up scores. This requires a good sample size (e.g. at least 50 responses each at baseline and follow-up). A sketch of this test follows this list.
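
As a worked illustration of the last point, below is a minimal sketch of an independent-samples t-test in Python using pandas and scipy. The file and column names are hypothetical, and the sketch assumes baseline and follow-up responses are not matched to individuals; if you can match each person’s baseline to their follow-up, a paired test (scipy’s ttest_rel) is usually more appropriate.

```python
import pandas as pd
from scipy import stats

# Illustrative only: two hypothetical CSV files, each with a 'confidence_score'
# column (1 to 5), collected at registration (baseline) and again after support.
baseline = pd.read_csv("baseline.csv")["confidence_score"]
follow_up = pd.read_csv("follow_up.csv")["confidence_score"]

print(f"Baseline average:  {baseline.mean():.2f}")
print(f"Follow-up average: {follow_up.mean():.2f}")

# Independent-samples t-test: is the difference bigger than normal variation?
t_stat, p_value = stats.ttest_ind(follow_up, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value below 0.05 is conventionally taken as statistically significant.
```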

Common barriers with evaluation and how to overcome them

Limited time to do evaluation

  • Use/build on what data you already collect: e.g. appointment information for 1-to-1 support; class registers for basic digital skills classes.
  • Build data collection into your existing activities, e.g. use an online survey as an activity within a digital skills class.
  • Keep data collection as simple as possible: asking one question well is better than asking lots of questions and not having time to analyse the data.
  • Keep analysis simple: counts and percentages of responses are often sufficient for reporting and can be done relatively quickly in Excel or similar. For qualitative data from interviews or group discussions, create short summaries of people’s experiences and support these with short, direct quotations where possible.
  • Couple simple statistics with strong stories of real experience for the most impact.

People are reluctant to participate or give their feedback

  • Make it fun: think about a creative, easy way for people to share their feedback e.g. giving them a sticker and asking them to put it on the wall to indicate their response to a question.
  • Keep it anonymous (as far as possible): think carefully about whether you really need to collect personal or demographic information. There are GDPR implications for storing personal data, and you may put some people off if you request it in a survey. If you don’t need to report on it, don’t include it in a survey (or make it optional).

Using online surveys to collect data is challenging with people facing digital exclusion

  • Although online survey methods may not be accessible for many people facing digital exclusion, they can be used effectively within digital skills training sessions, as an exercise for building people’s confidence and ability to complete surveys online.
  • It’s also important to offer other ways for people to feed back. Simple paper questionnaires that people can complete and return on site can be very effective.
  • Using more creative methods to give feedback quickly whilst on site can be helpful too, such as stickers on a poster, balls in a bucket or asking people to write a comment about the impact of the support on a blank postcard.

People may just tell us what they think we want to hear

  • Find ways to make data collection confidential/anonymous: use a closed box for people to put their responses into; don’t ask them to provide personal information; use a single, simple question to elicit feedback e.g. via balls in a bucket.
  • Make it really clear that you’re keen to improve your service and provision and want to hear about the good and the bad. Help people understand how their feedback will be used by the organisation.

Conclusion

This Digital Inclusion Evaluation Toolkit is designed to be a starting point for anyone looking to explore how evaluation can help with their digital inclusion work. It’s not exhaustive or prescriptive. Whatever your current level within the toolkit, it doesn’t mean you are more or less advanced in your digital inclusion work, or that your work has more or less meaning. Digital inclusion is a journey, and it’s going to look different from one community group to another. Every organisation will be on its own journey, unique to its needs and the needs of those it works with and supports at any one time. Learning from one another is crucial on this journey.

Digital inclusion isn’t just about technology or digital; it’s about inclusion and ensuring everyone is part of the conversation and journey. Existing digital inclusion networks can support your organisation, helping you connect, learn, and develop to ensure no one is left behind.

Get involved

National Digital Inclusion Network

Formerly known as the Online Centres Network, the National Digital Inclusion Network, coordinated by Good Things Foundation, is made up of organisations across the UK, all working to fix the digital divide. These organisations are called digital inclusion hubs and, through Good Things Foundation, have access to support and services including free digital data, free digital devices, the basic digital skills learning platform Learn My Way, and other resources.

Hubs in the Network deliver a range of free services that help people access or learn how to use the internet locally in their own communities. Joining the National Digital Inclusion Network means organisations can access the support, tools and knowledge to reach and help more digitally excluded people.