User Research Meetup – UR and Performance Analysis (…long but excellent…)

Alison from the Office for National Statistics – “Understanding our surveys and how people complete them”:

Paper surveys are problematic. There is a three-month lead time to change a form, and forms are often returned incomplete. It makes sense to reduce the burden on people, and on businesses in particular, because paper surveys are time intensive. Online surveys can still throw up error messages.

Performance analytics provides insight into how users progress through a survey and can show when they leave it early. Useful signals include:

  • linger time
  • total page views
  • errors
  • timeouts
  • exits
  • saves & returns
  • page print requests

These analytics can help identify the pain points in a survey. Pain points occur when the survey:

  • asks for financial information, such as expenditure on R&D
  • asks for the proportions of expenditure under certain categories
  • asks questions which look the same when skimmed or scanned but want figures for different years

Learnings from this:

  • Understand the need from the statisticians and compare it to the need of the user. Use less front-end validation on survey fields and cleanse the data behind the scenes: don’t ask for proportions or percentages, ask for the raw data (see the sketch after this list).
  • Users don’t seem to read guidance; ask better questions so that people can answer correctly without referring to guidance.
  • Errors seem to occur when questions start with the same introduction.
  • Users don’t always act the way you expect. More people leave a survey when asked about their home heating than when asked about their sexual identity! (Though this may be the small group of people likely to be offended by the question exiting the survey – the exit data isn’t enough to determine the cause – contribution from the floor.)
  • Users generally want to print things at the end of surveys.
  • Users like confirmation of completion.
  • Users may like to prepare the information they need before they begin a survey.
  • Users make a lot of use of the back button; this may be to avoid answering demanding sections of a survey.
  • SPSS can be coded to read free text (contribution from the floor)
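
As a concrete illustration of the first learning, here is a minimal sketch of cleansing behind the scenes: collect raw expenditure figures and derive the proportions the statisticians need on the back end. The field names and figures are invented for illustration – nothing here is from the ONS talk.

```python
def derive_proportions(raw_spend):
    """Derive category proportions from raw expenditure figures.

    Respondents enter raw amounts (easier to answer accurately);
    the percentages the statisticians want are computed afterwards.
    """
    total = sum(raw_spend.values())
    if total == 0:
        return {category: 0.0 for category in raw_spend}
    return {category: amount / total for category, amount in raw_spend.items()}

# Hypothetical survey response: raw figures, not percentages.
response = {"staff": 120_000.0, "equipment": 30_000.0, "software": 50_000.0}
print(derive_proportions(response))
# {'staff': 0.6, 'equipment': 0.15, 'software': 0.25}
```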

John Waterworth – Potential problem we face with the imminent arrival of GDPR:

You may only be able to use performance data to analyse how well your service performs if you state upfront that the data is being collected for that purpose. Otherwise, the service may not be able to use the data it collects.

Third party survey tools may not enable this level of analytics.


Richard from HMRC Digital Data Academy – “Using analysis to turn Data into Wisdom”:

Roles:

  • Performance analysts – embed them in the agile teams at the discovery stage of any service, because the nature of the data we need to collect may change as the service develops.
  • Skills and Capability – we need a team to teach our people the skills they need; this ended up being filled by academics who deliver bespoke learning for individuals (contact the team for details of free courses).
  • Data Science – bringing in experts to create the right environments and write bespoke code to do tasks like text mining effectively.
  • Economics – we need to understand the cost per transaction for our services. Most of our costs are set-up costs, so we need economists to understand the total cost of ownership of each service (see the worked example after this list). Digital services have not reduced call centre traffic.
  • Statistics – we need to forecast the likely surges in demand for services such as tax return filing and segment the Users into agents and citizens.
  • Qualitative Analysis and Social Media – academics have been recruited to understand the gaps in the data collected to prevent phishing scams, particularly misuse of government logos. This function provides insight into the voice of the user by mining social media. Twitter conversations are very different from Facebook conversations, and there is a move towards images and away from text. There are nine subgroups which discuss government taxation; four are outside of their control and use bots to publish damaging information. Social media analysts can deal with the bias those bots bring.
  • Recruitment – bring it in house to match the calibre of people you want to recruit.
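
As a worked example of the economics point above, here is a minimal sketch of amortising a service’s set-up cost into a cost per transaction. The figures are invented for illustration and are not HMRC numbers.

```python
def cost_per_transaction(setup_cost, annual_running_cost,
                         annual_transactions, years_of_ownership):
    """Total cost of ownership divided by total transactions.

    Set-up costs dominate, so the answer depends heavily on how many
    years of ownership they are spread across.
    """
    total_cost = setup_cost + annual_running_cost * years_of_ownership
    total_transactions = annual_transactions * years_of_ownership
    return total_cost / total_transactions

# Invented figures: £2m to build, £250k/year to run, 500k transactions/year.
print(cost_per_transaction(2_000_000, 250_000, 500_000, 1))  # 4.5 in year one
print(cost_per_transaction(2_000_000, 250_000, 500_000, 5))  # 1.3 over five years
```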

Recruit through academic publications as well as Civil Service Recruitment. Being digital is about being disruptive, and this is a disruptive approach to civil service recruitment. Has Civil Service Jobs become an echo chamber?


Louise and Lorna from data.gov.uk – “Digital inclusion and data”:

Data.gov.uk started in 2010 to make government more transparent by publishing open data under the Open Government Licence (OGL). One example is LIDAR – lasers fired from a plane – which gives insight into topography for decisions such as where to site radio masts.

Measuring digital data skills is different from measuring digital inclusion: topic knowledge vs digital skills. Data literacy has a massive impact on users’ ability to extract meaning from data. The workarounds needed to access meaning may reside in a team, not an individual; one person may find the data but need another to understand it.

Lorna and Louise have iterated two versions of a data digital inclusion scale.

We need to be clear about the definition of data, because it affects how users respond to it. The people who talk about learnings from data may not be the people who crunch that data.

They have created some Data Resource Cards, broken into sections explaining how each type of user may use the data. These are really excellent.

We may need to consider how we enable Users to visualise the data.

This resonates with a blog I wrote on technophobia and the digital inclusion scale. Government may end up limiting access to data that is not published under an OGL, because those who want access may be suspicious about sharing their data with government in order to gain it.

@Loup73 and @Lorna_tang

John Waterworth:  You may be confident about using digital in general, but your confidence in engaging with a particular service may be different.   This may be related to anxiety about, for example, applying for Asylum or a particular benefit.


Stephen from GDS – “Using BigQuery to Visualise User Journeys”:

A common request to analysts is to explain how people move through a website. This is best answered with a visualisation: visualisations help others understand the data and get excited about it. BigQuery helps make this process easier:

You can paste the table output of a BigQuery query into Google Sheets (don’t use the automated help for this). Use the SankeySnip add-on and it will create a lovely visualisation of routes through a service for a cohort of users; this can also be done for individuals. JSFiddle enables more detailed use. Stephen will be publishing this as a blog post in the near future.
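
For a rough idea of the query involved, here is a sketch assuming the standard Google Analytics export to BigQuery (the ga_sessions_* tables). It counts page-to-page transitions per session, which is the source/target/count shape that Sankey tools expect. The project, dataset, and date are placeholders, and this is not Stephen’s actual code.

```python
from google.cloud import bigquery

# Count page-to-page transitions per session from a GA export table.
# Table name and date are placeholders; adjust for your own export.
SQL = """
WITH pages AS (
  SELECT
    CONCAT(fullVisitorId, '-', CAST(visitId AS STRING)) AS session_id,
    h.hitNumber AS hit_number,
    h.page.pagePath AS page
  FROM `my-project.my_dataset.ga_sessions_20180101` AS s,
       UNNEST(s.hits) AS h
  WHERE h.type = 'PAGE'
),
transitions AS (
  SELECT
    page AS source,
    LEAD(page) OVER (PARTITION BY session_id ORDER BY hit_number) AS target
  FROM pages
)
SELECT source, target, COUNT(*) AS journeys
FROM transitions
WHERE target IS NOT NULL
GROUP BY source, target
ORDER BY journeys DESC
"""

client = bigquery.Client()
for row in client.query(SQL).result():  # rows paste straight into a sheet
    print(row.source, row.target, row.journeys)
```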

Can you analyse all the possible user routes between A and Z and their frequency of use? This may help with the interpretation of exit surveys by showing the journey that led to the feedback. You could also track transactions between sites – this is much easier if the data is gathered in one central Google Analytics account.

TfL are now able to understand all the different routes taken by users between two points.


Mike from DWP – Reconciling UR evidence and analytics evidence:

UR evidence and analytics can sometimes conflict. Does this mean that one or the other is wrong? It usually means that the ways we articulate the findings are at odds with each other.

Analytics is probability sampling – everyone has an equal chance of being sampled – and it’s designed to answer questions about how often. Using a change as a marker means that a large sample taken before and after the change should represent the entire population.

UR – selecting people to get a broad range of coverage of situations, circumstances, and views; it’s designed to answer questions about why.

How should we articulate findings to avoid conflicts?

User Researchers should use language in the style of “People from all/a wide range of backgrounds/capabilities struggled with this”.

Analytics either supports this or suggests there’s another characteristic driving it that we’ve not captured in UR or the analytics.

John Waterworth: There’s a big difference between saying “our findings are that most…” and saying “some of the 10 users we met showed this characteristic.” However, if we work with 15 users and they all struggle with the same thing, then there is a thing to resolve. In Government we cannot ignore statistically insignificant groups of people. All users have a right to access the service.
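
On the statistics behind this discussion, here is a minimal sketch of how an analyst might check whether a before-and-after difference in a completion rate is statistically significant, using a two-proportion z-test. The counts are invented for illustration.

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented figures: completions out of sessions, before and after a change.
completions = [4_200, 4_650]  # before, after
sessions = [10_000, 10_000]

stat, p_value = proportions_ztest(count=completions, nobs=sessions)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the completion rate really did change;
# a large one means the difference could easily be noise.
```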


Retro:  “If you torture data long enough, it will confess to anything you like.”

In groups, thinking about the last year, what have you liked, learned and longed for in the relationship between User Researchers (UR) and Performance Analysts (PA)?  Come back with 5 themes:

Common themes within the themes:

  1. Data can be messy and difficult – visualisations help the team’s understanding.
    • Telling compelling stories matters, but the quality of the presentation can skew decisions (see Edward Tufte’s argument that PowerPoint slides contributed to the Columbia disaster). (John – GDS)
    • Having both disciplines on the team can help wider understanding of the findings of PA and UR, and avoid teams failing to engage with the data (Katie – GDS). Involve the whole team in analysis. (John – GDS)
    • Presenting headline findings can alter the perception of senior managers. (Richard – HMRC)
  2. Neither UR nor PA can answer all the questions by themselves; the disciplines are complementary.
    • Raising good questions matters irrespective of the discipline. (Katie & Peter – GDS)
    • UR and PA should plan to work together to develop compelling evidence of the performance of a service to show improvements (Peter – GDS).
    • Smart Regulation matters – if you have a blunt KPI it will influence behaviour. (Richard – HMRC)
  3. More time is needed for the two disciplines to work together, to develop mutual professional understanding and appreciation of each other’s contributions, and to improve communication between them and with the wider team.
    • One practical way to encourage this is quarterly missions. The team should specify at the start the evidence they will gather to prove the success of the mission; this involves the PA much earlier in the process. (Peter – GDS)
    • There needs to be a shared understanding of the level of evidence required and process to initiate a change to a design.  Statistical significance matters. (Mike – DWP)
    • Bringing the PA into the team might lead to situational bias; a PA might well be able to work across several projects (from the floor). Peer review might help: UR and PA get their analysis reviewed by a peer. (Richard – HMRC)
  4. Embed PAs in teams from Discovery onwards. Constructive alignment should mean that the PA can help to build a service whose performance is easy to measure.
    • Draw people in with PA and persuade them with UR. Strategic decisions should require more evidence than decisions based on marginal gains. (Katie – GDS)

