For those of you who are not Star Trek fans, there was a fabulous episode in which the crew of the Enterprise were swamped with little furry creatures called Tribbles, whose numbers multiplied throughout the episode, causing mild hilarity and concern for the crew. (The Tribbles also make a surprise but welcome return in the latest Star Trek film.)
Now to answer the question in the title: there is a major difference between a Tribble invasion and patient feedback, namely that Tribbles multiplied rapidly and exponentially whereas patient feedback… unfortunately does not.
To get an idea of activity, I scraped every GP practice in England and Wales from NHS Choices and stored details of patient rating frequency in a simple MySQL database. From this database I was able to do some simple analysis.
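The scrape-and-store step can be sketched roughly as below. The page layout, field names and table schema here are all my own invented stand-ins (the real NHS Choices markup is different, and I've used SQLite in place of MySQL so it runs self-contained); the sketch works against an embedded sample page rather than the live site.

```python
# Hypothetical sketch: extract per-practice rating counts from a page
# and store them for analysis. Markup and schema are invented.
import re
import sqlite3  # stands in for MySQL here; the SQL is near-identical

SAMPLE_PAGE = """
<div class="practice" data-ods="A81001">
  <span class="name">Example Surgery</span>
  <span class="rating-count">12</span>
</div>
<div class="practice" data-ods="A81002">
  <span class="name">Another Practice</span>
  <span class="rating-count">0</span>
</div>
"""

def parse_practices(html):
    """Pull (ods_code, name, rating_count) triples out of the page."""
    pattern = re.compile(
        r'data-ods="([^"]+)".*?class="name">([^<]+)<.*?'
        r'class="rating-count">(\d+)<',
        re.S,
    )
    return [(ods, name, int(n)) for ods, name, n in pattern.findall(html)]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE gp_ratings "
    "(ods_code TEXT PRIMARY KEY, name TEXT, rating_count INTEGER)"
)
conn.executemany("INSERT INTO gp_ratings VALUES (?, ?, ?)",
                 parse_practices(SAMPLE_PAGE))

count, total = conn.execute(
    "SELECT COUNT(*), SUM(rating_count) FROM gp_ratings"
).fetchone()
print(count, total)  # 2 practices, 12 ratings in the sample
```

In practice a proper HTML parser (e.g. BeautifulSoup) is a better choice than regular expressions; the regex is just to keep the sketch dependency-free.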
I collected data on how many GP practices had received a rating… simple as that. I also collected the rating scores themselves, but as will become clear, the small number of ratings at the time of the scrape meant that in most cases it would be difficult to derive anything from the scores; only the overall count of people leaving a rating was usable.
With the NHS Choices ratings, I discovered that on average 3.6 ratings had been left per GP practice (varying between 0 and 120), with an average response rate of 0.089% of each practice's population.
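The summary statistics above amount to two simple calculations over the scraped table, sketched here with made-up numbers (the real figures came from the database, and a handful of heavily-rated practices skews the mean upwards, as the 0–120 range suggests):

```python
# Illustrative data only -- rating counts and practice list sizes
# are invented, not the scraped figures.
ratings_per_practice = [0, 2, 5, 120, 3, 1, 0, 4]
list_sizes = [7000, 5500, 9000, 12000, 4000, 6500, 8000, 3000]

# mean ratings left per practice
mean_ratings = sum(ratings_per_practice) / len(ratings_per_practice)

# response rate: ratings left as a share of each practice's population
response_rates = [r / p * 100 for r, p in zip(ratings_per_practice, list_sizes)]
mean_response = sum(response_rates) / len(response_rates)

print(f"mean ratings per practice: {mean_ratings:.1f}")
print(f"mean response rate: {mean_response:.3f}%")
```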
Going back to my attempt at humour, I believe patient feedback systems are built in the hope that there will be the equivalent of a Tribble invasion, with feedback pouring in and expanding rapidly. However, as we can see, this is not the case. We have yet to develop the perfect system and process for getting value from feedback, online and offline.
I would argue, though, that while there has been a scramble to use web technologies to get the patient's voice heard, one key area has been left alone. We have yet to see innovative analysis of the many forms of patient feedback already collected, such as focus groups, emails and letters. In any one GP practice there will be Post-it notes, MS Word documents and spreadsheets full of comments and feedback. What is missing is analysis of this material, generating key themes and meaningful sentiment measures.
There are numerous tools that can be used to analyse text and provide insights from large text-based datasets, ranging from NVivo through to Python programming, e.g. Natural Language Processing via libraries such as NLTK.
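NLTK ships proper lexicon-based sentiment tools (VADER's `SentimentIntensityAnalyzer`, for instance, once its lexicon is downloaded); the toy scorer below just illustrates the underlying idea without any downloads. The word lists are my own invention, not from any real lexicon:

```python
# Toy lexicon-based sentiment scorer: the same basic idea NLTK's VADER
# implements far more carefully. Word lists here are invented.
POSITIVE = {"helpful", "friendly", "excellent", "clean", "quick"}
NEGATIVE = {"rude", "slow", "dirty", "late", "unhelpful"}

def sentiment(comment):
    """Crude score: +1 per positive word, -1 per negative word."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("The receptionist was friendly and helpful"))            # 2
print(sentiment("Appointments always run late and staff were rude"))     # -2
```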
My focus at the moment has been developing a Python program that can analyse various forms of text to categorise comments, measure some form of sentiment, and distinguish between issues relating to service provision. For example, a recent project required me to design a range of directories against which to measure thousands of comments, categorising each comment against set criteria (service-related comments, transport issues, marketing issues, plus comments about people or places). This form of analysis would enable NHS service providers to feed comments through and create triggers, reports and deeper insight. It could also power new online comparison services through sentiment tagging. I would like to see whether this could be applied in the healthcare context.
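The directory-based categorisation described above can be sketched as follows: each "directory" is a set of trigger terms, and a comment is tagged with every category whose terms it mentions. The categories and term lists here are illustrative stand-ins, not the real criteria from the project:

```python
# Hypothetical directories of trigger terms -- illustrative only.
DIRECTORIES = {
    "service": {"appointment", "waiting", "referral", "prescription"},
    "transport": {"parking", "bus", "car", "access"},
    "people": {"doctor", "nurse", "receptionist", "staff"},
}

def categorise(comment):
    """Tag a comment with every category whose terms it mentions."""
    words = set(comment.lower().replace(",", " ").split())
    return sorted(cat for cat, terms in DIRECTORIES.items() if words & terms)

print(categorise("Could not get an appointment and the parking is awful"))
# ['service', 'transport']
print(categorise("The nurse was lovely"))
# ['people']
```

A real version would need stemming and phrase matching (so "appointments" and "book an appointment" both trigger the service directory), which is where NLTK earns its keep.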
At present, patient ratings have struggled to make the impact that would allow comparisons of competing services to take place, mainly because patient feedback and the responses from service providers are quite a nebulous mix of information to sift through, both for providers and for users. More innovation is required.
It’s not all doom though: patient feedback services play an important role in the transparency and choice agenda, as (spoiler alert!!) it just takes one Tribble to save the day!!