May’s Care Conversation heard from Care Quality Commission Chief Executive David Behan on the regulator’s future direction
“There are three main things I’ve been doing in the 20 months since I took on this job,” David Behan told Care Conversation delegates. “And they all relate to how an organisation that’s been troubled sets about re-defining what it’s there for.”
The first of these was around purpose, he said – “why does the CQC exist?” This was a “bitterly contested” area, and part of the ‘buffeting’ the CQC had experienced was because it had not been clear about its own purpose. “Put simply, our purpose is to make sure that health and social care services provide people with safe, compassionate, high-quality care, and to encourage improvement,” he told the seminar. “But it’s not the regulator that puts quality into services. The most we can do is hold a mirror up to that service to see if the quality is there. Our reports exist so that husbands and wives, sons and daughters, friends and neighbours can see what that service is like.”
This raised the question of where a regulator sat in the ‘improvement landscape’; organisations that had become isolated and inward-looking would clearly need help to improve. “We’re a regulator – we license people to enter the market, we monitor, we inspect. Therefore we need to work out how we work with others – what’s the information and intelligence that needs to be shared?”
The second key activity was around people, he told delegates. This had meant looking at whether there were the right people in the organisation to deliver its purpose, and had led to some changes at senior level. “I actually think the CQC was under-managed,” he said. “We needed a much more considered approach to getting the right people.”
It had also been vital to address not just the capacity of inspectors but their capability. “Our inspection teams will include clinical experts as well as CQC staff,” he stressed. “Our workforce will become bigger – not just our directly employed workforce but the people working in our name – and to this end we’re investing heavily in our academy to make sure inspectors are equipped with the right knowledge and skills.”
The relationship with providers in the past had been both too close in some ways and too remote in others, he said. “It needs to be a respectful relationship – we’re not there to catch people out. What a ratings system is about is recognising the good and outstanding, and getting the poor and dangerous service provision out of the market – it’s push/pull. Saying ‘you meet/don’t meet this standard’ is pretty easy – you need to be able to arrive at a judgement. We’re there for the people who use the services. We’ll help providers, but that’s a by-product of what we do.”
The third key activity was around values, he stated. “To say that I inherited an organisation that was bruised is an understatement. But I’m less interested in what our values have been in the past and much more in what kind of organisation we want to be, and how we go about that.”
Part of this was about re-designing inspection methodology – moving away from a generic model towards something much more specialised, from individual inspectors towards teams, and from compliance – “pass or fail” – towards judgement. “There are five key questions we ask about services: are they safe, are they effective, are they caring, are they responsive, and are they well led?” he said. “What gets talked about less is how we become intelligence-driven in our work – how we prevent another Mid Staffs.”
A range of quantitative and qualitative tools could be used, he said, including staff satisfaction surveys. “Surprise, surprise, if you don’t have satisfied nurses then you don’t have satisfied patients, and junior doctors will also flag up issues. Metrics like these aren’t definitive, but they can be used to inform our inspection plans and key lines of enquiry.”
The plan was for all inspections to be informed by this kind of intelligence, although the quality of the available information inevitably varied widely, he said. “So there’s the five key questions, intelligence, monitoring – these then drive whether any action is required, and our ratings system.
“We’re at the beginning of a journey,” he told the seminar. “We now need to start applying and rolling out these methodologies. It will begin to look and feel different from October this year, but not all services will be rated in this way until March 2016. But we’re not doing this for anybody’s greater good apart from the people who use these services and need that quality assurance.”