Social Investment Series Post #4: How to measure effectiveness and outcomes within a Social Investment approach
The phrase “understanding what works” has been commonly used in relation to social investment.
The Social Investment Agency's (SIA) website states that "Social investment involves...setting clear, measurable goals and focusing on what works".
In her speech at the Social Investment Conference in Nov 2024, Minister Willis stated that her vision for social investment was one “where we use data to learn more about what works for who, and then ruthlessly hold ourselves to account for changing more and more lives for the better”.
In reality, understanding what works means understanding whether a service is effective, and if so, for whom. Asking this question also allows us to explore who a service isn’t working for and why this might be the case.
Understanding effectiveness requires not only assessing whether intended outcomes are being achieved (and for whom), but also considering attribution – i.e. is the change due to the service itself, or would it have happened anyway?
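To make the attribution question concrete, here is a small, purely illustrative calculation (the numbers are invented). A headline outcome rate on its own cannot tell us whether a service worked; we also need an estimate of what would have happened without it.

```python
# Minimal sketch of why attribution matters, using made-up figures.
participants_employed = 0.60   # hypothetical: 60% of participants in work a year later
comparison_employed = 0.52     # hypothetical: rate for a similar group who did not take part

# The naive reading ("60% got jobs, so the service works") ignores the counterfactual.
estimated_impact = participants_employed - comparison_employed

print(f"Participant outcome rate:        {participants_employed:.0%}")
print(f"Estimated counterfactual rate:   {comparison_employed:.0%}")
print(f"Impact attributable to service:  {estimated_impact:.0%}")
```

In practice, constructing a credible comparison group is the hard part – and it is exactly what robust evaluation methods are designed to do.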
These questions speak to the heart of Social Investment. Understanding effectiveness is essential for service providers who want to do more of what is working and less of what isn’t. Answers to these questions are also important for commissioners and philanthropic investors who want to ensure their resources are being allocated in an optimal and highly impactful way.
Under a Social Investment approach, data and analytics will be vital tools for measuring and understanding effectiveness. This blog post explores the techniques that can be deployed to answer these questions with the rigour that is required.
How does the Social Investment approach differ from the status quo?
The concept of measuring effectiveness is not new.
There are many well-accepted approaches to measuring effectiveness, alongside a widely recognised hierarchy of evidence that ranks the quality of different research methods. There are also numerous evaluation experts within New Zealand who have been undertaking outcome and impact evaluations for decades. Further, commissioners often require that investments are coupled with some form of outcome or impact evaluation, with understanding effectiveness a key focus of these evaluations.
So, while understanding effectiveness is not unique to Social Investment, there are specific things we expect will happen under the Social Investment approach.
Firstly, it will most likely require more robust methods for measuring effectiveness – or, at the very least, a greater focus on understanding the impact of government investments.
Secondly, a greater focus on outcomes – which are often long-term outcomes – may require different tools or methods.
Outcome-based contracting will require more clarity and agreement on outcomes
One of the key drivers of the increased focus on understanding effectiveness is that the Social Investment approach will likely require the contracting of outcomes between service providers and government agencies.
This differs from the status quo. Service contracts are typically paid out based on output targets being met or services being delivered in a certain way. This is often because these elements are easier to define and simpler to measure.
With outcome-based contracts, payments to service providers are based partially or wholly on outcomes being met.
Before any service contract is entered into, parties should be clear and agree on:
What the outcomes are
How the outcomes will be measured (e.g. what analysis methods should be applied)
What data sources will be used to support this measurement
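To illustrate, the three elements above could be recorded in a simple, structured form before a contract is signed. The sketch below is hypothetical – the structure and field names are ours, not a prescribed standard – but it shows the level of specificity parties should aim for.

```python
from dataclasses import dataclass

# Hypothetical structure for recording what was agreed before contracting.
# Field names and example values are illustrative only.
@dataclass
class OutcomeAgreement:
    outcome: str                # what the outcome is
    measurement_method: str     # how the outcome will be measured
    data_sources: list[str]     # what data will support the measurement

example = OutcomeAgreement(
    outcome="Sustained employment 12 months after programme completion",
    measurement_method="Employment rate compared with a matched comparison group",
    data_sources=["Provider enrolment records", "Linked administrative employment data"],
)

print(example)
```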
Our next blog post will go into more detail about how data, evidence and analytical tools can be used in practice to measure effectiveness.
One of the benefits of this upfront outcome-based thinking is that it will likely improve the quality of future outcome or impact evaluations.
Unfortunately, these evaluations are often considered late in the piece, when service delivery is already underway. This means that there is less upfront focus on the three elements outlined above.
This is especially true of decisions about which analysis methods should be applied and of the collection of data throughout the delivery of a service. These are difficult to do retrospectively, and as a result evaluators may be limited in their ability to apply a more robust approach to understanding effectiveness.
There is an opportunity to better leverage the value of administrative data
As mentioned, the second potential difference is that the greater focus on medium to longer term outcomes may require different tools or methods to be applied.
The aim of social services is to change the lives of individuals, whānau and communities for the better.
However, the conundrum is that this benefit may only be realised long after a service is received.
One way to understand the connection between a service and longer-term impacts is to draw on other datasets. For example, an agency's administrative data or data contained within Stats NZ's Integrated Data Infrastructure (IDI) could be useful for this purpose.
This information could be used to:
Understand participants' experiences beyond the service delivery period (noting this requires waiting long enough to see those results). Examples include an intervention supporting students to achieve NCEA Level 1, with employment as the longer-term outcome, or breast cancer screening, with higher survival rates as the outcome.
Link outcomes across areas like education, employment, and health.
Include cultural and wellbeing measures, through the use of surveys like Te Kupenga. A good example of this is our work with Te Hiringa Mahara, where IDI data was used to capture a range of wellbeing outcomes across domains.
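As a purely hypothetical illustration of the kind of analysis this linking enables, the sketch below joins a made-up participant list to a made-up outcomes table and compares outcome rates for people who did and did not receive a service. It is not how the IDI itself is accessed – real IDI analysis happens inside Stats NZ's secure environment, under its confidentiality rules and output checking.

```python
import pandas as pd

# Illustrative only: link a hypothetical participant list to a hypothetical
# administrative outcomes table, then compare outcome rates between groups.
participants = pd.DataFrame({
    "person_id": [1, 2, 3, 4, 5, 6],
    "received_service": [True, True, True, False, False, False],
})

outcomes = pd.DataFrame({
    "person_id": [1, 2, 3, 4, 5, 6],
    "employed_12_months_later": [True, True, False, True, False, False],
})

linked = participants.merge(outcomes, on="person_id", how="left")
rates = linked.groupby("received_service")["employed_12_months_later"].mean()
print(rates)  # outcome rate for the service group vs. the comparison group
```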
The use of these datasets comes with challenges – for instance, ethical concerns around social licence, and the technical skill and IDI research capability required to use them. However, Nicholson Consulting are currently collaborating with Stats NZ and the Social Investment Agency on our Code Modules project, which will make the IDI more accessible and consistent.
Additionally, there may be practical issues for shorter-term programmes that cannot observe outcomes over the medium to longer term. In these situations, other techniques may be applied in which proxies or existing research are used to demonstrate effectiveness.
In most cases, the additional effort to measure outcomes is worth it
We understand that the need for more in-depth measurement of outcomes can place an additional burden on participants and providers.
However, we are also aware that this information is ultimately what service providers want, and (in most cases) the benefit derived outweighs the additional cost.
From our discussions with providers, there is a strong desire to learn more about what is working, and what they may wish to change or stop.
Data will only take us so far in this analysis, but coupled with more qualitative techniques it can provide service providers with powerful information they can apply to eligibility considerations, how they advertise their services, and their approach to service design and delivery.
It is also important to consider the ethical implications of not properly evaluating a service or programme.
Participants could be exposed to unwanted impacts, receive services that don’t align with best practice, or lose trust in the system. From the perspective of investors, the opportunity cost is that they could have reinvested in other, more effective services.
What we need is a systems thinking approach
There are still several questions that will need to be addressed by the Social Investment Agency or individual agencies to make this work in practice.
Some key questions for them to consider include:
Who will be responsible for paying for such analysis to take place? Will this be incorporated as part of the funding package? Or will the contracting agency wear the cost?
Will standardised methods be required? Will a library of indicators be made available? Both could streamline processes and enhance consistency and comparability across different services and outcomes.
Who will be responsible for undertaking the analysis? It requires time, technical expertise and access to relevant data, and agencies may be best placed to support it.
With further maturity in systems thinking, we are hopeful that more focus on understanding effectiveness will lead to improved service delivery and investment practices within Aotearoa.
With better services and smarter investment practices, we will be well positioned to drive better outcomes for our communities.
Are you or your organisation thinking about how to know if your investment or service is effective? We’d love to kōrero! Reach out to us at hello@nicholsonconsulting.co.nz