Accounting for advocacy
Lessons from recent assignments reviewing work to influence policy and practice
More and more foundations with more and more money are disbursing more and more grants to more and more non-profit advocacy groups to bring about more and more social and political change.
False hopes
The expectation, particularly amongst Monitoring, Evaluation and Learning (MEL) managers, is that provided advocacy groups deploy the right strategies targeting the right stakeholders, with the right narratives at the right time, they should achieve the right sort of results.
This may well work. But when results aren’t achieved, grantees are seen as not having been up to the job. Evaluation results - even if they do not appraise grantee performance - may well be used by portfolio managers (especially where they support prior beliefs) to discipline particular grantees, and even to withhold, reduce or divert funding away from them towards grantees that are perceived to have performed better.
This is unfortunate given that it is well documented (for instance here) that change (in terms of policy content, process, behaviour of stakeholders, attitudes or discourse) happens or not as a result of the interweaving of intentions and actions of all stakeholders concerned, shaped by relationships of power and authority.
Advocacy groups are consequently constrained and enabled by a multitude of factors and actors. So what happens might include a combination of what grantees intended, did not intend and/or did not want. ‘Poor’ outcomes may arise despite good advocacy, while limited or mediocre advocacy can contribute to ‘good’ outcomes.
The best that advocacy groups can do in this context is to make informed choices about the objectives they pursue, the audiences they target, the tactics they deploy, the allies they partner with and the narratives they use, so as to embed themselves effectively in this interweaving - engaging with and persuading people with power to seriously consider their argument - and to adapt as circumstances change.
But with most foundations’ income generated in the financial markets, they tend to be wedded to a belief that social and political problems can be solved through the market. They are also shaped by a culture preoccupied with return on investment, where the bottom line is replaced with the achievement of outcomes and the ends often justify the means.
This sweeps reality aside in the hope that non-profits are more in control than they are and can predict the future better than they can - even though this is not really how businesses grow in the private sector.
The pressure that foundation leaders are under to achieve results is often projected onto portfolio managers, grantees and evaluators/researchers, who in turn come under pressure to find evidence of impact and demonstrate value for money.
Working under excessive pressure, especially from funders, can make reflection amongst grantees during ‘delivery’ more difficult, get in the way of good decision making and promote a ‘rush to action’. Grantees might prioritise visible and tangible strategies, such as expanding social media reach and securing media coverage, over more subtle (albeit important) shifts in attitudes and behaviours amongst key mid-level policymakers behind closed doors.
Generating value for money analysis is hugely challenging. For instance, even if disaggregated expenditure data exists across the portfolio, comparing figures across organisations and countries can mislead: partners are often paid different amounts, partly due to the cost of living and the purchasing power of the local currency, and grantees often account for their activities and investments in different ways. Moreover, quantifying in money terms outputs such as policy briefs and media campaigns, or outcomes such as changes in policy, is difficult and requires an array of assumptions.
More generally, evaluation processes can cease being an opportunity to reflect and learn and instead become an auditing exercise. Grantees feel pressure to exaggerate the contributions they make to outcomes and to avoid being frank about some of the challenges they face in undertaking high-quality advocacy, for fear this might be used against them. They might also be reluctant to introduce evaluators to their key stakeholders, uncertain about what those stakeholders might say about them.
How might foundations learn about their advocacy portfolios?
So if you’re a foundation leader and are convinced by this argument - that advocacy groups are ultimately limited in what they can achieve and that their contribution to outcomes is hard to disentangle - how might you go about measuring the effectiveness of your advocacy portfolio(s)?
I suggest that foundations emphasise the quality of grantees’ advocacy processes in addition to the contributions they make to outcomes.
Work with grantees to explore how and why they made the choices they made. This includes finding out how they defined the problem they were working on and/or the sort of change they pursued; the extent to which they explored how change would arise; how well they understood key stakeholders and the relationships that shaped them; the sorts of changes required amongst specific stakeholders; the tactics they would have to deploy to bring these about; and the feedback and response systems in place to learn from and improve their decision making.
In addition, I would suggest foundations emphasise individual qualities as well as organisational dynamics.
In complex, unpredictable contexts, where staff don’t have all the answers, and where what works in one place does not work in another, grantee staff need to be comfortable with messiness and uncertainty and give weight to local knowledge and feedback.
Moreover, serendipity or chance often play a large role in, for instance, creating (or closing down) policy windows. Campaigners can rarely distinguish in advance actions that might be ‘good’ or ‘bad’.
Actions that are taken are usually a matter of practical judgement, informed by ‘rules of thumb’ and discussion with a variety of stakeholders, and shaped by certain values or beliefs about what is ‘good’ as well as by experience of past processes and events.
So to what extent then are staff curious and humble about what they know/can do, self-aware and open to a diversity of perspectives? Are they reflective about how change is or is not happening and why, and reflexive about their own role? With regard to rules of thumb, to what extent do they remain tacit and unquestioned? Or are they explicitly tested and improved upon?
Given the likelihood that influencing plans are shelved and/or rewritten, staff will need the space to experiment, learn and adapt. Organisational dynamics usually shape what individuals can and can’t do. So, does the organisation (or a group within it) promote risk-taking practices? Is the organisation willing to embrace new ideas and to ‘think out of the box’? Are experimenting, risk taking and failure applauded or criticised? Are people rewarded for their ability to conform or to disrupt? And do staff (and volunteers where they are recruited) have the space to apply their deep-rooted understanding of the context to an initiative?
Furthermore, where advocates are working for an international organisation, the nature of the linkages between that organisation’s representation in country and its headquarters will to some extent influence what on-the-ground campaigners can and can’t do (and who they can and can’t work with), what accountability requirements they face, and whether and how additional resources and support might be used for policy work. Hence understanding the context within an international organisation is just as important as understanding and engaging with the national/local context.
These sorts of questions are hard for an evaluator with no relationship with grantee staff to come in and answer after a campaign. So, in addition to conducting larger-scale assessments on a periodic basis, I suggest foundations provide accompaniment support to grantees in the form of a trusted learning partner.
While the learning partner would generate ongoing monitoring and learning data to inform formal evaluative assessments, they would also accompany grantees in the role of a critical friend: supporting them to learn and improve, surfacing difficult questions and issues, helping to unblock persistent problems and strengthening MEL capacities.
Ultimately, foundations need to change the expectations they have of their grantees and be more curious about their advocacy processes, organisational dynamics and the qualities of their staff.