When I was in Kenya in 2012, I remember a sense of excitement about the government’s plan to introduce program budgeting. This is not the kind of thing that elicits heart flutters outside of the public finance community, but it can certainly quicken the pulse of some of our PFM colleagues.
To be honest, I was not sure that program budgeting was going to work in Kenya, but I thought that it would be a step toward achieving two things the country’s budget process badly needed.
First, it would force the government to explain itself. A program budget implies at least a narrative around the budget that tries to justify the allocation of funds to specific areas. It is hard for citizens (and even legislators) to engage in reasoned debates with government about a bunch of tables without a clear sense of what the government is trying to do with the money represented by the figures in those tables.
Second, it could prompt a more rigorous attempt to define "what the government is actually trying to do" in terms of concrete outputs. Program budgeting demands at least some attempt to measure, with indicators and targets, what government is producing with the money it receives. While imperfect, such performance information is also essential to any good conversation about budget priorities.
Although program budgeting in Kenya did amount to a step forward in both these ways, it was also riddled with inconsistencies. Programs were constantly in flux, outputs and objectives were hard to understand or track, and performance indicators lacked baselines and seemed to change in random ways from one budget year to the next. I left Kenya at the start of 2018, but these challenges continue, as documented in analysis carried out as part of our budget credibility initiative by the Institute of Public Finance (IPF-Kenya).
Program budgeting, and more broadly the inclusion of performance information in budgets, is not unique to Kenya. Versions of these reforms have been tried around the globe, from the rich countries of the OECD to a wide array of middle- and lower-income countries in Latin America, Africa and Asia. Recent estimates suggest as many as 80 percent of African countries are in some stage of program budgeting reform.
In a paper prepared with the World Health Organization last year, we documented a number of challenges facing countries that have tried to implement program budgeting, focusing on the health sector. This work raises concerns about the quality and consistency of the performance information, and particularly the performance indicators, included in government health budgets. We highlighted this issue in our four country case studies from Brazil, Indonesia, Mexico and the Philippines.
But the more I have looked at the quality of performance information around the world, across different sectors and countries, the more depressed I become. It is simply impossible in many countries to make any sense of the performance information in the budget, which regularly contradicts information in other sources, or cannot be linked to spending information due to inconsistencies in names, indicators, targets, baselines, and so on.
Our recent assessment of irrigation budgets in five countries confirmed these findings not only in Kenya and Brazil, but also in Albania, the Dominican Republic and Mozambique. To take just one example from the Dominican Republic, a target for water flow varied across four different documents: 716.76 m³/s in the sector agency's 2013-2017 strategic plan, 207.75 m³/s in the original version of the national multi-year plan, 277.25 m³/s in the sector agency's accountability report, and 407.75 m³/s in the 2018 update to the national multi-year plan. What is one to do with such numbers?
Partner work on budget credibility in some of the countries already mentioned, but also Argentina and Ukraine, found gaps in linking spending to performance data, even when both kinds of information were available. We find that nonfinancial targets are met when budgets are not spent, that targets are not met when budgets are spent, and that at times targets are exceeded or underdelivered by very substantial margins even when spending is not radically different from the budget. What should we make of all of this? At a minimum, my sense is that performance data alone without narrative justification from government that explains why targets are met or not is simply not that useful.
I should clarify that not all countries that have performance information have program budgets; that is really beside the point. All countries that include performance information in their budgets are supposedly including it to improve the discussion about budget priorities and implementation between executives, legislatures and the public. There is no other reason to include such information in the budget, whatever form of budget it is.
Yes, performance information is also used for internal management purposes in government, but internal management demands a wider and somewhat different set of performance information that need not all be published. Published information is public information, and the public is its intended audience.
So, what should we make of this? Scores of countries introduce new information that holds the promise of shifting the budget conversation toward outputs and outcomes, better linking money to services – and most of it is incomprehensible, inconsistent, or totally useless.
One interpretation is that the introduction of performance information in budgets is driven by international technical assistance, and takes the form of “isomorphic mimicry”: the fancy name for copying other people so you look good, while imitating principally the form and not the function of seemingly successful innovations. Whether this is so or not, there is no question that these reforms are almost entirely technical in nature, driven from the top by budget offices with little public input. There is therefore a fundamental mismatch between the supply and the demand for reform, which leads to bells and whistles with no one to ring or blow them, a point I made about cutting edge public finance reforms in Kenya some years ago.
If we think about the performance information in the budget as primarily speaking to the public, then it should be the case that this information actually addresses public concerns. Otherwise, there is unlikely to be all that much interest in it. And if there is no interest in it, how does it serve the purpose of improving the conversation about the budget? No one owns the indicators, no one is accountable for meeting the targets…and no one cares.
It follows that if performance information is going to make any difference, it should be selected and decided upon through a public process. If what this performance information measures actually matters to the public, there is likely to be more ownership all around, including from legislative representatives, who should be reviewing this data whenever they are deliberating about the budget, and from the media, which should be reporting on it.
Is seeking public input into the programs and performance indicators that government agencies use pie in the sky? The Mexican government did not think so. In 2016, the Mexican finance ministry led a government-wide public consultation on existing indicators. Citizens were encouraged to submit comments on the current performance framework. The government received over 200 submissions which informed discussions with agencies about revising their indicators.
If performance information in budgets is going to be more than a shiny gewgaw, we will need much more of this kind of citizen input around the world. Otherwise, the practice of incorporating such information into budgets will be viewed as a cynical ploy to appear transparent and accountable, while avoiding both. And that will likely lead governments to trash the whole experiment. Garbage in, garbage out, indeed.