Section 2

Methodology

This section describes the methodological issues involved in cross-country quantitative analysis of Public Expenditure and Financial Accountability (PEFA) data. It highlights the challenges of converting PEFA letter grades to numerical values and of weighting and aggregating those values. It also summarizes previous research using PEFA data and provides a free panel data set compiled from published PEFA reports to encourage further research.

This report follows the standard approach of converting PEFA letter scores from D to A into numerical scores from 1 to 4 (a score of D equals 1 and a score of A equals 4).

While the PEFA framework was not originally designed for cross-country comparison, a few public financial management (PFM) practitioners and researchers have capitalized on this rich source of information for regression analysis. However, since a PEFA assessment does not provide an overall score, several conversion, weighting, and aggregation challenges must be addressed.

Conversion

The most common approach was pioneered by de Renzio (2009), who first analyzed PEFA assessments to reveal patterns and trends of PFM systems across countries. To conduct this analysis, de Renzio assigned a numerical score to each letter grade to facilitate cross-country comparisons (see table below). According to de Renzio (2009, 3), the “1–4 scale is of course somewhat arbitrary but is meant to reflect the fact that a ‘D’ score in many cases denotes a deficient system, not a non-existent one.”

Numerical conversion of PEFA scores

PEFA score    Numerical value
A             4
B+            3.5
B             3
C+            2.5
C             2
D+            1.5
D             1
D*            1
NA, NR        Excluded from analysis

Source: de Renzio 2009.
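
To make the mapping concrete, the following minimal sketch (in Python, purely illustrative) implements the de Renzio (2009) conversion shown in the table above; the function name and the handling of missing grades are our own assumptions.

```python
# Minimal sketch of the de Renzio (2009) letter-to-number conversion in the table above.
# NA/NR grades (and any unrecognized values) are excluded rather than imputed.
DE_RENZIO_SCALE = {
    "A": 4.0, "B+": 3.5, "B": 3.0, "C+": 2.5,
    "C": 2.0, "D+": 1.5, "D": 1.0, "D*": 1.0,
}

def convert_score(letter):
    """Return the numerical value of a PEFA letter grade, or None if it is excluded."""
    if letter is None:
        return None
    return DE_RENZIO_SCALE.get(letter.strip().upper())

# convert_score("C+") -> 2.5; convert_score("NR") -> None (excluded from analysis)
```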

Weighting and Aggregating

These numerical performance indicator scores were then averaged to derive an overall aggregate PFM performance score for each country, which could then be used as a dependent variable in a regression. Averaging, however, assumes equal weighting: it implicitly assumes that progressing from D to C has the same statistical impact as progressing from B to A, and that all 28 performance indicators have the same impact.
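
As an illustration of this equal-weight averaging, the short sketch below averages the converted scores for one country, ignoring excluded values; the indicator names and scores are hypothetical.

```python
# Illustrative only: equal-weight average of converted indicator scores for one country.
def pfm_average(indicator_scores):
    """Simple average over indicators, ignoring excluded (None) values."""
    valid = [s for s in indicator_scores.values() if s is not None]
    return sum(valid) / len(valid) if valid else None

country_scores = {"PI-1": 3.0, "PI-2": 2.5, "PI-3": None, "PI-4": 1.0}  # hypothetical values
print(pfm_average(country_scores))  # (3.0 + 2.5 + 1.0) / 3 ≈ 2.17
```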

The decision to average PEFA scores is not ideal because the PEFA methodology measures different things across the various indicators, which, according to de Renzio, “are not necessarily amenable to quantitative conversions, calculations, and analysis.” In practice, certain aspects of the PFM cycle are likely to be more important than others for the overall performance of the PFM system.

Additional Methodological Challenges

Further challenges relate to time inconsistency, measurement error, and limited observations. PEFA assessments are undertaken at different moments in time in different countries, making it difficult to compare countries over the same period. For example, while two assessments may both be dated June 2010, the supporting evidence may cover 2006 to 2008 in one country and 2007 to 2009 in the other. If an exogenous shock that stresses PFM systems occurs in a given year, such as the 2008–09 global financial crisis, a country assessed in 2009 may score lower than it would have the year before or after. As a result, differences across countries may reflect the timing of exogenous shocks and other factors rather than the strength of countries’ PFM systems.

Furthermore, any regression analysis of PEFA scores is constrained by the limited number of observations. There have been only 311 national PEFA assessments covering 136 countries, so the sample size will always be limited. A larger panel data set would mitigate the challenges of time inconsistency and measurement error.

PEFA Secretariat Guidance

These issues are discussed in a note produced by the PEFA Secretariat (2009), which offers guidance on the aggregation and comparison of PEFA ratings. In this note, the Secretariat acknowledges the pros (simple, transparent, and replicable) of converting an alphabetic score to numerical values, assuming equal weights for each indicator, and generating a simple average of all indicators.

The Secretariat identifies three challenges that these assumptions pose to validity. First, there is no theoretical justification to suggest that a move from D to C represents the same incremental change as a move from C to B or from B to A. Second, there is no evidence to suggest that each indicator should be weighted equally. Third, the weight or importance of different PFM dimensions varies across countries. According to the PEFA Secretariat (2009, 19), there is “no scientifically correct method on how aggregation should be done [and] the PEFA program neither supports aggregation of results in general nor any particular aggregation method.” The Secretariat’s advice at the time was that “any user—as part of the dissemination of results from comparison—clearly explains the aggregation method applied in each case. It would also be advisable that users undertake sensitivity analysis to highlight the extent to which their findings are robust under alternative aggregation assumptions.”
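
One way to follow the Secretariat’s advice on sensitivity analysis is to recompute the aggregate under alternative weighting assumptions and compare the results. The sketch below is a hypothetical example of such a check, not an endorsed weighting scheme.

```python
# Sketch of a sensitivity check: recompute the aggregate under alternative weights.
def weighted_aggregate(scores, weights):
    """Weighted average over indicators that have both a (non-excluded) score and a weight."""
    pairs = [(scores[k], weights[k]) for k in scores if scores[k] is not None and k in weights]
    total_weight = sum(w for _, w in pairs)
    return sum(s * w for s, w in pairs) / total_weight if total_weight else None

scores = {"PI-1": 3.0, "PI-2": 2.5, "PI-4": 1.0}       # hypothetical converted scores
equal_weights = {k: 1.0 for k in scores}                # equal weighting
alt_weights = {"PI-1": 2.0, "PI-2": 1.0, "PI-4": 1.0}   # hypothetical: budget credibility weighted up

print(weighted_aggregate(scores, equal_weights))  # ≈ 2.17
print(weighted_aggregate(scores, alt_weights))    # = 2.375; findings should be robust to such changes
```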

2016 PEFA Framework Update

The 2016 update to the PEFA assessment methodology poses an additional challenge. The new methodology revised the scoring of every PEFA performance indicator. This extensive revision suggests that the previous scoring methodology suffered from shortcomings and makes it difficult to compare countries that were assessed with the 2016 framework to countries that were assessed with the 2011 framework. More information about the PEFA Secretariat’s Guidance on Tracking PFM Performance for Successive Assessments is available at the resources page of the PEFA website.

Existing Research

Researchers such as de Renzio, Andrews, and Mills (2010); Whiteman (2013); Fritz, Sweet, and Verhoeven (2014); Haque et al. (2012); and Ricciuti et al. (2016) have typically followed, with slight variations, the alphanumeric conversion methodology set by de Renzio (2009). The table below summarizes their objectives, the construction of the PEFA variable, and their regression methods.

"Typically, researchers have followed, with slight variations, the methodological alphanumeric conversion methodology set by de Renzio."

Methodologies used for PEFA indicator regressions

Paper: de Renzio (2009)
Objective: Identify what PEFA assessments tell us about PFM systems across countries
PEFA variable: Alphanumeric conversion (A=4, B=3, C=2, D=1); PEFA average is the dependent variable
Regression method: Cross-country OLS (43 countries)

Paper: de Renzio, Andrews, and Mills (2010)
Objective: Identify the impact of PFM donor support on PFM systems
PEFA variable: Alphanumeric conversion (A=4, B=3, C=2, D=1); PEFA average is the dependent variable
Regression method: Cross-country OLS and WLS (93 countries)

Paper: Whiteman (2013)
Objective: Measure the capacity and capability of PFM systems
PEFA variable: Alphanumeric conversion (A=100, B=75, C=50, D=25); PFM capacity variable (average of PEFA indicators PI-5 to PI-28); PFM performance (average of PI-1 to PI-4); and PFM capability variable (PFM performance divided by PFM capacity)
Regression method: Cross-country OLS (69 countries)

Paper: Fritz, Sweet, and Verhoeven (2014)
Objective: Explore the drivers of strengthening PFM systems and their effects on fiscal outcomes
PEFA variable: Alphanumeric conversion (A=4, B=3, C=2, D=1); PEFA average is the dependent variable
Regression method: Cross-country OLS (112 countries)

Paper: Haque et al. (2015)
Objective: Assess PFM performance and capacity constraints in small Pacific island countries
PEFA variable: Alphanumeric conversion (A=4, B=3, C=2, D=1); PEFA average is the dependent variable
Regression method: Unbalanced pooled-panel Tobit (162 observations from 118 countries)

Paper: Ricciuti et al. (2016)
Objective: Identify the impact of political institutions on revenue administration
PEFA variable: Alphanumeric conversion (A=3, B=2, C=1, D=0); six PEFA subindicators (13ii, 13iii, 14i, 14ii, 15i, and 15iii) are regressed separately as the dependent variable
Regression method: Cross-country OLS and two-stage least squares using settler mortality as an instrumental variable for political institutions (42 countries)

Paper: Andrews (2011)
Objective: Identify the organizational attributes that are amenable to PFM reform in African countries
PEFA variable: Alphanumeric conversion (A=3, B=2, C=1, D=0); 64 subindicator PEFA scores are used as the dependent variable for each country
Regression method: Partial proportional odds model (a variant of ordered logit) with a four-category ordinal outcome, where A (=4) reflects greatest reform compliance and D (=1) lowest compliance (31 countries)

Paper: Kristensen et al. (2019)
Objective: Identify the impacts of PFM performance on political institutions, fragility, corruption, and revenue mobilization
PEFA variable: Alphanumeric conversion (A=3, B=2, C=1, D=0) and a three-step conversion process; subdimension PEFA scores are used to calculate average indicator scores, which are then used to calculate average pillar scores, which are then used to calculate an overall PFM performance score (sketched in the example following this table)
Regression method: A combination of cross-country OLS, first differences, WLS, and a pooled panel

Note: PEFA = public expenditure and financial accountability. PFM = public financial management. OLS = ordinary least squares. WLS = weighted least squares.
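
The three-step conversion attributed to Kristensen et al. (2019) in the table can be sketched as follows; the pillar and indicator nesting shown here is hypothetical and far smaller than the actual framework.

```python
# Sketch of a three-step aggregation: subdimension scores -> indicator averages ->
# pillar averages -> overall PFM performance score. The nesting below is hypothetical.
def mean(values):
    values = [v for v in values if v is not None]
    return sum(values) / len(values) if values else None

pillars = {  # pillar -> indicator -> converted subdimension scores
    "Budget reliability": {"PI-1": [3.0], "PI-2": [2.0, 2.5, 3.0]},
    "Transparency": {"PI-5": [4.0], "PI-6": [1.0, 2.0]},
}

indicator_scores = {p: {i: mean(subs) for i, subs in inds.items()} for p, inds in pillars.items()}
pillar_scores = {p: mean(inds.values()) for p, inds in indicator_scores.items()}
overall_score = mean(pillar_scores.values())
print(pillar_scores, overall_score)  # {'Budget reliability': 2.75, 'Transparency': 2.75} 2.75
```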

An Alternative Methodological Approach

Andrews (2011) developed a second methodological approach to PEFA numerical conversion, using a multivariate ordered logistic regression to estimate the impact of organizational attributes on PFM reform. Andrews adopted a partial proportional odds model, using PEFA letter scores as the dependent variable, because “using ordinary least squares or even more conventional logit techniques would mean discarding the ordinal nature of the outcome variable, which would result in a loss of efficiency.”

According to Andrews, this approach has two main benefits. First, the dependent variable in this estimation method is a four-category ordinal outcome, where A (=4) reflects greatest compliance with reform and D (=1) reflects lowest compliance. Second, the partial proportional odds model relaxes the assumption of parallel slopes, an assumption under standard ordinal regression models that implies the effect of the explanatory variables on the dependent variable is constant across the categories of the dependent variable (Andrews uses a Brant test to show that the parallel regression assumption is violated).

Under this estimation method, a positive coefficient implies that higher values of the explanatory variables push the likelihood toward higher PEFA scores (such as A, B, or C), while a negative coefficient implies that higher values of the explanatory variables limit the likelihood to a lower-category ranking (D). This approach maintains the ordinal ranking and does not impose the assumptions of (a) PEFA scores as a continuous variable, (b) equal distance between ordinal rankings, or (c) equal weighting among PEFA indicators.
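
For readers who want to experiment with an ordinal specification, the sketch below fits a standard (proportional-odds) ordered logit on simulated data using statsmodels. It is not Andrews’s partial proportional odds model, which additionally relaxes the parallel slopes assumption; the explanatory variables and data are invented for illustration, and statsmodels 0.13 or later is assumed.

```python
# Illustrative only: a standard proportional-odds ordered logit on simulated data.
# Not Andrews's partial proportional odds model; it merely shows how an ordinal
# PEFA-style outcome can be modeled without treating it as a continuous variable.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({"donor_support": rng.normal(size=n), "income": rng.normal(size=n)})
latent = 0.8 * X["donor_support"] + 0.3 * X["income"] + rng.logistic(size=n)
grade = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf], labels=["D", "C", "B", "A"])  # ordered D < C < B < A

model = OrderedModel(grade, X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # a positive coefficient pushes probability mass toward higher grades
```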

Guidance for Researchers

In 2019, the PEFA Secretariat recommended “follow[ing] the approach for conversion outlined in a paper by Paolo de Renzio (2009).” However, we encourage all interested researchers to explore alternative statistical approaches. To encourage this process, we are providing the full set of panel data from completed public PEFA assessments.

References

Andrews, M. 2011. “Which Organizational Attributes Are Amenable to External Reform? An Empirical Study of African Public Financial Management.” International Public Management Journal 14 (2): 131–56. doi: 10.1080/10967494.2011.588588.

de Renzio, P. 2009. “Taking Stock: What Do PEFA Assessments Tell Us about PFM Systems across Countries?” Working Paper 302, Overseas Development Institute, London. https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/4359.pdf.

de Renzio, P., M. Andrews, and Z. Mills. 2010. Evaluation of Donor Support to Public Financial Management (PFM) Reform in Developing Countries. Final Report. London: Overseas Development Institute. www.odi.org.uk.

Fritz, V., S. Sweet, and M. Verhoeven. 2014. “Strengthening Public Financial Management: Exploring Drivers and Effects.” Policy Research Working Paper WPS 7084, World Bank, Washington, DC. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/349071468151787835/strengthening-public-financial-management-exploring-drivers-and-effects.

Haque, T. A., D. S. Knight, and D. S. Jayasuriya. 2012. Capacity Constraints and Public Financial Management in Small Pacific Island Countries. Washington, DC: World Bank. https://onlinelibrary.wiley.com/doi/full/10.1002/app5.79.

Kristensen, J. K., M. Bowen, C. Long, S. Mustapha, and U. Zrinski. 2019. PEFA, Public Financial Management, and Good Governance. International Development in Focus. Washington, DC: World Bank. https://openknowledge.worldbank.org/handle/10986/32526.

PEFA (Public Expenditure and Financial Accountability) Secretariat. 2009. Issues in Comparison and Aggregation of PEFA Assessment Results over Time and across Countries. Washington, DC: PEFA Secretariat. https://www.pefa.org/resources/issues-comparison-and-aggregation-pefa-assessment-results-over-time-and-across-0.

Ricciuti, R., A. Savoia, and K. Sen. 2016. “How Do Political Institutions Affect Fiscal Capacity? Explaining Taxation in Developing Economies.” ESID Working Paper 59, Effective States and Inclusive Development Research Centre, University of Manchester. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2835498.

Whiteman. 2013. “Measuring the Capacity and Capability of Public Financial Management Systems.” International Public Management Review 14 (2). University of St. Gallen, Switzerland. https://journals.sfu.ca/ipmr/index.php/ipmr/article/view/132.