
Account-Ability

Published online by Cambridge University Press:  03 May 2011

Type: Editor's Corner
Copyright © Birnbaum; World Association for Disaster and Emergency Medicine 2011

To fight for the right
Without question or pause.
To be willing to march into Hell
For a heavenly cause.

Man of La Mancha

Adapted from Cervantes

Although accountability for the actions of the humanitarian community during crises has been raised repeatedly in multiple forums, until recently the topic has only been mentioned briefly before being swept off the table. The South-East Asia Regional Office of the World Health Organization commissioned a book on the health aspects of the December 2004 earthquake and tsunami that devastated large areas of South-East Asia. In response, the staff of the World Association for Disaster and Emergency Medicine (WADEM) and Prehospital and Disaster Medicine (PDM) searched the peer-reviewed and non-peer-reviewed literature, including the huge repositories of the grey literature (governmental, inter-governmental, and non-governmental documents). While they identified hundreds of publications on various aspects of the earthquake and tsunami, for all practical purposes, little information existed describing the specific interventions (responses) provided by the health-related organizations that responded to the catastrophe. Specifically, there were no accessible records of: (1) the needs addressed by the responding organizations; (2) the processes used to implement specific projects; (3) evaluations of the performance of the organizations; or (4) the impact of the interventions on the affected populations.

Furthermore, the relatively few accessible reports that did describe some of the effects of the interventions consisted only of achievement indicators; little, if anything, was reported regarding the impact of the interventions on the affected societies and their members. Published case studies generally have reported only achievements in terms of the numbers of persons treated, the numbers of procedures provided, and some of the difficulties and barriers encountered. Such reports have been valuable for setting expectations (e.g., persons trapped under rubble rarely survive for more than 48–72 hours). Such observations, when substantiated by additional observations, eventually become standards; they have led, for example, to shifting rescuers from search-and-rescue operations to providing relief to the survivors. Much of the current science of Disaster Health has been derived from such repeated observations.

However, further development of the science of Disaster Health now demands more detailed evaluations of the processes implemented in a project and identification of the impacts of specific interventions. Without such evaluations, it will not be possible to develop scientific standards to support best practices. Best practices must be based on competencies; those competencies, in turn, serve as objectives for needed education and training courses, as the basis for benchmarks and performance indicators, and as a means of augmenting levels of preparedness. Otherwise, we will only have observed lessons rather than used the information and learned from it. Without competently performed evaluations, we will not be able to improve our ability to meet the needs of the stricken population or to enhance preparedness for the next event caused by a natural or human-made hazard.

Today, for the most part, organizations responding to assist in the relief and/or recovery of stricken populations have not been required to demonstrate accountability for their actions to the beneficiaries of the interventions, to the other members of the humanitarian community, to the governments of the countries in which the interventions were provided, or even to the donors who provided the resources for the responses. It would appear that the donors, beneficiaries, and governments have been satisfied with knowing only of the responding organizations' achievements, and have not asked the important questions: "So what?" and "Did the use of our resources make a difference?"

The issues associated with accountability have been noted by the Global Health Cluster, but accountability for what has been or will be done never has been discussed openly. However, Valerie Amos, the recently appointed Emergency Relief Coordinator for the Inter-Agency Standing Committee (IASC) of the UN Office for the Coordination of Humanitarian Affairs (OCHA), recently demanded that the Global Clusters (part of the Humanitarian Reform Movement) and the humanitarian community develop a new "business [operating] model".1 Recent discussions have led to the identification of five themes that must be addressed. One of these themes, not surprisingly, is accountability: accountability to the population and region affected (the beneficiaries), to the coordinating and control agencies involved, to the donors, and to the public.

But what is "accountability"? There is no uniform, generally accepted definition. The Health Cluster Guide2 defines "accountable" as the person/people ultimately answerable for the correct and thorough completion of the task. This definition seems too limiting for use in the current context. Perhaps a better definition comes from the Oxford Concise English Dictionary (1995), in which "accountable" is defined as responsible; required to account for one's conduct; explicable.3 Further, the American Heritage College Dictionary defines "accountable" as liable to being called to account; answerable;4 and the Oxford Concise Dictionary of the English Language defines "account" as reckoning; provide an explanation of; favorable or unfavorable opinion of.5 Thus, in this context, accountability means the ability to account for one's actions, whether favorable or unfavorable. Every intervention implemented must be evaluated for: (1) its accomplishment of all or part of the objectives for which it was implemented; (2) whether it contributed to attaining the goal for which it was selected; (3) its costs (human, economic, opportunity, environmental, direct, and indirect); and (4) the efficiency of the processes used. Hence, evaluation includes both the performance of the intervention (process) and all of the effects of the intervention on the beneficiaries, on other projects, on the Basic Societal System in which it was applied, and on other Basic Societal Systems that it affected.

A major barrier to evaluations, and therefore to accountability, is the fear that if an evaluation shows that an intervention did not attain the objectives for which it was implemented, there may be a subsequent loss of funding from the donor(s). But this never should be the case. Donors must understand that not all interventions will produce all of the desired effects for which they were implemented; no intervention ever goes exactly as planned, and evaluations are necessary to help make the interventions more beneficial the next time. Evaluations will enhance the use and efficiency of donors' investments and serve as a means of assuring the quality of the project; they never are to be used punitively.

Some organization, or combination of organizations, must step up to the plate and shoulder the responsibility for providing unbiased and structured evaluations of the impact and the performance of interventional projects, using the best indicators available. The ultimate goal is to assist in making interventions the best that they can be.

Evaluations should be conducted by knowledgeable persons with the experience necessary to judge the performance of the responders and to identify the effects, outcomes, and impact of the intervention(s) being evaluated. This will require special education and training, hopefully using the Guidelines for Evaluation and Research promulgated by the WADEM.6 Additionally, the donors must step up to the plate and demand comprehensive evaluations of the projects they are funding. After all, it is their desire and responsibility that their resources be used for the benefit of the population affected by the crisis in the most effective and efficient manner possible.

Without evaluations, it will not be possible to account for our actions. I believe the WADEM must consider where it fits in this process. These are important considerations, but without beginning this process, we will not be able to develop the necessary standards, best practices, and competencies. We must demonstrate our concern for the quality and effectiveness of future responses, and be able to apply our knowledge and skills to provide the best assistance and care that those suffering deserve. We must have the ability to account for what we do.

And the world will be better for this…
To search for the unreachable star.

Man of La Mancha

Adapted from Cervantes

References

1. Global Health Cluster: Internal documents.
2. UN Inter-Agency Standing Committee of the UN Office for the Coordination of Humanitarian Affairs: Health Cluster Guide. Geneva: World Health Organization, 2009, p 34. Available at http://www.who.int/hac/network/global_health_cluster/guide. Accessed 12 March 2011.
3. Thompson D (ed): Oxford Concise English Dictionary of Current English. Oxford: Clarendon Press, 1995, p 9.
4. Pickett JP (ed): The American Heritage College Dictionary. 4th ed. New York: Houghton Mifflin Co, 2002, p 9.
5. Thompson D: Idem.
6. Task Force for Quality Control of Disaster Management: Guidelines for Research and Evaluation. Prehosp Disaster Med 2011. In press.