
The Status Quo of Evaluation in Public Diplomacy: Insights from the U.S. State Department

Erich J. Sommerfeldt, University of Maryland (esommerf@umd.edu)
Alexander Buhmann, BI Norwegian Business School

Abstract

Purpose: In recent years, expectations for demonstrating the impact of public diplomacy programs have dramatically increased. Despite increased calls for enhanced monitoring and evaluation, the texts that exist on the subject suggest the state of practice is grim. However, while the current debate is based mostly on practice reports, conceptual work from academics, or anecdotal evidence, we are missing empirical insights into current views of monitoring and evaluation from practitioners. Such a practice-level perspective is central for better understanding the factors that may actually drive or hamper performance evaluation in day-to-day public diplomacy work. This paper seeks to update knowledge on the state of evaluation practice within public diplomacy from the perspectives of practitioners themselves.

Design/methodology/approach: This study assesses the state of evaluation in public diplomacy through qualitative interviews with public diplomacy officers at the U.S. Department of State—a method heretofore unused in studies of the topic. Twenty-five in-depth interviews were conducted with officers in Washington, DC and at posts around the world.

Findings: The interviews suggest that practitioners see evaluation as underfunded despite increased demands for accountability. Further, the results reveal a previously undiscussed tension between diplomacy practitioners in Washington, DC and those in the field. Practitioners are also unclear about the goals of public diplomacy, which has implications for the enactment of targeted evaluations.

Originality/value: The research uncovers perceptions of evaluation from the voices of those who must practice it, and elaborates on the common obstacles to the enactment of public diplomacy, the influence of multiple actors and stakeholders on evaluation practice, and the perceived goals of public diplomacy programming. No prior empirical research has considered the state of evaluation practice. Moreover, the study uses qualitative interview data from public diplomacy officers themselves, an under-used method in public diplomacy research. Our findings provide insights that contribute to future public diplomacy strategy and performance management.

Keywords: public diplomacy, monitoring and evaluation, measurement, goals, Department of State, qualitative interviews

Acknowledgements: This research was supported by the Center on Public Diplomacy, Annenberg School of Communication, University of Southern California.

This is a preprint of an article that is in press at the Journal of Communication Management. Please cite as: Sommerfeldt, E. J., & Buhmann, A. (2019). The status quo of evaluation in public diplomacy: insights from the US State Department. Journal of Communication Management. https://doi.org/10.1108/JCOM-12-2018-0137

Introduction

Public diplomacy is a communication domain that involves the conveyance of information to foreign publics and the building of long-term relationships to create an "enabling environment" for government policies (Nye, 2008, p. 101). According to the most recently available data, the U.S. government spent $2.03 billion on public diplomacy initiatives in 2016 (Powers, 2016). While this amount is but a small fraction of what is spent on defense and social services in the United States, the billions allocated to public diplomacy nonetheless require the demonstration of high and measurable returns to Congress and other important stakeholders (Brown, 2017). As such, monitoring and evaluation—as the U.S. State Department has labeled the practice of evaluation and measurement (Department of State, 2017)—has been positioned as vital policy connected to several public diplomacy functions, including program planning, providing evidence for the impact of public diplomacy on informing and influencing foreign publics, and, ultimately, showing support for strategic goal attainment in foreign affairs.

Public diplomacy as a field is increasingly shaped by demands for quantified performance management and evidence-based decision-making, a global trend that affects many other public sector domains (Christensen, Dahlmann, Mathiasen & Petersen, 2018; Van de Walle & Roberts, 2008). Especially in the past decade, calls around the world for increased efforts in public diplomacy evaluation have grown louder (Cull, 2014; Pamment, 2012). However, recent literature on monitoring and evaluation in public diplomacy still paints a picture of low satisfaction with the state of the field (e.g., ACPD, 2014, 2018; Banks, 2011; Gonzales, 2015). Public diplomacy practitioners must demonstrate the impact of their work, but have little knowledge or resources with which to do so. Related to this is a broader challenge in the public diplomacy field: there is often no agreement on what the goals and objectives of public diplomacy initiatives are—neither in the field at large, nor in specific institutions (Sevin, 2015). This is further complicated by the fact that the domain of public diplomacy is made up of a multitude of actors with partially diverging interests (White, 2014). Simply put, without a clear understanding of public diplomacy goals, evaluation becomes difficult to enact effectively. Moreover, overlapping and competing approaches to evaluation in U.S. diplomacy across the various bureaus and operating locations of the U.S. State Department further complicate the already muddied responsibility of public diplomacy practitioners to prove the value of their work.
Given these challenges, the purpose of this article is two-fold. First, we update and add to discussions of the state of contemporary evaluation practice within the field of U.S. public diplomacy. Our approach is different from and complementary to other recent efforts in that we utilize a qualitative interview methodology to understand practice through the voices of public diplomacy practitioners themselves. While the current debate is based mostly on reports produced by the Advisory Commission on Public Diplomacy (ACPD)—an independent office of the State Department charged with appraising U.S. diplomacy efforts—conceptual work by engaged academics (e.g., Sevin, 2017; Pamment, 2014), or anecdotal pieces by practitioners (Banks, 2011; Gonzales, 2015), we are missing arguments that use insights from empirical research to assess the current views and perceptions of practitioners. However, a deeper understanding of this practice-level perspective is central for understanding the factors that may drive or hamper performance evaluation in day-to-day public diplomacy. Second, we draw upon the perceptions of public diplomacy practitioners to better understand their views of public diplomacy goals and the challenges of executing evaluation within a multi-stakeholder environment.

Literature Review

Evaluation is a central element of all strategic communication fields, such as public diplomacy, public relations, and health communication, among others (Macnamara, 2017). Generally speaking, evaluation refers to the systematic assessment of the value of an object and can serve two equal purposes: accountability (were objectives met?—often the main purpose of reporting) and improvement (how were objectives met?) (Stufflebeam & Coryn, 2014). In public diplomacy practice, as recent works suggest, the focus is primarily on reporting and accountability to important stakeholders such as Congress (Gonzales, 2015).
Practically, evaluations are commonly modeled by setting up various evaluative stages for which objectives are formulated and success measures are defined. The development of evaluation models in strategic communication domains dates back decades, and all current approaches resemble, more or less, the structure of common "logic models," which, in their most basic form, distinguish between inputs (program resources), activities, outputs (the products that result from the activities), and finally outcomes (the mid- and long-term change or impact that results from the program). Implicit in widely accepted approaches to evaluation is the assumption that program goals are derived from the overall mission, goals, and objectives of the organization (Hon, 1998). Thus, through evaluation, practitioners can determine and demonstrate tangible outcomes and the extent to which their efforts contribute to overall organizational goal achievement.

For public diplomacy, these outcomes ideally tie into larger foreign policy goals which, in the United States, are set by officials in the State Department (Sevin, 2015). As a strategic element of the foreign policy toolkit, public diplomacy must help achieve foreign policy goals and advance national interests (Sevin, 2017). However, the current practice of evaluation is not only strongly focused on reporting but also reports mainly at the output (not outcome) level (cf. Pamment, 2014). That said, part of the problem with evaluating most strategic communication programs is that they encompass many different activities, goals are complex, and objectives vary (Hon, 1998). Comor and Beam (2012), reviewing both current public diplomacy practice geared toward complex intangible outcomes (such as "dialogue" or "engagement") and extant public diplomacy performance reviews done by the U.S. Government Accountability Office, criticize the often reductive evaluations based on polling or focus groups.
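To make the logic-model stages concrete, the input/activity/output/outcome distinction can be sketched as a simple data structure. This is a purely illustrative sketch in Python; the class, field, and method names are our own and are not drawn from any State Department evaluation framework.

```python
from dataclasses import dataclass, field

# Illustrative only: a minimal representation of the four logic-model
# stages (inputs, activities, outputs, outcomes) described in the text.
@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)      # program resources
    activities: list = field(default_factory=list)  # what the program does
    outputs: list = field(default_factory=list)     # products of the activities
    outcomes: list = field(default_factory=list)    # mid- and long-term change

    def is_outcome_oriented(self) -> bool:
        # A program evaluated only at the output level (the pattern the
        # literature criticizes) defines no outcomes at all.
        return len(self.outcomes) > 0

# A hypothetical reporting-focused campaign of the kind criticized above:
campaign = LogicModel(
    inputs=["advertising budget", "bureau staff"],
    activities=["social media outreach"],
    outputs=["growth in follower counts"],
    outcomes=[],  # no link to broader foreign-policy goals specified
)
print(campaign.is_outcome_oriented())  # prints: False
```

An outcome-oriented evaluation, by contrast, would populate the outcomes stage with measurable mid- and long-term changes explicitly tied to foreign policy goals, so that success measures could be defined at that level rather than only at the output level.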
They stress that despite sizable investments (communication initiatives worth US$10 billion between 2001 and 2009), evidence for the impact of public diplomacy programs is limited. Moreover, as Sevin (2017) has noted, based on Pahlavi's (2007) and Pamment's (2014) findings, the "reporting culture is also observed in the design and execution stages" of public diplomacy projects. He gives a recent example:

An illustrative project was undertaken by the Bureau of International Information Programs at the U.S. State Department to increase its social media outreach. The Bureau followed a reporting understanding, the focus was strictly limited to increasing the number of users. Relying on Facebook advertisements, over $600,000 was spent between 2011 and 2013 to increase the number of users subscribed to the State Department feeds (...) This campaign was an output success as it managed to raise the number of Facebook fans (...) however, such a success does not necessarily mean that it was effective in helping the United States advance its national interests. (Sevin, 2017, p. 883)

Such a strong focus on outputs and accountability is common to many strategic communication domains. The lack of rigorous measurement in related fields like public relations, and a general lack of monitoring and evaluation implementation, has continued despite considerable growth within the field (Macnamara & Zerfass, 2017). A central part of