A framework for decision-making in evaluation and communication: An action-research summary

Abstract

The disciplines of communication and evaluation generally develop independently, or at best sequentially: communicating evaluation findings, or evaluating communication activities or programmes. Both disciplines nevertheless share common elements at the theoretical and practical levels. This paper summarizes action-research in communication and evaluation undertaken to provide capacity development to research partner projects worldwide. Two aspects are highlighted: verifying, at the outset, the projects' readiness to receive training, and the advantages of facilitation processes governed by the partner's calendar rather than the trainer's. The methodological integration draws on Utilization-Focused Evaluation and research communication. The evaluation and communication decision-making framework enables project or organization managers to make their theory of change explicit and to adjust their intervention strategy.

How to cite

Ramírez Nathan, R. (2017). Un marco para la toma de decisiones en evaluación y comunicación: resumen de investigación-acción. Commons. Revista De Comunicación Y Ciudadanía Digital, 6(1). Retrieved from https://revistas.uca.es/index.php/cayp/article/view/3371

Author biography

Ricardo Ramírez Nathan, Independent

Ricardo Ramírez is an independent researcher and consultant based in Guelph, Ontario, Canada. His consulting and research work includes communication planning, participatory evaluation and capacity development. He is a credentialed evaluator (Canadian Evaluation Society). His doctoral work focused on how rural and remote communities harness information and communication technology. He is co-author of Communication for another development: Listening before telling (Zed Books, London, 2009) and of Utilization-focused evaluation: A primer for evaluators (Southbound, Penang, 2013).

References

Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.

Balit, S. (2005). Listening and learning report: Measuring the impact of communication for development: An online forum.

Barnes, M., Matka, E., & Sullivan, H. (2003). Evidence, understanding and complexity: Evaluation in non-linear systems. Evaluation, 9(3), 265-284.

Barnett, C., & Gregorowski, R. (2013). Learning about theories of change for monitoring and evaluation of research uptake (ILT Brief 14). Brighton: Institute of Development Studies.

Bessette, G. (2004). Involving the community: A guide to Participatory Development Communication.

Britt, H. (2013). Discussion note: Complexity-aware monitoring. Monitoring & Evaluation Series. Washington, DC: USAID. Retrieved from http://usaidlearninglab.org/sites/default/files/resource/files/Complexity%20Aware%20Monitoring%202013-12-11%20FINAL.pdf

Brodhead, D., & Ramírez, R. (2014). Readiness & mentoring: Two touchstones for capacity development in evaluation. Paper presented at the Conference: Improving the use of M&E processes and findings. Wageningen, The Netherlands.

Bryson, J. M., Patton, M. Q., & Bowman, R. A. (2011). Working with evaluation stakeholders: A rationale, step-wise approach and toolkit. Evaluation and Program Planning, 34, 1-12.

Chambers, R. (1997). Whose reality counts? Putting the first last. London: IT Publications.

Christie, C. A., & Alkin, M. C. (2012). An evaluation theory tree. In M. C. Alkin (Ed.), Evaluation roots: A wider perspective of theorists' views and influences (2nd ed., pp. 11-57). Thousand Oaks, CA: Sage.

Douthwaite, B., Kuby, T., van de Fliert, E., & Schulz, S. (2003). Impact pathway evaluation: an approach to achieving and attributing impact in complex systems. Agricultural Systems, 78, 243-265.

Easterly, W. R. (2006). The White Man's Burden: Why the West's efforts to aid the rest have done so much ill and so little good. New York: Penguin Press.

Eyben, R. (2013). Uncovering the politics of 'evidence' and 'results': A framing paper for development practitioners. Retrieved from www.bigpushforward.net

Hanley, T. (2014). Challenges in evaluating development communications: The case of a street theatre programme to address racism. Journal of International Development, 26, 1149-1160.

Horton, D., et al. (2003). Evaluating capacity development: Experiences from research and development organizations around the world. The Hague: ISNAR; Wageningen: CTA; Ottawa: IDRC.

Hospes, O. (2008). Evaluation evolution: Three approaches to evaluation. Retrieved from www.thebrokeronline.eu/en/articles/Evaluation-evolution

Inagaki, N. (2007). Communicating the impact of communication for development. Washington, DC: World Bank.

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall.

Kuby, T. (2003). Innovation is a social process: What does this mean for impact assessment in agricultural research? In R. Athus & D. Pachico (Eds.), Agricultural research and poverty reduction: Some issues and evidence (pp. 58-70). Cali: CIAT.

Lennie, J., & Tacchi, J. (2013). Evaluating communication for development: A framework for social change. Abingdon, Oxon, UK & New York: Routledge.

Lennie, J., & Tacchi, J. (2015). Tensions, challenges and issues in evaluating communication for development: Findings from recent research and strategies for sustainable outcomes. Nordicom Review, 36(Special issue), 25-39.

Ling, T. (2012). Evaluating complex and unfolding interventions in real time. Evaluation, 18(1), 79-91.

Lynn, J. (2014). Assessing and evaluating change in advocacy fields. Retrieved from http://www.evaluationinnovation.org/publications/assessing-and-evaluating-change-advocacy-fields

Mayne, J. (2009). Building an evaluation culture: The key to effective evaluation and results management. Canadian Journal of Program Evaluation, 24(2), 1-30.

Morton, S. (2015). Progressing research impact assessment: A 'contributions' approach. Research Evaluation, 1-15.

Myers, M. (2004). Evaluation methodologies for information and communication for development (ICD) programmes. London: DFID.

Parks, W., with Gray-Felder, D., Hunt, J., & Byrne, A. (2005). Who measures change? An introduction to participatory monitoring and evaluation of communication for social change. New Jersey, USA: Communication for Social Change Consortium. Retrieved from http://www.communicationforsocialchange.org/pdf/who_measures_change.pdf

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Los Angeles, CA: Sage.

Quarry, W., & Ramírez, R. (2014). Comunicación para otro desarrollo: Escuchar antes de hablar. Madrid: Editorial Popular.

Ramírez, R., & Brodhead, D. (2013). Las evaluaciones orientadas al uso: Guía para evaluadores. Penang: Southbound. Retrieved from http://evaluationandcommunicationinpractice.net/wp-content/uploads/2014/10/UFEprimerSPANISH29Aug2013.pdf

Ramírez, R. (2011). Why “utilization focused communication” is not an oxymoron. Communication, media and development policy; BBC World Service Trust. Retrieved from http://www.comminit.com/node/329198

Ramírez, R., Quarry, W., & Guerin, F. (2015). Can participatory communication be taught? Finding your inner phronēsis. Paper presented at the IAMCR Conference, 12-16 July, 2015. Montreal.

Schön, D. (1991). The reflective practitioner: How professionals think in action. London, UK: Ashgate.

Waisbord, S. (2001). Family tree of theories, methodologies and strategies in development communication: Convergences and differences. Retrieved from http://www.communicationforsocialchange.org/pdf/familytree.pdf