Duties and Responsibilities
Background

Road traffic fatalities and serious injuries represent a global public health problem, and one that is particularly acute in the Arab region, where the fatality rate due to road crashes was estimated at 18.9 fatalities per 100,000 people in 2016, higher than the world average of 18.2 fatalities per 100,000 people and more than three times the rate registered for the European region (5.9 fatalities per 100,000 people). In response to this alarming situation, Arab governments are addressing the burden of traffic crashes in different ways. An ESCWA survey conducted in 2018 on the status of road safety management and data in its member States revealed fundamental dysfunctions at the management and data-collection levels that compromise the accuracy and comparability of police data in the region. These dysfunctions have generated a large gap (around 70 per cent) between reported crash fatalities and WHO estimates. First, at the management level, the presence of entities likely to contribute to the management of road safety (such as a national committee, a higher council, a national observatory or a lead agency) differs from one country to another: some countries have only some of these entities, while others have none. Second, at the data-collection level, the use of digital forms to collect traffic crash data is still limited to two countries; the remaining countries either still use paper forms or have no unified statistical form at all, which affects data accuracy, particularly with regard to crash location. As a result, countries cannot properly identify the gaps in their road safety systems, make effective improvements, or monitor progress towards achieving SDG target 3.6. To assist these countries in improving existing, or developing new, evidence-based road safety policies, strategies and action plans, it is essential to collect reliable and accurate data that allow problems, risk factors and priority areas to be identified, targets to be set and performance to be monitored, going beyond the traditional collection of road crash records, which is limited by nature to establishing how legal responsibility is shared among the users involved in a crash.

The project will be implemented in three Arab countries representing different parts of the Arab region (Mashreq, Maghreb, Gulf) and illustrating three different levels of advancement in dealing with road traffic crashes:
• Lebanon is in the process of establishing its national observatory and seeks to ensure its alignment with regional and international best practices from its inception.
• Tunisia has a national observatory that it seeks to develop further.
• Qatar (self-financed participation in the present project) is considered one of the first countries to have improved its information systems, notably through the adoption of an electronic crash form; it is regarded as an influential champion country, with strong financial and technical resources and a high level of commitment.

Evaluations at ESCWA strive to demonstrate the difference made by the Commission’s work and its impact on member States and their citizens. This evaluation will aim to determine how the work undertaken under the road safety project contributed to achieving its intended results, the pathway to these results, and the elements that contributed most significantly to their achievement. It will also consider any unintended results, positive or negative, that emerged during implementation.
The evaluation will serve three main purposes:
• Evidence-based decision-making: provide a foundation for strategic planning and risk management.
• Accountability: demonstrate performance and results to ESCWA’s Executive and member States, in line with the Commission’s mandate.
• Organizational learning: identify lessons learned and actionable recommendations to strengthen future ESCWA projects in road safety.

The evaluation will be conducted in accordance with ESCWA’s Evaluation Policy (2025), the UNEG Norms and Standards for Evaluation, and the Development Account Evaluation Guidelines. It will integrate best practices for promoting gender equality, human rights, disability inclusion, and environmental sustainability.

Scope of the Evaluation

The evaluation will be forward-looking and will objectively and systematically assess the performance of the project in terms of its relevance, coherence, effectiveness, efficiency, impact, and sustainability. Furthermore, the evaluation will assess the extent to which gender, human rights, disability inclusion, environmental concerns, and other cross-cutting issues were incorporated or mainstreamed into the project. The evaluation will cover the period from January 2022 to January 2026.

Evaluation Criteria

The following key evaluation questions, organized by criterion, will guide the evaluation. The evaluator is expected to refine the evaluation questions where necessary and to reflect the refinements in the Inception Report.

Relevance
• How did the project team determine the strategic needs and priorities of member States in the project design?
• How were the intended results of the project aligned with the strategic needs and priorities of member States, the Sustainable Development Goals, and national and regional development agendas?
• How were the identified results aligned with other stakeholders’ strategic needs and priorities?
• How was the logical framework translated into activities to ensure that the activities carried out were relevant and contributed to the achievement of the intended result(s)?
• How were planned and implemented activities designed and sequenced to ensure a conscious consideration of the intended results?

Effectiveness
• What evidence is available to support the achievement of the identified results?
• Which of the activities undertaken by the project team directly contributed to the identified results? How did these activities contribute to the results?
• To what degree can the achievement of results be attributed to the intervention?
• How did the work of stakeholders contribute to the achievement of the identified results?
• Which other factors contributed to the achievement of the identified results?
• Would the results have been achieved regardless of the project being implemented?
• How were key partnerships integrated in the delivery of the project to maximise the achievement of results?

Efficiency
• Were the planned activities considered and delivered with the end results in mind? How did this shape the process?
• What, if any, adjustments were made during the project to optimize the achievement of results?
• What, if any, considerations were made in terms of the most efficient way of delivering activities (choice of modality, expertise available, etc.)?
• To what extent were partnerships leveraged and/or enhanced to utilize additional strategic expertise?

Impact
• What, if any, high-level effects did the project cause (such as changes in norms or systems)?
• How did the project ensure that all the intended target groups, including the most disadvantaged and vulnerable, benefitted equally from the intervention?
• How transformative was the project? Did it create enduring changes in norms, including gender norms, and systems, whether intended or not?
• To what extent did the intervention lead to other changes, including “scalable” or “replicable” results?

Sustainability
• Given a similar context, could the identified results of the project be replicated?
• Which of the activities identified as contributing to the results of the project provided ongoing benefits for stakeholders?
• What evidence is available to indicate that the results of the project can be maintained by stakeholders?

Gender, Human Rights, Disability Inclusion, and Environmental Issues
• To what extent were issues of gender, human rights, disability inclusion, and the environment incorporated into the design, planning, implementation, and monitoring and evaluation practices of the project, as well as into the results achieved? To what extent did the project respond to and affect the rights, needs and interests of different stakeholders, including women, men, youth, persons with disabilities and other marginalized groups?

Evaluation Methodology

The evaluator is expected to identify the main results achieved through the project, as established by both the project team and key stakeholders, and to develop a Theory of Change (ToC) for these results. The causal links postulated in the ToC should then be used to theorise a plausible causal mechanism, which should be tested against all available evidence to evaluate the influence of ESCWA’s work on the results achieved through the project. The evaluator is expected to apply a mixed-method (qualitative and quantitative), inclusive and participatory approach, with adequate triangulation across methods, to arrive at credible, reliable, and unbiased findings. The evaluator will also ensure that all aspects of the evaluation are gender- and human rights-sensitive, with a special focus on the rights of persons with disabilities. In addition, the evaluator must comply with the UNEG Ethical Guidelines for Evaluation at all times during the conduct of the evaluation.

The above methodology is indicative. The evaluator is expected to build upon it and present a robust evaluation methodology in the Inception Report, including by addressing and refining the evaluation questions.

Management and Governance Arrangements

SPARK will oversee the management of the evaluation through one or more evaluation officers, who will act as the evaluation manager(s). This role will include recruiting the evaluator(s); serving as the main point of contact for the evaluator(s) and for internal and external stakeholders; recording the feedback of the project team and effectively integrating it into the evaluation exercise; monitoring the budget and the correct implementation of the workplan; and ensuring quality assurance, among other tasks. The support of the organization’s and the project’s senior management is key to the success of the evaluation, particularly during the recruitment of the evaluator(s), the collection of documents for the desk review, the identification of stakeholders, and the arrangement of interviews, focus groups or surveys.

Quality Assurance Mechanism

The evaluator will employ a quality assurance mechanism of her/his preference (either an internal or an external system can be used), which will provide quality checks throughout the evaluation process.
This quality assurance mechanism will be indicated in the Inception Report and in the Final Evaluation Report. In addition, ESCWA will review the evaluation report prior to finalization to ensure that it aligns with the UNEG Quality Checklist for Evaluation Reports, particularly with reference to the quality of recommendations.

Evaluation Ethics

The evaluation will be conducted in accordance with the principles outlined in the UNEG Ethical Guidelines for Evaluation, and the rights and confidentiality of information providers will be prioritized and safeguarded in line with those Guidelines.