The Evolution of Judicial Evaluation in Ethiopia - Part 2: Challenges of Piloting the Innovation

This is the second in a series of three posts we are featuring on the work of the Federal Supreme Court in Ethiopia. In this part you will learn about the challenges the Court encountered while piloting the new judicial performance evaluation system, and the innovative strategies employed to overcome some of these challenges. To catch up, we encourage you to read the first part of the interview, where you will find a brief history of court reform in Ethiopia and a detailed discussion of the new judicial performance evaluation system.

The Federal Supreme Court has been one of the most engaged Ethiopian government institutions in the process of reform to improve the efficiency and effectiveness of its services, as well as their responsiveness to the needs of the public. This series describes a new system of judicial evaluation that the Supreme Court has been piloting at the federal level for the last few years and has recently adopted as formal evaluation policy. The series explores the nature of the new evaluation system, the challenges of its implementation, and its prospects for sustainability.

We have structured the series as a set of excerpts from conversations we had in December 2015 with a Director in the Supreme Court whose responsibility was to pilot and lead the implementation of the new judicial evaluation system. The excerpts are lightly edited for clarity. Here’s what he had to say about the challenges of piloting and refining the innovation: 

Q: How did the transition go from the old to the new system of judicial evaluation?
A: We transitioned by introducing a pilot involving only Supreme Court judges and courtrooms, which took close to five years. We chose the Supreme Court because it involved fewer judges, and we thought that if they could be open to the innovation, then other judges would follow. It was not that smooth. We had to make many adjustments and improvements throughout these years.

Q: We can imagine! So what was the first major challenge?
A: The first challenge was structural. Collecting the data by itself consumes a great deal of manpower, and analyzing it requires still more. The existing institutional structure of the Judicial Administration Council did not take into account the amount of work to be done. This was the main reason we got stuck in the piloting phase.

Q: What solutions did you try for this challenge?
A: First, the Judicial Administration Council cooperated directly with the Federal Supreme Court to add manpower. Second, we automated some of the data collection. For example, as you can now see on the website of the Federal Supreme Court, we recently launched two questionnaires that solicit feedback from the public and from legal professionals over the website. We are also in the process of adding touchscreen computers throughout the Federal Benches to collect people's opinions more easily and to help us organize the data. Third, and finally, we restructured our Secretariat in the Judicial Administration Council and secured more manpower.

Q: What was the second major challenge?
A: The second challenge was handling the judges themselves. This was sometimes an obstacle. At first it was almost impossible to evaluate judges; they have been respected figures since antiquity. Many judges were known to be very resistant to the idea of performance evaluation at the beginning.

Q: What solutions did you try for this challenge?
A: We implemented both formal and informal solutions. First, we tried to encourage judges to learn ways of accountability by using the legal and disciplinary process. Second, we took advantage of the strong commitment of the leadership of the judiciary and of other branches of government to implementing a new evaluation system. We informed the judicial leadership about judges who had a hard time embracing this new practice, which in turn trickled the message down to the rest of the judiciary. Third, over time we have had to recruit new judges to replace old ones, and we find that the newer cohort is more open to evaluation.

One reason the judges have been hesitant about the new evaluation system is the controversial nature of the "rendering judgment" indicator, which includes asking legal professionals to evaluate the judge's level of knowledge of the constitution. It is still a matter of debate whether we should ask lawyers to evaluate a judge based on the quality of the judgment and the judge's legal knowledge. One solution we are contemplating to manage this conflict is eliminating courtroom lawyers' feedback on these particular legal knowledge indicators. Taking the feedback of lawyers into account may not be necessary if lower court decisions are already reviewable at the appellate level. The opinions of peer judges at other courts of the same level, as well as of appellate judges, could provide sufficient evaluation and could also neutralize the sensitivity.


We welcome your comments; please use the comment section below to share your ideas and reflections. What do you think of this innovation? What questions does it raise for you? Have you had to conduct a pilot as a prelude to full implementation? What challenges did you face? How did you overcome them?