POLICY


Monitoring and evaluation

Accounting for our work



Purpose, definition, and scope

Monitoring and evaluation, or M&E, describes the process by which an organization measures its performance with the aim of improving on that basis. It is made up of four main activities:
  • Collecting data on the organization’s performance
  • Drawing lessons from the data about areas to improve
  • Sharing results to ensure consistency, transparency, and progress
  • Making changes to operations through an iterative process, aiming for ever better results

Given these four activities, M&E is also known as MEAL, which stands for monitoring, evaluation, accountability, and learning. The primary purpose of monitoring and evaluation is indeed to increase accountability and to learn from mistakes. M&E is a pointless use of resources unless it leads to genuine, tangible, incremental, and well-documented progress.

For that reason, M&E cannot realistically be applied to every aspect of an organization’s work at all times. To be effective, an M&E process must be well-thought-through: tailored to particular goals, needs, and constraints, and adequately resourced. Synaps therefore restricts the scope of M&E to moments and areas where it is most useful in achieving our mission.

As an innovative research organization, Synaps runs projects that vary considerably in size and nature. Some are complex, pursue highly specific goals, draw heavily on resources, and demand close managerial oversight; others are small and experimental. We therefore treat M&E as a gradient ranging from formal, data-heavy frameworks to more qualitative assessments, depending on which approach presents the best tradeoff in terms of resources and results. As a rule of thumb, we view a fully fledged, quantitative framework as best suited to large-scale implementation activities. By contrast, in cases where bureaucracy would stifle our responsiveness, creativity, and performance, we apply a lighter, narrative approach.

Synaps’ M&E efforts ultimately serve fundamental, overarching goals that generally lend themselves to narrative reporting more than quantitative measures. These organization-wide concerns are:
  • Our consistency with our values, strategy, and other policies
  • Our efficient, responsible, and accountable use of resources
  • Our staff’s progression in their skills and careers
  • Our tangible effects on people and organizations in our environment
  • Our capacity for self-criticism and improvement
  • Our ability to sustain ourselves in a crisis

This policy’s purpose is to set clear guidelines for M&E in a way that serves this variety of concerns, constraints, and ambitions.

Principles and standards

Synaps holds itself to the standards laid out below. These standards resonate and cohere with our own organizational values, namely our commitment to tenacity, rigor, candor, adaptability, community, and efficiency.

These standards extend to our funding partners and implementing partners, whose own performance we assess as part of our M&E efforts. Indeed, M&E must encompass all parties involved in a project. It applies equally to contractual processes, budget structure, communication between partners, and reporting requirements, all of which must be monitored and evaluated according to the same standards.

Monitoring standards

Synaps embraces the monitoring standards laid out by the United Nations Evaluation Group: impartiality, utility, transparency, credibility, disclosure, and participation.

In practice, we believe that a sound monitoring system involves:
  • Challenging any potential biases in the system’s design
  • Openly discussing the methodology with all involved parties as well as consulting the broader communities implicated in a project, whenever possible
  • Collecting data professionally, through a continuous and consistent process, and in ways that support clear, practical outcomes
  • Making results available to the public, one way or another

Any monitoring system will be imperfect, given how difficult it is to accurately measure the effects of a project on its environment. Designing such a system therefore invites a discussion of its limitations and flaws.

Evaluation standards

Synaps adopts the evaluation standards set by the Organization for Economic Cooperation and Development: relevance, coherence, effectiveness, efficiency, impact, and sustainability.

In practice, these principles translate into questions that must be answered as part of the evaluation process, notably:
  • Is the project relevant to all involved, including beneficiaries, Synaps, its funding partner and, if public money is at stake, taxpayers? In what ways?
  • Is the project aligned with other interventions occurring in the same area or field? Does it support them, undercut them, or exist in a parallel universe?
  • To what extent are the project’s effects encouraging, unexpected, underwhelming, or uneven?
  • Have available resources been used optimally? If the project ran over budget, why? Was the budget adequately designed? What explains any poor estimates or delays?
  • Did the project have an impact that went beyond its scope and objectives? For instance, did it produce useful data, influence other interventions, or bring about negative side-effects?
  • Are the beneficial effects likely to endure? If not, why? How could obstacles be overcome? What should future projects do differently?

It is essential to note that these questions are qualitative in nature. A project may succeed in quantitative terms yet fail to provide good answers. Conversely, it may fail in quantitative terms, while providing benefits best described narratively. The evaluation process must therefore analyze and challenge numbers, rather than simply report them.

Accountability and learning

Unlike monitoring and evaluation, the accountability and learning components of M&E do not enjoy broadly accepted sectoral standards. In fact, these aspects of M&E often remain vaguely defined, although they are arguably the whole point of the exercise.

Synaps understands accountability as a multifaceted principle. We believe that we must be accountable:
  • Locally, to the communities we work in, as well as to our peers
  • Internally, to all our staff, who are encouraged to hold us to our commitments
  • Contractually, to both our funding and our implementing partners
  • Publicly, to our broader audiences, which are entitled to question our results
  • Environmentally, in line with our social responsibility policy

We constantly strive for accountability, seeing it as a test that the organization must pass over and over again. Failing this test is inevitable at times, but any shortfalls must be acknowledged, documented, and acted upon in ways that feed into the learning component of M&E and thus improve accountability in the future.

Synaps approaches learning as an equally multifaceted, open-ended process. As we see it, we must learn about:
  • Failures, notably our own, which are the most fertile source of learning
  • Successes, especially those of others, instead of reinventing the wheel
  • Communities we work with, which have so much to teach us
  • Partners we collaborate with, whose work is full of lessons for us
  • Topics we take up, which we can never know enough about
  • Skills required in our work, which we can always improve
  • Learning itself, a skill that is central to all others

As part of our M&E efforts, learning must always be documented, applied, and transferable (to other staff or peer organizations).

Indicators and targets

In an M&E framework, indicators are what you measure to determine whether your targets are being met. An indicator is therefore something you can count. To take a simple example, you can measure the number of people who participate in a project. Using the right methodology, you can also measure degrees of participant satisfaction. A target is the number or percentage attached to an indicator that defines success. Taking the same two examples, you may want to involve 100 people in your project, and ensure that 80% of them can be considered satisfied with it.
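
As a minimal illustration only, the sketch below reuses the hypothetical example above (100 participants, 80% satisfaction); the figures and names are placeholders, not drawn from any real Synaps project.

```python
# Minimal sketch: checking two hypothetical indicators against their targets,
# reusing the example above (100 participants, 80% satisfaction).

def target_met(measured_value, target):
    """A target is met when the measured value reaches or exceeds it."""
    return measured_value >= target

# Hypothetical data collected during the project.
participants = 112                 # people who took part
satisfied_participants = 81        # of whom reported being satisfied

# Targets set at the design stage.
participation_target = 100         # involve at least 100 people
satisfaction_target = 0.80         # at least 80% of participants satisfied

print("Participation target met:", target_met(participants, participation_target))
print("Satisfaction target met:", target_met(satisfied_participants / participants, satisfaction_target))
```

In this hypothetical case, the participation target is met while the satisfaction target is not; it is precisely that second result the evaluation process must then interrogate rather than bury.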

In line with managerial best practices, Synaps aims, whenever possible, for SMART indicators, meaning indicators that are specific, measurable, achievable, relevant, and time-bound.

However, Synaps also recognizes that SMART indicators can be difficult to design, if not deceptive. Project managers run a risk of assessing what is easiest to quantify rather than what is most useful or illuminating. All indicators should therefore be understood as “indicating” a possible result, which must always be questioned and verified through empirical means.

Some goals do not lend themselves to numeric indicators at all. Several of Synaps’ overarching goals, listed above, fall into that category, e.g. “consistency with our values” or “capacity for self-criticism.” Even the “efficient use of resources” is not something you can grasp through numbers alone: diligently spending every penny budgeted for a poorly designed project is just another way of wasting money.

Logframes

When planning a project, Synaps recognizes the value of using a logical framework or “logframe” (also known as a results framework). Building a logframe involves breaking a project down into a hierarchy of objectives, outcomes, outputs, and activities, all of which reflect an overarching “theory of change”. It thus helps identify all the components of a project, think through the details, and ensure everything fits into a coherent structure. A logframe supports M&E by incorporating relevant indicators, baselines, and targets.
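
By way of illustration only, the hierarchy a logframe captures can be sketched as nested records, each level carrying its own indicators, baselines, and targets. The structure and placeholder content below are an assumption for the sake of the example, not a Synaps template.

```python
# Sketch of a logframe as a nested hierarchy: objective -> outcomes -> outputs
# -> activities. All descriptions, baselines, and targets are placeholders.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    baseline: float
    target: float

@dataclass
class Node:
    description: str
    indicators: list = field(default_factory=list)
    children: list = field(default_factory=list)  # lower levels of the hierarchy

logframe = Node(
    description="Objective: improve access to reliable information",
    indicators=[Indicator("Share of surveyed residents citing a trusted source", 0.30, 0.60)],
    children=[
        Node(
            description="Outcome: local researchers publish regular reports",
            indicators=[Indicator("Reports published per year", 0, 4)],
            children=[
                Node("Output: researchers trained",
                     [Indicator("Researchers completing a full training cycle", 0, 12)]),
                Node("Activity: run training workshops",
                     [Indicator("Workshops held", 0, 6)]),
            ],
        )
    ],
)
```

Laying the hierarchy out this way makes it easier to check that every activity feeds an output, every output an outcome, and every outcome the overarching objective.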

However, a logframe is primarily a planning tool. It is designed to detect flaws in a project’s design and to make adjustments accordingly. It does not account for the negative outcomes of a project, any more than for its unintended positive effects. It is no substitute for an effective M&E strategy, nor should it be allowed to stifle a responsive, ongoing M&E process. An M&E framework, which is designed to capture outcomes, may depart from the logframe on the basis of evidence collected over the course of a project.

Building an M&E framework

Developing an M&E framework is a considerable investment. It will bring equally significant returns, as long as the data collected is pertinent, the collection process is consistent, the evaluation is honest, and the lessons learned are truly put into practice. Making the most of this deeply reflexive and self-critical exercise therefore starts with careful design.

Design

The design process involves eight steps, in the following order:
  • 1. Set clear objectives. What do you want to evaluate and why is it important? Answers must be written in plain English, avoiding acronyms and jargon that would be lost on an external reader. Objectives must make immediate sense to colleagues as well as to members of the communities affected by our work. Such people must be involved in setting these objectives whenever possible.
  • 2. Define indicators and targets, per the principles and standards set above. What will you measure? What do you hope to see in the results?
  • 3. Design a data collection mechanism. How will you keep track of your indicators and targets? Data may be quantitative or qualitative. Ideally, it should be both. Either way, data must be collected continuously and consistently to be useful. That is why you must put in place the right mechanism to ensure, throughout the project, that you are gathering enough data of sufficient quality (a minimal sketch follows this list).
  • 4. Establish a baseline. The baseline describes the state of things at the start of the project. This provides something to look back at later: a point of comparison with what changes as a result of a given activity or intervention.
  • 5. Allocate resources. Even when an M&E framework doesn’t require dedicated staff, it does imply that various colleagues reserve time for such activities as data collection. That must be accounted for in the budget.
  • 6. Assign roles and responsibilities. Data collection will only occur if it is mandatory and supervised. Who will be doing what?
  • 7. Train staff accordingly. Is everything clear for the people involved? Do they need to hear more detailed explanations or acquire specific skills?
  • 8. Formalize a calendar. When to do what? The key to success is setting deadlines and reminders for all parts of the process: when to collect and archive the data, verify its quality, analyze the results, discuss the findings, and propose changes informed by them.
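
To make steps 3 and 4 concrete, the sketch below shows the kind of lightweight record-keeping that keeps collection continuous and consistent; the file name and field names are hypothetical, not a prescribed Synaps format, and the baseline is simply the first set of records logged before activities start.

```python
# Minimal sketch of a shared data collection log kept throughout a project.
# The file name and field names are illustrative, not a prescribed format.

import csv
from datetime import date

FIELDNAMES = ["date", "indicator", "value", "collected_by", "notes"]

def append_record(path, indicator, value, collected_by, notes=""):
    """Append one observation, so that collection stays continuous and consistent."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        if f.tell() == 0:  # new or empty file: write the header once
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "indicator": indicator,
            "value": value,
            "collected_by": collected_by,
            "notes": notes,
        })

# The baseline (step 4) is simply the first records, logged before activities begin:
append_record("me_log.csv", "participants_trained", 0, "project manager", "baseline")
```

Reminders set under step 8 then determine when this log is verified, analyzed, and discussed.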

Implementation

Once the framework is in place, it is implemented as an iterative cycle. This cycle is made up of five activities, which run continually throughout the course of a project; skipping any of these steps will make the whole effort futile.
  • 1. Collect and archive the data. Collection and archiving is a continuous process. Any interruption will undermine the whole framework.
  • 2. Verify the data quality, regularly. This is an opportunity to make sure that the data collection mechanism is functioning as planned.
  • 3. Analyze the data, at set intervals. Corrections must take place while operations are ongoing, and not solely in hindsight.
  • 4. Communicate the findings and discuss them with the concerned parties. M&E is all about transparency, without which there can be no progress.
  • 5. Propose changes incrementally. The effects of these changes, for better or worse, should appear in subsequent M&E cycles.

Self-appraisal

An M&E framework that only logs good news must be treated with suspicion. The point is not to demonstrate infallible success but, on the contrary, to show that flaws have been detected and acted upon. The framework’s ability to do so must itself be evaluated in hindsight. Questions that should be asked at this stage include:
  • How would the community we worked with respond to our assessment?
  • Which mistakes have been most useful in rethinking our activities?
  • How did our use of resources for M&E pay off, exactly?
  • How will we change our M&E approach in the future?

Organization-wide measures

As stated above, our organization-wide objectives are particularly difficult to translate into SMART indicators and targets. Here, however, is our attempt to do so.

To remain consistent with our values, strategy, and policies:
  • We document any breaches in our biannual reports (100% of the time)
  • We align our biannual goals with our five-year strategy (100% of the time)
  • We systematically implement our existing policies (100% of the time)

To make efficient, responsible, and accountable use of resources:
  • We reject projects that contradict our values or policies (100% of the time)
  • We demonstrate progress from one audit to the next (100% of the time)
  • We review our expenditure in order to reduce waste (once a year)
  • We cut back on procedures that become unnecessary (once a year)

To ensure that staff progress in their skills and careers:
  • All staff review, assess, and adapt their learning plans (once a year)
  • All staff add to their responsibilities (once every two years)

To claim tangible effects on people and organizations in our environment:
  • Peer organizations seek our help (no less than the previous year)
  • We conduct joint projects with peer organizations (no less than the previous year)
  • Peers praise our work in public or private (no less than the previous year)

To develop our capacity for self-criticism and improvement:
  • Our biannual reports include candid self-criticism (at least 20% of the report)
  • Our director and managers are evaluated by staff (once a year)
  • We demonstrably raise problems with our partners (100% of the time)
  • Complaints are dealt with efficiently (100% of the time)

To remain able to sustain ourselves in a crisis:
  • We take measures to anticipate foreseeable threats (100% of the time)
  • Crises are discussed transparently with our staff (100% of the time)

In support of this organizational framework, Synaps engages in a variety of feedback mechanisms to collect and archive the data we use to evaluate our performance.

Cyclical reviews

  • Five-year plan. This report lays out our long-term objectives, which we review every six months. We track completion according to a binary: ongoing or achieved.
  • Biannual reports. Within the context of the five-year plan, these interim reports describe our progress, problems, and prospects. They rely on input from managers. They enable us to document our failures transparently, set new short-term objectives, and assess each semester against the previous one.
  • Financial reports. We discuss our financial data at least once a year, when making estimates for the year to come. We also draft specific financial reports for other purposes, such as board meetings, future planning, and crisis management. These are opportunities to review our performance and business model, which feed back into our five-year plan and biannual reports.
  • Board meetings. These occur once or twice a year, with an agenda that depends on the aspects of our operations that invite more scrutiny or external input. Discussions are attended by a staff representative and the outcomes are made available to all colleagues.

External feedback

  • Complaints. We maintain a dedicated email account (complain@synaps.world) for negative feedback in relation to harassment or more general complaints. We also use this channel ourselves for documenting and reporting complaints received orally.
  • Client interviews. To update our understanding of how others see us from the outside, we interview current and former clients, or commission a consultant to do so.
  • Exit interviews. When long-serving staff leave Synaps, we conduct extensive debriefings to make the most of their deep understanding of the organization and their complete freedom of speech.
  • Audience analysis. We review website traffic data, social media metrics, and newsletter statistics to draw lessons regarding our ability to reach, satisfy, and grow our audience. Such analysis is shared among staff whenever trends suggest an inflection point or an opportunity.
  • Event assessments. Whenever possible, we ask participants in our training sessions and other events to help us improve, through short questionnaires.
  • Anecdotal feedback. Much of the feedback available to us comes in passing, in meetings or on social media feeds. Such anecdotes are shared with our team as often as possible, to inform our understanding of how we come across.

Staff feedback

  • Staff evaluations. Staff discuss their progress, problems, and prospects with their managers in the same spirit in which Synaps tackles these topics as an organization.
  • Anonymous staff surveys. We occasionally ask all staff to fill out a detailed questionnaire on the state of the organization as they perceive it, to assess the mood among us, troubleshoot crises, and identify areas for consolidation.
  • Post mortems. Failures can be fertile ground for learning. We study our own mistakes in depth, to gain as much as we can from them.

Contributions to M&E

These feedback mechanisms usually serve several objectives simultaneously, and contribute to different M&E activities. Some log useful data for us to analyze, while others play an active role in increasing our accountability and capacity for learning. These traits are captured in the table below.

Measure                                                  M E A L
Anecdotal feedback (shared with staff)                   M
Audience analysis (shared with staff)                    M E
Client exit interviews (shared with managers)            M E
Staff exit interviews (shared with managers)             M E
Staff evaluations (shared with the concerned)            M E
Manager evaluations (shared with staff)                  M E A
Anonymous staff surveys (shared with staff)              M E A
Complaints (logged by email, discussed in committee)     M E A L
Event assessments (shared with the concerned)            M E A L
Financial reports (shared with managers)                   E A L
Biannual reports (shared with staff)                       E A L
Board meetings (attended by staff representative)          E A L
Post mortems (shared with staff)                           E A L
Five-year strategy (shared with staff)                       A L




Illustration credits: Wellcome Collection, A selection of glass eyes, licensed under Creative Commons BY 4.0; Cindy Grundsten, Old glasses, via Needpix, public domain.