Identifying the Current Landscape of Restoration Tools and Practices for TerraMatch

Foundational research to inform meaningful tools for people working on environmental restoration globally.

As the first UX Researcher at the World Resources Institute working on the TerraMatch platform, I conducted foundational research to understand how small-scale restoration organizations currently monitor and report on their project progress.

Project Timeline
8 weeks, June-August 2021

Total Involvement
June - Dec ‘21 

  • literature review
  • semi-structured interviews
  • scenario walk-throughs
  • usability audit


Why does restoration matter?

Our planet is losing billions of trees each year, accelerating us toward climate catastrophe. Such environmental devastation has a disparate impact on historically marginalized communities worldwide.

Luckily, there are thousands of on-the-ground organizations (project developers) around the world that are already running successful and responsible tree-growing projects (restoration), but haven't been able to tap into the explosive funder interest in supporting restoration projects.

That's where TerraMatch steps in...

TerraMatch [1, 2, 3] is a global, multilingual platform that matches restoration projects with funders, addressing the gap in access to large-scale ($75k-$500k) funding for restoration projects. When I joined the team, project developers could only use TerraMatch to find funders and report metrics for funded projects.

Before my team invested more money into building out landscape monitoring tools for restoration areas, they knew they had to address one glaring issue with TerraMatch's inception: TerraMatch was designed without any user input.

Project developer facing view of TerraMatch.

The team knew they needed to learn more about how project developers were using TerraMatch and whether investing in technical capacity to build out monitoring features was what users actually needed to conduct restoration projects well. I was tasked to lead these research efforts, which meant my first challenge was scoping our study.


Thus, I spent the first two weeks of my eight-week internship conducting a "literature" review of WRI's prior work in the restoration space, which included:

📹 watching multiple hour-long webinars

✉️ reading through email chains

📑 digging into research reports and presentations

📣 and compiling user complaints spread across Jotforms and emails

I was also given two one-hour tutorials on the TerraMatch platform. With all of that information, I conducted a light usability audit of the current web version of the TerraMatch website to highlight glaring issues. Through the tutorials, I learned that TerraMatch is built by an external software development agency, which meant that changes we wanted to make now wouldn't be seen by users for at least another year.

After synthesizing all of this information, I realized that the biggest impact I could have in improving TerraMatch was to prioritize research methods that would set up future researchers for success. Instead of moving forward with usability tests that would only inform the next cycle of interface changes, we opted for foundational research with long-term learnings and focused on the following research question:

How is restoration currently done by project developers on the ground?

To break down this question further, we asked these smaller research questions: 

  1. Tools: What tools are being used for reporting? Monitoring? Project management?
  2. Datasets: What datasets are being used for monitoring and evaluation, if at all?
  3. Metrics: What metrics are being monitored? Why?

We recruited current users of our platform across Latin America, limiting the funding and growing contexts in our scope to one continent. This is because we were conducting a qualitative study, which does not aim to be representative of all possible users, but rather to conduct deeper interviews with a smaller selection of interviewees to build a more nuanced understanding of how we can best meet users' needs.

Because we were unable to conduct interviews on the ground due to challenges such as limited bandwidth in remote locations across multiple countries, we had to be creative. We leveraged scenario walkthroughs, where we presented different scenarios probing monitoring, reporting, and project management features.

One example of the scenario walkthroughs we showed to research participants.


After taking many calls across different time zones, we were able to talk to seven different restoration organizations across Latin America, representing different levels of restoration experience and types of ecosystems restored. With my product team, we analyzed the data using affinity diagramming. We learned that:

  1. Everyone does restoration differently. How they organize themselves, how they preserve data (if they do at all), how data is collected on the ground, what they monitor and how, and what they report on — everything was different. This was a scary finding because it meant there was so much we didn't know.
  2. Some people use tools in ways they were not intended for. From using Google Earth to take screenshots of proposed and ongoing planting areas to sending photos of pen-and-paper field notes over WhatsApp, project developers showed us all the ways they adapted and remixed existing tools to meet their needs.
  3. Project developers want centralized tools to manage their restoration work. Whether for monitoring, reporting, databases, or project management, they recognize that their disjointed workflows keep them from making the most of their valuable data to learn from prior restoration projects, get more funding, and set up future restoration work for success from the start.


I presented my research findings to the TerraMatch team, the Monitoring and Evaluation Team, and eventually, our newly formed UXR Africa team.
  • My research showed that project developers care most about reporting tools, since reporting is time-consuming for resource-constrained organizations.
    • Jeff Bezos was really interested in our work and gave a 15-million-dollar grant, which led to the continuation of my work at WRI after my internship and my promotion to lead User Experience Researcher. I recruited and hired local African talent, including three user researchers and one product designer, to continue supporting this work.
  • The monitoring and evaluation team used my research process and study design to shape how they guide UXR for WRI's newest multi-million-dollar lab, the Land and Carbon Lab.
  • I spearheaded new organizational collaborations with the Global Forest Watch and Forest Watcher teams by showing how their tools could be incorporated into the TerraMatch product.


Framing the problem can be the most challenging part.

As the first UX Researcher in our 1000+ person company, I often felt like I had to have all of the answers on best UXR practices and where to look for relevant information.

I spent a lot of time throughout the internship making sure I really understood prior work on restoration through literature reviews and conversations with expert stakeholders. I was stunned by how often something new I learned about the contexts of restoration would reorient my understanding of what would best meet the needs of project developers. This made me much more comfortable with uncertainty and pushed me to ask for feedback early and often.

I really enjoyed defining so many aspects of the UXR function at WRI, since I could immediately see the impact I could have not only on TerraMatch, but also on how the broader forest teams approached building products. Setting the stage for developing research workflows and processes from the ground up will improve the team's ability to respond to user challenges long after I've left.

Having lunch with the TerraMatch team at Lake Merritt!

*To protect the safety and privacy of our interviewees working in politically dangerous contexts, organizations and individuals will not be disclosed in this case study. Data about interviewees are stored securely and can only be accessed by particular individuals on the team.