How DojoMojo's Engineering Team Learns Through OKR Mistakes and Shortcomings

OKRs Apr 27, 2018

Objectives and Key Results are a great way to track organizational progress. Applying the concept to an engineering team has been an experimental, but ultimately successful, process for our company, a B2B SaaS platform for marketers called DojoMojo.

To start: objectives are what you want to do on a grander scale, and should concern a specific aspect of your company's mission statement. Our current mission statement is the following:

To create a marketplace that empowers brands of all sizes to connect, collaborate and build powerful partnerships. Brands use DojoMojo to efficiently build audience and scale their business with data-driven insights and measurable results.

This quarter one of our companywide objectives is to:

Establish DojoMojo as a marketplace for brands on our platform by expanding our offering.

As the person responsible for running the implementation branch of the company, I take this to mean: build stuff. And if we want to be a company that builds stuff, we have to perfect our building process.

So we'll take that objective and ask ourselves what needs to happen or change to reach that goal. These would be our key results. We want to take something that is an indicator of our performance and make a goal for what we do throughout the quarter.

Last quarter we made some goals that brought me to the following learnings:

Don’t conflict with product.

We don't decide what we build. That’s for our Product team to figure out. Last quarter we chose some initiatives that improved our technical product but took up energy needed for product goals. For example, we chose an initiative to incorporate Docker containers into our deployment flow. While this was a worthwhile endeavor when we chose to execute it, we had to balance it against our product roadmap goals. And it's entirely possible that not completing this task says nothing about the growth of the engineering organization; it was simply a reaction to business priorities.

This quarter we are choosing key results that target what is holding us back from being a better product development organization. For example, I've noticed that pull requests will sit waiting for a requested review for as long as 7, and sometimes even 10, days. This is problematic for several reasons:

  • Any time something sits waiting, there is a blocker in the process. In this case the blocker is the awaited reviewer. While some flexibility is crucial for the reviewer's time management, we want to balance that against the real urgency of keeping their fellow coder, and the project as a whole, moving forward.
  • The more time passes after a developer writes code, the less context the developer has for what they were thinking when they wrote it. This goes beyond documentation, down to the micro level of why each decision was made while writing the code.

So this quarter we are tracking these metrics and setting an improvement goal for ourselves to be better, with the hopes that we can contribute to our greater objectives as a company.

P.S.: this metric is not easily available on GitHub, so we had to build some minor infrastructure to track it, which I may post about in the future.
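As a rough illustration of the kind of tracking involved, here is a minimal sketch (not our actual infrastructure) of computing how long a pull request waited for its first review, given the ISO-8601 timestamps GitHub's API exposes on review requests and reviews. The timestamps and the aggregation choice below are hypothetical:

```python
from datetime import datetime, timezone

def parse_ts(ts):
    # GitHub timestamps look like "2018-04-27T14:03:00Z" (UTC).
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def review_wait_days(requested_at, first_review_at):
    """Days a pull request waited between the review request and the first review."""
    delta = parse_ts(first_review_at) - parse_ts(requested_at)
    return delta.total_seconds() / 86400

# Hypothetical data: one PR waited a week, another only a day.
waits = [
    review_wait_days("2018-04-01T09:00:00Z", "2018-04-08T09:00:00Z"),
    review_wait_days("2018-04-02T12:00:00Z", "2018-04-03T12:00:00Z"),
]
worst = max(waits)  # a KPI could track this worst case, or the mean, per week
```

The KPI then becomes a single number per week (worst case or average), which is easy to chart and set an improvement goal against.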

Make sure the Key Performance Indicator (KPI) indicates true performance.

For one of our key results we chose open Sentry issues, with the goal of lowering the total number by the end of the quarter. By actively reducing the number of issues, we hoped to show that our product was operating more reliably. The problem is that automated issue creation can balloon with errors that don't indicate a real user problem, and eliminating such an issue has no real effect on our customer experience. Tracking this week over week felt tedious and disconnected from what we were really trying to do.

Sentry is a truly valuable tool, and it really does alert us to bad user experience. So this quarter we're reusing this tool's metrics with a new twist. Instead of tracking raw issue count, we're tracking aggregate event occurrences per user per day. There are several aspects of this KPI that we believe make it a better true indicator of our customers' experience.

  • Even if many irrelevant issues pop up and are never resolved, each occurrence of an issue likely does indicate a user experiencing something.
  • Per user per day is very important because it insulates the number from company growth. If we grow by 20% this quarter, we'd expect raw error occurrences to increase proportionately, and letting our number rise with the user base would water down the importance of the figure.
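To make the normalization concrete, here is a tiny sketch with made-up numbers showing why the per-user-per-day form stays flat under growth while the raw event count climbs:

```python
def error_events_per_user_day(total_events, active_users, days):
    """Aggregate error occurrences normalized by user count and period length."""
    return total_events / (active_users * days)

# Hypothetical weeks: 20% user growth raises raw events, but the
# normalized KPI is unchanged, so reliability reads as flat, not worse.
week1 = error_events_per_user_day(total_events=70_000, active_users=1_000, days=7)  # 10.0
week2 = error_events_per_user_day(total_events=84_000, active_users=1_200, days=7)  # 10.0
```

A real drop in this number means users are genuinely hitting fewer errors, regardless of how fast the platform is growing.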


OKRs follow different rules when applied to the engineering team of an organization, but I believe they can be a powerful way to raise the bar: choose what's important for your team, then track your progress toward those goals.