Better Onboarding Through Science


We recently launched an interactive tutorial for new users of Divshot to make learning the interface a quick, painless process. It's something we've been working on for quite some time, but perhaps the most important thing we realized along the way is that it wasn't going to be perfect out of the gate.

Tutorial systems are by their nature a bit brittle: you need users to complete the steps in a certain order, and representing the exact state needed for a given step can be tough. I wish I could say that we had a perfect solution for this, but we don't. What we do have, however, is lots and lots of data.

When users go through our tutorial, we measure everything: which chapters get started, which steps get completed, the exact point where a user exits the tutorial, even how long it takes to complete each step.
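We haven't shared our exact event schema, but a minimal sketch of that kind of instrumentation might look something like this (the event names, the `StepEvent` shape, and the `analytics.track` client are all placeholders rather than our actual code):

```typescript
// Minimal sketch of tutorial instrumentation. Event names, the StepEvent
// shape, and the analytics client are hypothetical stand-ins.

interface StepEvent {
  chapter: string;
  step: number;
}

// Stand-in analytics client; swap in whichever service you actually use.
const analytics = {
  track(event: string, properties: Record<string, unknown>): void {
    console.log(event, properties);
  },
};

// Remember when each step started so we can report how long it took.
const stepStartedAt = new Map<string, number>();

const keyFor = (e: StepEvent) => `${e.chapter}:${e.step}`;

export function onStepStarted(e: StepEvent): void {
  stepStartedAt.set(keyFor(e), Date.now());
  analytics.track("tutorial_step_started", { chapter: e.chapter, step: e.step });
}

export function onStepCompleted(e: StepEvent): void {
  const startedAt = stepStartedAt.get(keyFor(e));
  analytics.track("tutorial_step_completed", {
    chapter: e.chapter,
    step: e.step,
    durationMs: startedAt !== undefined ? Date.now() - startedAt : null,
  });
}

export function onTutorialExited(e: StepEvent): void {
  // Capture the exact chapter and step a user was on when they bailed out.
  analytics.track("tutorial_exited", { chapter: e.chapter, step: e.step });
}
```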

We launched tutorials mid-afternoon on a Friday to give us some light weekend data to evaluate before usage really picked up during the week. One of our tour chapters walks users through the basics of building an interface in Divshot. It's the longest and most complex of the bunch, but when we looked at the data, we saw that completion was well below what we'd hoped. Thanks to robust measurement, however, we weren't left guessing:

Step Completion Drop-off

See the problem? We noticed a steep drop-off in step completion after Step 7 of the tutorial. While some of the other steps show small drops, Step 8 seems to be a major sticking point.
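The analysis behind that chart is straightforward: count the distinct users who complete each step, then look at the relative drop between consecutive steps. Here's a rough sketch of that calculation under an assumed event shape (not our actual data model):

```typescript
// Rough sketch of the drop-off analysis. The CompletionEvent shape is
// an assumption for illustration, not our actual data model.

interface CompletionEvent {
  userId: string;
  step: number;
}

// Count distinct users who completed each step, so repeat completions
// by the same user don't inflate the numbers.
function stepCompletionCounts(events: CompletionEvent[]): Map<number, number> {
  const usersPerStep = new Map<number, Set<string>>();
  for (const e of events) {
    const users = usersPerStep.get(e.step) ?? new Set<string>();
    users.add(e.userId);
    usersPerStep.set(e.step, users);
  }
  const counts = new Map<number, number>();
  for (const [step, users] of usersPerStep) counts.set(step, users.size);
  return counts;
}

// Find the step with the largest relative drop in completions from the
// previous step (the "Step 8" moment in the chart above).
function largestDropOff(counts: Map<number, number>): { afterStep: number; dropRate: number } {
  const steps = [...counts.keys()].sort((a, b) => a - b);
  let worst = { afterStep: steps[0], dropRate: 0 };
  for (let i = 1; i < steps.length; i++) {
    const prev = counts.get(steps[i - 1])!;
    const curr = counts.get(steps[i])!;
    const dropRate = prev > 0 ? (prev - curr) / prev : 0;
    if (dropRate > worst.dropRate) worst = { afterStep: steps[i - 1], dropRate };
  }
  return worst;
}
```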

As it turns out, a combination of a small technical glitch and less-than-clear instructions in that step was leading our users down the wrong path. We deployed a small fix to address it and also rolled out a more reliable way to make sure no one gets "stuck" in the tutorial. Two days later, the drop-off chart looks like this:
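We won't go into the exact mechanics of that safeguard here, but one plausible approach (a sketch of the general idea, not necessarily what we shipped) is to watch for a step whose expected state never arrives and offer the user a way to skip ahead:

```typescript
// Sketch of a "stuck" safety valve: if a step's expected state isn't
// reached within a timeout, offer the user a way to skip ahead instead
// of leaving them stranded. The callbacks and timeouts are placeholders.

function watchStepForStall(
  stepIsComplete: () => boolean, // predicate checking the step's expected state
  offerSkip: () => void,         // e.g. reveal a "Skip this step" button
  timeoutMs = 60_000,
  pollMs = 1_000,
): () => void {
  const startedAt = Date.now();
  const timer = setInterval(() => {
    if (stepIsComplete()) {
      clearInterval(timer);
    } else if (Date.now() - startedAt > timeoutMs) {
      clearInterval(timer);
      offerSkip();
    }
  }, pollMs);
  // Return a cancel function for when the user advances by other means.
  return () => clearInterval(timer);
}
```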

Step Completion Drop-off After Fix

If we hadn't implemented robust measurement before we launched our onboarding tutorial, it might have taken days of back-and-forth with customers to identify the true source of the problem. Going forward we'll be able to do even more slicing and dicing, figuring out whether any of our chapters are too long or any of our steps too confusing.

Every application is an experiment, but oftentimes you don't know every hypothesis you'll want to test in advance. If you plan robust measurement into every feature you build, you'll have a much better chance of identifying potential improvements quickly, like we were able to this time.

Note: I know that nothing described in this article actually represents the scientific method; it's just basic statistical analysis. But I still like the title.

Top photo by FeatheredTar via Flickr.