Thoughts on App Success Measurement.

by jennifermshoop

When I was getting my feet wet with app development, I spent some time studying up on metrics for app success. I found that many of the conventional metrics, such as conversion rates, revenue (ARPU, or average revenue per user), usage (DAU and MAU, daily and monthly active users), and session length, either did not apply or would not really tell us whether our app was succeeding in helping our teens build their financial capability. For example, while we do expect a lot of touchpoints between the student and the app in order to track progress, session length is not necessarily a critical metric for success. I’m more interested in knowing whether a kid sets a goal and meets it, or successfully creates a LinkedIn account and adds five contacts. Both of these examples are among the challenges in our nine-week challenge curriculum for the pilot this spring, and neither requires much time in-app. (N.B.: As a reminder, the Moneythink app is an interactive social platform designed to engage youth around financial challenges that build financial awareness, skills, and habits. The challenges are issued, facilitated, and in some cases verified by Moneythink mentors, though engagement is largely driven by peer-to-peer interactions in the app. Youth earn points for completing challenges that they can ultimately “cash in” for real-world rewards.)
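For the technically inclined, here is a rough sketch, in Python, of how the conventional numbers (DAU/MAU) and a completion-style metric could both be pulled out of the same simple event log. The event shape and field names below are my own stand-ins for illustration, not the actual Moneythink schema.

```python
from datetime import date

# Hypothetical event log; the real Moneythink schema will differ.
# Each event: (user_id, day, event_type, challenge_id or None)
events = [
    ("ana",  date(2014, 3, 3), "challenge_started",   "mini_goal"),
    ("ana",  date(2014, 3, 5), "challenge_completed", "mini_goal"),
    ("deon", date(2014, 3, 3), "challenge_started",   "mini_goal"),
    ("deon", date(2014, 3, 4), "app_open",            None),
]

def daily_active_users(events, day):
    """DAU: distinct users with any event on the given day."""
    return len({user for user, d, _, _ in events if d == day})

def monthly_active_users(events, year, month):
    """MAU: distinct users with any event in the given month."""
    return len({user for user, d, _, _ in events
                if d.year == year and d.month == month})

def completion_rate(events, challenge_id):
    """Share of students who completed a challenge among those who started it."""
    started = {u for u, _, kind, c in events
               if kind == "challenge_started" and c == challenge_id}
    completed = {u for u, _, kind, c in events
                 if kind == "challenge_completed" and c == challenge_id}
    return len(completed & started) / len(started) if started else 0.0

print(daily_active_users(events, date(2014, 3, 3)))  # 2
print(monthly_active_users(events, 2014, 3))         # 2
print(completion_rate(events, "mini_goal"))          # 0.5
```

The point of the sketch is simply that the two kinds of numbers come from the same raw data; the difference is which question you ask of it.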

{A lot to digest…}

I was fortunate enough to speak with the incredibly sharp Mark Bruinooge, CEO of Tykoon, who shed some important light on success measurement. Tykoon is a mobile experience that “empowers kids and their families to develop stronger financial habits and values through real-life learning experiences.” Kids tick off chores they complete in order to earn IOUs from their parents for money, which they can ultimately redeem to purchase items they want. The idea is that the app serves as a mobile intervention of sorts: kids use the app to record their purchasing goals and then track their progress toward those goals, ideally spurring conversations with their parents about saving, spending, goal-setting, and beyond. Mark pointed out that the success of Tykoon centers on whether parents are able to have those offline conversations with their kids, something not immediately measurable by the technology. It occurs to me that Moneythink shares a similar quandary. We believe in the efficacy of a blended learning model, where kids “do things” using technology and then mentors review those “doings” in order to create a teachable moment in class, so much of our success relies on the effective communication of feedback in the classroom. And how do we quantify that?

{Tykoon’s child-facing mobile app design.}

At the same time, many of the challenges that students will complete have intrinsic behavior-measurement opportunities. For example, in one challenge, as I referenced above, students set a mini-goal for themselves: they set the goal and then snap a photo when they’ve achieved it. These can be as small as “I want to save $5 this week. I’ll do it by avoiding Starbucks tomorrow.” The end result is not only more money in the student’s pocket but an inclination toward more mindful spending and evidence of self-discipline. (One of the most startling aspects of the prototype was seeing how thoughtlessly our teens use their spending money. Most of them report that they don’t know where their money goes, that it seems to disappear, etc. When we mined the SnapTrack news feed we set up in our prototype, we saw exactly where it went: the vending machine, fast food restaurants, and the occasional discount clothing store.)
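In data terms, that intrinsic measurement could look something like the sketch below: a mini-goal amounts to a “goal set” record plus, if the student follows through, a photo timestamp. The field names here are hypothetical, meant only to illustrate the shape of the thing, not our actual data model.

```python
from datetime import date

# Hypothetical goal records; field names are stand-ins for illustration.
goals = [
    {"student": "maya", "set_on": date(2014, 3, 10), "target": 5.00,
     "photo_on": date(2014, 3, 15)},   # goal met and verified with a photo
    {"student": "luis", "set_on": date(2014, 3, 10), "target": 10.00,
     "photo_on": None},                # goal set but never verified
]

def goal_outcomes(goals):
    """Report, per goal, whether it was met and how many days it took."""
    for g in goals:
        if g["photo_on"] is None:
            yield g["student"], "not yet verified", None
        else:
            yield g["student"], "met", (g["photo_on"] - g["set_on"]).days

for student, status, days in goal_outcomes(goals):
    print(student, status, days)
# maya met 5
# luis not yet verified None
```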

At any rate, I’m landing in a place where I’m realizing that while the holy grail is having students complete all of the challenges they are issued, there are a lot of conditional or proxy metrics that will help us figure out the ideal conditions for challenge completion. Once we’ve isolated those, we can work on tweaking the design (on both the technology and content fronts) to ensure even higher rates of success post-pilot.

One complicating factor in the short term is that we are tracking success and engagement differently for our two constituencies: mentors and mentees. For mentors, our hypothesis is that the more mentors engage with the app, the more likely students will be to complete their challenges. (This was largely informed by our prototype test. At first, we were entirely hands off; we discovered that quick, small-scale interventions along the lines of “Jeremiah, why don’t you share your #savings moment with the group?” truly worked and spurred increased engagement.) So, in the pilot, we want to learn whether mentors are verifying challenge completion promptly, and how many log-ins a week it takes to do this effectively. We want to match the number of “cheers” and “nudges” (whether “liking” mentee posts or leaving a comment) to the relative engagement of their mentees. If those numbers are low but student engagement is high, we want to learn how much the in-class experience affects this, if at all. And if mentors are disinclined to use the app, we want to know why and what would help them: texts? push notifications? emails?
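To make the mentor-side questions a bit more concrete, here is a small sketch of how promptness of verification, log-ins per week, and cheers/nudges could be tallied from pilot logs. Again, the record shapes and names are placeholders I invented for illustration, not how our data is actually structured.

```python
from datetime import datetime
from statistics import median

# Hypothetical pilot logs; record shapes and names are placeholders.
verifications = [  # (mentor, submitted_at, verified_at)
    ("mr_cole", datetime(2014, 3, 3, 16, 0), datetime(2014, 3, 3, 20, 30)),
    ("mr_cole", datetime(2014, 3, 5, 15, 0), datetime(2014, 3, 7, 9, 0)),
]
mentor_actions = [  # (mentor, action): "login", "like", or "comment"
    ("mr_cole", "login"), ("mr_cole", "like"),
    ("mr_cole", "comment"), ("mr_cole", "login"),
]

def median_hours_to_verify(verifications, mentor):
    """How promptly a mentor verifies submitted challenges, in hours."""
    lags = [(done - submitted).total_seconds() / 3600
            for m, submitted, done in verifications if m == mentor]
    return median(lags) if lags else None

def weekly_counts(actions, mentor, weeks_in_pilot):
    """Average log-ins and cheers/nudges (likes + comments) per week."""
    logins = sum(1 for m, a in actions if m == mentor and a == "login")
    cheers = sum(1 for m, a in actions
                 if m == mentor and a in ("like", "comment"))
    return logins / weeks_in_pilot, cheers / weeks_in_pilot

print(median_hours_to_verify(verifications, "mr_cole"))            # 23.25
print(weekly_counts(mentor_actions, "mr_cole", weeks_in_pilot=1))  # (2.0, 2.0)
```

Numbers like these could then be lined up against each mentor’s class-level challenge-completion rate to test the hypothesis that more mentor engagement means more student follow-through.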

{Sneak peek of the student-facing app design for our Moneythink Mobile app.}

On the mentee side, we want to know how many challenges are begun versus completed, how many challenge-progress posts are submitted, and how many likes and comments students are leaving. If we see substantial attrition on certain challenges or on the challenge curriculum overall, we want to focus on those points and determine how to increase retention through challenge design.
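A begun-versus-completed funnel per challenge would surface exactly where that attrition happens. Here is a rough illustration, with made-up records and stage names:

```python
from collections import Counter

# Hypothetical progress records: (student, challenge, stage).
# Stage names ("started", "completed") are assumptions for illustration.
records = [
    ("ana",  "mini_goal", "started"), ("ana",  "mini_goal", "completed"),
    ("deon", "mini_goal", "started"),
    ("maya", "linkedin",  "started"), ("maya", "linkedin",  "completed"),
]

def challenge_funnel(records):
    """Begun vs. completed counts per challenge, to spot where attrition happens."""
    begun, done = Counter(), Counter()
    for _, challenge, stage in records:
        if stage == "started":
            begun[challenge] += 1
        elif stage == "completed":
            done[challenge] += 1
    return {c: (begun[c], done[c], done[c] / begun[c]) for c in begun}

for challenge, (b, d, rate) in challenge_funnel(records).items():
    print(f"{challenge}: {b} begun, {d} completed ({rate:.0%})")
# mini_goal: 2 begun, 1 completed (50%)
# linkedin: 1 begun, 1 completed (100%)
```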

On both sides, we want to learn which pages are most frequently visited. Are kids skipping straight to the challenge rooms? Should the app open there instead of on their current home pages?
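Measuring that is roughly a matter of counting screen views, along these lines (the screen names are placeholders, not our actual page structure):

```python
from collections import Counter

# Hypothetical screen-view log; screen names are placeholders.
views = ["home", "challenge_room", "challenge_room", "feed",
         "challenge_room", "home", "rewards"]

# Most-visited screens first; if "challenge_room" dominates,
# that's a hint the app could open there instead of the home page.
for screen, count in Counter(views).most_common():
    print(screen, count)
```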

In short, there is a lot to learn from the pilot, and I’m doing my best to isolate the most important features and to sort out which pieces of data to collect now versus what to worry about later. As Eric Dynowski, one of Moneythink’s incredible advisors and mentors and CEO of the Turing Group, told me on the phone this morning: “Better to have way too much data and to measure way too much to sift through than the alternative.” A lot to learn here. I can’t wait to share what I find in a few months, once I’m done with my data bath!
