Bug 1133774 Opened 9 years ago Closed 9 years ago

[research] figure out how to track performance of new thank you page

Categories

(Input Graveyard :: Submission, defect, P2)


Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: willkg, Assigned: willkg)

References

Details

(Whiteboard: u=user c=feedback p=2 s=input.2015q2)

We're modifying the thank you page to show search results from SUMO that might be relevant to a user's feedback response.

Bug 1133771 covers changing the thank you page template to show the search results.

This bug covers figuring out what performance metrics we can use and how to track them to figure out the answers to these questions:

1. How helpful are users finding the search results?

2. What do feedback responses where the search results are not helpful have in common?
Fixing blocks/depends.
Blocks: 1007840
No longer depends on: 1007840
Moving things out of the input.adam sprint.
Whiteboard: u=user c=feedback p= s=input.adam → u=user c=feedback p= s=input.2015q2
Grabbing this to work on over the next week.

Steps to take:

1. hone the questions into a better set that's clearer about how to measure them and what to do with the results

2. based on the new set of questions, figure out how to derive actual or approximate data that answers each question with some useful level of confidence

3. figure out which measurement bits to implement
Assignee: nobody → willkg
Status: NEW → ASSIGNED
Talked with Gregg at length.

First off, we'll use "total suggestions" as shorthand for "the total number of feedback responses that were given suggestions" since we're not generating suggestions for all feedback responses. We might at some point, but not for this project.


Here's what I'm thinking:


1. Are we helping users?

This is essentially a question of engagement: are users clicking on the suggested links we've provided?

Then engagement is something like: 

    total suggestions
    ----------------------------
    clicked on at least one link

Gregg was thinking that 20% engagement might be a good number to shoot for. If we end up with something like 5% engagement, then maybe we're trying to solve a problem that doesn't exist, or the heuristics generating the suggestions are sub-par.


2. Does tailoring work? How attractive are the results?

We can figure this out by comparing:

    total suggestions
    ---------------------------
    clicked on a suggested link

vs.

    total suggestions
    --------------------------------------
    clicked on "None of these helped" link


We talked about using Google Analytics (GA) to measure these, but couldn't figure out how to make that work.

We talked about adding a redirect service to Input and using it to capture the response id, what was suggested for that response, and whether the user clicked on it. We need to look into the ramifications of collecting this data and what policies we'll need to institute around it.

If we do go with a redirect service, we can see which SUMO KB articles show up in suggestions most often and which get clicked most often. Those top-ten lists could be interesting to the SUMO folks.
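
For concreteness, here's a minimal sketch of the kind of redirect service we're talking about, assuming a Django view (Input is a Django app). The endpoint name, querystring parameters, and host whitelist here are hypothetical placeholders, not a spec:

    import logging
    from urllib.parse import urlparse

    from django.http import HttpResponseBadRequest, HttpResponseRedirect

    logger = logging.getLogger(__name__)

    # Hypothetical whitelist: only redirect to SUMO so the endpoint can't
    # be abused as an open redirect.
    ALLOWED_REDIRECT_HOSTS = {"support.mozilla.org"}

    def suggestion_click(request):
        """Capture (response id, suggested url), then redirect to SUMO."""
        response_id = request.GET.get("rid")
        target = request.GET.get("url")
        if not response_id or not target:
            return HttpResponseBadRequest("missing rid or url")
        if urlparse(target).netloc not in ALLOWED_REDIRECT_HOSTS:
            return HttpResponseBadRequest("refusing to redirect off-site")
        # This log line is the data whose collection/retention needs the
        # policy discussion mentioned above.
        logger.info("suggestion click: rid=%s url=%s", response_id, target)
        return HttpResponseRedirect(target)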

Will run this by Lonnen to talk about data collection issues before proceeding.
Priority: -- → P2
Lonnen suggested doing server-side pings to GA to track events and using a server-side redirecter. I put together a script to test that out and it works fine. We can use GA's Event Flow report to analyze how users move through the events.
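
For reference, a server-side GA event ping is just an HTTP POST to the Measurement Protocol collect endpoint. A minimal sketch; the property id and event names are placeholders, not our real GA settings:

    import uuid

    import requests

    GA_ENDPOINT = "https://www.google-analytics.com/collect"
    GA_PROPERTY_ID = "UA-XXXXXXXX-Y"  # placeholder property id

    def ga_event(category, action, label=None, client_id=None):
        """Send one event hit to GA via the Measurement Protocol."""
        payload = {
            "v": "1",                               # protocol version
            "tid": GA_PROPERTY_ID,                  # GA property id
            "cid": client_id or str(uuid.uuid4()),  # anonymous client id
            "t": "event",                           # hit type
            "ec": category,                         # event category
            "ea": action,                           # event action
        }
        if label is not None:
            payload["el"] = label                   # event label
        requests.post(GA_ENDPOINT, data=payload, timeout=2)

    # e.g. ga_event("suggestions", "click", label="some-sumo-article-slug")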

I'll write up a bug to implement the redirecter and codify the above in the project plan wiki page.
Bug #1169261 covers implementing the redirector. (No clue if it is spelled "redirecter" or "redirector".)
No longer depends on: 1133771
It's spelled "redirector".
Updated the Thank You page. Closing this out.
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Whiteboard: u=user c=feedback p= s=input.2015q2 → u=user c=feedback p=2 s=input.2015q2
Bah. The equations in comment #4 are all upside down. They should be:

    clicked on at least one link
    ----------------------------
    total people offered a suggestion


    clicked on a suggested link
    ---------------------------------
    total people offered a suggestion


    clicked on "None of these helped" link
    --------------------------------------
    total people offered a suggestion
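
Spelled out as code over the set of responses that were offered suggestions (the field names are made up for illustration):

    def suggestion_metrics(responses):
        """responses: one dict per feedback response that was offered
        suggestions, e.g. {"clicked_suggestion": True,
                           "clicked_none_helped": False}
        """
        total = len(responses)  # total people offered a suggestion
        if total == 0:
            return {}
        any_click = sum(1 for r in responses
                        if r["clicked_suggestion"] or r["clicked_none_helped"])
        suggestion_clicks = sum(1 for r in responses if r["clicked_suggestion"])
        none_helped = sum(1 for r in responses if r["clicked_none_helped"])
        return {
            "engagement": any_click / total,
            "suggestion_click_rate": suggestion_clicks / total,
            "none_helped_rate": none_helped / total,
        }
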
Product: Input → Input Graveyard