Haystack
This is an outline of Haystack, a side project by Jeremy (developer) and me (product designer), and how we tested our riskiest assumptions by putting our product in front of real users as quickly as possible. The app idea initially stemmed from Jeremy's aim of gamifying the online dating experience, and he began building an early prototype before we met. Over the span of around three months, after work, we made a few paper prototypes and tried to validate or invalidate the idea.
The Problem
Dating apps are not as efficient and enjoyable as they used to be. They are a labour-intensive, uncertain way of looking for a relationship: a numbers game for men and a lot of wasted time for women. This is reflected in SMH's take on dating apps:
But while women get more matches, they don’t necessarily enjoy an all-you-can-eat buffet of the most desirable mates. Men send out more messages, to more potential partners, but tend to put in less effort or are less committed to their matches. Women may feel flattered by the frequency of matches, but they may also feel disappointed when trying to follow up and have deeper conversations.
When Tinder first came out, it didn’t feel like a chore. It was a fun game and each match felt meaningful. But over time, Tinder and other similarly labour-intensive dating apps have saturated the market.
The idea
What Jeremy wanted to capture was the same excitement Tinder gave people when it came out.
You start by creating your own set of questions and answers, in a multiple choice question format, and personalise it. You still swipe. But instead of faces, you swipe through a set of 3–4 questions made by other users.
At this point, Jeremy and I met at BlueChilli, a startup accelerator. We decided to join forces for this side project, Haystack, and I stepped in as the product designer.
Assumptions
We took a step back and listed all the assumptions we had made about the app idea. We then plotted them on a matrix with axes running from least to most assumed and from least to most risky.
This allowed us to take the assumptions in the top right corner (most risky and most assumed) and validate them first.
Assumption: People will be willing to answer a handful of questions for the possibility of matching with someone.
Method of validation: Give each participant a deck of "cards", one for each potential match, and let them pick out and answer the questions of whoever they'd like to match with. Ask about their experience.

Assumption: The experience of answering questions will be more fun than swiping on images alone.
Method of validation: Same as above.

Assumption: People will know how to write questions that will filter out the kind of people they are not interested in.
Method of validation: Ask participants to write multiple choice questions that will filter out the kind of people they are not interested in. Let them know that the person answering their test will need to get most of the questions correct. Ask them about their process and what they look for in a match.
User interviews and concept testing
We recruited participants through friends and friends of friends. Each session was split into two parts: user interviews about current dating habits and user testing of our prototype.
Part 1: User interviews
The purpose of these exploratory questions was to gain insights and spot opportunities in users' current frustrations with existing products. Our rough discussion guide covered their current dating habits and their frustrations with the apps they were already using.
Part 2: User testing
Test objective: Will users be willing to answer multiple choice questions written by the person they are trying to match with, for the possibility of matching with them?
We gave each participant a deck of ten paper-prototype "cards", either all male or all female based on their preference. We asked them to select the five people out of the ten that they would like to match with and to answer those people's questions, letting them know that at least three of the five answers needed to be correct for the person to approve rather than reject them as a match. They were allowed to read the multiple choice questions and answers before making their selection.
We asked participants questions like:
Why did you choose these people for the “yes” pile?
How did these people make it to the “no” pile?
How did you find this process?
Key Insights
Male and female perspective:
It’s difficult to start a conversation once matching (on pre-existing apps)
Writing a bio is difficult. If people make on average 35,000 decisions a day, should we be giving them more decisions to make with our product?
There is a feeling of rejection if users start a conversation and get no reply.
This might be more suitable for straight people because dating apps like Grindr have a different kind of culture
Female perspective:
With apps currently on the market, there is not enough information to determine personality
Because users cannot tell what their match is looking for, or whether they are suited for each other, they invest time in talking to matches only to find out that they are not compatible.
Rejection is felt when not getting a match
It is difficult to start a conversation, or to continue one if the user has left the guy hanging for too long. E.g. one participant said that she will "usually comment on their profile. Hate saying hey."
Male perspective:
Difficult to choose photos that would attract people.
With our prototype, having to answer a few questions in order to match with someone was too much of a barrier. As the match rate for men on existing apps is already low in comparison to women, it was not worth spending the time answering questions because the rate would be so much lower. For example, if a user spent 15 seconds on each person's questions on Haystack and matched with only one person, versus Tinder, where they could swipe through a hundred and twenty people in four minutes (at 2 seconds per swipe), then Tinder's swiping would feel far more rewarding.
To increase their match rate on Tinder, some of the users constantly swiped right (like) without looking at profiles. Only after matching would they look more carefully at the match’s profile and decide whether they were interested.
A lot of rejection is felt when not matching
Not sure how to start a conversation that will gain enough attention to separate them from the rest of the female’s matches.
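The time-per-profile comparison above can be sketched as a few lines of back-of-envelope Python. The 15-second and 2-second figures are the rough estimates from our interviews, not measured data, and the function name is just illustrative:

```python
# Back-of-envelope comparison of how many profiles a user can
# evaluate per minute. Times are rough interview estimates.

def profiles_per_minute(seconds_per_profile: float) -> float:
    """Profiles a user can get through in one minute."""
    return 60 / seconds_per_profile

haystack = profiles_per_minute(15)  # answering a 3-4 question set
tinder = profiles_per_minute(2)     # a quick swipe on a photo

print(f"Haystack: {haystack:.0f} profiles/min")  # 4
print(f"Tinder:   {tinder:.0f} profiles/min")    # 30
```

At these rates a Tinder user sees roughly 7-8 times as many profiles per minute, which is why answering questions felt like a poor trade for the men we interviewed.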
Our research showed that there was not a strong use case for the original app idea, so we tried to pivot. We looked at the user journeys of both male and female Tinder users, based on our interviews, and plotted pain points to highlight opportunities more clearly.
We did not want to invest more time in this, as we could not fill a strong need with other ideas we came up with. So we called it quits.
What we learnt
The method of plotting assumptions on a matrix is useful to prioritise what to validate. (Thanks for suggesting this Tim!)
There are different ways to validate assumptions quickly; you just need to be experimental and creative about it. For us, the paper prototypes worked well because we made them all in under two hours. Even in such a rudimentary form, they were enough to test the concept with prospective users.
Jeremy, coming from a tech background, learned a lot about the process of validating an idea. From interviews to paper prototypes, he now has a greater understanding of the "product universe".
Things we could have done differently:
Start with the user interviews to highlight problems before developing the interactive mobile app prototype. In a perfect world, we would have decided to work together before Jeremy started making the app, as it was a mammoth task.
Less biased recruiting: we could have been more creative in how we recruited participants to reduce bias, perhaps through a dating meetup or Facebook group. We recognise that the interviews were quite personal and could have been skewed because the participants were friends and friends of friends.
If we wanted to continue:
The next step could be to focus on the opportunities around the pain points. The user interviews revealed that some people struggled with ice breakers after matching. They would match, start a conversation, eventually it would die down, then they’d find it awkward to try and talk to this random stranger from the internet again, a few months down the track.
We could make the user-created multiple choice questions optional. In this way, they could act as an ice breaker rather than a barrier.
This is a repost from our Medium account and was featured on UX Collective. Thanks to our mentors, Tim and Eric for all the advice, and to everyone who helped us out as research participants.