Your Guide to Sharpening Scoring Skills & Reducing Bias

Posted by Maziar Adl

Product development teams are always eager to improve their products. This passion for the product and quest to make improvements leads to a flood of ideas pouring in on a regular basis. Naturally, everyone who submits an idea believes their suggestion is great and should be implemented. While their idea may be good, that doesn’t guarantee it’s the right idea to focus on next. Product development decisions need support from data and research. 

How should your team filter through the pile of “great” ideas? Most product manufacturers use a prioritization framework to help them find clarity in all the noise, especially with cyber-physical products. Following a prioritization framework and a scoring method is especially useful early in the process.


Prioritization Frameworks

The goal of a prioritization framework is to sift through features and ideas to prioritize the best and add them to the product roadmap. This prioritization process will follow one of several popular scoring models to measure and compare the ideas. 


Popular Scoring Models

Whichever scoring method your organization uses, it’s essential that everyone is clear on the product vision and the process for evaluating a score. 

  • Weighted Scoring - ranks ideas based on a set of specific criteria
  • MoSCoW Method - prioritizes ideas based on value
  • Rice Score - measures reach, impact, and confidence against effort
  • Kano Model - measures customer satisfaction against functionality
  • Opportunity Scoring - identifies opportunities based on customer feedback
  • Value vs. Complexity Quadrant - a 2x2 grid measuring value against complexity/effort
  • ICE Scoring Model - scores ideas based on three factors: impact, confidence, and ease
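To make two of these models concrete, here is a minimal sketch in Python of how Weighted Scoring and RICE turn opinions into comparable numbers. The criteria names, weights, and ratings below are hypothetical examples, not part of any standard.

```python
# Sketch of two common scoring models; all inputs are illustrative.

def weighted_score(ratings, weights):
    """Weighted Scoring: sum of each criterion's rating times its weight."""
    return sum(ratings[c] * weights[c] for c in weights)

def rice_score(reach, impact, confidence, effort):
    """RICE: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical feature rated 1-5 on three made-up criteria.
weights = {"customer_value": 0.5, "strategic_fit": 0.3, "cost": 0.2}
ratings = {"customer_value": 4, "strategic_fit": 5, "cost": 2}
print(weighted_score(ratings, weights))  # 0.5*4 + 0.3*5 + 0.2*2, roughly 3.9

# Hypothetical RICE inputs: 800 users/quarter, impact 2, 80% confidence,
# 4 person-months of effort.
print(rice_score(800, 2, 0.8, 4))  # roughly 320
```

Because each model weighs different dimensions, the same feature can rank differently under each one, which is why the team must agree up front on which model fits its product vision.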


How to Use the Results of Feature Scoring

Once you’ve evaluated features and assigned them scores, you’ll want to be intentional about how you proceed with the results. Just because an idea has a high score doesn’t automatically make it the best idea. 

Every scorer brings personal biases that influence how they rate features. Because scoring models rely heavily on individual opinions, their output is never fully objective and, therefore, not always accurate. Scoring methods are still great exercises for filtering out the weak ideas or those that aren’t competitive. 

Once you get the low scores out of the picture, you can discuss the viable options to pursue next. Before throwing away any feature ideas, however, product management teams should look at the full landscape of ideas in search of anomalies that might have been overlooked. A feature that scored low on the model’s criteria may actually deserve a higher rank. 

Scoring is a useful process, but it can’t identify everything important. As long as product teams are aware of the limitations, great results can come from the task. 


How to Manage Bias When Scoring Features

Research shows that bias exists in everything we do, including prioritization frameworks. While some of our biases are unconsciously present, there may also be times when team members intentionally skew the scoring to favor their ideas. Product managers need to be aware of the potential biases and be ready to prevent and counteract them to achieve useful results. 

Unchecked bias leads to errors in judgment and stifles the very creativity and great ideas we strive for. Unless these tendencies are addressed, people are more inclined to stay on the perceived “safe path” than to support the innovative ideas that feature prioritization is meant to surface. 


Common Biases that Appear During Feature Scoring

Anyone who has participated in a feature scoring process has likely encountered some of these biases. A team member may downplay the costs of an idea if they believe it will be scored lower because of the budget. They might also assign the cost criterion a lower weight to favor their idea.

Another case of intentional bias is when a team member exaggerates the benefits their idea provides the customer, hoping it gets a higher score. Whether they skew their opinions vocally, keep them silent, or are unaware they are doing it, these human influences can harm the results of the scoring exercise.


How to Recognize and Reduce Bias During Scoring

You know that bias is present amongst your product development teams; now, what can you do about it? Here are some basic steps you can take to foster true innovation in your product meetings. 


Step One: Acknowledge Bias

The first step is to acknowledge that biases exist and to educate your product teams on the issue as well. Our biases can show up in decision-making situations and can hold us back from seeking creative solutions. The following are just some of the biases that are at play in group scenarios and impact the innovation process:

  • Confirmation bias
  • Conformity bias
  • Authority bias
  • Loss-aversion bias
  • Self-serving bias
  • Ambiguity bias
  • Strategic misrepresentation
  • Bandwagon bias
  • Pro-innovation bias
  • Status-quo bias
  • Feature positive effect


Step Two: Spot the Bias When it Happens

When you’re aware of the biases that can arise while evaluating potential features, you can spot them as they happen. The product manager can lead this practice and encourage the rest of the team to do the same. Listen for certain phrases that tend to come out during ideation rounds, feature scoring discussions, and product strategy meetings: 

  • We know our customers and what they want.
  • Our customers would never use that.
  • That’s too disruptive!
  • That’s too crazy!
  • How do we know that would even work?
  • That’s already been done before.

When you hear phrases like these, you’re encountering bias surfacing as limiting thoughts and dismissive reactions. Unless there is customer research to back up such a statement, you’ll want to dig deeper before accepting it. 


Step Three: Use Lateral Thinking Techniques or Group Expert Opinions

One way to nudge your team members out of their biases is to use lateral thinking techniques. These methods were developed by psychologist Edward de Bono in the sixties as a way to help us think outside the box, be more creative, and move beyond our biases. Some of these exercises include reframing your idea of why a feature should be scored a certain way or taking time to consider all the alternatives before weighing an idea. 

Another approach is to draw on group expertise. Asking several experts independently can produce high variance in scores; when that happens, you can converge them by bringing the experts together to “sync” their reasoning. The Delphi technique is a good example: participants score anonymously, receive aggregated feedback on the group’s results, and then revise their scores over several rounds, which converges the scores and makes them more reliable. 
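The Delphi-style convergence described above can be sketched in a few lines of Python. This is an illustrative simulation only, with made-up scores; it assumes each expert, after seeing the group’s median, anonymously revises their own score partway toward it each round.

```python
# Illustrative sketch of Delphi-style score convergence (not a full protocol).
import statistics

def delphi_round(scores, pull=0.5):
    """One anonymous feedback round: each expert moves partway toward the median."""
    median = statistics.median(scores)
    return [s + pull * (median - s) for s in scores]

scores = [2, 4, 5, 9, 3]  # hypothetical first-round scores from five experts
initial_spread = statistics.pstdev(scores)

for _ in range(3):
    scores = delphi_round(scores)

# After a few rounds the spread narrows around the group median.
print(round(initial_spread, 2), "->", round(statistics.pstdev(scores), 2))
```

In a real Delphi exercise the convergence comes from experts reading each other’s anonymized reasoning, not from a mechanical pull toward the median; the sketch only shows why repeated anonymous rounds shrink the variance.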


Step Four: Work with an External Facilitator

It may be best to invite someone from your organization who is not directly involved in product decisions to facilitate the scoring session. That way, everyone is equally monitored and guided to avoid biased opinions and statements. You’ll want to ensure the person you select is aware of the common biases and has experience recognizing examples and guiding discussion away from them.  


Improve Your Product Scoring with Product Roadmap Management Software

Helping You Build Better Products

Knowing how to sharpen your scoring skills and manage unconscious bias will help your product teams make smart feature decisions. At Gocious, we’re passionate about helping companies build better products with innovative tools that support product development from ideation to product launch and beyond. Book your free demo to see how our product roadmap management software can work for your teams. 

Topics: Product Management, Product Development
