Do sales numbers tell us all we need to know?

[Figure: funnel comparison]

Sales numbers tend to be considered the only relevant metric when evaluating in-store performance. Changes to a category that increase sales are considered good, and those that decrease sales are considered bad. To the untrained eye this makes sense and seems easy to manage: after all, it is ultimately about increasing sales, and offline retail is just that simple.

So I am testing new packaging. After collecting a good number of subjective and inconclusive data points, whether from panelists paid to give their opinions or from virtual stores that stimulate very few people's imaginations, I decide that I am ready to take it to the store.

In the store the sales numbers increase, after an initial drop while auto-pilot shoppers adjust to the change. The overall increase is smaller than expected, but that tends to be the case, as most projects are oversold and end up underperforming. However, the increase helps meet the regional sales targets for the period, so I am still a hero.

To the trained eye it may be fairly different.

The trained eye still seeks initial feedback from panels, eye tracking, and virtual stores, as this kind of early testing makes complete sense.

The trained eye, once done with early testing through panels, eye tracking, and virtual stores, takes the new packaging to lab stores (real stores instrumented with Shopperception). There it identifies that the number of shoppers stopping in front of the product (engagement) increased by 10% with the new packaging, that this increase in engagement generated a 5% uplift in shoppers handling the products (interactions), and that the increase in interactions led to a 3.5% increase in conversions.

The trained eye notices that sales numbers report a 1% increase in sales for the target product, which is not aligned with the 3.5% increase in conversions for the category reported by Shopperception. After further analysis of the insights generated in the lab store, the trained eye realizes that the new packaging is increasing conversions of the target product by 1% and of the competition’s product by 2.5%.
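The gap the trained eye spots is simple arithmetic: if the category's conversions rose 3.5% but the target product only captured 1%, the remainder went elsewhere on the shelf. A minimal sketch of that attribution, using the hypothetical figures from this narrative (the function name and numbers are illustrative, not part of any Shopperception API):

```python
def uncaptured_uplift(category_uplift: float, target_uplift: float) -> float:
    """Share of the category conversion uplift NOT captured by the target product.

    Both arguments are fractional uplifts, e.g. 0.035 for +3.5%.
    """
    return category_uplift - target_uplift

# Figures from the narrative: +3.5% category conversions, +1% target-product sales.
leak = uncaptured_uplift(0.035, 0.01)
print(f"Uplift captured by the competition: {leak:.1%}")  # prints "2.5%"
```

The point is not the subtraction itself but that lab-store instrumentation supplies the category-level number; sales data alone only ever shows the 1%.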

After a few more tests it becomes obvious that the new packaging affected product traction, triggering some shopper actions based on familiarity with the competitor's product and ultimately cannibalizing the benefits of the new packaging.

Placement emerges as the saving strategy, and the revised planogram delivers a healthier overall category performance that favors the investment in the new packaging by enabling it to capture the great majority of the uplift in conversions.

Is this narrative really fictional?

Can sales numbers really give us a sense of the real impact of each of our Ps?

Take a look at our Category Assessment sample, which is a deliverable from our in-depth, in-store category observation, and please reach out with comments and questions, or join the conversation!

See you in the store!
