This post was written in collaboration with Valery Bezrukova, VP Product at Constructor.
When sales season hits, it’s a win for shoppers — but it can throw a wrench into even the best-performing product ranking algorithms. In this experiment, we set out to improve our models’ ability to adapt to the unique, fast-shifting dynamics of sale periods across several customers and verticals.
A few of our customers flagged a pattern: when they ran short-term sales, the quality of product results sometimes dipped. That was surprising, since our ML models are typically quite strong at learning from historical behavioral patterns. So, why the change?
The answer lies in how differently shoppers behave during sales.
Even a human would struggle to predict behavior based on out-of-distribution pricing. For instance, let’s say Car Model A ($30k) historically outsells Car Model B ($50k) 2:1. If the prices of both cars suddenly drop by 50%, would you expect Model A to keep outselling Model B 2:1? That’s uncharted territory, and it would be difficult for anyone to guess.
The same is true for an ML model. It will interpolate well within the range of past data, but it has a much harder time extrapolating when prices suddenly move below anything it has seen historically.
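To make that concrete, here is a minimal, hypothetical sketch (not our production model) using a gradient-boosted tree regressor, the kind of model often used with tabular ranking features. The prices, demand relationship, and sample sizes are invented for illustration. Tree-based models can only predict within the range of targets they have seen, so once a discount pushes prices below the historical minimum, the prediction simply flattens at the edge of the training data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: demand observed only for prices in the $25k-$55k range.
rng = np.random.default_rng(0)
prices = rng.uniform(25_000, 55_000, size=500)
# Assume demand roughly halves for every $20k of price (illustrative relationship only).
demand = 2.0 ** (-(prices - 25_000) / 20_000) + rng.normal(0, 0.05, size=500)

model = GradientBoostingRegressor().fit(prices.reshape(-1, 1), demand)

# The model interpolates sensibly inside the historical price range...
print(model.predict([[30_000], [50_000]]))
# ...but after a 50% discount the prices are out of distribution, and the
# predictions flatten out near the value learned at the lowest price ever seen.
print(model.predict([[15_000], [25_000]]))
```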
We hypothesized that we could mitigate this problem by using clickthrough rate (CTR) instead of raw action counters (e.g., clicks, purchases) as the primary signal. Why? Because CTR normalizes actions by impressions, it is less distorted by the sheer traffic spike a sale brings and better reflects how appealing each product is to the shoppers who actually see it.
To improve signal quality, we experimented with a weighted CTR:
Weighted CTR = Weighted actions / Unweighted impressions
This approach blends recency sensitivity (giving more weight to recent user actions) with volume smoothing (by using stable impression counts), enabling the model to adapt to trends without overfitting to momentary noise.
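We haven’t spelled out the exact weighting scheme in this post, so the snippet below is only a sketch of the idea under one common assumption: user actions are discounted with an exponential recency decay (a hypothetical `half_life_days` parameter), while impressions are kept as raw counts, matching the formula above.

```python
from dataclasses import dataclass

@dataclass
class Event:
    age_days: float   # how long ago the events happened
    count: int        # number of actions or impressions observed then

def recency_weight(age_days: float, half_life_days: float = 3.0) -> float:
    """Exponential decay: an action loses half its weight every `half_life_days`."""
    return 0.5 ** (age_days / half_life_days)

def weighted_ctr(actions: list[Event], impressions: list[Event]) -> float:
    """Weighted CTR = recency-weighted actions / unweighted impressions."""
    weighted_actions = sum(e.count * recency_weight(e.age_days) for e in actions)
    raw_impressions = sum(e.count for e in impressions)
    return weighted_actions / raw_impressions if raw_impressions else 0.0

# Example: a product whose clicks spiked in the last two days of a sale.
clicks = [Event(age_days=0, count=120), Event(age_days=1, count=90), Event(age_days=7, count=40)]
views = [Event(age_days=0, count=2000), Event(age_days=1, count=1800), Event(age_days=7, count=1500)]
print(f"{weighted_ctr(clicks, views):.4f}")
```

Because the denominator uses raw impression counts, a short burst of clicks moves the score quickly, but a product still needs sustained exposure for that movement to be trusted.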
We ran the experiment across several participating customers and categories. Tests spanned both Search and Browse, over a minimum three-day period (sometimes longer, depending on the customer’s specific circumstances). Each test introduced the weighted CTR-based signal into the ranking logic and compared it against the original counter-based model.
Here’s a sample of the results:
Note: Each test above is a standalone example and should not be seen as representative of the entire vertical. Customer behavior can vary widely within the same industry depending on the specific audience, test duration, and business strategies.
While the results varied across verticals, a couple of clear themes emerged:
For retailers considering this type of signal, it’s worth noting that reacting quickly to trends might not always be what matters most to shoppers (especially if they value your brand for its bestsellers, for example).
While promoting new trends might improve short-term sales metrics, for some businesses it may ultimately hurt overall performance by making it harder for shoppers to find the products they truly want.
Retailers should consider what their brand is known for and whether this type of strategy makes sense in the context of their business before testing it.
This test confirmed that faster model adaptation to user behavior shifts is both possible and valuable in certain instances. Weighted CTR isn’t a silver bullet, but it’s a powerful signal that helps us enhance real-time trend responsiveness.
And it's just one step. We’re continuing to explore:
All of this aligns with our broader goal: to create adaptive ranking systems that don’t just react to change, but anticipate it.