Advertising agencies usually build wow-cases in contextual advertising on contrast: they fix simple mistakes, get an impressive result in a couple of months and claim superiority over the previous contractor. But on this project, advertising had been well tuned even before we took it on. In this article we share how, despite the odds, we managed to find points of severalfold growth.
We are jam.agency, and we set up contextual advertising for e-commerce projects. A client came to us who already had good sales: contextual advertising was configured well, but attempts to scale sales made expenses grow faster than revenue. We conducted a huge audit and found ways to raise sales through contextual advertising anyway.
Here we share in more detail what we did, but before that let's hear what the client had to say.
«During our first call with JAM, we discussed that it was important for us that the agency should not work "in a vacuum" but rely on our business objectives. We are a large network, whose management requires continuous adjustment of the product matrix, logistics, budgeting and other parameters. These changes must be reflected quickly in advertising campaigns.
Unlike previous contractors, with JAM we managed to adapt the advertising account to our business operating needs. For example, in December, we traditionally do not have enough resources to deliver all orders due to excessive demand. Previously, in such cases, we would simply stop advertising, losing profit. But last year, with the help of the Agency specialists, we analyzed our product offering and maintained several advertising campaigns with high-margin categories and with high average customer spend, which generate few orders (do not overload logistics) but at the same time bring a significant profit.
Now we have transferred advertising management to our internal marketing department and use the same mechanics that were developed during our cooperation with the Agency. We understand what to do and how to do it, where to raise or lower the budget, how to manage target profitability, and we can quickly implement changes when the market requires it».
Marwin.kz is an online store offering goods for the whole family. They sell toys, sports equipment, books, goods for outdoor activities and so on.
The main page of the client's website
An online store is one of the key sales channels, but it is not the main one. Most sales go through offline stores, online sales account for about 20% of revenue. The client has a network of 30 outlets in different cities of Kazakhstan, which usually makes it easier to advertise thanks to having a loyal audience.
Another feature that makes this project special is the large range of diverse products: marwin.kz website offers 20 product categories and each is divided into dozens of subcategories. Moreover, prices range from 100 to 100,000 in ruble equivalent (all prices on the website are in tenge).
Examples of product categories and subcategories on the client's website
The store asked us to conduct an audit in April 2021 after reading our article, Case of Auto Parts Store. They had been satisfied with their Internet marketing performance until they hit a plateau in July 2020: scaling up contextual advertising increased the Advertising-to-Sales ratio (A/S ratio) while profitability declined. As a result, the account budget was capped at an amount that gave an optimal balance of revenue and costs.
Revenue and A/S ratio chart for a few months before our audit: revenue is declining, A/S ratio is growing
The project was led for several years by a full-time specialist who knew the store's product range well and understood contextual advertising. Further on, he helped us a lot with his comments and advice.
The client needed someone to take an independent look at the project and give recommendations for further growth. That is what we did.
We always start our work with an audit of contextual advertising: we look for errors in settings, find growth points, build hypotheses for scaling, etc. Usually an audit takes a week or a week and a half, but in this case it took us a whole month. The outcome of the work was a huge 70-page document and several appendices with explanations. Here we will outline only the key nuances.
In 95% of cases, we encounter typical errors in client ad accounts, but here there were none. The work on the project had been carried out competently and thoroughly. Even display for all ineffective keywords had been stopped, which indicates regular monitoring and optimization of the advertising.
The only obvious point of growth was the small share of traffic and orders from Smart Shopping campaigns, one of Google's main tools. Usually these campaigns generate up to 80-90% of traffic for an online store, and here the share of Smart Shopping campaigns was only around 10%.
Most of the traffic came from Dynamic Search Ads, then from search and only then from Smart Shopping campaigns
But this is not enough to achieve severalfold growth, so we continued to dive deeper into the project. To summarize all the data, analyze the performance and profitability of advertising campaigns, see the trends and draw the right conclusions, we built an interactive dashboard report in Google Data Studio. Usually we draw up small dashboards for the business owner after the contract is signed, but for such a large and complex project we could not sort things out without a clear presentation of the data.
Fragment of the dashboard
At this level, we wanted to see the segments with loss-making or underperforming advertising. But the project performance turned out to be very balanced: all campaigns and 95% of ad groups were generating profit.
To find growth points, we had to dig deeper and go beyond the ad account: to find out what and when sells well, what exactly makes money for the client and how much.
Since the store had a large range of diverse products — from marmalade and animal feed to expensive strollers and game consoles — obviously not all of them were in the same demand and all had different profit margins. These facts should be taken into account when designing the architecture of advertising campaigns.
The client received more than 70% of revenue from two Dynamic Search Ad campaigns for a mix of various product categories. Because of this, products of different prices, both popular and unpopular, had the same visibility for the buyers. The architecture was easy to manage but clumsy: targeted adjustments could not be made, and in fact everything was reduced to a simple goal of receiving conversions at a specific cost, which was set in the campaign settings for all products. We started digging in this direction.
Most of the budget was spent on two Dynamic Search Ad campaigns, although they contained not only the best-selling products
The most popular categories. We conducted an ABC analysis. We identified the top categories in which the client earns the most money: 80% of the revenue came from 6 categories out of 20.
The highest earning product categories (hidden by the client's request)
These categories should receive the highest share of the advertising budget, but the account structure did not allow for this to be arranged: there were two main advertising campaigns with different product mixes and several brand categories.
One of the advantages of such a structure is that Google ads collected enough data to run automatic bidding strategies. The disadvantage is the lack of flexibility: it was impossible to manage the coverage and transaction cost for each category of goods.
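The ABC analysis described above can be sketched in a few lines of Python. The category names and revenue figures below are invented for illustration, not the client's data:

```python
# Illustrative ABC analysis: find the smallest set of categories covering
# ~80% of revenue. All names and figures below are made up for the example.
revenue_by_category = {
    "Toys": 500_000, "Strollers": 300_000, "Games": 200_000,
    "School": 150_000, "Sports": 120_000, "Books": 90_000,
    "Outdoor": 60_000, "Feed": 40_000,
}

def top_categories(revenue, share=0.80):
    """Return categories, richest first, until `share` of total revenue is covered."""
    total = sum(revenue.values())
    picked, running = [], 0
    for name, rev in sorted(revenue.items(), key=lambda kv: kv[1], reverse=True):
        picked.append(name)
        running += rev
        if running / total >= share:
            break
    return picked

print(top_categories(revenue_by_category))
```

With these example numbers, five of the eight categories cover 80% of revenue; on the real project it was 6 categories out of 20.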
Seasonality. Next, we decided to check how much seasonality affects sales by analyzing seasonal revenue fluctuations in the top categories. Some of them had a pronounced peak in demand. For example, toys were bought most often for the New Year, and the office supplies before the start of the school year:
Yet again, the structure of advertising campaigns did not take into account that some products were better sold in the summer, during school breaks, while some before the New Year. The client could not flexibly manage the bids so that e.g. in July, video games would get more coverage than office supplies.
Profit Margin. Next we studied the financial performance of the project. A huge product range often means a big difference in profit margins between goods: the client could be earning 10% on consoles and 60% on toys. All this should be taken into account in the structure of advertising campaigns.
Let's assume the average profit margin is 40% and the client is willing to spend half of the margin on advertising. In this case, the target A/S ratio will be 20%. If the same target A/S ratio is applied to all products, advertising of low-margin products may turn out to be unprofitable, while high-margin products will not receive sufficient coverage.
Therefore, this strategy is suitable for stores with a small product range and a small profit margin difference.
* Gross Profit = Revenue × Margin - Budget
For our project, a flat A/S ratio is not an option: instead, the profit margin of each product must be taken into account. If half of the profit margin is spent on advertising, the target A/S ratio will be different for each product.
* Target A/S ratio = Profit Margin / 2, if the target advertising cost is half of the profit margin.
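These two formulas can be put together in a short Python sketch; the margin values are illustrative, not the client's figures:

```python
# Per-category target A/S ratio, assuming (as described in the text) that the
# client is willing to spend half of the profit margin on advertising.
def target_as_ratio(profit_margin, ad_share_of_margin=0.5):
    """Target A/S ratio = Profit Margin x share of margin spent on ads."""
    return profit_margin * ad_share_of_margin

def gross_profit(revenue, profit_margin, ad_budget):
    """Gross Profit = Revenue x Margin - Budget (the formula from the text)."""
    return revenue * profit_margin - ad_budget

# A 40% margin category can afford a 20% A/S ratio,
# while a 10% margin category can only afford 5%:
print(target_as_ratio(0.40), target_as_ratio(0.10))
```

This is why a single account-wide target either starves high-margin products of coverage or makes low-margin products unprofitable.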
After analyzing all possible data, we came to the following conclusions:
Based on these findings, we formulated the main hypothesis: in order to invest the advertising budget effectively, it is necessary to split up campaigns and manage categories independently.
The hypothesis is controversial because Google itself recommends combining categories into one campaign so that automated bidding strategies work better. We do this ourselves on most projects with a small product range (there is a separate article about automated bidding strategies — «9 Case Studies of Online Store Automated Bidding Strategies: from 4-fold Revenue Growth to Drop to Zero»). But in this case, we decided to try a non-standard setup. We had doubts whether this would help, but we explained our logic to the client and suggested testing this hypothesis on the TOP 5 sales categories.
Before the start of the work, we received profit margin data from the client. Our hypothesis was confirmed: margins could vary up to 4-fold between categories. For each category, we calculated a target A/S ratio to use in the automated bidding strategy settings.
The client's data showed that categories varied greatly in profit margin: some products sold at a low margin but in high volumes, and for others the reverse was true.
Other interesting details also emerged. For example, some advertising campaigns were carried out in agreement with manufacturers, so a budget agreed with a specific brand was allocated for them. In addition, the management set limits of monthly expenses to the marketing department in advance, so all maneuvering had to be within this limit.
Having familiarized ourselves with all the subtleties, we began to develop a new structure. An important point: each category needed enough sales to run an automated bidding strategy, about 30 orders per month. With only 5 orders, automated bidding will not work, ads would have to be managed manually, and managing advertising for that many items is difficult. Therefore, we could not just split the campaign into small categories such as Lego Classic, Lego City, Barbie dolls, L.O.L. dolls and so on — we had to group them so that each campaign collected enough order data.
For example, take the large category "Toys" with 5,000 orders per month. It has a subcategory "Toys for girls" with 3,000 orders. There is enough data, which means the subcategory can be split further. Within "Toys for girls" there are smaller subcategories: "LOL Dolls" with 50 orders and "Dollhouses" with 10. This means a separate campaign can be made for "LOL Dolls", while "Dollhouses" should be combined with something else.
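The splitting rule from this example can be sketched as follows; the threshold, category names and order counts are illustrative, not the exact figures from the project:

```python
# Sketch of the splitting rule: a subcategory gets its own campaign only if
# it has enough orders (~30/month) to feed an automated bidding strategy;
# everything smaller is merged into one catch-all campaign.
MIN_ORDERS = 30  # illustrative threshold from the text

def split_campaigns(category, orders_by_subcategory):
    """Return campaign names: one per subcategory above the threshold,
    plus a single merged campaign for everything below it."""
    campaigns = [
        f"{category} / {sub}"
        for sub, orders in orders_by_subcategory.items()
        if orders >= MIN_ORDERS
    ]
    if any(orders < MIN_ORDERS for orders in orders_by_subcategory.values()):
        campaigns.append(f"{category} / other")
    return campaigns

print(split_campaigns("Toys for girls", {"LOL Dolls": 50, "Dollhouses": 10}))
```

Applied recursively from the top categories down, this kind of rule yields a structure where every campaign has enough conversions for automated bidding.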
Here's a diagram that roughly describes how we collected data for the automated bidding strategies
The work was complicated: in the end we got almost 300 advertising campaigns based on the product feed. It was necessary to take into account all factors: current performance, the budget allocated by manufacturers, profit margin, seasonality, the total spending limit set by the management. But as a result, we got a very flexible structure. What remained was only to launch advertising.
In our experience in eCommerce, the most revenue is generated by the Smart Shopping campaigns: a user enters a query into a search engine and immediately sees a product card with a price. They can use it to go to the store and make a purchase:
Since the share of Smart Shopping campaigns on this project was only about 10%, they were the first that we tested the new setup on.
To create such ads, you do not need to manually collect search queries or write ad copy. Google automatically takes all the information it needs, creates ads, finds the relevant audience and displays products to it. Google takes product information from the product feed, a table with product details: name, photo, price, description, availability and so on.
We launched the first 10-20 campaigns in the top categories — and immediately got an excellent result: revenue from Smart Shopping campaigns increased 10-fold.
Here and further, the charts show the revenue as a percentage of the first month, since we cannot show the absolute values:
But a much more important fact is that revenues increased without increasing the A/S ratio, that is, proportionally to the costs. The hypothesis worked: we were able to scale up the sales without an unjustified increase in costs, with higher real profits.
Sometimes sales grow in the ad account while the client's real profit does not change. We checked for this too: we compared statistics from Google Ads with data in Marwin CRM. It turned out that everything was in order and the growth was almost synchronous.
After we verified that everything was working as it should, we moved on to tuning Dynamic Search Ads.
The setup of Dynamic Search Ad campaigns repeated the successful segmentation logic of Smart Shopping campaigns. We tried it, we succeeded, so it should work here too.
Dynamic Search Ad campaigns are a Google tool similar to Smart Shopping campaigns: Google also collects search queries, creates ads and displays them to the right audience, but there are some differences.
Dynamic Search Ads look like regular text and image ads, without photos or product prices. Google takes information about products not from the feed but from the site: a link to the product page or category is given to the system, it scans it and creates an ad.
The most important thing in our work was to separate category and product queries. With category searches the user is not looking for a specific product but types something like "Buy a 2 in 1 stroller" in the search bar. With product searches the buyer is looking for a specific product model, for example, "Buy a Mercedes stroller".
Usually, the conversion rate is higher for product searches: the buyer already knows what they want, and it only remains to find a suitable product. But the Marwin case is peculiar in that category searches worked better than product searches. Therefore, it was important that a product search did not lead the buyer to a category page and vice versa, and this proved difficult to set up.
The point is not to get more traffic: for general queries at 2 nesting levels, the A/S ratio turned out to be lower than for product-specific queries at 3 or 4 nesting levels. In theory it should be the opposite: as a rule, low-frequency queries are more specific and convert into purchases better.
We researched queries differentiated by nesting level. For example, a gradation from level 1 to 4 may look like this: "Mom and Baby" → "Strollers" → "Transformer Strollers" → "Chicco: Goody Plus Stroller". The queries "Strollers" → "Transformer Strollers" received more traffic:
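One simple way to run such a breakdown is to approximate a query's nesting level by the depth of the catalog page it lands on. The helper and URLs below are hypothetical, not the store's actual structure:

```python
from urllib.parse import urlparse

# Hypothetical rule: nesting level = depth of the catalog URL the query
# leads to. The domain and paths below are made up for illustration.
def nesting_level(url):
    """Number of non-empty path segments in the catalog URL."""
    return len([seg for seg in urlparse(url).path.split("/") if seg])

print(nesting_level("https://example.kz/mom-and-baby"))                        # level 1
print(nesting_level("https://example.kz/mom-and-baby/strollers"))              # level 2
print(nesting_level("https://example.kz/mom-and-baby/strollers/transformer"))  # level 3
```

Grouping traffic and A/S ratio by this level is what revealed that levels 2-3 outperformed the deeper, more specific queries on this project.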
If a customer is looking specifically for Chicco Goody Plus Stroller and the store has only one page with this product, Google will display just that page. Everything should work without problems here.
But if the buyer has not yet decided on a specific stroller model and enters "Buy a transformer stroller" in the search engine, a problem arises. We have several hundred pages where such words occur, so Google will choose one at random to generate an ad. And it is far from certain that the buyer will like this stroller and they will not go looking for something else.
The best option in this case is to display a page with all strollers so that the buyer can choose what they like. To do this, all product pages must be manually excluded from display in the campaign settings.
It was a long and painstaking job that took many hours. The more general the query, for example "stroller" instead of "transformer stroller" or "2-in-1 stroller", the more pages we had to exclude.
We launched Dynamic Search Ad campaigns at the end of September. They worked great before us, but we were able to increase revenues for these campaigns by more than 2 times in a month and at the same time reduce the A/S ratio.
The first stage was completed: we launched the Smart Shopping and Dynamic Search Ad campaigns with the new setup and got good results. The second stage was to optimize campaigns so that advertising worked even better, to identify what worked well and what did not. The first approaches to optimization that we try usually give the biggest effect and help to cut costs by up to 30%, so after launching, we spend resources primarily on optimization and only then on expanding campaigns.
Each advertising campaign is divided into groups. We can see which groups bring the most buyers, which bring few, and which are generally useless. The problem is that we cannot manage the profitability of each group due to the specifics of the automated bidding strategy: profitability is set at the level of an advertising campaign.
We can only stop them if the results are really bad. But we cannot scale specific groups within the campaign. Therefore, in the Smart Shopping campaigns, we divided the product groups into 3 clusters and managed them independently based on the actual A/S ratio:
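As a rough sketch of this kind of cluster management: the three cluster names and the thresholds below are illustrative assumptions, not the exact rules we used on the project:

```python
# Illustrative three-cluster rule based on actual vs target A/S ratio.
# The cluster names and the 2x cutoff are assumptions for this sketch.
def classify_group(actual_as, target_as):
    if actual_as <= target_as:
        return "scale"      # within target: safe to raise the budget
    if actual_as <= 2 * target_as:
        return "optimize"   # above target: adjust bids, exclude weak products
    return "pause"          # far above target: stop the group and review

# Hypothetical product groups with their actual A/S ratios:
groups = {"lego": 0.10, "dolls": 0.25, "puzzles": 0.55}
print({g: classify_group(a, target_as=0.20) for g, a in groups.items()})
```

Since the bidding strategy only accepts a target at the campaign level, this per-group classification is applied by hand: scaling, adjusting or pausing groups rather than changing their individual targets.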
As a result, by disabling and adjusting inefficient product groups, we reduced the ratio of advertising costs to sales for Smart Shopping campaigns. This allowed us to increase budgets by 30% and revenue by 100%:
Revenue and Cost Chart for Smart Shopping Campaigns for September–December 2021
With Dynamic Search Ad campaigns, we did pretty much the same thing. Also, we examined all the pages with poor performance:
2 months after the launch, revenue began to grow rapidly, which made it possible to smoothly increase the budget for Dynamic Search Ad campaigns. We received a 6-fold increase in revenue while reducing the A/S ratio by 3 times:
Revenue and A/S Ratio Chart for Dynamic Search Ad campaigns for September–December 2021
In total, during the work on the project from May to December 2021, we were able to increase the estimated gross profit by about 7 times. This was partly influenced by the growth in demand in December, but when comparing the performance indicators for 2020 and 2021, it becomes clear that in December 2021 the profit was almost 3 times higher than in December 2020:
Going forward, we see several more scenarios for further optimization of the project.
It should be noted that one of the important tasks we set ourselves was to ensure that the client did not depend on us and could take over campaign management at any time without losing results. The account structure we implemented allowed for a smooth transfer of the project to the in-house team because it mirrored the client's product range.
Campaigns are still working effectively after almost a year; the client is able to adjust budgets for categories depending on current priorities. This allows them to keep the sales at a stable level. For our part, we are ready to implement the development ideas discussed above if the explosive growth is requested again.