The exception that proves the rule: is the top position in AdWords always the best result? If you bid more, will you perform better? In some cases, improving the position of a given keyword can get expensive. Find out how to test this without putting your account performance at risk.
Although logic tells us that a keyword in a better position should get better results, we know that in some cases worse positions deliver better performance in terms of ROI.
This usually occurs, for example, with overly generic terms: even though the user typed a broad term into the search box, they will scan the results page for an ad that matches what they really need.
It also happens with difficult searches, for example with products or services so innovative that people do not know them yet. There is no well-defined set of keywords you can target directly, but you can capture demand from other searches.
When we see that a keyword performs well and we want to exploit its potential, we usually review the impressions it accumulates and whether there is any room for growth (impression share lost to rank or to budget). If we detect losses due to rank, we tend to increase the bid to be more likely to win those auctions.
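The headroom check described above is simple arithmetic on impression-share metrics. A minimal sketch, with a hypothetical helper and made-up figures (the function name and the example numbers are illustrations, not AdWords API calls):

```python
def growth_headroom(impressions, lost_is_rank, lost_is_budget):
    """Estimate impressions left on the table, given the fraction of
    eligible auctions lost to rank and to budget."""
    shown_share = 1.0 - lost_is_rank - lost_is_budget
    eligible = impressions / shown_share  # total auctions we were eligible for
    return {
        "lost_to_rank": round(eligible * lost_is_rank),
        "lost_to_budget": round(eligible * lost_is_budget),
    }

# Hypothetical keyword: 6,000 impressions, 25% lost to rank, 15% to budget.
print(growth_headroom(6000, 0.25, 0.15))
# → {'lost_to_rank': 2500, 'lost_to_budget': 1500}
```

A large "lost to rank" figure is what tempts us to raise the bid; the rest of the post is about checking whether that is actually worth it.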
For the client we are discussing today, we work with a high volume of generic keywords, and we need to know precisely whether the extra rank we can win by raising bids will actually improve results. We need to know in advance because, once executed, the changes can be difficult to reverse (generic keywords are very sensitive to bid changes).
For this particular client, applying massive changes without foreseeing the consequences could seriously damage the performance of the account, because generic keywords produce around 80% of its results.
Using AdWords experiments to compare results for the same keywords under different bids is very useful in these cases, and gives us the statistical reliability we need to avoid acting on the wrong conclusions.
Keep in mind that experiments can affect the Quality Score of our keywords.
There is an 'Experiments' block in the campaign Settings tab, and setting one up is very simple.
We create the experiment and decide the percentage of impressions that will go to it: in other words, how many impressions will run with your usual bid and how many with the new bid. To make this decision, keep in mind how much you are willing to risk.
The higher the percentage of impressions assigned to the experiment, the sooner you will have data on how the new bids behave, but the more your results may be affected while the experiment runs.
In this case we set the experiment's share of impressions to 50%. The decision came down to how soon we needed results:
➡ If we want data soon, we should use a high split percentage (50%). If, on the contrary, we do not want to risk much each day and prefer to experiment gradually, we should use lower percentages.
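The speed-versus-risk trade-off above can be sketched with simple arithmetic. The function and figures below are illustrative assumptions, not AdWords data: they just show that halving the split roughly doubles the wait for the same sample size.

```python
import math

def days_to_sample(daily_impressions, split, target_per_arm):
    """Estimate how many days the experiment arm needs to accumulate
    `target_per_arm` impressions, given its share of daily traffic."""
    per_day = daily_impressions * split
    return math.ceil(target_per_arm / per_day)

# Hypothetical campaign: 2,000 impressions/day, 10,000 impressions wanted
# in the experiment arm before drawing conclusions.
print(days_to_sample(2000, 0.50, 10000))  # → 10 days
print(days_to_sample(2000, 0.25, 10000))  # → 20 days
```

This is why we chose 50%: the same data in half the time, at the cost of exposing half of the traffic to the untested bids.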
We now choose the duration of the experiment. We can leave it at 30 days or set the end date manually. Since we do not know in advance when we will have enough data to draw conclusions, it is best to set a distant end date; if the data becomes conclusive earlier, we can stop the experiment whenever we want.
Next, we modify the bids of the keywords we want to test. Go to the Keywords tab and expand the experiment segment to see the option to set a different bid.
In our case we applied a bid increase of about 20% to this list of more generic keywords, and we monitored the results every 3-4 days to check the status of the experiment.
In the interface, results are marked with blue arrows once statistical significance is sufficient.
Once the experiment is finished, we can apply the changes or not. What happens if we only want to apply them to some of the keywords? We will have to do it manually: discard the experiment's changes and set the bid we consider appropriate on each keyword. Quite a task!
For this client and these specific keywords, the experimental bid increase only raised CPCs and CTRs; it did not produce more conversions. Each conversion simply cost us more: the CPL was higher for the keywords with higher bids.
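The verdict above rests on two calculations: comparing CPL between arms, and checking whether the conversion-rate difference is statistically meaningful. A minimal sketch with made-up arm totals (the numbers are hypothetical, not the client's data) using a standard two-proportion z-test:

```python
import math

def cpl(cost, conversions):
    """Cost per lead: total spend divided by conversions."""
    return cost / conversions

def conversion_rate_z(conv_a, clicks_a, conv_b, clicks_b):
    """Two-proportion z-test on conversion rate (conversions / clicks).
    |z| > 1.96 roughly corresponds to 95% significance."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p = (conv_a + conv_b) / (clicks_a + clicks_b)  # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / se

# Hypothetical arm totals: the experiment pays more per click but
# converts at the same rate, so its CPL is simply higher.
control = {"cost": 1000.0, "clicks": 2000, "conversions": 100}
variant = {"cost": 1300.0, "clicks": 2100, "conversions": 104}

print(cpl(control["cost"], control["conversions"]))  # → 10.0
print(cpl(variant["cost"], variant["conversions"]))  # → 12.5
z = conversion_rate_z(variant["conversions"], variant["clicks"],
                      control["conversions"], control["clicks"])
print(abs(z) < 1.96)  # → True: no significant conversion-rate lift
```

When the conversion-rate lift is not significant but CPL is clearly higher, the higher bids are buying more expensive clicks, not more business, which is exactly the pattern we saw.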
As you have probably deduced, we ended the experiment without applying the changes, confirming that higher bids do not always lead to better results. How do you put your ideas to the test?
SEM · 26 / 05 / 2016