The language capabilities of generative AI, specifically GPT-4, have proven helpful in various PPC tasks like managing keywords, writing ad text, and helping with audience development.
However, there are also difficulties to tackle, such as incorporating your existing domain expertise, connecting to business data and integrating with established workflows.
To address these, PPC marketers working with GPT-4 can focus on four aspects:
- Prompt engineering.
- Plugins.
- Custom GPTs.
- Actions.
I’ll guide you through each technique to help you become a proficient GPT-4 user.
Where GPT-4 falls short
Historical data and real-time gaps
Generative AI is built from a historical corpus of data and doesn’t have access to up-to-the-minute new information.
Specifically, business data isn’t available in ChatGPT, which is just a large language model (LLM) that can’t pull data from the sources digital marketers need to do their jobs. For example, GPT-4 can’t connect to Google Ads, so it can’t accurately describe how campaigns perform.
It also can’t access historical CPC data, so it can only guess the answer to questions like “Tell me the most cost-effective new keywords to add to my campaigns.”
Lacks insight into your preferences or expertise
GPT-4 lacks knowledge about your preferences and personal domain expertise, so it can’t say how you would address a poor-performing keyword.
Maybe your approach would be to reduce its bid. Or maybe you’d eliminate the keyword.
As with many things, the answer is usually “it depends.” But what it depends on and what decision you’d make is not obvious to an LLM.
Sure, it can make recommendations based on the collective PPC knowledge in its model, but it doesn’t know your interpretation and preferred course of action.
Inability to take direct actions within ad accounts
Conversations with ChatGPT are stuck in OpenAI’s systems and can’t turn advice into actions in the ad platform. So, while it may suggest a useful new ad headline, it can’t upload that headline to your ad account.
Let’s explore some GPT-4 capabilities that help address these three problem areas, starting with prompt engineering.
1. Prompt engineering
With prompt engineering, you change how you interact with GPT-4 to get it to provide higher-quality responses.
Simply asking it, “What are my best-performing campaigns?” won’t give you a good response because GPT-4 doesn’t know anything about your campaigns.
It doesn't know how you define "best performance," and it doesn't know your account management style.
Add business data to the prompt
To fill in the knowledge gap, you can add more details to the prompt:
- “What are my best-performing campaigns from this list: [copy-paste in the CSV from your Google Ads campaign report].”
The beauty of GPT-4 is that it understands CSV and can parse the data in it to respond with the name of a high-performing campaign.
If your data surpasses the token limit for a prompt, you can attach files, such as the CSV file containing your account data, instead of pasting the CSV directly into the prompt box.
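If you'd rather script this than use the ChatGPT UI, the same prompt can be sent through OpenAI's API. Here's a minimal sketch, assuming the openai Python package (v1+), an OPENAI_API_KEY environment variable and a hypothetical campaign_report.csv export:

```python
# A minimal sketch, assuming the openai Python package (v1+), an
# OPENAI_API_KEY environment variable, and a Google Ads campaign report
# exported as campaign_report.csv (a hypothetical file name).
from openai import OpenAI

client = OpenAI()

with open("campaign_report.csv") as f:
    csv_data = f.read()

prompt = "What are my best-performing campaigns from this list:\n\n" + csv_data

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```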
Explain your goals in the prompt
Next, engineer the prompt to define what “best performing” means. You could add a line like this to the prompt:
- “We have a CPA target of $20, and campaigns should drive at least 30 conversions per month.”
With that additional knowledge, GPT-4 can rank and filter the campaigns to tell you which ones are best, and it can even give a more nuanced response when no campaigns meet all the criteria.
It could still tell you the campaigns with the lowest CPAs while explaining that none of them met the volume target and recommending further optimizations.
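To make the criteria concrete, here's the same ranking and filtering logic you're asking GPT-4 to perform, sketched in pandas. The column names are assumptions based on a typical Google Ads export; adjust them to match your CSV:

```python
# The "best performing" definition expressed in pandas, assuming the
# export has "Campaign", "Cost" and "Conversions" columns (column names
# vary by report; adjust to match your CSV).
import pandas as pd

df = pd.read_csv("campaign_report.csv")
df["CPA"] = df["Cost"] / df["Conversions"]

# The stated goals: CPA target of $20 and at least 30 conversions per month.
best = df[(df["CPA"] <= 20) & (df["Conversions"] >= 30)]

if best.empty:
    # The nuanced fallback: show the lowest-CPA campaigns instead.
    print("No campaigns met both criteria. Lowest CPAs:")
    print(df.nsmallest(3, "CPA")[["Campaign", "CPA", "Conversions"]])
else:
    print(best.sort_values("CPA")[["Campaign", "CPA", "Conversions"]])
```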
Avoid bad math
One issue I've faced with GPT-4 is that it doesn't consistently do math correctly. One contributing factor is what's called "drift" in LLMs.
As the model is updated over time, its behavior shifts, and the same prompt that once returned correct results may start to produce incorrect ones.
For example, in my own tests, I see occasional mistakes in the calculation of ratio metrics like CTR and CPA.
CPA is cost divided by the number of conversions, but sometimes GPT-4 skips the division, reporting a CPA far higher than the underlying numbers support.
To mitigate this problem, I now load the prompt with the values of ratio metrics so that GPT-4 doesn’t have to do that math but can instead use the data provided.
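Here's a sketch of that precomputation in pandas, again assuming typical Google Ads export columns, so the CSV you paste into the prompt already carries CTR and CPA:

```python
# Precompute the ratio metrics so GPT-4 never has to divide, assuming
# "Clicks", "Impressions", "Cost" and "Conversions" columns in the export.
import pandas as pd

df = pd.read_csv("campaign_report.csv")
df["CTR"] = df["Clicks"] / df["Impressions"]
df["CPA"] = df["Cost"] / df["Conversions"]

# Paste this enriched CSV into the prompt instead of the raw export.
print(df.to_csv(index=False))
```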
Another problem is that LLMs sometimes fail to understand that a high CPA is worse than a low CPA. When that happens, it may say that the campaign is doing great when it’s the exact opposite.
Left alone without guardrails, GPT-4 could generate an account summary for a client that gets the account team in a lot of trouble by botching basic PPC concepts.
For this reason, engineer your prompts with extra information and include instructions like:
- “High CPA is worse than low CPA.”
Use custom instructions
Prompt engineering is the original skill to make LLMs perform better, but it’s also the most manual. Fortunately, there are easy ways to make prompt engineering more scalable.
Custom instructions were introduced so users wouldn't have to re-type the same engineered instructions in every prompt.
A custom instruction is a user-level setting in ChatGPT that stores these repeated instructions.
Custom instructions are a great place to load in common things that should be considered for all interactions, like:
- “High CPA is bad.”
- “When CPA is too low, only suggest a higher bid if the impression share is less than 100%.”
Then, every chat with GPT-4 will consider this information and return more consistently high-quality answers.
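Custom instructions are a ChatGPT setting, but if you work through the API, the closest analogue is a standing system message prepended to every call. A minimal sketch:

```python
# Custom instructions treated as a standing system message, the closest
# API analogue to the ChatGPT setting.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "High CPA is bad. "
    "When CPA is too low, only suggest a higher bid if the "
    "impression share is less than 100%."
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Which of these campaigns need attention? [paste CSV here]"))
```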
2. Plugins
In prompt engineering, we could add CSV data with information about campaign performance, CPCs, or other data to a prompt.
But this involves manually getting the data, like Google Ads metrics, from somewhere else before interacting with GPT.
One way to make working with Google Ads data easier is to do it from the Google Ads UI rather than ChatGPT. I covered this technique in a previous article, explaining how to use Google Ads scripts to add GPT-4 prompts with access to Ads data.
But if you prefer to work in ChatGPT, plugins are a solution that allows the LLM to connect with additional data by letting it call APIs as needed. APIs are the connectors of data on the web, letting various systems communicate with each other.
Web Browser and Code Interpreter are two plugins built by OpenAI. They’ve also open-sourced a third plugin for knowledge base retrieval so anyone can easily enhance the LLM with additional knowledge.
Third-party plugins
But plugins can also come from third-party developers to connect with another API you might need as part of a chat with GPT-4.
Although there are roughly 1,000 plugins for ChatGPT, discovering useful ones is challenging because the OpenAI site lacks a comprehensive directory.
So, I used a bit of generative AI to pull together a list as of Jan. 22 and classify each plugin.
Check out the list in my spreadsheet so you can filter and hopefully discover a plugin that will help you. Feel free to make a copy to apply filters and search to your heart’s content.
Although plugins can be useful, they haven’t gained much traction.
People prefer using generative AI in their existing tools and workflows rather than switching to ChatGPT, where they'd have to work hard to recreate the capabilities of their current workflows.
Sticking with their existing tools and using their generative AI integrations is easier.
How plugins work
Once you’ve added plugins to your ChatGPT account from the Plugin Store, simply select up to three of them with a checkmark to use in your current chat.
The APIs supported by the selected plugins are then automatically available for your use.
Working with APIs may seem intimidating to non-programmers, but the advantage here is that the LLM handles the technical details, letting users get what they need through a simple conversation.
Think of it this way: when the chat leads down a path that requires some outside data, the LLM can figure out what that data is and knows how to construct the API calls to get what it needs.
The LLM also understands the API response that contains the requested data so that it can integrate this additional information into its responses.
From the user's perspective, the only difference after adding a plugin to a conversation is that the LLM can now bring more information into its responses.
A simple example would be conversing with GPT-4 about the weather. It doesn't know the current weather in Mountain View. However, given a plugin for weather data, it can do the following (sketched in code after the list):
- Figure out that the user wants to know when it will rain in Mountain View.
- Call the weather API for that city.
- And, if the response includes a 60% chance of rain on Tuesday, weave that data into its response.
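Stripped of the plugin machinery, that flow is just an API call whose result is handed to the model. Here's a Python sketch with a hypothetical weather endpoint (the URL and response shape are assumptions for illustration):

```python
# What a plugin does behind the scenes: call an API, then hand the result
# to the model. The weather URL and JSON shape are assumptions.
import requests
from openai import OpenAI

client = OpenAI()

# Steps 1 and 2: the model determines it needs weather data and calls the API.
weather = requests.get(
    "https://example.com/api/weather",
    params={"city": "Mountain View"},
    timeout=10,
).json()  # e.g., {"day": "Tuesday", "chance_of_rain": 0.6}

# Step 3: weave the retrieved data into the model's response.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "When will it rain in Mountain View? "
            f"Here is the weather API result: {weather}"
        ),
    }],
)
print(response.choices[0].message.content)
```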
To use weather data in ad text, GPT-4 can (see the sketch after this list):
- Use a plugin for the weather to get the chance of rain.
- Use its LLM powers to write a must-click ad headline.
- Use another plugin to put the new headline into a spreadsheet format that Google Ads can understand.
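The last step of that pipeline, formatting the headline for Google Ads, could look like this sketch. The column headers are assumptions; match them to the bulk-upload template your account uses:

```python
# Write a generated headline to a CSV for bulk import. The column headers
# are assumptions; match them to your bulk-upload template.
import csv

headline = "Rain Coming Tuesday? Grab an Umbrella Today"  # from the LLM

with open("ad_updates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Campaign", "Ad group", "Headline 1"])
    writer.writerow(["Umbrellas - Search", "Umbrellas", headline])
```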
Plugins could connect with company data to construct ads with customer reviews or current promotional offers.
Plugin capabilities
OpenAI plugins can retrieve real-time information, like sports scores, stock prices and the latest news. They're not limited to public data, either. A plugin could also retrieve business information like:
- Company docs.
- Brand guidelines.
- Advertising policies.
- Target audiences and personas.
- Personal notes about previous PPC experiments.
This feature could be used to craft highly personalized and targeted ad campaigns. With access to target audience preferences and detailed company brand guidelines, ChatGPT can tailor ads to your audience’s needs while matching your brand’s style.
And plugins are not just limited to using APIs to retrieve data. They can also push data back into another system through an API, enabling transactions like placing an order, updating a lead in a CRM, or adding a keyword to the ad platform.
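A write works the same way as a read, just with a POST instead of a GET. A sketch with a hypothetical CRM endpoint:

```python
# A plugin-style write: update a lead in a CRM. The endpoint URL and
# payload fields are hypothetical.
import requests

response = requests.post(
    "https://example.com/api/leads/123",
    json={"status": "qualified", "note": "Responded to the new PPC ad"},
    timeout=10,
)
response.raise_for_status()
```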
But as new as they are, plugins are already yesterday's news, having been surpassed by the more powerful custom GPTs.
3. Custom GPTs
Beyond plugins, there is an even newer and more powerful capability: custom GPTs. You make your own clone of GPT and instill it with extra knowledge, custom actions, specific instructions and your own preferences.
Anyone can build their own GPT using a wizard that collects the required information through a ChatGPT-style conversation and turns it into the right settings.
As a frequent Google Ads script developer, I created a GPT to help non-coders through the process of creating a Google Ads script.
My GPT is directed to interact with users in ways that help address common issues. For example, I told my GPT to do several things (condensed into a code sketch after the list), including:
- Encourage the user to explain what the script should do by using pseudocode.
- Ask the user to state concrete rather than vague goals (e.g., instead of “best-performing,” did you mean “CPA less than $20?”).
- Raise potential issues (e.g., are you considering appropriate lookback windows for the analysis?).
- Ask the user if they need a manager (MCC) or single account script.
- Ask the user if they want more detailed logs to understand what the code is doing.
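Custom GPTs are built in the ChatGPT UI, but if you want the same idea in code, OpenAI's Assistants API (in beta at the time of writing) accepts a similar instructions field. Here's a sketch that condenses the list above into one instruction string:

```python
# The Scripts Helper instructions stored via the Assistants API, a
# programmatic cousin of custom GPTs. The instruction text condenses
# the list above.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Scripts Helper",
    model="gpt-4-1106-preview",
    instructions=(
        "You help non-coders write Google Ads scripts. "
        "Encourage users to describe the script in pseudocode. "
        "Push for concrete goals (e.g., 'CPA less than $20' rather than "
        "'best-performing'). Raise potential issues such as lookback "
        "windows. Ask whether they need an MCC or single-account script "
        "and whether they want verbose logging."
    ),
)
print(assistant.id)
```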
Here's what an interaction with the Scripts Helper GPT might look like:
[Screenshot: a sample conversation with the Scripts Helper GPT]
GPTs are a massive evolution of custom instructions, letting you store a predefined set of extra prompt details.
One of the most powerful benefits GPTs have over custom instructions is the ability to have different preferences for different scenarios.
An agency that works with many clients would want different custom instructions for each client, which isn't possible when custom instructions are a single user-level setting.
Now, an agency could create a GPT for every client. Each GPT can be trained with different brand guidelines, style docs, and other client preferences, like different marketing strategies.
Because a GPT can be kept private to an individual, like an account manager, or shared privately within a company, like an agency, it's a powerful tool for shaping the AI so it produces more relevant and useful results for each client an agency works with.
Luckily, OpenAI has created a nice directory to help users find GPTs they might find useful.
4. Actions
Finally, there is the new OpenAI capability called Actions. Available in custom GPTs, Actions extend capabilities beyond Code Interpreter, DALL-E and web browsing by allowing access to any API, much like plugins did for GPT's core models.
Actions can be used to get new information or take actions resulting from an interaction. For example, ecommerce advertisers could connect with a product pricing API so that the LLM could return suggested ad headlines that include the correct, current product price.
Advertisers could use a different action to send the ad text suggestions from the LLM directly to the ad platform or to a spreadsheet generator that knows how to construct a spreadsheet in the right format for bulk imports.
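On the other side of an Action sits an ordinary web API that you describe to the GPT with an OpenAPI schema. Here's a minimal sketch of a pricing endpoint an ecommerce GPT could call; the route, SKUs and prices are all hypothetical:

```python
# A minimal pricing API a custom GPT Action could call. The route, SKUs
# and prices are hypothetical; a real service would sit behind
# authentication and be described to the GPT with an OpenAPI schema.
from flask import Flask, jsonify

app = Flask(__name__)

PRICES = {"SKU-123": 19.99, "SKU-456": 34.50}  # stand-in inventory data

@app.route("/prices/<sku>")
def get_price(sku):
    if sku not in PRICES:
        return jsonify({"error": "unknown sku"}), 404
    return jsonify({"sku": sku, "price": PRICES[sku]})

if __name__ == "__main__":
    app.run(port=8000)
```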
Because actions are part of GPTs, advertisers can maintain separate GPTs with different actions for each client. For example, your team could build an action for every ecommerce client you manage to pull in current inventory and price details.
Each client is connected to their own data, but the agency team can choose which data to work with by selecting the GPT associated with the right client.
Take your PPC marketing to the next level with GPT
To go beyond the default capabilities of ChatGPT, consider testing the various solutions available from OpenAI.
- Prompt engineering techniques help get the most out of base models.
- Plugins allow you to connect more data to the LLM.
- GPTs let you build your own version of GPT that has all the right preferences, learnings, and data connections built in.
- Actions are the ability inside custom GPTs to work with your own APIs to fetch data and make changes in third-party systems.
The better we leverage these capabilities, the better we can unlevel the playing field and tip the odds of winning at PPC in our favor.