Kirk Williams • PPC Podcast

Shapley Models & Attributive Common Sense with Nechama Teigman Full Interview - PPC Ponderings Podcast

10/25/19 UPDATE: Hello Facebook Agency Visitor Person!  We’re delighted to have you visit this awesome post. About a year ago, ZATO stopped offering Facebook Ads solutions so we could focus solely on what we do best: Google Ads. Because of this, we’re always interested in partnerships with great Social Advertising agencies (like yourself, wink wink!) and we offer referral fees for signed clients!  Anyway, back to it, and happy reading…

Post Summary

It's been a while!

The Q4 holiday season is officially over and we're back to the podcast.

I'm really excited to share this next interview with you, because like the last one (Heather's, check it out here if you missed it: Imputation, Predictive Models, & Emotions in Attribution), it was with someone who taught me about attribution from beginning to end of the interview.

Grab some coffee so you can keep up (faster talker alert!), and ponder each and every word Nechama shares about attribution in this can't miss episode.

In this episode, you'll hear the rest of the conversation Nechama and I had that we couldn't fit into our full Core episode on attribution as we all ponder digital attribution together.  

Listen on Apple Podcasts

Listen on Google Podcasts

Listen on Spotify

If you haven't already, make sure to catch the first full length episode here: Episode 2 - Attribution Concerns & Pitfalls

In the PPC Ponderings Podcast core episodes, we depart from a more traditional interview approach and take more of an investigative journalism format. Enjoy!


Nechama Teigman is the Senior Paid Search Data Analyst at AdVenture Media Group, based in New York City.

Episode Transcripts

Nechama Teigman (00:03):

When I think of attribution, I like thinking of it not just in terms of PPC, but it really applies to everything in life.

Kirk Williams (00:14):

If you've been building in this person, this interest in purchasing, for maybe months, giving the most credit to just that very end, whatever it might be, probably going to be brand or remarketing or something like that, is... I don't think that's giving an accurate look at what your advertising is actually doing for people, so...

Nechama Teigman (00:40):

Anytime you're looking at data, specifically marketing data, but really any data, it's that it has to make sense.

Chris Reeves (00:49):

Welcome to the ZATO Works PPC Ponderings podcast, where we discuss the philosophy of PPC and ponder everything related to digital marketing. Today's show is a bonus episode of our full interview with the senior paid search data analyst at AdVenture Media Group, Nechama Teigman. We interviewed Nechama for our episode on attribution, and it ended up being a fantastic conversation full of passion and extreme knowledge on the subject. If you haven't heard our second PPC Ponderings episode yet, give it a listen. On a technical note, we did have some audio issues during this interview, so apologies for the random blips and loops. Please enjoy our behind-the-scenes conversation with Nechama.

Kirk Williams (01:31):

Okay. So first, would you tell us your name, your title, maybe what you do at AdVenture, where you work and all that?

Nechama Teigman (01:38):

Yeah, my name is Nechama Teigman and I work at AdVenture Media Group, a digital marketing agency in New York, and I work on the strategy side of things. So my background was account management, and I still do some of that. I've moved more to strategy, which is basically taking the account management side and pairing that with the analytics side, and coming up with an overall strategy. So, my preference is to work within the data side, but then I don't like being the person who works with the data and does all the hard work and doesn't get to see the results from that. So I prefer to sit in a place where I get to look at the data, make some real recommendations off of it, and get to actually see the outcomes of that.

Kirk Williams (02:25):

Sounds like a fun job. Yeah, I've chatted with Patrick and Isaac a little bit. Isaac is the founder, and Patrick, I think he's the COO. Is that correct?

Nechama Teigman (02:35):

Yes.

Kirk Williams (02:35):

Yeah. So for those listening, Patrick is the author of Join or Die. That's probably his claim to fame at this point. It went out this last year. No, it's funny. Because I released a little book that's a little bit more just kind of meandering random thoughts, which is me. Patrick released around the same time. I think it was a month later. Then he released the digital marketing book of the last few years. And I was like, "Oh man, Patrick, come on, you totally killed my thunder because your book is better." So, there you go.

Kirk Williams (03:06):

If you only have a little bit of money and you have to choose between the two books, I would say go buy Join or Die. So I'm a terrible marketer of my own book, probably. Okay. So we're going to be discussing attribution in this episode. So let's start from the beginning, maybe at a real base level. How would you define attribution?

Nechama Teigman (03:28):

Yeah. So I would define attribution as a process by which you assign value to different touch points that led to conversion. And then by that logic, an attribution model would be the model that allows you to assign that value. But whenever you have an attribution question, it's always about how much value do you assign to each touch point? And that's supposed to be done in a fair way, right? So, it's supposed to correlate with the amount that that touch point actually gave. So how valuable was that touch point? And you want to assign the correct value to it, which is similar to... when I think of attribution, I like thinking of it not just in terms of PPC, but it really applies to everything in life, right?

Nechama Teigman (04:15):

So if someone has a partnership and they have to divide it up, like, "How do we divide that up?" And that's why if you... getting too practical right now, but if you look at the models behind the attribution, some of the biggest ones are taken from game theory, which also it's just like, "How do we just act within life and how do we assign these values in a fair way?"

Kirk Williams (04:38):

What are some of those models of attribution?

Nechama Teigman (04:42):

There's a model called the Shapley model. I think that's the one that Google uses. There's that, and Markov chains would be the main ones. That's what Google uses, by the way. We're talking about data driven attribution at this point. Otherwise, I think the models are fairly simple in terms of the rule-based ones. They're giving full credit to the first click or the last click, or you're dividing it up in a very systematic way, versus these machine learning models and what they do. So for example, the Shapley model essentially came from a Nobel prize winner in economics for game theory, with his value called the Shapley value that was used for game theory. And it's really just solving this question of, if there are multiple members in society and they're contributing in different ways, how do we value them? And what his model is, in... oversimplified terms. Because I will not understand it if it's written out in math terms, but when you oversimplify it, it's just asking incremental... what's the incremental value of each additional touchpoint?

Nechama Teigman (05:41):

So if I'm comparing someone who comes in from, for example, a Google paid touchpoint, and then you go on to Google organic, and then you have a Facebook touchpoint, right? And that's your journey. And then you have another journey that's a Google paid and then a Google organic and then a Facebook and then email. What was the probability of conversion within the first sequence versus within the second sequence? And whatever additional probability you have in the second sequence, that's the value of email. So that's done at the larger scale, but it's also done for each and every campaign or each and every keyword, right? So that's kind of the baseline behind the Shapley model, which is just looking at everything that we have. And all these models that are using data need a certain amount of data to be able to be used.
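
To make Nechama's "incremental value of each additional touchpoint" idea concrete, here's a minimal Python sketch of a Shapley-style calculation. Everything in it is illustrative: the channel names, path counts, and conversion numbers are made up, and unseen channel combinations simply count as zero, which a real data-driven model would handle far more carefully.

```python
from itertools import permutations

# Toy path data: (channels touched in the journey) -> (conversions, journeys).
# These numbers are invented purely for illustration.
PATH_STATS = {
    frozenset({"paid_search"}): (30, 1000),
    frozenset({"paid_search", "organic"}): (60, 1000),
    frozenset({"paid_search", "organic", "email"}): (90, 1000),
    frozenset({"organic", "email"}): (40, 1000),
}
CHANNELS = ["paid_search", "organic", "email"]

def conv_rate(coalition):
    """v(S): observed conversion rate for journeys that touched exactly S.
    Missing coalitions default to zero, which real models would smooth."""
    conv, total = PATH_STATS.get(frozenset(coalition), (0, 1))
    return conv / total

def shapley_values(channels):
    """Average each channel's marginal lift over every possible arrival order."""
    values = {c: 0.0 for c in channels}
    orders = list(permutations(channels))
    for order in orders:
        seen = set()
        for channel in order:
            before = conv_rate(seen)
            seen.add(channel)
            values[channel] += conv_rate(seen) - before
    return {c: v / len(orders) for c, v in values.items()}

print(shapley_values(CHANNELS))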

Nechama Teigman (06:27):

I know Google recently said that they're going to let you do it much faster. I'm honestly not sure how that's going to work out. But you really need a decent amount of data to be able to train these systems so that they can start to compare different users who converted and see how they converted. Right. See what their sequences were so that they can come up with... Okay, the value for email, if it happens as the fourth touch point, is whatever, versus the value for email when it happens as the first touch point, versus the value of paid or paid social or paid search or whatnot. And then the Markov is based on a Markov chain. So that's a little bit different. That's not coming out of an economic model. Markov was a mathematician. So that's in the math space. So it's getting even further from stuff that I understand, but I know a little bit about it because my husband is a math guy.

Nechama Teigman (07:17):

I remember him talking about it. And what that looks at is, for each sequence, if I would take one away, well, how much would I lose? So it's almost the opposite, right? So let's say I took away email this time instead of adding it, then what's the probability that that conversion wouldn't happen? And whatever that is, that's the value of that touch point. So those are the main data driven models that are used. And very often they come out with essentially the same answers. So I don't think it's that important to decide which one we look at. I feel like I wanted to come back to something, but I don't remember what that was.
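
Here's the same kind of toy sketch for the Markov-style "removal effect" Nechama describes: take a channel away, see how much conversion probability disappears, and treat that loss as the channel's value. This skips the actual transition-matrix math and uses made-up paths, so read it as the intuition rather than a real implementation.

```python
# Toy journeys: (ordered touchpoints, converted 1/0). Numbers are illustrative.
PATHS = [
    (["paid_search", "organic", "email"], 1),
    (["paid_search", "organic"], 1),
    (["organic", "email"], 0),
    (["paid_search"], 0),
    (["email"], 1),
]

def removal_effects(paths):
    """Simplified removal effect: how much total conversion volume disappears
    if every journey that touched a given channel is broken."""
    total_conv = sum(conv for _, conv in paths)
    channels = {c for path, _ in paths for c in path}
    effects = {}
    for channel in channels:
        surviving = sum(conv for path, conv in paths if channel not in path)
        effects[channel] = (total_conv - surviving) / total_conv
    # Normalize so the effects can be used as fractional credit.
    total_effect = sum(effects.values())
    return {c: e / total_effect for c, e in effects.items()}

print(removal_effects(PATHS))
```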

Kirk Williams (07:56):

No, that's fantastic. It's interesting... to hear you talk about the role that machine learning is having in attribution as well, right? Using those different models, you had referenced data driven attribution, which, for a lot of people listening, this is probably the first time they've ever heard of the Shapley model or the Markov model, but we've most likely all heard of data driven attribution, especially since that's probably what reps have told us from day one that we should switch to, right? And it's interesting how there's kind of a tie I see, however simplistic, between even thinking through bidding and attribution models, right? In terms of bidding, you have kind of the simplistic manual type things where you as a human take a few things that you see, you make a couple of adjustments or guesses, and then you set the bid, right?

Kirk Williams (08:45):

Then all of a sudden machine learning bidding, smart bidding, comes along and they're just... they're utilizing all of this information that we don't even have access to as PPC advertisers. Where did the person last visit in searches and that sort of thing, and then bringing that into the bidding model. And so in some ways you're basically saying... I mean, that's really what's happening in attribution as well. You either have these overly simplistic manual type models or you have the machine who kind of is taking data, a lot of data that we really could not compile as humans, and then making the best decision in that. Is that a fair tie?

Nechama Teigman (09:24):

So I'd say it's similar. I would want to point out some differences here. So machine learning and bidding, right. That's taking a lot of values, a lot of inputs, that we have absolutely zero access to. So this is stuff that Google has about your location, about your search history, about the time of day for you specifically, right? So it's very individualized for machine learning based bidding. So in a way it's a lot more advanced. When we're talking about the machine learning models, they're really just doing things that we could do if we wanted to. We can sit there and we can do the math manually and we can calculate the probability of each thing based off of the data that we have access to, as long as, as a business, you're collecting this data, but it is accessible.

Nechama Teigman (10:10):

So it's data of touch points that people go through leading up to conversion. And many businesses will capture this in their database or they'll rely on GA 360 or some other tools to do this. But this is stuff that we have access to and we can actually calculate on our own if we want to. Now the machine learning aspect of it makes it a lot faster, makes it more efficient. But I wouldn't say that the machine learning has the same impact on attribution that it does on bidding. It's just that we're using a more complex model. So it is a lot better. I agree with the Google route to say that we want to use data driven models. Not exclusively, because I like comparing them to other models, but that's just a side point. But nonetheless, they're good, but they're not the same. They're not up to the same level of sophistication as we are with bidding.

Kirk Williams (11:01):

Yeah. No, thank you. That's helpful. Okay. So thinking through the probabilistic aspect of those machine learning models, especially as privacy kicks in more, there are going to be holes in that user journey more and more, I think. That's already the case in some ways with things like dark social, or some platforms, in their attribution models, are going to be able to look at things like ad views and that. If we're pulling things into a database looking internally, we're pretty much dependent on click models, things like that. Maybe talk through the role here that modeling conversions has to do with, especially, those machine learning models. And then how do you see that changing in the future?

Nechama Teigman (11:42):

I would differentiate the machine learning models that we currently have from when we're modeling conversions in general. You spoke to a view-through conversion. So that's basically what makes all this stuff insanely complicated. And up until you get to view-throughs, it's complicated and it's hard to wrap your head around. It takes advanced math skills and whatnot. But I think that we can get most people there. Once you bring in the view-throughs, that's when you really take it to some other world where you lose most people. And the reason for that is, like you said, in your internal databases, that data doesn't exist. So if we're talking about my view of the ideal attribution model, which was not your question at all, but that's where I guess I'm bringing this for now. It would be... it's a tie.

Nechama Teigman (12:28):

So we have to put in regressions, right, to be able to see, if I'm doing a branding campaign and I have tons of impressions and very few clicks, is it fair to say that that had no real impact just because it didn't lead to direct clicks? If that's the case, then we're basically discounting all traditional media, which some people may do, but I'm not really prepared for that. Instead, I'd rather kind of combine the two. So we want to look at view-throughs, understand that a view and a click are different, right? An impression and a click are very different things. And I would give more weight to clicks in general, but we have to complement that by putting in statistical models, which are calculating regressions. So that we're saying, "Hey, how are these two things correlated?"

Nechama Teigman (13:15):

And if they're correlated, how can I input that on top of my model? So it's kind of like layering the view-throughs on top of the click-based model and coming out with an overall model that we look at. But this is more... I think this is more applicable when you're looking at your whole entire business. So when you're looking at it across [inaudible 00:13:34], when you're looking... let's say just within Google Ads or Facebook or Bing, or whatever it is, the view-throughs are going to be less important depending on the channel. If I'm running Google Shopping campaigns, I think Patrick actually had an analysis where he showed that it did have a bigger impact than just the clicks. But nonetheless, I am less concerned about that than I am when I'm running a Facebook branding campaign, just because the nature of those two types of campaigns is so different.
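
For the regression idea Nechama mentions, the sketch below shows the simplest possible version: checking whether daily display impressions correlate with daily conversions so view-through impact can be layered on top of a click-based model. The numbers are hypothetical, and a serious analysis would control for seasonality, other channels, and lag.

```python
import numpy as np

# Hypothetical daily data: display impressions vs. total site conversions.
impressions = np.array([12000, 8000, 15000, 9500, 20000, 11000, 17000])
conversions = np.array([42, 31, 50, 35, 61, 38, 55])

# Simple least-squares fit: conversions ~ slope * impressions + intercept.
slope, intercept = np.polyfit(impressions, conversions, 1)
corr = np.corrcoef(impressions, conversions)[0, 1]

print(f"correlation: {corr:.2f}")
print(f"estimated conversions per 1,000 impressions: {slope * 1000:.2f}")
```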

Kirk Williams (14:00):

Oh okay. So let's think of a more simplistic situation. An advertiser starts a Google Ads account. They get things rolling and they go to conversions. They create their first conversion. What model should they start out with?

Nechama Teigman (14:22):

So as a baseline model, I typically like to use a position-based model in those cases. I'm definitely anti last click and first click. I don't have a problem being strong on either of those two, because by default you're discounting other ones. Now when it comes to, how do we attribute them across different conversions, across different touchpoints within the same channel? Ideally I want to go for something more complex, but if you're starting out, what typically happens is, being that... I think you mentioned it's a new business. So being that it is a new business, the first touch point's going to be incredibly important, right? Because we're educating people about the existence of this business versus... And the last click is always important because without it, you end up nowhere. And then the middle clicks are also important, but probably less so.

Nechama Teigman (15:15):

So I would default to a position-based model in that situation. However, whenever I'm looking at data kind of moving forward, I'm always using the model comparison tool, because I don't think that there's any one model which is perfect at all. Even the data driven ones, nothing's perfect. We're talking about human behavior. It's not something that's very easily modeled. And to me, the most interesting thing is opening up Google Analytics and just looking at the model comparison tool and making some custom models within there also. So I'll set it to position-based within my Google account and let that go. But I'm also looking like, "Okay, if it was on last click, what would it be like?" And really, making these decisions, you want to have a more holistic view, but in terms of what I'm setting it to for the machine to pick up on, that would probably be position-based.
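
Since position-based comes up as the default here, this is roughly what that model does under the hood. Google's version is usually described as a 40/20/40 split; the function below assumes that split and uses hypothetical channel names.

```python
def position_based_credit(path, first=0.4, last=0.4):
    """Split one conversion across a path: 40% to the first touchpoint,
    40% to the last, and the remaining 20% spread over the middle."""
    if len(path) == 1:
        return {path[0]: 1.0}
    if len(path) == 2:
        return {path[0]: 0.5, path[1]: 0.5}
    credit = {}
    middle_share = (1 - first - last) / (len(path) - 2)
    for i, touchpoint in enumerate(path):
        if i == 0:
            share = first
        elif i == len(path) - 1:
            share = last
        else:
            share = middle_share
        credit[touchpoint] = credit.get(touchpoint, 0) + share
    return credit

print(position_based_credit(["display", "organic", "email", "paid_search"]))
```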

Kirk Williams (16:09):

Yep. Cool. For what it's worth, [crosstalk 00:16:14] there with you.

Nechama Teigman (16:13):

I was going to ask you, I don't know if that's the way this is supposed to go, but I was curious.

Kirk Williams (16:18):

Yeah, I know. That's pretty much what we always do too. We switch to that position-based. Again, all the weaknesses in there aside, it's probably the best simplistic model I think that you can go to. I don't know if you know who [Aaron Levie 00:16:34] is at [inaudible 00:16:35]. He was actually on our last episode. But a long time ago... I mean, this was a few years ago, he wrote about how he wished that we had a reversed time decay model. And I thought that was kind of an interesting idea. So you start more with the benefit of first click, kind of what you would reference, especially maybe for a new advertiser, and then it starts to give less and less credit as you get towards the end. We don't have that. So... and I'm not a huge personal fan of the time decay model, which again is kind of the opposite.

Kirk Williams (17:05):

You almost give less credit to the beginning. It gives more credit as you get closer to the point of sale. And gosh, it's just tricky to know how to think through that, because I do think there are arguments for why some models are better in other instances, and yet the closer you get to sale, if it's already... if you've been building in this person, this interest in purchasing, for maybe months, giving the most credit to just that very end, whatever it might be, probably going to be brand or remarketing or something like that, is... I don't think that's giving an accurate look at what your advertising is actually doing for people. So...

Nechama Teigman (17:51):

Yeah, I would agree with that.

Kirk Williams (17:53):

Cool, cool. Let's see. I was going to ask you about using the model comparison tool, but you kind of addressed that. Do you have any other thoughts? Are there specific ways that you like to use the model comparison tool or exact things you're looking for?

Nechama Teigman (18:08):

Yeah. So within Analytics, it is by far my favorite tool in the platform. It's going to obviously vary by client and what I'm looking for, but in general, I always find it helpful to look at a last click model and a first click model versus a model giving full credit to whatever platform I'm interested in. So for example, earlier today, I was looking at a client's account, which we're running Facebook for. And we're looking at the last click versus the first click versus giving a hundred percent of credit to Facebook, which is building a custom model within Analytics, which is fairly simple to do. And that really gives you a much better understanding of the customer's journey. So now, when I see that when I'm on last click, I have, let's say, 10 conversions, but it goes up to a hundred conversions when I'm giving full credit to Facebook, and I have 50 conversions at first click.

Nechama Teigman (19:04):

Now I'm starting to kind of build a story, which is really, in my opinion, the point of attribution, right? We want to get a better understanding of what's actually going on, and just giving me a number like, hey, this is the best number I can come up with by the most sophisticated attribution model out there, which is like Facebook equals 50 and Google equals 20, whatever. That's not really creating the story for me as well as I'd like it to be. So I typically like to go with the extremes and then something in the middle. So first and last are usually my extremes and then something else in the middle to kind of create more of a story there.
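
Her "extremes plus something in the middle" approach is easy to mimic on your own path data. The sketch below counts how many conversions a single channel would get under last click, first click, and a custom everything-to-that-platform model; the paths are invented for illustration.

```python
# Hypothetical converting paths (ordered touchpoints for each conversion).
CONVERTING_PATHS = [
    ["facebook", "organic", "paid_search"],
    ["paid_search", "facebook"],
    ["facebook", "email", "facebook"],
    ["organic", "paid_search"],
    ["facebook", "organic"],
]

def compare_models(paths, channel):
    """Conversions credited to `channel` under three simple lenses."""
    return {
        "last_click": sum(p[-1] == channel for p in paths),
        "first_click": sum(p[0] == channel for p in paths),
        "any_touch_full_credit": sum(channel in p for p in paths),
    }

print(compare_models(CONVERTING_PATHS, "facebook"))
```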

Kirk Williams (19:41):

Okay. So as you're looking, especially with model comparisons, and you're kind of keeping an idea and all that to get a bigger picture of the story like you'd said, do you think there is a lot of benefit to... I mean, there are a whole lot of attribution tools out there, and if you have some favorites, go ahead and shout them out. But do you think... especially, let's just say the average advertiser out there... do you think that they can get a lot of what they're looking for in things like the attribution model comparison tool in Google Analytics? Or do you think there are going to be specific benefits that a lot of those attribution tools bring that are worth paying for? Again, feel free to shout out specific ones if you have them.

Nechama Teigman (20:21):

Yeah. So I think it really depends. If you're talking about a small advertiser, I don't think that they're worth paying for, especially when you're starting out, especially if you're running performance-driven campaigns. If you're running more brand awareness campaigns and a bunch of different types, and you're trying to look at this overall system altogether, I think they bring more value. But what's always going to happen is that when you're at a decently small amount of spend, it's going to not be worth it. You're going to look at the data coming in, you're going to be able to have a feel for things, as unscientific as that sounds. And you're going to be able to have a pretty good story of what's going on just using the basic stuff that Google offers for free. However, if you're talking... once you're putting more money into it, the risk just gets exponentially bigger, right?

Nechama Teigman (21:09):

So instead of spending a thousand dollars [inaudible 00:21:12], spending a hundred thousand dollars a month, a million dollars a month. Now the difference between what your basic models are going to give you and these more advanced systems is just going to get amplified. So at that point, I do think that it is worth it. In terms of shouting out, I don't have anyone to shout out, but if you do, I am so interested, because we've been obsessively looking for a really good solution. And I'm at the point where I'm just trying to figure out if there is a really good one or if this is something that we just have to build.

Kirk Williams (21:43):

I don't either, which means we'll probably both get hit up by lots and lots of sales reps of attribution tools after this. So some advertiser is getting into this, they're super excited. They've read three blog posts on attribution. Really understand it. They've picked their position-based model. What are ways that we should say, "Okay, well, that's good. You're on the right track. Now slow down. Here are some things to make sure you consider with attribution: concerns, limitations, challenges, that sort of thing."?

Nechama Teigman (22:16):

Yeah. Okay. So the first thing I would say to this is that it's not really just attribution, but anytime you're looking at data, specifically marketing data, but really any data, it's that it has to make sense. Sometimes you... that doesn't mean it has to validate your theory, because sometimes our theories are just straight out wrong, but it has to make sense. If it's not making any sense, it doesn't mean that the attribution model's wrong. It doesn't mean that you're wrong. What it means is that you have to do a deeper dive. So the first thing I would do is, if you're seeing something where Google's attributing a weird amount to it, and it just doesn't make sense to you based on what you know about your business... Especially if you're marketing for a specific business that you have a lot of... that you really understand, or if you're a business owner, it's your own business and you really understand your business, I would never discount your knowledge of your business just because a system is spitting out a different number.

Nechama Teigman (23:13):

And the thing is just trying to do a deeper dive and figuring out why this is happening. Does this make any sense? What's going on? So I'll just give you a quick example there. In Analytics, we were looking at analytics for a client and we were seeing that referrals were getting a ridiculous amount of conversions attributed to them, but what was happening was a lot of the referrals were coming in last click. But then if we switched to first click, which kind of comes back to my obsession with using the model comparison tool, it was dropping off. So I don't remember the actual numbers, but let's say there were a hundred last click conversions associated with it and a handful of first click ones. And that doesn't make sense to me, because that's not how referral traffic works. Referral traffic isn't real marketing to my customers, or something more along those lines.

Nechama Teigman (24:04):

And if we had just kind of gone with the data there, it would have been an indication that we should make certain business decisions, which probably wouldn't have made the most sense to me. So the next step was like, "Okay, let's figure out what this referral traffic is." And doing a deeper dive, we realized that there was a button, a link on the site, that was being redirected to a referral and redirected back for whatever reason. And it was kind of messing with everything. So the first thing I would just say is, you should have an idea of what the data should be, and if it's not that, then do everything you can to figure out what's going on. And if after you do a real thorough check and you kind of look through everything, if it still shows that, then at least have a theory, at the very least, of why the model could be right before kind of leaning all into it.
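
That referral story suggests a simple sanity check you can run yourself: compare each channel's conversions under last click and first click and flag anything wildly lopsided. The counts and the 5x threshold below are arbitrary, hypothetical values, just to show the shape of the check.

```python
# Hypothetical conversion counts per channel under two models.
LAST_CLICK = {"referral": 100, "paid_search": 80, "organic": 60, "facebook": 25}
FIRST_CLICK = {"referral": 4, "paid_search": 70, "organic": 75, "facebook": 40}

def flag_suspicious_channels(last_click, first_click, ratio_threshold=5.0):
    """Flag channels that collect far more last-click than first-click credit,
    which can signal a tracking issue (e.g. a redirect stamping itself as the
    referrer right before conversion) rather than real marketing impact."""
    flags = []
    for channel, last in last_click.items():
        first = max(first_click.get(channel, 0), 1)  # avoid divide-by-zero
        if last / first >= ratio_threshold:
            flags.append((channel, last, first))
    return flags

for channel, last, first in flag_suspicious_channels(LAST_CLICK, FIRST_CLICK):
    print(f"check {channel}: {last} last-click vs {first} first-click conversions")
```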

Nechama Teigman (24:56):

And the second thing is that I would look at it more directionally. Whereas if it's saying that a specific campaign is driving most of your conversions and is the most efficient, then it probably is the best campaign, if we're being honest, as long as we're looking at the model correctly. But that doesn't mean that the other ones are necessarily bad. And it doesn't mean that it's 10 times better than the one it's saying it's 10 times better than. So I would look at it with a grain of salt, but still kind of putting a decent amount of trust in it, because at the end of the day, you need to trust something. The question is, what is that something? And I do tend to put a decent amount of trust in these models as long as it makes sense.

Nechama Teigman (25:42):

The last thing I would say here is that if you're looking platform-specific, then things kind of change, right? So if we're looking at, say, Google specifically, display by default is not given any attribution as long as a search or shopping click happened from that same user. So if I am shown a display ad and I clicked on it and I get to the site, and then I decide to come back, I do a branded search and I see a branded ad and I click on it, the full credit's given to the branded campaign. So in that case, that's just a clear limitation in the way the attribution model is set up within Google Ads right now. So there, we have to look at that. So I would just say, knowing your attribution models and knowing what's going on is important to be able to decide how much credit to put in the data at the end of the day.

Kirk Williams (26:34):

Do you have any tips or best practices to think through when you're going into a Google Ads account and changing the attribution model from last click, maybe to position-based, or maybe they're ready for DDA, so you switch it from last click to DDA? Anything that they should think about in kind of the big picture of managing that?

Nechama Teigman (26:51):

Yes. So it depends on how your account's set up. There are two main factors I would talk about here. The first one is going to be what your time lag and path length is right now. So I don't know that we touched on these yet, but these are really important for attribution, which is, how long does it take for somebody to convert from their first interaction with the brand? And how many touch points is that? So if I'm looking at an account and the client's on last click, but every single touchpoint is on average one time, [inaudible 00:27:24], it's just not... it's not worth it. In the event that it's taking multiple touch points... And I know that's not really off. Google's really getting the wrong information here, as is the advertiser. So that's the first thing that I look at, and that kind of determines, is this something that has to happen right now or is this something that we can wait... hold off on?

Nechama Teigman (27:43):

Especially as right now we're in pre-Black Friday, right? That's what everyone's getting ready for. And there's a client who is eligible for data driven, but we actually said, don't do it yet. Wait till after Friday. Because this is the second thing, which is that if you're using automated bidding, so anything machine learning based, then you're most probably going to throw your campaigns back into a learning phase where it's going to have to go through a learning period. And it's not just that your reporting is going to be different; Google's getting different information and Google has to be able to adjust to this. I would not recommend doing it if you are anticipating it having a big impact, based on the fact that there are many touch points and before, let's say, we were only counting one. I wouldn't recommend doing it at a time when you really need the account to be stable.

Nechama Teigman (28:30):

I'd wait for a period when you're okay with a little bit less stability. Ultimately you kind of have to bite the bullet and just go for it. But really, just do it anticipating a drop in overall performance for a little bit of time, with the end goal of having a better future.

Kirk Williams (28:50):

So you alluded to it. Can you expand more on why an attribution model would impact smart bidding? Why can it kind of push it back into learning and all that? Can you expand more on what's taking place there?

Nechama Teigman (29:03):

Sure. So the way bidding works, right, is instead of... well, at least smart bidding, instead of us giving Google a specific CPC bid and saying, "Hey, this is what we want to bid for these keywords," Google's coming up with that bid by themselves. And they're doing that based off of our inputs, which are our keywords and our budget, et cetera, but also based off of the conversion rate probability. So what is the anticipated conversion rate based off of all these different attributes that Google looks at, which would include anything about the user: where they are, which we kind of referenced earlier, what they're searching, what their search history is. So what is their probability to convert for our ad? And it takes both of those things into account, and then comes up with a CPC bid. Now, Google's making those conversion rate probability predictions based off of the user, but also based off of the history of your account.

Nechama Teigman (29:58):

So if a specific keyword has a history of having a really high conversion rate, then it's going to be... then Google's more incentivized to bid higher on that. Because if we go back to what these bid strategies are, these are conversion based bid strategies. So either you're on maximize conversions, where you're literally telling Google, maximize my conversions, or you're telling them, maximize my conversions at a specific CPA or [inaudible 00:30:24] at target. But nonetheless, they're all tied back to your conversions. So if you have a specific keyword which is bringing in a large amount of conversions, and now we switch our attribution and suddenly it has way fewer, the system's going [inaudible 00:30:40] confused and it's not going to know what to do until it starts to gather more data. So one thing that we... that I typically will say is, a confused algorithm is worse than no algorithm, because confused algorithms, they kind of just go out of whack and they start to say, "I don't know what's going on."

Nechama Teigman (30:58):

So either it just stops altogether. That's one case that probably wouldn't happen in the case of just changing your attribution model. But there are times when your conversion data isn't coming in correctly and Google's like, "I don't know what's happening here." And all of a sudden your campaigns just don't spend any money. The other option is that it goes crazy. So it's just like, "I don't know what's happening here. I need more data in order to figure this out." And so it just starts testing. So basically what you're doing is you're allowing for more testing to happen, which typically results in a period of instability followed by better performance in the future. Because we know we're doing this for a reason, but because Google has a very incomplete amount of data, because it's being fed, let's say, on last click attribution for the last three years, it doesn't know any of the... it doesn't really know the full impact of the keywords and searches that's happening. And it really has to figure that out all over again.
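
To see why a sudden change in reported conversions rattles automated bidding, it helps to think of a target-CPA bid (heavily oversimplified, and not Google's actual algorithm) as conversion probability times the CPA you're willing to pay. Switch attribution models and the conversion history feeding that probability changes, so the bids move too. The rates and CPA below are hypothetical.

```python
def target_cpa_bid(predicted_conv_rate, target_cpa):
    """Oversimplified tCPA-style bid: pay up to what a click is expected to be
    worth, i.e. conversion probability times the CPA you'll accept."""
    return predicted_conv_rate * target_cpa

# Before an attribution switch, a keyword's history says ~5% of clicks convert.
print(target_cpa_bid(0.05, 50))   # -> 2.5 (willing to bid about $2.50 per click)

# After switching models, the same keyword may suddenly be credited with far
# fewer conversions, so the learned rate (and therefore the bid) drops until
# the system re-learns from fresh data.
print(target_cpa_bid(0.01, 50))   # -> 0.5
```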

Kirk Williams (31:53):

When you are talking a client through changing a model, not directly before Black Friday, what timeframe do you usually give them in terms of letting them know, "Hey, there's probably going to be some re-entering of the learning period and all that. Let's give it this amount of time."?

Nechama Teigman (32:11):

So I think a lot of the default I've heard, especially from Google's side, is two weeks as a time period. I like to think of it more so in terms of clicks. So spend... I know people like to talk about spend, like, okay, once you spend $10,000, you're okay, or whatever it is, but I don't think that really makes sense, because you could be spending $20 on a click or 50 cents on a click. We're really talking about a volume game here. So if a client has a higher amount of clicks, so a client that gets more volume to their site from our campaigns, then I'm usually comfortable saying within two to three weeks we should be able to move on. If it's a client that gets a smaller amount of clicks to their site, it could take a couple of months.

Nechama Teigman (32:54):

And that's also part of the... while we're making the decision to switch over, that's part of it. If it's going to take six months to get anything there, and also when there's so little data that it's going to take six months, that's a problem, because it's not just that it's going to take six months, it's also that Google tends to value more recent data more so than older data. So when you tell Google to switch the attribution model and you're not going to have enough data for six months, it's probably not worth it to even switch, if we're being honest, because it's never going to actually have enough data to do a good job for you. So really what I'm looking at is how much data there is. If there's a lot of data, we could kind of go with, you know, [inaudible 00:33:37], move two weeks, just because there are day-of-the-week discrepancies. So we want to give it at least two cycles of that. But typically we'll give it at least a month to fully figure itself out.

Kirk Williams (33:50):

Okay. Anything else on attribution? This would be the last question. Anything else on attribution that we should think about?

Nechama Teigman (33:58):

I guess one thing I would say is that as a company is growing, they really have to take it seriously. If you're in the beginning phases, it's okay to not invest in attribution models. I would even recommend that they don't until more advanced modeling is needed, but if you're growing and if you're doing a lot of different types of marketing, then it becomes worth it.

Chris Reeves (34:23):

This has been a bonus episode of the PPC Ponderings podcast. Keep checking back for more interviews and our next full episode. If you like what you hear, please consider sharing this with your network or leaving us a review on Apple Podcasts. Until next time, may the auctions be ever in your favor.

 

Want more free content like this delivered directly to your inbox?
Subscribe Here
Kirk Williams
@PPCKirk - Owner & Chief Pondering Officer

Kirk is the owner of ZATO, his Paid Search & Social PPC micro-agency of experts, and has been working in Digital Marketing since 2009. His personal motto (perhaps unhealthily so) is "let's overthink this some more." He even wrote a book recently on philosophical PPC musings that you can check out here: Ponderings of a PPC Professional.

He has been named one of the Top 25 Most Influential PPCers in the world by PPC Hero 6 years in a row (2016-2021), has written articles for many industry publications (including Shopify, Moz, PPC Hero, Search Engine Land, and Microsoft), and is a frequent guest on digital marketing podcasts and webinars.

Kirk currently resides in Billings, MT with his wife, six children, books, Trek Bikes, Taylor guitar, and little sleep.

Kirk is an avid "discusser of marketing things" on Twitter, as well as an avid conference speaker, having traveled around the world to talk about Paid Search (especially Shopping Ads).  Kirk has booked speaking engagements in London, Dublin, Sydney, Milan, NYC, Dallas, OKC, Milwaukee, and more and has been recognized through reviews as one of the Top 10 conference presentations on more than one occasion.

You can connect with Kirk on Twitter or Linkedin.

In 2023, Kirk had the privilege of speaking at the TEDx Billings on one of his many passions, Stop the Scale: Redefining Business Success.
