Dynamic Keyword Insertion for Landing Pages

One critical aspect of highly successful search marketing campaigns is making sure searchers find what they are looking for. I have written about this before.

To accomplish this, we first need to grab the searcher’s attention, get them to click through to our pages, and ensure that the pages’ content matches the search.

Whether you are doing SEO or PPC, it is imperative that your ads (or, for SEO, your title and description) include the search terms.

Advanced PPC platforms (such as Google AdWords) provide a very useful feature for this purpose: Dynamic Keyword Insertion (DKI). It helps the advertiser create dynamic ads that automatically include the searcher’s keywords in the ad copy.

DKI works by placing a placeholder (e.g., {Widgets}) where you want the keywords to appear. A static ad that says “Buy Widgets” will say the same thing no matter what the user searches for. With DKI, in the ad “Buy {Widgets}”, the braces and the text inside them are replaced with whatever the user types in the search box. If he or she types “blue widgets”, the ad will say “Buy Blue Widgets”, and so on. This is very useful. DKI can be used on any part of the ad: the title, the description text, even the landing page URL. Jennifer Slegg wrote an interesting article on using DKI to change the URL of the landing page in a PPC ad.
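
For reference, the real AdWords placeholder also takes a default text, which is shown when the query is too long to fit in the ad, and its capitalization controls how the inserted keyword is capitalized. A quick illustration (the product name is made up):

Headline: Buy {KeyWord:Widgets}
Query “blue widgets”   ->  ad reads “Buy Blue Widgets”
Query too long to fit  ->  ad reads “Buy Widgets” (the default text)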

The point is that the closer the ad is to the search query, the more likely the visitor is to click on it. In addition, Google highlights the keywords in the ad that match the query, which helps a lot too.

Now, what happens when the visitor gets to the landing page? Well, chances are the page will not include the exact keywords the visitor used in the search, especially if you are doing PPC. To fix this, I use a very simple technique: Read more
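
To give you a taste before you click through, here is a minimal sketch of one way to do it, not necessarily the exact technique described in the full post: read the search query from a keyword parameter on the landing page URL (or from the search engine referrer) and echo it into the page headline. The “kw” parameter and the “headline” element id are assumptions for illustration.

// Pull the search query from the landing page URL or the search referrer.
function getSearchQuery(): string | null {
  // Prefer an explicit keyword parameter appended to the PPC destination URL, e.g. ?kw={keyword}
  const fromUrl = new URLSearchParams(window.location.search).get("kw");
  if (fromUrl) return fromUrl;
  // Fall back to the q= parameter in the search engine referrer, if there is one.
  try {
    return new URL(document.referrer).searchParams.get("q");
  } catch {
    return null; // empty or malformed referrer
  }
}

const query = getSearchQuery();
const headline = document.getElementById("headline");
if (query && headline) {
  headline.textContent = `Buy ${query}`; // mirror the searcher's own words on the page
}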

Preventing duplicate content issues via robots.txt and .htaccess

Rand of SEOmoz.org posted an interesting article on duplicate content issues. He uses a typical blog to show the different examples.

In a blog, every post can appear on the home page, in the pagination, in the archives, in the feeds, etc.

Rand suggests using the meta robots “noindex” tag, or the potentially risky use of cloaking, to point the robots to the original source.
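
For readers who have not used it, the tag in question is a single line in the <head> of the duplicate page; keeping “follow” lets the page continue to pass link value:

<meta name="robots" content="noindex,follow" />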

Joost de Valk recommends that WordPress users change some lines in the source code to address these problems.

There are a few items I would like to add to the problem and to the proposed solution.

As willcritchlow asks, there is also the problem of multiple URLs leading to the same content (e.g., www.site.com, site.com, site.com/index.html). This can be fixed by using 301 redirects and by telling Google our preferred domain via Webmaster Central.
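
On Apache, for example, a couple of mod_rewrite rules in .htaccess can 301-redirect the non-www host name and the /index.html variant to the canonical URL. Replace site.com with your own domain; this is a sketch, so test it before deploying:

RewriteEngine On

# Redirect site.com/... to www.site.com/...
RewriteCond %{HTTP_HOST} ^site\.com$ [NC]
RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L]

# Collapse /index.html (in any directory) onto the directory URL
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /([^/]*/)*index\.html
RewriteRule ^(.*)index\.html$ http://www.site.com/$1 [R=301,L]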

Reader roadies recalls reading about a robots.txt and .htaccess solution somewhere. That gave me the inspiration to write this post.

After carefully reviewing Google’s official response to the duplicate content issue, it occurred to me that the problem might not be as bad as we think.

What does Google do about it?
During our crawling and when serving search results, we try hard to index and show pages with distinct information. This filtering means, for instance, that if your site has articles in “regular” and “printer” versions and neither set is blocked in robots.txt or via a noindex meta tag, we’ll choose one version to list. In the rare cases in which we perceive that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we’ll also make appropriate adjustments in the indexing and ranking of the sites involved. However, we prefer to focus on filtering — rather than ranking adjustments … so in the vast majority of cases, the worst thing that’ll befall webmasters is to see the “less desired” version of a page shown in our index.

Basically, Google says that unless we are doing something purposely ill-intentioned (like ‘borrowing’ content from other sites), they will simply toss out the duplicate pages. They explain that their algorithm automatically detects the ‘right’ page and uses that one in the results.

The problem is that we might not want Google to choose the ‘right’ page for us. Maybe they choose the printer-friendly page when we want them to choose the page that includes our sponsors’ ads! That, in my opinion, is one of the main reasons to address the duplicate content issue. Another is that the tossed-out pages will likely end up in the infamous supplemental index, and nobody wants them there :-).

One important addition to Rand’s article is the use of robots.txt to address the issue. One advantage it has over the meta robots “noindex” tag is in the case of RSS feeds: web robots index them and they contain duplicate content, but the meta tag is intended for HTML/XHTML documents, while feeds are XML content.

If you read my post on John Chow’s robots.txt file, you probably noticed that some of the changes he made to his file were precisely to address duplicate content issues.

Now, let me explain how you can address duplicate content via robots.txt. Read more
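
As a quick preview of the kind of rules I mean, here is a minimal sketch for a typical WordPress blog. The paths are assumptions, so adjust them to your own URL structure and be careful not to block pages you actually want indexed:

User-agent: *
# Feeds are XML, so the meta "noindex" tag can't help there
Disallow: /feed/
Disallow: /comments/feed/
# Paginated copies of the home page (/page/2/, /page/3/, ...)
Disallow: /page/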

Advanced link cloaking techniques

The interesting discussion between Rand and Jeremy had me thinking about some of the things affiliates do to protect their links. I am talking about link cloaking — the art of hiding links.

We can hide links from our potential customers (in the case of affiliate links), and we can hide them from the search engines as well (as in the case of reciprocal links, paid links, etc.).

While I think cloaking affiliate links to prevent others from stealing your commissions is useful, I am not encouraging you to use the techniques I am about to explain. I certainly think it is very important to understand link cloaking in order to protect yourself when you are buying products, services or links.

When I am reading a product endorsement, I usually mouse over the link to see if it is an affiliate link. Why? I don’t mind the blogger making a commission; but if I see he or she is trying to hide it via redirects, JavaScript, etc., I don’t perceive it as an endorsement. I feel it is a concealed ad. When I see <aff>, an editor’s note, etc., I feel I can trust the endorsement.

Another interesting technique is cloaking links from the search engines. The idea is that your link partners think you endorse them, while you tell the search engines that you don’t. Again, I am not supporting this.

Cloaking links from potential customers

Several of the techniques I’ve seen are: Read more
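
To give a taste of one very common approach before you click through: a local redirect, so that the link on the page points at your own domain and the affiliate parameter only appears after the click. On Apache this can be a single line in .htaccess; the path and the affiliate URL below are made up:

# /recommends/widget looks like an ordinary internal link,
# but sends the visitor on to the affiliate-tagged URL
Redirect 302 /recommends/widget http://www.merchant-example.com/?aff=12345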

Estimating visitor value

We love traffic.  We want as much traffic as possible.  It is really nice to see our traffic graphs jump really high.  With our PPC campaigns we pretty much obsess over our click-through rates.  We like to go after the keyword phrases that drive the most traffic.  Everybody is in love with Digg and social media.

All traffic is not equal, even search traffic coming from similar phrases.  What we really need is traffic that converts: visitors who take whatever action we want them to take, whether that is buying an e-book, subscribing to our newsletter or downloading our software.  We need traffic that is motivated to take action.

There is a big difference between running a site that gets 10,000 visitors a day and makes $10,000 a month and one that gets 1,000 visitors a day and makes $20,000 a month. For the first, each visitor is worth about 3 cents; for the second, about 67 cents, roughly 20 times more. Read more
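
Spelled out, assuming a 30-day month, the arithmetic looks like this:

// Value per visitor = monthly revenue / (daily visitors * days in the month)
function visitorValue(dailyVisitors: number, monthlyRevenue: number, days = 30): number {
  return monthlyRevenue / (dailyVisitors * days);
}

const siteA = visitorValue(10_000, 10_000); // ~$0.033 per visitor (about 3 cents)
const siteB = visitorValue(1_000, 20_000);  // ~$0.667 per visitor (about 67 cents)
console.log((siteB / siteA).toFixed(0));    // "20": site B's visitors are ~20x more valuable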

Determining searcher intent automatically

Here is an example of how useful it is to learn SEO from research papers.

If you’ve read some of my previous posts, you will know that I am a big fan of finding out exactly what search visitors want. I posted about classifying both visitors and landing pages, so that search visitors looking for information find informational articles, searchers looking to take action land on transactional pages, etc.

I really like the research tools MSN Labs has. One of my favorites is this one: http://adlab.msn.com/OCI/OCI.aspx

You can use it to detect commercial intent. Try it. It is really nice.

I’ve been wanting to do something like that, but I didn’t have enough clues as to how to do it. Until now.

Search engine patent expert Bill Slawski uncovered a gem: a research paper that details how a team of researchers achieved exactly this.

I still need to dig deep into the document and the reference material, but it is definitely an excellent find.

I will try to build a new tool for this. I will also try to make this and the other scripts I write more accessible to non-technical readers. I guess most readers don’t care much about the programming details; they just want to be able to use my tools easily :-)
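
In the meantime, here is a toy sketch of the general idea. It is purely heuristic, it is not the method from the paper, and the trigger-word lists are my own guesses; a real classifier would be trained on labeled query data:

type Intent = "transactional" | "informational" | "navigational" | "unknown";

// Naive keyword-trigger classifier for search queries.
const transactional = ["buy", "price", "cheap", "discount", "order", "coupon"];
const informational = ["how", "what", "why", "guide", "tutorial", "review"];
const navigational = ["login", "homepage", "official site"];

function classifyQuery(query: string): Intent {
  const q = query.toLowerCase();
  if (transactional.some(w => q.includes(w))) return "transactional";
  if (informational.some(w => q.includes(w))) return "informational";
  if (navigational.some(w => q.includes(w))) return "navigational";
  return "unknown";
}

console.log(classifyQuery("buy blue widgets"));    // "transactional"
console.log(classifyQuery("how do widgets work")); // "informational"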

What to do with the money you make online

Some readers coming from John Chow dot Com might be wondering if I make money online.

Showing big checks and bragging is not my style, but I do understand most people want proof. Instead of showing checks, bank statements, etc. I am just going to show you what I do with the money my companies make for me.

I just added pictures from my nice little golf villa in Casa de Campo. I bought it last year and I recently remodeled it.

If you want to have an idea how much it costs, here is the current list of villas for sale at Casa de Campo. You can alternatively do a search in Google for “buy villa in casa de campo”.

Remember I don’t live there, it’s just for renting and relaxing.

Robots.txt 101

First let me thank my beloved reader SEO Blog.

Thanks to him I got a really nice bump in traffic and several new RSS subscribers.

It is really funny how people that don’t know you start questioning your knowledge, calling you names, etc. I am glad that I don’t take things personally. For me it was a great opportunity to get my new blog some exposure.

I did not intentionally try to be controversial. I did run a backlink check on John’s site and found the interesting results I reported. I am still more inclined to believe that my theory has more grounds than SEO Blog’s. Please keep reading to learn why.

His theory is that John fixed the problem by making some substantial changes to his robots.txt file. I am really glad that he finally decided to dig for evidence; that is far more professional than calling people you don’t know names.

I carefully checked both robots.txt files, and here is what John removed in the new version: Read more

John Chow fixes anchor text and pleases Google

As I reported before, John stopped showing up in Google for “make money on-line” for a few days. He is now back at #1 for the term.

What did he do? He is not giving the step-by-step, but here is what he wrote:

I was going to use this post to explain exactly what I did to restore my number one ranking. However, after reading Kumiko’s comments in my Taipei 101 to number 1 post, I’ve decided against it. I think everyone will agree that this kind of information is extremely valuable – some “SEO Guru” tried to take me for $4,000 by saying he knew the answer (which I highly doubt since he made no guarantee).

While I won’t give the step by step I can offer this piece of advice if you lose a ranking for a desired keyword – Google webmaster tools is your friend! Get to know it really well.

This is what Kumiko said:

Comment by Kumiko

2007-06-02 18:24:47

Reading how you got back to #1 will be a great read! Aren’t you worried about Google reading it though and simply changing the algorithm again?

He is hinting that he used Google Webmaster Tools to figure out what the problem was. I can tell you which specific section he looked at: Webmaster Tools -> Statistics -> Page analysis -> In external links to your site. That section shows the anchor text people are using when linking to your site. Read more

Competitive research or privacy attack?

I found an interesting tool via Seobook.com. It exploits a “feature” of current browsers, which do not properly partition persistent client-side state information (visited links and cache information) on a per-site basis.

The tool can identify URLs in your visitor’s browsing history. Aaron suggests this be used to check if your visitors come from competing sites and adjust your marketing strategy accordingly.

This might not work quite as Aaron expects. You can only tell that the visitor visited those URLs in the last n days (where n is the number of days of history the visitor’s browser keeps). You won’t be able to tell when, how often or how recently those URLs were visited. Read more
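
For the curious, here is a rough sketch of the kind of check such a tool performs. It relies on the browser styling visited links differently and on getComputedStyle exposing that style; the competitor URL and the colors are placeholders, and browser vendors may well close this hole, so treat it purely as an illustration:

// Assumes the page's stylesheet contains: a:visited { color: rgb(255, 0, 0); }
// and that unvisited links render in some other color.
function wasVisited(url: string): boolean {
  const link = document.createElement("a");
  link.href = url;
  document.body.appendChild(link);
  const color = window.getComputedStyle(link).color;
  document.body.removeChild(link);
  return color === "rgb(255, 0, 0)"; // matches the :visited rule above
}

const competitors = ["http://www.competitor-example.com/"];
const visitedCompetitors = competitors.filter(wasVisited);
console.log(visitedCompetitors); // URLs that appear in the visitor's history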

Segment visitors by intention with Google Analytics

As I mentioned before, understanding what visitors want and giving it to them is the key to a successful website. That is the big picture.

Now let me tell you how to actually measure this. My tool of choice for this is Google Analytics.

With Google AdWords Conversion Tracking (and with Google Analytics goals) you can define goal pages and track the conversions that happen once visitors land on those pages. For example: thank-you pages for signing up, for downloading a white paper or for purchasing a camera.

Many e-commerce websites have a multi-step checkout process. Once you hit the “buy” button you are taken to a page where you can select the quantity of the product and other options. After that you are taken to a page where you enter your shipping information, then to another page for your billing information, followed by a confirmation page and, finally, the thank-you page. This is commonly known as the “conversion funnel”.

You can use funnels to identify and reduce drop-off rates throughout the conversion process. Google Analytics provides tools to create such funnels and reports to measure them.

The main problem is that most people optimize their conversion process, but don’t measure and optimize their persuasion or pre-selling process as well.

Once a visitor clicks on the “buy” button, he or she is already set on buying the product, but the path to conversion starts way before that: it begins with the persuasion process. I explained that process in this post.

In short, visitors come to your site with a specific mindset (they expect something in particular, and the keywords they type are the best clue to what that is). It is important that they land on the right pages and that those pages induce them to move to the next step in the persuasion process.

Now let’s see how we can use Google Analytics to segment your visitors based on what they want. Read more
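
As a preview, here is a rough sketch of the approach, not the full recipe from the post. It assumes the landing page can see the search query in the referrer’s q= parameter and that the page already loads Google Analytics’ classic urchin.js tracker, which exposes the user-defined segment call __utmSetVar(); the intent categories are just examples:

declare function __utmSetVar(value: string): void; // provided by urchin.js

// Grab the query the visitor typed into the search engine, if we can see it.
function getQueryFromReferrer(): string | null {
  try {
    return new URL(document.referrer).searchParams.get("q");
  } catch {
    return null;
  }
}

// Rough intent buckets based on trigger words in the query.
function intentOf(query: string): string {
  const q = query.toLowerCase();
  if (/\b(buy|price|order|discount|coupon)\b/.test(q)) return "transactional";
  if (/\b(how|what|why|review|guide)\b/.test(q)) return "informational";
  return "other";
}

const query = getQueryFromReferrer();
if (query) {
  // Shows up under the User Defined report, so conversion rates
  // can be compared segment by segment.
  __utmSetVar(intentOf(query));
}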