The Truth About Sitelinks: Site structure is splendid, it seems

There has been a heated debate on Sphinn about a controversial post by Rand Fishkin of SEOmoz. There is a lot to learn from that discussion, but instead of focusing on the debate, I want to talk about something that keeps coming up: Google's Sitelinks.


Google doesn't provide a lot of information, but this is what they say about the matter:

  1. Sitelinks are presented if they are found to be somehow useful.

  2. A site’s structure allows Google to find good Sitelinks.

  3. The process of selection, creation and presentation of Sitelinks is fully automated.

Let's set the technical details aside for the moment and focus on Google's purpose here: they want to save users some clicks by pointing them to the right page directly in the search results. Sitelinks appear only for the first result, and only for sites with meaningful traffic. (Google uses visitor-frequency data from its toolbar to make this determination.)

I decided to dig deeper: study the sources, try some examples of my own and draw my own conclusions. I'd definitely like to have Sitelinks when people search for my blog, and I'm sure many of my readers here would like the same. Here’s what I learned… Read more

Freakonomics: How to lose money by saving

I am never going to understand why some people don't value their time properly. If you work for yourself, or if you plan to do so in the future, one of the first things you need to learn is to charge yourself an hourly rate (the higher the better). Why? Because pricing your time at a premium will keep you from wasting it on things that are not worth it.

Let me give you a recent example. My top developer, Harold, was hired by a big local telecom here in the Dominican Republic for a freelance programming gig. While working for another company as a consultant, he had created a piece of code, and now that code needed maintenance. He quoted them a few thousand dollars for the whole project, and they went back and forth for several months trying to get the lowest possible price. In the end, their relentless penny pinching saved them the astonishing sum of $800.
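To see why that "saving" is really a loss, price the negotiation itself. Here is a hedged back-of-the-envelope calculation; the hourly rate and the hours spent are my assumptions, not figures from the story:

```python
# Back-of-the-envelope: what did months of haggling cost? All numbers below
# are assumptions for illustration, not figures from the actual story.
hourly_rate = 100        # assumed worth of the negotiators' time, $/hour
hours_haggling = 20      # assumed total time spent on emails and meetings
saved = 800              # the discount they finally extracted, $

cost = hourly_rate * hours_haggling
print(f"Saved ${saved}, spent roughly ${cost} to get it; net ${saved - cost}")
# Saved $800, spent roughly $2000 to get it; net $-1200
```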

I have to say that saving is good and I like to save as much as possible, but read the rest of the story to learn when you lose by saving. Read more

Reclaiming What’s Yours: Getting your feed back from FeedBurner while still tracking subscribers

As you are probably aware, FeedBurner's method of tracking subscriptions is a little unreliable. You've probably seen your subscription numbers drop significantly on weekends and on days when you have no new posts or little activity. If you’re like me, you want to know your actual subscriber numbers.

There isn’t a straightforward solution, but I have a couple of ideas I'd like to test. One of the easiest involves using FeedBurner’s Awareness API to gain access to the raw data collected and interpreting the data myself in a more useful way. The other idea takes a little bit more time and involves parsing the RSS hits from the web server log file. I explained my idea for the log file in this comment.
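As a rough sketch of the first idea, assuming the Awareness API's GetFeedData endpoint and its documented XML attributes (date, circulation, hits); the feed URI and date range below are placeholders:

```python
# A minimal sketch, assuming the FeedBurner Awareness API's GetFeedData
# endpoint and response attributes; the feed URI and dates are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

API = 'http://api.feedburner.com/awareness/1.0/GetFeedData'
feed_uri = 'yourfeedname'            # placeholder: your FeedBurner feed URI
dates = '2007-07-01,2007-07-31'      # placeholder date range

xml_data = urllib.request.urlopen(f'{API}?uri={feed_uri}&dates={dates}').read()
for entry in ET.fromstring(xml_data).iter('entry'):
    print(entry.get('date'),
          'circulation:', entry.get('circulation'),
          'hits:', entry.get('hits'))
```

With the raw daily circulation and hits in hand, you could smooth out the weekend dips yourself, for example with a trailing seven-day average.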

The API idea has the disadvantage that it depends on FeedBurner, and my hands would be tied later if I wanted to create a competing product. On the other hand, the log file idea is complicated by the fact that, once you’ve moved your feed to FeedBurner, your RSS hits no longer go to your website. You only get the RSS hits from FeedBurner. That is a major obstacle. Read more
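Still, for whatever feed requests do reach your own server, the log idea might look like the sketch below; the Apache "combined" log format and the /feed path are my assumptions:

```python
# A minimal sketch, assuming an Apache "combined" access log and a feed served
# under /feed (both assumptions): count unique IP + user-agent pairs that
# requested the feed each day, as a rough stand-in for subscriber numbers.
import re
from collections import defaultdict

LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<day>[^:]+)[^\]]*\] '
    r'"\S+ (?P<path>\S+)[^"]*" \d+ \S+ "[^"]*" "(?P<agent>[^"]*)"')

readers_by_day = defaultdict(set)
with open('access.log') as log:                # placeholder log file name
    for line in log:
        m = LINE.match(line)
        if m and m.group('path').startswith('/feed'):
            # One IP/user-agent pair approximates one reader; note that web
            # aggregators report many readers behind a single fetching agent.
            readers_by_day[m.group('day')].add((m.group('ip'), m.group('agent')))

for day, readers in readers_by_day.items():
    print(day, len(readers))
```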

On gurus, Google, and gaining marketshare: Leveraging your position as number one

A lot of people read blogs, books, articles and other materials from so-called 'gurus' who made it big. Readers hope to do the same, but I know that no one achieves success by simply following a guru’s advice. I am not saying this because I think those gurus are necessarily dishonest or holding back (though most are), but because I've learned that you don't become successful by following a specific formula, especially not a formula that somebody else gave you.

Why? Because success is in many ways all about competition. If you learn from other people's playbooks, they already have the first-mover advantage. You too need to be first somewhere. You need to find unexploited opportunities of your own.

In this post, I want to tell you about my personal experiences with pay-per-click (PPC) and what I’ve learned over the years. I started my first profitable site back in 2002 on borrowed credit with a CAD$3,000 limit. I turned that $3k into $4.5k with PPC and affiliate commissions, a 50% ROI. It was my first time as an online marketer, an online baby step. And now, several years later, I own a 7-figure-per-year business, employ several talented individuals and have time to post on this blog! Read more

LinkingHood v0.2 – Find all your supplemental pages with ease

As is well known by now, Google decided to remove the supplemental label from pages it adds to its supplemental index. That is unfortunate, because pages labeled this way need some “link lovin’.” How are we going to give those pages the love they need if we can't identify them in the first place?

In this post, I want to take a look at some of the alternatives we have to identify supplemental pages. WebmasterWorld has exposed a new query that displays such results, but nobody knows how long it is going to last. Type this into your Google search box: site:hamletbatista.com/& and you’ll see my supplemental pages. I tested it before and after Google removed the label and I'm getting the same pages. Read more
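If you wanted to automate the check, a sketch like the one below could fetch the /& query and list the result URLs. This is my own illustration, not part of LinkingHood: the regex is brittle, Google's markup changes often, and automated queries may be blocked, so treat it purely as an illustration.

```python
# An illustrative sketch only (not part of LinkingHood): fetch one page of
# results for the site:/& query and pull result URLs out with a brittle regex.
import re
import urllib.parse
import urllib.request

def google_result_urls(query, site='hamletbatista.com'):
    params = urllib.parse.urlencode({'q': query})
    req = urllib.request.Request(
        'https://www.google.com/search?' + params,
        headers={'User-Agent': 'Mozilla/5.0'})   # the default UA is rejected
    html = urllib.request.urlopen(req).read().decode('utf-8', 'replace')
    # Result links appear as hrefs pointing at the queried site (brittle!):
    pattern = r'href="(https?://%s[^"]*)"' % re.escape(site)
    return sorted(set(re.findall(pattern, html)))

for url in google_result_urls('site:hamletbatista.com/&'):
    print(url)
```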

Checkmate: Strategic vs Tactical SEO

Is SEO just a game?

Consider two chess players, Mike and Tom. Mike has never been able to win against Tom. Mike knows all the rules of the game: how to move every piece, when to capture, when to castle; he even knows all the tactical ideas like forks, pins, skewers and discovered attacks.

Mike's problem is that while he knows all the rules and has read a lot of chess books, he still looks only one or two moves ahead. It is very hard to prepare a winning plan with such a short horizon. Tom, on the other hand, thinks at least five moves ahead and moves all his pieces in service of his master plan. Tom anticipates all of Mike's moves and prepares a strong counterattack for each.

The world of search engine optimization is no different than this. As SEOs we need to think ahead of our competitors and, more importantly, we need to be on top of search engine advances. The world of search is moving so fast that any slacker will be left behind, with no time or opportunity to catch up. Check and mate. Read more

Controlling Your Robots: Using the X-Robots-Tag HTTP header with Googlebot

We have discussed before how to control Googlebot via robots.txt and meta robots tags. Both methods have limitations. With robots.txt you can block the crawling of any page or directory, but you cannot control indexing, caching or snippets. With the robots meta tag you can control indexing, caching and snippets, but only for HTML files, as the tag is embedded in the files themselves. You have no granular control over binary and non-HTML files.

Until now. Google recently introduced another clever solution to this problem. You can now specify robots meta directives via an HTTP header. The new header is the X-Robots-Tag, and it supports the same directives as the regular robots meta tag: index/noindex, archive/noarchive, snippet/nosnippet, and the new unavailable_after directive. This new technique makes it possible to have granular control over indexing, caching and other functions for any page on your website, no matter what type of content it holds: PDF, Word doc, Excel file, zip file, etc. Read more
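As a minimal sketch of emitting the header yourself, here is a toy WSGI server of my own devising (not Google's example); the file name and the directive combination are placeholders:

```python
# A minimal sketch: serve a PDF with an X-Robots-Tag header, the equivalent of
# a robots meta tag for a file type you cannot embed markup in. The file name
# and the directive combination below are placeholders.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    with open('whitepaper.pdf', 'rb') as f:    # placeholder file name
        body = f.read()
    start_response('200 OK', [
        ('Content-Type', 'application/pdf'),
        # Let Google index the PDF, but show no cached copy and no snippet:
        ('X-Robots-Tag', 'index, noarchive, nosnippet'),
    ])
    return [body]

if __name__ == '__main__':
    make_server('', 8000, application).serve_forever()
```

The point is simply that the directives travel in the HTTP response rather than inside the document, so your web server can attach them to any content type.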