Digg this: How to use social networking sites properly to boost traffic

We all know that building solid, natural and authoritative links to your site or blog is the best way to obtain unshakable rankings. There has been a lot of chatter lately about using social networking sites to help build traffic and eventually links. Those links are hard to get, unless of course you have a power user account at a popular site like Digg. Unfortunately, getting a power user account involves a lot of work that most bloggers are not willing to put in.

Power users carry more weight than regular ones, the main benefit being that you need far fewer votes to make it to the Digg homepage. One hundred power users account for around 50% of the stories that make it there. The traffic you get from the Digg homepage is not itself profitable, but many of those eyes glaring at the screen belong to the linkerati – influencers who will link to and blog about your story, giving you a lot of very valuable natural and authoritative links.

The basic ingredients for success at Digg are: diggable posts/articles, a power user submitting your article, and a digg-friendly landing page. Most stories make it to the homepage if they get over 50 votes in less than 24 hours. To help you achieve such numbers, I've seen many blogs directly or indirectly promoting a service called Subvert and Profit (S&P), designed to get all the votes needed to land on the Digg homepage. You basically pay $1 per vote, and they pay $0.50 to the Diggers. This means that if you create a diggable post and a digg-friendly landing page, you only need to invest around fifty bucks to make it to the Digg homepage. And if you can make it there, you can make it anywhere. Right. Read more

The Power of Myth: Can a black-hat take down your rankings?

My old pal Skitzzo from SEOrefugee revisits what he calls an SEO “myth”: that a competitor can potentially harm a site owner just by pointing links to his or her site.

According to the number of Sphinns, it looks like a lot of SEOs agree it’s a myth. That’s understandable, as it would be very unfair for the search engines to allow this type of thing to happen.

Unfortunately, the situation is not as simple as it first seems. As has been my practice on this blog, let's dig a little deeper to understand why—although difficult and possibly expensive—it is very much possible to pull off this exploit. For those concerned, I explained how to counter this type of attack in a previous post about negative SEO. Check it out. Read more

Avoiding the Bounce House: Optimizing your Search Marketing Campaign

Getting solid rankings is a lot of work, and properly organizing keywords and landing pages is no trivial task either. Why not make the most out of it once you have started getting the traffic? After beginning a successful PPC or SEO campaign, it’s time to maximize the returns from it.

There are a lot of metrics that search marketers can track, but these are the three that deliver 80% of my results: Bounce Rate, Conversion Rate, and Return on Investment (ROI). Read more
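
For concreteness, here is a minimal sketch of the arithmetic behind those three metrics, assuming you already have the raw visit, conversion, and spend numbers from your analytics package (the figures below are invented for illustration):

```python
def bounce_rate(single_page_visits, total_visits):
    """Share of visits that left after viewing only the landing page."""
    return single_page_visits / total_visits

def conversion_rate(conversions, total_visits):
    """Share of visits that completed the desired action (sale, signup, etc.)."""
    return conversions / total_visits

def roi(revenue, cost):
    """Return on investment: profit relative to what the campaign cost."""
    return (revenue - cost) / cost

# Invented sample numbers for a month of paid search traffic.
visits, bounces, sales = 1000, 450, 25
spend, revenue = 500.0, 1250.0

print(f"Bounce rate:     {bounce_rate(bounces, visits):.1%}")    # 45.0%
print(f"Conversion rate: {conversion_rate(sales, visits):.1%}")  # 2.5%
print(f"ROI:             {roi(revenue, spend):.0%}")             # 150%
```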

Rome Wasn’t Built in a Day—Neither is effective link building

Link building is without a doubt the most time-consuming—but most rewarding—aspect of search engine optimization. It usually takes more effort to promote your content (build links) than to actually create it. As I have stressed repeatedly before, compelling, useful content should make your link building efforts much easier.

Before I go any further, let me note that I have a slightly different perspective when evaluating link-building tactics than most SEO consultants. I do SEO primarily for my own sites and my income depends on the ability of those sites to make money. That means that I try to build links that primarily offer long-term value. I still try to get the short-term and medium-term value links, but I like to build authority for my sites. If you’re working for a client or a boss that wants to see immediate results, your priorities will probably be different.

In situations where I have to pay or put in serious effort to get a link, the most important criterion is always: will the link send useful, converting traffic?

Why is this my most important criterion? Let's explore three different scenarios to illustrate this: Read more

Tracing their Steps: How to track feed subscriber referrals with Google Analytics

One of the most important measures of success for a blog is the number of RSS subscribers. There are many blog posts out there about how to increase your number of subscribers. They range from the use of bigger, more prominent and attention-grabbing RSS buttons, to offering bonuses for signing up. While you can use all sorts of tricks, at the end of the day it is really about the value you give to your visitors on an ongoing, consistent basis. Personally, I subscribe to any blog that sparks my interest, but as soon as I see the quality drop I unsubscribe just as quickly. So many blogs, so little time!

Let me introduce another way you can increase your RSS subscribers that I have not seen covered anywhere. It works by identifying your best RSS referral sources and focusing your marketing and networking efforts on those.
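
The details are in the full post, but one common way to make feed clicks visible in Google Analytics (not necessarily the exact method covered there) is to tag the links in your feed with campaign parameters, so visits from each feed show up as identifiable referrals instead of being lumped into direct traffic. A minimal sketch, using the standard utm_* parameter convention; the URL and campaign names are placeholders:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def tag_feed_link(url, source="feed", medium="rss", campaign="rss-referrals"):
    """Append Google Analytics campaign parameters to a feed item URL."""
    parts = urlparse(url)
    extra = urlencode({"utm_source": source,
                       "utm_medium": medium,
                       "utm_campaign": campaign})
    query = f"{parts.query}&{extra}" if parts.query else extra
    return urlunparse(parts._replace(query=query))

print(tag_feed_link("http://example.com/my-post"))
# http://example.com/my-post?utm_source=feed&utm_medium=rss&utm_campaign=rss-referrals
```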

Read more

The Ranking Triathlon: How to overcome crawling, indexing, and searching hurdles

I frequently get asked why a particular page is no longer ranking. I wish there were a simple answer to that question. Instead of giving personal responses, I’ve decided to write a detailed post with the possible problems that might cause your ranking to drop, as well as all the solutions I could think of. I also want to present a case study every week of a ranking that dropped and what we did to get it back. If you have a site that is affected, I invite you to participate. Send me an email or leave a comment.

There are many reasons why your page or website might not be ranking. Let's go through each of the three steps in the search engine ranking process and examine the potential roadblocks your page might face. We’ll see how to avoid them, how to identify if your page was affected, and most importantly, how to recover. Read more
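
As a small taste of the crawling step, here is a sketch (Python standard library; the URLs are placeholders) of checking the most common crawling roadblock of all: a robots.txt rule that locks the crawler out of the page entirely.

```python
from urllib.robotparser import RobotFileParser

def is_crawlable(page_url, robots_url, user_agent="Googlebot"):
    """Check whether robots.txt allows the given crawler to fetch the page."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the live robots.txt
    return parser.can_fetch(user_agent, page_url)

url = "http://www.example.com/some-page.html"
if not is_crawlable(url, "http://www.example.com/robots.txt"):
    print(f"{url} is blocked by robots.txt; it cannot be crawled, indexed, or ranked")
```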

Sphinn Doctor: Adding Sphinn It! (with Sphinn counts) to your feed and website posts

You have probably read about Sphinn – the Digg-like social media site for search engine marketers. Almost every SEO/SEM blog has talked about it. If you haven't, Rand's post is an excellent introduction.

Instead of trying to explain why it is important to get on the Sphinn home page – that is covered in other blogs – I will focus on how to make your posts more “sphinnable” by adding a Sphinn It! link to the end of all your posts. You can do that by using FeedBurner FeedFlares.
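
Under the hood, a Sphinn It! link is nothing exotic: it is just a link to Sphinn's story submission page with your post's URL passed along. A rough sketch of building one — note that the submit endpoint below is my assumption for illustration, not a documented Sphinn URL:

```python
from urllib.parse import quote

# Hypothetical submission endpoint -- check Sphinn itself for the real one.
SPHINN_SUBMIT = "http://sphinn.com/submit.php?url="

def sphinn_it_link(post_url):
    """Build the HTML for a 'Sphinn It!' link pointing at the submission page."""
    return f'<a href="{SPHINN_SUBMIT}{quote(post_url, safe="")}">Sphinn It!</a>'

print(sphinn_it_link("http://example.com/my-post"))
# <a href="http://sphinn.com/submit.php?url=http%3A%2F%2Fexample.com%2Fmy-post">Sphinn It!</a>
```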

In my previous posts, both on the website and in the feed, you have probably seen something like this:

[Screenshot: a post footer with a Sphinn It! link and its Sphinn count]

Read more

Content is King, but Duplicate Content is a Royal Pain

Duplicate content is one of the most common causes of concern among webmasters. We work hard to provide original and useful content, and all it takes is a malicious SERP (Search Engine Results Page) hijacker to copy our content and pass it off as his or her own. Not nice.

More troubling still is the way that Google handles the issue. In my previous post about CGI hijacking, I made it clear that the main problem with hijacking and content scraping is that search engines cannot reliably determine who owns the content and, therefore, which page should stay in the index. When faced with multiple pages that have exactly the same or nearly the same content, Google's filters flag them as duplicates. Google's usual course of action is that only one of the pages — the one with the higher PageRank — makes it to the index. The rest are tossed out. Unless there is enough evidence to show that the owner or owners are trying to do something manipulative, there is no need to worry about penalties.
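
To see why duplicate detection itself is the easy part, consider the classic shingling technique: break each page's text into overlapping word sequences and measure the overlap between pages. This toy sketch illustrates the general idea only; it is not a claim about Google's actual filters:

```python
def shingles(text, size=3):
    """Split text into a set of overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(text_a, text_b):
    """Jaccard similarity of the two shingle sets: 1.0 means identical text."""
    a, b = shingles(text_a), shingles(text_b)
    return len(a & b) / len(a | b)

original = "duplicate content is one of the most common causes of concern"
scraped  = "duplicate content is one of the most common causes of worry"
print(f"{similarity(original, scraped):.0%} similar")  # high overlap -> flagged as duplicates
```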

Recently, regular reader Jez asked me a thought-provoking question. I'm paraphrasing here, but essentially he wanted to know: "Why doesn’t Google consider the age of the content to determine the original author?” I responded that the task is not as trivial as it may seem at first, and I promised a more thorough explanation. Here it is. Read more

Canonicalization: The Gospel of HTTP 301

Usually I don’t cover basic material in this blog, but as a loyal reader, Paul Montwill, requested it, I’m happy to oblige. As I learned back in school, if one person asks a question, there are probably many others at the back of the class quietly wondering the same thing. So here is a brief explanation of web server redirects and their use to solve URL canonicalization issues.

And just what is that ecclesiastic-sounding word “canonicalization”? It was Matt Cutts, and not the Pope, who made it famous when he used the term to describe a certain issue that popped up at Google. Here is the problem. All of us have these URLs:

1) sitename.com/

2) sitename.com/index.html

3) www.sitename.com 

4) www.sitename.com/index.html

You know they are all the same page. I know they are all the same page. But computers — unfortunately, they aren't on the same page. They aren’t that smart and need to be told that each one of these addresses represents the same page. One way is for you to pick one of them and use it consistently in all your linking. The harder part, however, is getting other website owners linking to you to do the same. Some might use one, others another, and a few are bound to choose a third.

The best way to solve this is to pick one URL and have your web server automatically force all requests for other variations to go to the one you picked. We can use HTTP redirects to accomplish this. Read more
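
In practice you would configure this in the web server itself (Apache, IIS, and so on), but to make the mechanics concrete, here is a minimal sketch of the idea as a toy Python server: any request for a non-canonical variation is answered with a 301 (Moved Permanently) pointing at the one URL you picked. The canonical host below is just the example name from the list above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL_HOST = "www.sitename.com"  # the one variation we picked

class CanonicalRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "")
        # Fold /index.html into / so both resolve to a single URL.
        path = "/" if self.path == "/index.html" else self.path
        if host != CANONICAL_HOST or path != self.path:
            # 301 tells browsers and search engines the move is permanent,
            # so all link credit consolidates onto the canonical URL.
            self.send_response(301)
            self.send_header("Location", f"http://{CANONICAL_HOST}{path}")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"canonical page")

if __name__ == "__main__":
    HTTPServer(("", 8000), CanonicalRedirectHandler).serve_forever()
```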

Our Digital Footprints: Google's (and Microsoft’s) most valuable asset

After reading this intriguing article in the LA Times, I came to the conclusion that Google has far more ambitious plans than I originally thought. In their effort to build the perfect search engine — an oracle that can answer all of our questions, even ones about ourselves we didn't know to ask — Google is collecting every single digital footprint we leave online. They can afford to provide all their services for free. After all, our digital footprints are far more valuable.

What exactly are digital footprints, and how does Google get them? Imagine each one of Google’s offerings as a surveillance unit. Each service has a dual purpose: first, to provide a useful service for “free,” and second, to collect as much information about us as possible. Consider these few examples: Read more