John Pushed Google's Limit and got Dropped! Should you push it too?

I just learned that John Chow's rankings dropped, even for his own name! He suspects it has something to do with his review-for-a-link-back promotion. I am sure that is the problem.

As of yesterday, I was not ranking for my own name either. My problem was that when I moved from wordpress.com to my own server, both http://hamletb.wordpress.com and http://hamletbatista.com were serving the same content. Fortunately, it was very easy to fix: I made hamletb.wordpress.com redirect to hamletbatista.com and requested the removal of hamletb.wordpress.com from Google's index via Webmaster Central.
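If you control the old host yourself, the same fix boils down to issuing a permanent (301) redirect. Here is a minimal PHP sketch of the idea; wordpress.com handles its redirects through its own settings, so this is purely illustrative, and the domain name is just my own as an example.

<?php
// Minimal sketch: send a permanent (301) redirect from the old host to the
// new one so search engines consolidate the duplicate content onto one domain.
$newHost = 'hamletbatista.com';

if (strcasecmp($_SERVER['HTTP_HOST'], $newHost) !== 0) {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://' . $newHost . $_SERVER['REQUEST_URI']);
    exit;
}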

Some useful SEO tweaks I made to my blog:

1. I installed the All in One SEO plugin so that I can have unique titles and descriptions. The meta keywords tag is not really useful for SEO.

2. Category, archive, and similar pages now carry a meta robots noindex tag to avoid duplicate content (see the sketch after this list).

3. I submitted an XML sitemap of my blog to Google.
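For readers who prefer code to plugins, here is a rough sketch of how the noindex tag from item 2 could be added to a WordPress theme. It assumes a standard theme that calls wp_head(), and the function name is mine; it is an illustration, not my exact setup.

<?php
// Sketch for a theme's functions.php: print a noindex meta tag on archive and
// search pages so their duplicated listings stay out of the index.
function hb_noindex_archives() {
    if (is_archive() || is_search()) {
        echo '<meta name="robots" content="noindex,follow" />' . "\n";
    }
}
add_action('wp_head', 'hb_noindex_archives');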

Would these steps solve John's case? I don't think so. Read more

I'm not slacking, I'm working on a homerun post!

I am working on a killer post for next Monday in which I will detail my best-kept secret: a very simple technique for identifying keywords with high demand and little or no competition.

Do you want keywords like these?

[Image: profitable_keywords3.gif, a sample of profitable, low-competition keywords]

I've asked several A-list bloggers for their niche-finding formula. And guess what: they either don't have the time or don't think it's a good idea to share it. I can't blame them. Sharing such powerful information would render it practically ineffective and could cost them thousands of dollars in revenue due to the increased competition.

My blog reader Paul Montwill started the fire when he dared to ask me for such information.  I am sure he will be delighted when I post my technique next Monday.

Here are a couple of tidbits I was able to squeeze out of Aaron Wall and Neil Patel.  They are busy guys and I am glad they took the time to respond:

if he wants to become an SEO consultant then the easiest way to learn marketing is to start marketing one of his own sites… preferably covering a topic he is passionate about. Aaron Wall

The way I usually start is to look at terms that have a high CPC and then from there I look for the least competitive ones and go after them. I don't know of a quick way to do this because I myself don't really do it, but there may be some easy ways. Neil Patel

What Neil mentions applies when you are planning to do SEO only; for PPC, you don't want to pay high bid prices. In either case, what I personally look for are terms with high demand (search volume), good profit per sale, and low competition. The profit per sale depends on the product and the affiliate commission you would get paid.

To measure the level of competition, I use two basic methods. If I am going to do PPC (which is usually how I start), I check how many AdWords advertisers are bidding on the term. For SEO, I check the SERPs (search engine result pages) to see how many sites are ranking organically for those terms.
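To make those three signals concrete, here is a small sketch of how they could be combined into a rough opportunity score. Every figure, field name, and the weighting formula are made up for illustration; this is not the technique I will post on Monday.

<?php
// Sketch: rank candidate keywords by demand, profit per sale, and competition.
// All numbers below are invented for the example.
$keywords = [
    ['term' => 'keyword a', 'searches' => 12000, 'profit' => 25, 'competitors' => 40000],
    ['term' => 'keyword b', 'searches' => 3000,  'profit' => 60, 'competitors' => 900],
    ['term' => 'keyword c', 'searches' => 500,   'profit' => 80, 'competitors' => 150],
];

foreach ($keywords as &$kw) {
    // Higher demand and profit help; more competing pages/advertisers hurt.
    $kw['score'] = ($kw['searches'] * $kw['profit']) / max(1, $kw['competitors']);
}
unset($kw);

usort($keywords, function ($a, $b) {
    return $b['score'] <=> $a['score'];
});

foreach ($keywords as $kw) {
    printf("%-10s opportunity score: %.1f\n", $kw['term'], $kw['score']);
}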

How can you find those terms in the first place?

That is what I am going to answer in Monday's post.  I will include very detailed instructions and examples too.

I am still debating whether this is a good idea. Am I going to take food from my table by doing this? Probably, but as I've committed myself to sharing, I guess I don't have a choice. I have to stick to my word.

Please leave some comments and let me know if this is something that you'll find useful.  Would I be giving away too much?  To share, or not to share: that is the question.

Watch out, Feedburner's numbers are woefully inaccurate! … but why?

This was Rand's response to a comment I made about his confirmation of Aaron's claim that an RSS subscriber is worth a thousand links.

Here is my comment:

Wed (6/27/07) at 07:38 AM

Very useful links. I really like the Adwords tip.

An RSS Subscriber is Worth a Thousand Links – well said, Aaron, and very true (though I'd say, rather, 250 or 300)

I think it all depends on the quality of the links, the content on your blog, and your audience.

I checked some A-list blogs to compare subscriber counts and inbound links:

SEOmoz: 13,109 subscribers, 998,000 links (76.13 links per subscriber)

Problogger: 25,579 subscribers, 543,000 links (21.23 links per subscriber)

Copyblogger: 19,083 subscribers, 196,000 links (10.27 links per subscriber)

Shoemoney: 9,737 subscribers, 127,000 links (13.04 links per subscriber)

John Chow: 5,818 subscribers, 127,000 links (21.83 links per subscriber)

The gap between the blogs doesn't seem to be that big.

What intrigued me was that bloggers are losing confidence in FeedBurner's ability to count RSS subscribers accurately. I noticed, especially on SEOmoz, that the RSS subscriber numbers jump up and down drastically, usually on weekends.

We all like to watch our reader counts and traffic as a measure of whether we are doing things right or wrong. When WordPress.com dropped the RSS stats tab, that motivated me to move my blog to my own server. I am glad they did, as I have a lot more flexibility now. I will write a post with more details on the move soon.

I decided to dig for clues as to how FeedBurner arrives at its subscriber count. I had the feeling it was measuring hits to the RSS feed, but the question was how it accounts for the hits coming from aggregator services like Bloglines, Google Reader, etc.
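One clue is that aggregators such as Bloglines and Google's Feedfetcher typically report how many readers they are polling on behalf of in their User-Agent string. As a rough illustration of that idea, not FeedBurner's actual method, a publisher could tally those self-reported counts from their own access log; the log path, feed URL, and the list of aggregators are assumptions.

<?php
// Sketch: add up the subscriber counts that aggregators self-report in their
// User-Agent strings when they poll the feed.
$totals = [];

foreach (file('/var/log/apache2/access.log') as $line) {
    if (strpos($line, 'GET /feed/') === false) {
        continue;  // only look at hits on the feed URL
    }
    // e.g. "Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 137 subscribers; ...)"
    if (preg_match('/(Feedfetcher-Google|Bloglines|NewsGatorOnline)/', $line, $agent) &&
        preg_match('/(\d+) subscribers/', $line, $count)) {
        // Keep the highest figure each aggregator reported.
        $totals[$agent[1]] = max($totals[$agent[1]] ?? 0, (int) $count[1]);
    }
}

echo 'Aggregator-reported subscribers: ' . array_sum($totals) . "\n";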

How does FeedBurner estimate the number of RSS readers?
Read more

Legitimate cloaking: a real-world example and PHP source code

It's been a while since I posted some juicy source code. This time, I am going to explain the infamous black hat technique known as cloaking with some basic PHP code.

While most people think of cloaking as evil (and a sure way to get search engines to penalize your site), there are circumstances where it is perfectly legitimate and reasonable to use.

From Google quality guidelines:

Make pages for users, not for search engines. Don't deceive your users or present different content to search engines than you display to users, which is commonly referred to as "cloaking."
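The post itself walks through the full example. As a rough sketch of the mechanics only, and not the code from that post, cloaking comes down to checking who is requesting the page before deciding what to serve. Verifying a crawler by reverse DNS, as below, guards against faked user agents; the function and file names here are hypothetical.

<?php
// Sketch: detect whether the current request comes from Googlebot by combining
// a User-Agent check with a reverse-DNS lookup on the requesting IP.
function is_googlebot() {
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    if (stripos($ua, 'Googlebot') === false) {
        return false;
    }
    // User agents are trivial to fake, so confirm the host: genuine Googlebot
    // IPs resolve to *.googlebot.com or *.google.com.
    $host = gethostbyaddr($_SERVER['REMOTE_ADDR']);
    return (bool) preg_match('/\.(googlebot|google)\.com$/i', $host);
}

// One legitimate use: serving the same content in a crawler-readable form,
// e.g. a plain-HTML version of a page users normally see as a Flash movie.
if (is_googlebot()) {
    include 'page_text_version.html';   // hypothetical file
} else {
    include 'page_flash_version.html';  // hypothetical file
}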

What is cloaking?
Read more

What is the problem with generic product names?

If you have read my about page, you already know about one of my businesses: NearshoreAgents. You can learn more about what we do by visiting the website.

I want to share a nice little debate I recently had with the company's marketing and sales director, Michael Payne. We were brainstorming the right name for a new product: he wanted a generic name, but I strongly disagreed.

One of the best things about blogging and interacting with potential customers is that you get to understand their needs. This is of tremendous value when you are trying to come up with new product ideas.

I review our chat transcripts every day. They tell me a lot about the quality of service we are providing, problems, etc.

At the moment we are only targeting businesses, but we have noticed we are getting service inquiries directly from consumers. A lot of people don't know how to solve their PC problems, and we saw this as a great opportunity to introduce our first B2C product: live tech support for end users. I won't get into all the details, as that is not the purpose of this post.

Michael wanted to name the product ChatTechSupport. The product name says exactly what we plan to do. It includes the main keywords (search engine friendly) and is not too long.

What is my problem with that name?
Read more

Google's Architectural Overview (Illustrated): Google's inner workings, part 2

For this installment of my Google's inner workings series, I decided to revisit my previous explanation. However, this time I am including some nice illustrations so that both technical and non-technical readers can benefit from the information. At the end of the post, I will provide some practical tips to help you improve your site rankings based on the information given here.

To start the high level overview, let's see how the crawling (downloading of pages) was described originally.

Google uses a distributed crawling system; that is, several computers/servers running clever software that download pages from across the entire web. Read more
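As a toy illustration of that crawl loop, a single process that fetches a page, records it, and queues the links it finds, here is a short sketch; it is nowhere near Google's distributed setup, and the seed URL is just a placeholder.

<?php
// Toy sketch of a crawl loop: fetch a page, record it, queue its links.
// Single process, capped at 10 pages; nothing like Google's scale.
$queue   = ['http://example.com/'];  // placeholder seed URL
$visited = [];

while ($queue && count($visited) < 10) {
    $url = array_shift($queue);
    if (isset($visited[$url])) {
        continue;
    }
    $html = @file_get_contents($url);
    if ($html === false) {
        continue;
    }
    $visited[$url] = strlen($html);  // "store" the page (here, just its size)

    // Extract absolute links and queue the ones we have not seen yet.
    if (preg_match_all('/href="(https?:\/\/[^"]+)"/i', $html, $matches)) {
        foreach ($matches[1] as $link) {
            if (!isset($visited[$link])) {
                $queue[] = $link;
            }
        }
    }
}

print_r($visited);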

What is the practical benefit of learning Google's internals?

I forgot to start my Google inner workings series with the WIIFM (what's in it for me?). My plan is to write one post each week.

No matter how well I try to explain it, it is a complex subject. I should have started the first post by explaining why you would want to learn this; there are a lot of easier things to read. With some people questioning the usefulness of SEO, this is a good time to make my views clear. Please note that I believe in a solid marketing mix that includes SEO, PPC, SMO, affiliate marketing, viral marketing, etc. Do not put all your eggs in one basket.

If you have been blogging for a while, you have probably noticed that you are getting hits from the search engines for words you did not try to optimize for. For example, the day after I started this blog, I received a comment from a reader who found it through a blog search! How was this possible?

Heather Paquinas May 26th, 2007 at 1:24 am

I found your blog in google blogsearch. Needless to say I subscribed right away after reading this. I always suspected what you said, especially after Mike Levin from hittail blogged about using hittail for ppc, but you really hit the nail on the head with this post.

This is possible because that is the job of the search engines! If every page you find in a search had to be optimized, there wouldn't be billions of pages in Google's index. It would take a lot of people to do all that SEO work :-).

Why do we need SEO, then? Read more

Should I cross link my sites for better rankings?

My loyal reader Jez asks a very interesting question. I am sure the same question is on the minds of others in the same situation.

Finally, I am in the process of creating multiple sites around a similar theme. I have unique content for all sites, and will host on different servers in Europe and the US, however the whois for each domain will show my name (The company I used does not allow me to hide this info). Is the common whois likely to make much difference when I begin cross linking the sites?

Cross linking (or reciprocal linking) on a small scale (maybe 10 to 15 sites maximum) should not be a major concern. I've seen many sites do it, and they rank for highly competitive phrases. Most of their link juice comes from non-cross-linked sites, though.

When you try to do this on a massive scale, things start to get interesting. I know this from experience. Read more

Can you trust Alexa's numbers?

It is very important to understand that there is no way for external metrics tools such as Alexa, Compete, Ranking, Netcraft, etc. to provide fully accurate data. Their information is collected from their respective toolbars' usage, and while Alexa has the broadest distribution, there are still a lot of people who don't use any of those toolbars or browser plugins. Their data is most useful if you are in a technical field (search and affiliate marketing, web development, etc.), because in those niches a large portion of your potential visitors probably have one or more of these toolbars installed.

A while ago, there was an interesting project regarding the efficacy of those metrics. Read more

Great Content + Bad Headline = Mediocre Results

You can spend a few hours researching, structuring, drafting, and proofreading a great post, then completely miss the mark by choosing a really bad title. I recently submitted a carefully crafted rebuttal to the SEOmoz article "Proof Google is Using Behavioral Data in Rankings". That article generated controversy and heated discussion about the validity of the tests and results. I read everything and, given my technical nature, decided to dig deeper myself. I ended up with slightly different conclusions about the experiments. If you want to find out why, please read my post at YouMoz.

Now, here's the bad news. As Kurt wisely points out, I tragically missed the mark by choosing an empty title: "Relevance feedback".

Kurt (86)

Sat (6/16/07) at 05:38 PM

Good post… well thought out and presented… gave it a thumbs up. Unfortunately, it will most likely get overlooked by most readers due to its title/headline. Look at the article you're a referencing, "Proof Google is Using Behavioral Data in Rankings". You know that headline will bring in some clicks. It was moved to the blog of SEOmoz from the Youmoz section (even with its flawed testing and logic). The mozzers aren't stupid… they know this type of headline and article will stir up some controversy and bring in some links. I'm no expert copywriter… far from it. I just hate to see a good post sit on the sidelines because of a bad headline.

The title I chose did not offer the reader any incentive to click or learn more. I guess I operate in two modes, engineer and marketer, and I forgot to flip the switch while writing that post.

First, let me state that his remarks about the mozzers apply equally to most journalists, trade publications, social media sites, etc. It is human nature to judge books by their cover: if the cover is crap, the content must be crap. That is how we normally think. Read more