SEO is Dead! (Long Live Advanced SEO)

After trying a few different approaches, I've finally found a clear direction for this blog. Some A-list bloggers consider starting another SEO blog a bad idea: there are thousands of bloggers writing about SEO, the market is already saturated, and it's going to be difficult to stand out.

I have to say that I disagree. While there are lots of guys blogging about SEO, many of them are just repeating the same thing: "create good content" and "attract authority links." If it were that simple, and that were all you needed to do, why would we need to learn SEO in the first place?

Here are a few examples of sites with good content and relevant links that are not listed in the top 10.

[Image: allintitle_3examples.gif]

Read more

At last! A Rock-solid Strategy to Help you Reach High Search Engine Rankings

[Image: rock.jpg]

Every SEO strategy has its good and bad sides.

Search engines have long kept their ranking formulas secret in order to give users the most relevant results and to preclude malicious manipulation. Despite all the secrecy, though, it remains extremely difficult for search engine companies to prevent either direct or indirect manipulation.

We can broadly classify search engine optimization (SEO) strategies into two camps: passive and active. Passive optimization happens automatically: search engine robots scour the web, finding and categorizing relevant sites. Most websites listed on a search engine result page (SERP) fall into this category. If it were up to search engine companies like Google and Yahoo, this would be the only type of optimization that existed.

Active optimization takes place when website owners purposefully engage in activities designed to improve their sites’ search engine visibility. This is the kind of search engine optimization we normally talk about when we think of SEO.

Let’s go one step further and classify these deliberate forms of optimization based upon the tactics employed. In general these are: basic principles, exploiting flaws, algorithm chasing, and competitive intelligence. Read more

The Never Ending SERPs Hijacking Problem: Is there a definitive solution?

[Image: hijacker.jpg]

In 2005 it was the infamous 302 temporary-redirect page hijacking. That was supposedly fixed, according to Matt Cutts. Now there is an interesting new twist: hijackers have found another exploitable hole in Google, the use of CGI proxies to hijack search engine rankings.

The problem is basically the same: two URLs pointing to the same content. Google's duplicate content filters kick in and drop one of the URLs, normally the one with the lower PageRank. That is Google's core problem. They need to find a better way to identify the original author of a page.

When someone blatantly copies your content and hosts it on their site, you can take the offending page down by sending a DMCA complaint to Google, et al. The problem with 302 redirects and CGI proxies is that there is no content being copied. They are simply tricking the search engine into believing there are multiple URLs hosting the same content.

What is a CGI proxy anyway? Glad you asked. I love explaining technical things :-) Read more

What do Search Marketing and Going to the Doctor have in Common? Learn my Best Kept Secret: How to find profitable keywords

[Image: doctor.jpg]

Picture this scenario:

Laura Martin has a mild headache. She knows she suffers from migraines, so she goes to her pharmacist. The pharmacist calls her doctor to get approval for a new refill of Imitrex, a migraine treatment medicine.

Paul Stevenson feels extremely tired. He is aware he suffers from type 1 diabetes. His body doesn’t make enough insulin. He only needs another insulin shot to help the food he has eaten turn into energy and get his body back to work.

These patients know exactly what they need. They have health problems and they already know the solution.

Now consider this:

Ralph Fortin was hit on the head by a baseball. He’s been having strong headaches. He doesn’t suffer from anything, as far as he knows. He doesn’t have a clue as to what might be wrong with his system. He goes to the doctor and explains what happened. The doctor gives him a checkup and prescribes a painkiller.

Helen Willis was taking her usual morning walk. She’s a very healthy person with no medical history. She gets dizzy, everything begins to spin, and she falls to the floor. She wakes up in an ambulance. The emergency doctor starts asking questions, but she doesn’t have a clue how to respond, so he needs to perform further tests. It turns out that she forgot to eat breakfast.

These patients didn’t have a clue what their problems were. They only knew they needed a solution.

What does this have to do with search? Good question. Read more

John Pushed Google's Limit and got Dropped! Should you push it too?

I just learned that John Chow's rankings dropped, even for his own name! He suspects it has something to do with his review-for-a-link-back promotion. I am sure that is the problem.

As of yesterday, I was not ranking for my own name. My problem was that when I moved from wordpress.com to my own server, I had http://hamletb.wordpress.com and http://hamletbatista.com, both with the same content. Fortunately, it was very easy to fix. I made hamletb.wordpress.com redirect to hamletbatista.com, and requested a removal of hamletb.wordpress.com from Google's index via Webmaster Central.
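For reference, on a server you control a permanent redirect like that takes only a few lines of PHP. This is a generic sketch, not the exact code I used, and the helper name is my own:

```php
<?php
// Build the new-domain URL for a request that arrived on the old host.
// A 301 (permanent) redirect tells search engines the move is final,
// so links pointing at the old domain start counting for the new one.
function old_to_new_url($request_uri, $new_host = 'http://hamletbatista.com')
{
    return $new_host . $request_uri;
}

// Usage at the very top of the old site, before any output is sent:
//   header('HTTP/1.1 301 Moved Permanently');
//   header('Location: ' . old_to_new_url($_SERVER['REQUEST_URI']));
//   exit;
```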

Some useful SEO tweaks I did to my blog:

1. I installed the All in One SEO plugin so that I can have unique titles and descriptions. (The meta keywords tag is not really useful for SEO.)

2. All category pages, etc. have a meta robots noindex tag to avoid duplicate content.

3. I submitted an XML sitemap of my blog to Google.
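The noindex tweak in point 2 boils down to printing a conditional meta tag from the theme's header template. Here is a simplified sketch of the idea; a real WordPress theme would use conditional tags like is_category() instead of a page-type string:

```php
<?php
// Return a meta robots tag for page types that would otherwise create
// duplicate content (category, tag and date archives repeat post text).
// "noindex,follow" keeps the page out of the index while still letting
// spiders follow its links.
function robots_meta_tag($page_type)
{
    $duplicate_prone = array('category', 'tag', 'archive');
    if (in_array($page_type, $duplicate_prone)) {
        return '<meta name="robots" content="noindex,follow" />';
    }
    return ''; // regular posts and pages stay indexable
}
```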

Would these steps solve John's case? I don't think so. Read more

I'm not slacking, I'm working on a home run post!

I am working on a killer post for next Monday where I am going to detail my best kept secret: a very simple technique to identify keywords with high demand and little or no competition.

Do you want to have keywords like this?

[Image: profitable_keywords3.gif]

I've asked several A-list bloggers for their niche-finding formula. And guess what: they either don't have time or don't think it's a good idea to share it. I can't blame them. Sharing this powerful information would render it practically ineffective and could cost them thousands of dollars in revenue due to the increased competition.

My blog reader Paul Montwill started the fire when he dared to ask me for such information.  I am sure he will be delighted when I post my technique next Monday.

Here are a couple of tidbits I was able to squeeze out of Aaron Wall and Neil Patel.  They are busy guys and I am glad they took the time to respond:

if he wants to become an SEO consultant then the easiest way to learn marketing is to start marketing one of his own sites… preferably covering a topic he is passionate about. Aaron Wall

The way I usually start is to look at terms that have a high CPC and then from there I look for the least competitive ones and go after them. I don't know of a quick way to do this because I myself don't really do it, but there may be some easy ways. Neil Patel

What Neil mentions applies when you are planning to do SEO only; for PPC, you don't want to pay high bid prices. In either case, what I personally look for are terms with high demand (search volume), good profit per sale, and low competition. The profit per sale depends on the product and the affiliate commission you would get paid.

To measure the level of competition, I use two basic methods. If I am going to do PPC (which I usually do to start), I check the number of competing AdWords advertisers. In the case of SEO, I check the SERPs (search engine result pages) to see how many sites are ranking organically for those terms.
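To make "high demand, good profit, low competition" concrete, you can collapse the three numbers into a single score for sorting candidate terms. The formula below is purely illustrative, my own rough weighting rather than any standard metric:

```php
<?php
// Rough keyword score: higher demand and profit per sale push the score
// up, more competing pages push it down. Treat the result as a sorting
// aid for candidate terms, not a verdict.
function keyword_score($monthly_searches, $profit_per_sale, $competing_pages)
{
    if ($competing_pages < 1) {
        $competing_pages = 1; // avoid division by zero for untapped terms
    }
    return ($monthly_searches * $profit_per_sale) / $competing_pages;
}
```

A term searched 5,000 times a month, paying $30 per sale, with 2,000 competing pages scores 75; the same term against 200,000 competitors scores 0.75.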

How can you find those terms in the first place?

That is what I am going to answer in Monday's post.  I will include very detailed instructions and examples too.

I am still debating whether this is a good idea. Am I going to take food from my table by doing this? Probably, but as I've committed myself to sharing, I guess I don't have a choice. I have to stick to my word.

Please leave some comments and let me know if this is something you'll find useful. Would I be giving away too much? To share, or not to share: that is the question.

Watch out, FeedBurner's numbers are woefully inaccurate! … but why?

This was Rand's response to a comment I made about his confirmation of Aaron's claim that an RSS subscriber is worth 1,000 links.

Here is my comment:

Wed (6/27/07) at 07:38 AM

Very useful links. I really like the Adwords tip.

An RSS Subscriber is Worth a Thousand Links – well said, Aaron, and very true (though I'd say, rather, 250 or 300)

I think it all depends on the quality of the links, the content on your blog, and your audience.

I checked some A-list blogs to compare subscribers count and inbound links:

SEOmoz: 13,109 subscribers, 998,000 links, 76.13 links per subscriber

Problogger: 25,579 subscribers, 543,000 links, 21.23 links per subscriber

Copyblogger: 19,083 subscribers, 196,000 links, 10.27 links per subscriber

Shoemoney: 9,737 subscribers, 127,000 links, 13.04 links per subscriber

John Chow: 5,818 subscribers, 127,000 links, 21.83 links per subscriber

The gap doesn't seem to be so big.

What got me intrigued was the fact that bloggers are losing confidence in FeedBurner's ability to accurately count RSS subscribers. I noticed, especially on SEOmoz, that RSS subscriber numbers jump up and down drastically, usually on weekends.

We all like to watch our reader stats, subscriber counts, and traffic as a measure of whether we are doing things right or wrong. When WordPress.com dropped the RSS stats tab, it motivated me to host my blog on my own server. I am glad they did, as I have a lot more flexibility now. I will write a post with more details on the move soon.

I decided to dig deep for clues as to how FeedBurner assesses the subscriber count. I had the feeling they were measuring hits to the RSS feed. But how they account for hits coming from aggregator services like Bloglines and Google Reader was the question.
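Part of the answer is hiding in plain sight in the server access logs: aggregators such as Google's Feedfetcher poll a feed once on behalf of many readers, and they report their reader tallies right in the User-Agent header. A minimal sketch of pulling that number out (the exact header formats vary by aggregator):

```php
<?php
// Extract the reader count an aggregator advertises in its User-Agent,
// e.g. "Feedfetcher-Google; (+http://www.google.com/feedfetcher.html;
// 3 subscribers; feed-id=...)". A plain desktop reader has no such
// count and represents a single subscriber.
function subscribers_from_ua($user_agent)
{
    if (preg_match('/(\d+)\s+subscribers/i', $user_agent, $matches)) {
        return (int) $matches[1];
    }
    return 1;
}
```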

How does FeedBurner estimate the number of RSS readers?
Read more

Legitimate cloaking — real-world example and PHP source code

It's been a while since I posted some juicy source code. This time, I am going to explain the infamous black hat technique known as cloaking with some basic PHP code.

While most people think of cloaking as evil, and as asking search engines to penalize your site, there are circumstances where it is perfectly legitimate and reasonable to use it.

From Google's quality guidelines:

Make pages for users, not for search engines. Don't deceive your users or present different content to search engines than you display to users, which is commonly referred to as "cloaking."
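Mechanically, cloaking is nothing more than branching on who is asking for the page. A stripped-down sketch of the detection half (user-agent matching only; serious cloaking scripts also verify crawler IP ranges, since user agents are trivially faked):

```php
<?php
// Decide whether the visitor's User-Agent looks like a major search
// engine spider. Matching the UA alone is naive (anyone can fake it),
// which is why real cloaking code also checks known crawler IPs.
function is_search_bot($user_agent)
{
    $bots = array('Googlebot', 'Slurp', 'msnbot');
    foreach ($bots as $bot) {
        if (stripos($user_agent, $bot) !== false) {
            return true;
        }
    }
    return false;
}

// The cloak itself is just a branch (file names here are hypothetical):
//   if (is_search_bot($_SERVER['HTTP_USER_AGENT'])) {
//       include 'spider_version.php';
//   } else {
//       include 'visitor_version.php';
//   }
```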

What is cloaking?
Read more

What is the problem with generic product names?

If you have read my about page, you already know one of my businesses: NearshoreAgents. You can learn more about what we do by visiting the website.

I want to share a nice little debate I recently had with the company's marketing and sales director, Michael Payne. We were brainstorming the right name for a new product. He wanted a generic name, but I strongly disagreed.

One of the best things about blogging and interacting with potential customers is that you get to understand their needs. This is of tremendous value when you are trying to come up with new product ideas.

I review our chat transcripts every day. They tell me a lot about the quality of service we are providing, problems, etc.

At the moment we are only targeting businesses, but we have noticed we are getting service inquiries directly from consumers. A lot of people don't know how to solve their PC problems. We saw this as a great opportunity to introduce our first B2C product: live tech support for end users. I won't get into all the details, as that is not the purpose of this post.

Michael wanted to name the product ChatTechSupport. The product name says exactly what we plan to do. It includes the main keywords (search engine friendly) and is not too long.

What is my problem with that name?
Read more

Google's Architectural Overview (Illustrated) — Google's inner workings part 2

For this installment of my Google's inner workings series, I decided to revisit my previous explanation. However, this time I am including some nice illustrations so that both technical and non-technical readers can benefit from the information. At the end of the post, I will provide some practical tips to help you improve your site rankings based on the information given here.

To start the high level overview, let's see how the crawling (downloading of pages) was described originally.

Google uses a distributed crawling system. That is, several computers/servers with clever software download pages from the entire web. Read more
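To give a flavor of that crawling step, here is a toy version of the loop every crawler repeats: download a page, extract its links, queue the new ones. Only the link extraction is shown as working code; a real crawler adds relative-URL resolution, robots.txt checks, politeness delays, and distribution across machines:

```php
<?php
// Extract absolute http(s) links from a chunk of HTML. A crawler feeds
// every fetched page through a function like this and appends unseen
// URLs to its download queue.
function extract_links($html)
{
    if (preg_match_all('/href=["\'](https?:\/\/[^"\']+)["\']/i', $html, $matches)) {
        return array_values(array_unique($matches[1]));
    }
    return array();
}

// Skeleton of the crawl loop itself (seed URL is hypothetical):
//   $queue = array('http://example.com/');
//   while ($url = array_shift($queue)) {
//       $html  = file_get_contents($url);  // the "download pages" step
//       $queue = array_merge($queue, extract_links($html));
//   }
```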