You’ve Won the Battle but not the War: 10 Ways to protect your site from negative SEO

Last month there was an interesting article in Forbes about search engine marketing saboteurs. These so-called “SEO professionals” proudly proclaim their job to be damaging the hard-earned rankings of their clients’ competitors. I understand that a lot of people would do anything for money, but it’s still unsettling to see such people trumpet their efforts with so much gusto. A huge thumbs down to all those mentioned in the article.

Earning high search engine rankings is challenging enough. Now we need to work twice as hard to protect the rankings once we earn them. The Forbes article lists seven ways you can damage someone else's website. I can think of three more — but instead of adding more wood to the negative SEO fire, I’ve decided to create a list of things you can do to detect, prevent and protect your rankings from these types of attacks.

Here are Hamlet’s countermeasures. (You may want to read the Forbes article first to better understand the terms.) Read more

Why Quality Always Wins Out in Google's Eyes

Quality is king in search engine rankings. Of course, spam sites using the latest techniques make their way to the top, but their rankings are temporary, fleeting, and quickly forgotten. Quality sites are the only ones that maintain consistent top rankings.

This wasn’t always true. A few years ago it was very easy to rank highly for competitive search terms. I know that for a fact because I was able to build my company from scratch just by using thin affiliate sites. Everybody knows that there has been a drastic change since then: at least on Google, it is increasingly difficult for sites without real content to rank highly, no matter how many backlinks they get.

Why is this?

Search engines use “quality signals” to rank websites automatically. Traditionally, those signals were things you might expect: the presence of the searched words in the body of the web page and in the anchor text of links coming from other pages. There is increasing evidence that Google is looking at other quality signals to assess the relevance of websites.
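To make the idea of a ranking signal concrete, here is a toy scorer (nothing like Google's actual formula, just a sketch with arbitrary weights) that looks at those two traditional signals: whether the query words appear in the page body and whether they appear in the anchor text of inbound links.

# Toy relevance scorer: illustrates the two "traditional" signals only.
# The weights are arbitrary; real engines combine hundreds of signals.

def toy_score(query, body_text, inbound_anchor_texts):
    terms = query.lower().split()
    body = body_text.lower()
    anchors = " ".join(inbound_anchor_texts).lower()
    body_hits = sum(term in body for term in terms)       # words on the page itself
    anchor_hits = sum(term in anchors for term in terms)  # words in links pointing to it
    return body_hits + 2 * anchor_hits                    # anchor text weighted higher (arbitrary)

print(toy_score("blue widgets",
                "We sell the best blue widgets in town.",
                ["cheap blue widgets", "my favorite widget store"]))  # prints 6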

Here are some of the so-called quality signals Google might be using to provide better results: Read more

Out of the Supplemental Index and into the Fire

Do you have pages in Google's supplemental index? Get 'em out of there!

Matt Cutts of Google doesn't think SEOs and website owners should be overly concerned about having pages in the supplemental index. He has some pages in the supplemental index, too. In his own words:

“As a reminder, supplemental results aren’t something to be afraid of; I’ve got pages from my site in the supplemental results, for example. A complete software rewrite of the infrastructure for supplemental results launched in Summer o’ 2005, and the supplemental results continue to get fresher. Having urls in the supplemental results doesn’t mean that you have some sort of penalty at all; the main determinant of whether a url is in our main web index or in the supplemental index is PageRank. If you used to have pages in our main web index and now they’re in the supplemental results, a good hypothesis is that we might not be counting links to your pages with the same weight as we have in the past. The approach I’d recommend in that case is to use solid white-hat SEO to get high-quality links (e.g. editorially given by other sites on the basis of merit).”

Google is even considering removing the supplemental result tag. If that happens, there will be no way for us to tell whether a page is supplemental. Read more

Popularity Contest: How to reveal your invisible PageRank

Let's face it: we all like to check the Google PageRank bar to see how important websites, especially our own, are to Google. It tells us how cool and popular our site is.

For those of us who are popularity-obsessed, the sad part is that the other search engines do not provide a similar feature, and Google's visible PageRank is updated only about every three months (the real PageRank is invisible). This blog is two months old and doesn't have a visible PageRank yet, but I get referrals from many long-tail searches, so it has to have some PageRank already.
How can you tell what your PageRank is without waiting for the public update? Keep reading to learn this useful tip. The technique is not bulletproof, but by studying how frequently your pages are crawled and re-indexed, you can get a rough estimate of your invisible PageRank, and of how important your pages are to the other search engines as well. Read more
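As a preview of the idea, here is a minimal sketch that counts how often Googlebot has fetched each URL recorded in a standard combined-format access log (the log path and format are assumptions; adjust them for your server). The pages that get re-crawled most often are usually the ones the engine considers most important.

# Rough sketch: count Googlebot fetches per URL from a combined-format
# access log. "access.log" is a placeholder path; adjust for your server.
import re
from collections import Counter

LOG_FILE = "access.log"
LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

crawl_counts = Counter()
with open(LOG_FILE) as log:
    for raw_line in log:
        match = LINE.match(raw_line)
        if not match:
            continue
        path, user_agent = match.groups()
        if "Googlebot" in user_agent:
            crawl_counts[path] += 1

# The most frequently re-crawled pages are a crude proxy for importance.
for path, hits in crawl_counts.most_common(20):
    print(f"{hits:5d}  {path}")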

Becoming a Web Authority: The story of Sally and Edward

Getting lasting search engine rankings for your site requires making it very popular within your specific niche, turning it into what we call a web authority. Other site owners will naturally link to your site when they talk about your topic because you have some of the best content out there.

It's common sense that if you want your site to become popular, you'll seek advice from those who have already made it, assuming they are willing to share how they did it. I personally like to read Neil Patel's blog. He is one of the few SEO celebrities who is open about how he made it to the top.
Most advanced SEOs carefully study the high-ranking pages for the keywords they are targeting. If you can understand how those sites got there, you can apply the same techniques to your own site. Unfortunately, as I explained previously, not all high-ranking sites will stay there. Such sites are a house of cards, destined to be brought down by the next stiff breeze. If you copy the strategies behind those houses of cards, yours will come down too. Let me illustrate my point with this story about Sally and Edward.
Read more

Anatomy of a Distributed Web Spider — Google's inner workings part 3

What can you do to make life easier for the search engine crawlers? Let's pick up where we left off in our Google's inner workings series. I am going to give a brief overview of how distributed crawling works. The topic is useful but can be a bit geeky, so I'm offering a prize at the end of the post. Keep reading; I am sure you will like it. (Spoiler: it's a very useful script.)


4.3 Crawling the Web (quoted from Brin and Page's paper, "The Anatomy of a Large-Scale Hypertextual Web Search Engine")

Running a web crawler is a challenging task. There are tricky performance and reliability issues and even more importantly, there are social issues. Crawling is the most fragile application since it involves interacting with hundreds of thousands of web servers and various name servers which are all beyond the control of the system.

In order to scale to hundreds of millions of web pages, Google has a fast distributed crawling system. A single URLserver serves lists of URLs to a number of crawlers (we typically ran about 3). Both the URLserver and the crawlers are implemented in Python. Each crawler keeps roughly 300 connections open at once. This is necessary to retrieve web pages at a fast enough pace. At peak speeds, the system can crawl over 100 web pages per second using four crawlers. This amounts to roughly 600K per second of data. A major performance stress is DNS lookup. Each crawler maintains its own DNS cache so it does not need to do a DNS lookup before crawling each document. Each of the hundreds of connections can be in a number of different states: looking up DNS, connecting to host, sending request, and receiving response. These factors make the crawler a complex component of the system. It uses asynchronous IO to manage events, and a number of queues to move page fetches from state to state.

And that was when they were just getting started; can you imagine how massive the system is now? Read more
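To make those moving parts a bit more concrete, here is a standard-library-only sketch (not Google's code, and not the prize script from the full post): a stand-in URL feeder hands pages to a pool of parallel fetchers, and a memoized DNS cache keeps each hostname from being resolved more than once. The real system juggles hundreds of connections with asynchronous IO and queues rather than a thread pool.

# Toy "distributed crawler" sketch: parallel fetchers plus a DNS cache.
# Standard library only; the real design uses asynchronous IO and queues.
import functools
import socket
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Per-crawler DNS cache: memoize lookups so each host is resolved only once.
socket.getaddrinfo = functools.lru_cache(maxsize=10_000)(socket.getaddrinfo)

def fetch(url, timeout=10):
    # One page fetch; errors (DNS failures, timeouts, 4xx/5xx) are reported, not fatal.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return url, resp.status, len(resp.read())
    except Exception as exc:
        return url, None, str(exc)

def crawl(urls, workers=30):
    # Stand-in for the URLserver feeding a pool of crawler connections.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for url, status, info in pool.map(fetch, urls):
            print(url, status, info)

if __name__ == "__main__":
    crawl(["https://example.com/", "https://www.python.org/"])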

SEO is Dead! (Long Live Advanced SEO)

After trying a few different approaches, I've finally found a clear direction for this blog. Some A-list bloggers think that starting another SEO blog is a bad idea: there are thousands of bloggers writing about SEO, the market is already saturated, and it's going to be difficult to stand out.

I have to say that I disagree. While there are lots of people blogging about SEO, many of them are just repeating the same advice: "create good content" and "attract authority links." If it were that simple and that were all you needed to do, why would we need to learn SEO in the first place?

Here are a few examples of sites with good content and relevant links that are not listed in the top 10.

[Screenshot: allintitle results for three example sites]

Read more

At last! A Rock-solid Strategy to Help you Reach High Search Engine Rankings

Every SEO strategy has its pros and cons.

Search engines have long kept their ranking formulas secret in order to give users the most relevant results and to preclude malicious manipulation. Despite all the secrecy, though, it remains extremely difficult for search engine companies to prevent either direct or indirect manipulation.

We can broadly classify search engine optimization (SEO) strategies into two camps: passive and active. Passive optimization happens automatically as search engine robots scour the web, finding and categorizing relevant sites. Most websites listed on a search engine result page (SERP) fall into this category. If it were up to search engine companies like Google and Yahoo, this would be the only type of optimization that existed.

Active optimization takes place when website owners purposefully engage in activities designed to improve their sites’ search engine visibility. This is the kind of search engine optimization we normally talk about when we think of SEO.

Let’s go one step further and classify these deliberate forms of optimization based upon the tactics employed. In general these are: basic principles, exploiting flaws, algorithm chasing, and competitive intelligence. Read more

The Never-Ending SERP Hijacking Problem: Is there a definitive solution?

In 2005 it was the infamous 302 (temporary redirect) page hijacking. That was supposedly fixed, according to Matt Cutts. Now there is an interesting new twist: hijackers have found another exploitable hole in Google, the use of cgi proxies to hijack search engine rankings.

The problem is basically the same: two URLs point to the same content, Google's duplicate content filters kick in, and one of the URLs gets dropped, normally the one with the lower PageRank. That is Google's core problem: it needs a better way to identify the original author of a page.

When someone blatantly copies your content and hosts it on their site, you can take the offending page down by sending a DMCA complaint to Google, et al. The problem with 302 redirects and cgi proxies is that there is no content being copied. They are simply tricking the search engine into believing there are multiple URLs hosting the same content.

What is a cgi proxy anyway? Glad you asked. I love explaining technical things :-) Read more
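The short version, ahead of the full write-up: a cgi proxy is just a script that fetches whatever URL it is handed and echoes the response back under its own address. The bare-bones sketch below (for understanding the problem, not for deployment) skips everything a real proxy does, such as link rewriting and cookie handling, but it shows why a crawler that follows a link like proxy-site.example/proxy.cgi?url=http://yoursite.com/page ends up seeing a copy of your page at a URL you do not control.

#!/usr/bin/env python3
# Bare-bones illustration of a cgi proxy: fetch ?url=... and echo the body
# back, so the same content now also appears under the proxy's own address.
import os
import sys
import urllib.parse
import urllib.request

query = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))
target = query.get("url", [""])[0]

print("Content-Type: text/html; charset=utf-8")
print()  # blank line ends the CGI headers

if target.startswith(("http://", "https://")):
    try:
        with urllib.request.urlopen(target, timeout=10) as resp:
            sys.stdout.write(resp.read().decode("utf-8", errors="replace"))
    except Exception as exc:
        sys.stdout.write(f"<p>Could not fetch {target}: {exc}</p>")
else:
    sys.stdout.write("<p>No url parameter given.</p>")

From the search engine's point of view, those two URLs host identical content, which is exactly the situation the duplicate content filter is built to collapse, and as noted above it tends to keep the version with the higher PageRank.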

What do Search Marketing and Going to the Doctor have in Common? Learn my Best Kept Secret: How to find profitable keywords

Picture this scenario:

Laura Martin has a mild headache. She knows she suffers from migraines, so she goes to her pharmacist. The pharmacist calls her doctor to get approval for a refill of Imitrex, a migraine medication.

Paul Stevenson feels extremely tired. He knows he suffers from type 1 diabetes: his body doesn't make enough insulin. All he needs is another insulin shot to help turn the food he has eaten into energy and get his body working again.

These patients know exactly what they need. They have health problems and they already know the solution.

Now consider this:

Ralph Fortin was hit on the head by a baseball and has been having severe headaches. He doesn't suffer from any condition, as far as he knows, and he doesn't have a clue what might be wrong. He goes to the doctor and explains what happened. The doctor gives him a checkup and prescribes a painkiller.

Helen Willis was taking her usual morning walk. She's a very healthy person with no medical history. She gets dizzy, everything begins to spin, and she falls to the ground. She wakes up in an ambulance. The emergency doctor starts asking questions, but she doesn't have a clue how to respond, so he has to run further tests. It turns out that she forgot to eat breakfast.

These patients had no idea what their problems were. They needed a diagnosis before they could get a solution.

What does this have to do with search? Good question. Read more