Archive for May, 2007

LongTailMiner v0.1 alpha – find long tail keywords nobody thought about

I’m really enjoying this blogging thing! Every comment I get from my readers sparks a new idea that I feel compelled to put into practice.

My reader, Andrea, mentioned she parses log files to mine for keywords as well. That is an excellent idea.

I decided to put that idea into code and here is a new tool to mine for long tail keywords.

To make really good use of it, I would set up a PPC campaign in Google with a “head keyword” in broad match, bidding at the minimum possible. Make sure your ads maintain good click-through rates (over 0.5%) to avoid getting disabled. Run it for a week or two (preferably more) and you will have a good number of search referrals and “long tail keywords” that people are actually looking for. You can later create good content pages that include those keywords. In most cases, long tail keywords are really easy to rank for with on-page optimization alone.

I will probably write a Youmoz entry with more detailed instructions on how to take advantage of this. In this way I can get more people to try it and get really valuable feedback.

Here is the Python code:


# v0.1 alpha by Hamlet Batista 2007

import re
from urlparse import urlparse
from cgi import parse_qsl

# internal page between GET and HTTP, search engine referrer after the 2xx status code
p = r'[^"]+"GET\s([^\s]+)[^"]+"\s2[^"]+"([^"]+(?:google|yahoo|msn|ask)[^"]+)"'

log = open('tripscan.actual_log')
lines = log.readlines()
keywords = set()

for line in lines:
    m =, line)
    if m:
        (internal, link) = m.groups()
        elements = urlparse(link)
        if elements[4]: # check to see if there is a query string
            params = parse_qsl(elements[4]) # break the qs into (keyword, value) pairs
            for (k, v) in params:
                if k == 'p' or k == 'q': # top search engines use p or q for the keywords
                    keywords.add(elements[1] + " - " + v)

# print the report
for k in keywords:
    print k

Here is the output:

– best places to vacation in april
– help find a cheap vacation package anywhere
– new york vacation package deals
– vegas vacation packages
– what is the best beaches to stay in jamaica
– outrageous hawaii vacation packages
– “paris in 5 days” versailles
– Vacation Package Deals
– vacationpackage
– vacation packages
– 10 best places for vacation
– vacation package
– best places to vacation in june/july
– best travel deals for june
– last minute caribean deals
– package vacation
– Tripscan
– best places to vacation in June
– best places to travel in october
– vacation package
– caribean vacation
– Best Caribean vacation
– Cheap Vacation Package
– CANYON RANCH IN LENNOX
– find vacation packages
– Hawaii all inclusive Vacation Packages
– california vacation ideas
– vacaton package
– caribean vacation
– all inclusive package deals from New York to Cancun
– best places to explore
– caribean vacation island packages
– vancation package
– puerto vallarta nude resorts
– all inclusive vacation places
– vacation packge
– vacation package
– caribean deals
– best hotels in caribean
– the best caribean vacation
–
– vacation packages

This is just scratching the surface. One improvement we can make is to identify the landing pages the keywords lead to, so we can make sure visitors are finding what they want.
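That improvement could be sketched roughly like this. The referral pairs below are made-up sample data; in the real script they would come from the (internal, link) groups the regex already extracts:

```python
# Sketch: map each long tail keyword to the landing page it led to.
# The referral pairs below are hypothetical sample data; in the real
# script they would come from the (internal, link) regex groups.
referrals = [
    ('/cancun.php', 'q=cancun+vacation+package'),
    ('/cancun.php', 'q=all+inclusive+cancun'),
    ('/index.php',  'p=vacation+package'),
]

landing_pages = {}
for (page, query) in referrals:
    for pair in query.split('&'):
        (k, _, v) = pair.partition('=')
        if k in ('p', 'q'):  # top engines use p or q for the query
            keyword = v.replace('+', ' ')
            landing_pages.setdefault(keyword, set()).add(page)

for keyword in sorted(landing_pages):
    print(keyword + ' -> ' + ', '.join(sorted(landing_pages[keyword])))
```

With the keyword-to-landing-page map in hand, you can spot keywords that land on the wrong page.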


In order to use the script you need to download Python from The script should run on Unix/Linux, Mac, and Windows, but I have only tested it on Linux.

1. Copy your log file to the directory where you saved the script.

2. Change the name of the log file (inside the quotes) in the line log = open('tripscan.actual_log') to the name of your log file.

3. On the command line, type python followed by the script’s filename, and you should see the report.


LinkingHood v0.1 alpha

As I promised to one of my readers, here is the first version of the code to mine log files for linking relationship information.

I named it LinkingHood as the intention is to take link juice from the rich to give to the poor linking sites.

I wrote it in Python for clarity (I love Python 🙂). I was working on a more advanced approach involving matrices and linear algebra, but some of the feedback on that article gave birth to a new idea, and to make it easier to explain I decided to use a simpler approach. The code would definitely need to be rewritten to use matrices and linear algebraic operations to scale to sites with 10,000 or more pages (more about that in a later post). This version is primarily an illustration: it does everything in memory and is extremely inefficient in its current form.

I simply used a dictionary of sets. The keys are the internal pages, and each set holds the external links pointing to that page. I tested it with my log file and included the results of a test run.

Here is the script:

#!/usr/bin/python
# LinkingHood v0.1 alpha by Hamlet Batista 2007

import re

relationships = {}

# internal page between GET and HTTP, referrer after the 2xx status code and size
p = r'[^"]+"GET\s([^\s]+)[^"]+"\s2[^"]+"([^"]+)"'

log = open('tripscan.actual_log')
lines = log.readlines()

for line in lines:
    m =, line)
    if m:
        (internal_page, external_link) = m.groups()
        if'\.css|\.js|\.gif|\.jpg|\.swf|\?', internal_page):
            continue # skip style sheets, scripts, images, and query strings
        if not relationships.has_key(internal_page):
            relationships[internal_page] = set()
        if'yahoo|google|msn|live|ask', external_link):
            continue # search engine referrals are not links
        relationships[internal_page].add(external_link)

print "Tripscan internal pages:"
for page in relationships.keys():
    print "\t" + page + ": " + str(len(relationships[page])) + " links"

home = relationships['/']
about = relationships['/aboutus.html']

print 'Home has ' + str(len(home)) + ' links'
for link in home:
    print '\t' + link

print 'About has ' + str(len(about)) + ' links'
for link in about:
    print '\t' + link

Here are the results from the run:

Tripscan internal pages:
/orlando.php: 2 links
/directory/money_and_finance.html: 3 links
/contact.php: 2 links
/favicon.ico: 3 links
/lasvegas.php: 2 links
/directory/services.html: 2 links
/index.php: 2 links
/directory/travel.html: 1 links
/charleston.php: 2 links
/sunburst.php: 2 links
/cancun.php: 2 links
/blank.php: 5 links
/london.php: 2 links
/discount_travel.php: 2 links
/santodomingo.php: 2 links
/directory/internet.html: 2 links
/phoenix.php: 2 links
/: 41 links
/paris.php: 2 links
/sanfrancisco.php: 2 links
/directory/drugs_and_pharmacy.html: 2 links
/honolulu.php: 2 links
/chicago.php: 2 links
/directory/general.html: 1 links
/directory/fun.html: 2 links
/sitemap.php: 2 links
/hiltongrand.php: 2 links
//: 1 links
/directory/travel2.html: 2 links
/directory/home_business.html: 1 links
/losangeles.php: 2 links
/directory/misc.html: 1 links
/jamaica.php: 2 links
/aruba.php: 2 links
/best_spa.php: 2 links
/amsterdam.php: 2 links
/puertovallarta.php: 3 links
/barcelona.php: 2 links
/newyork.php: 2 links
/submit_link.php: 2 links
/11thhour.php: 2 links
/directory/services2.html: 2 links
/neworleans.php: 2 links
/toronto.php: 2 links
/rome.php: 2 links
/directory/: 2 links
/aboutus.html: 4 links
/directory/other_resources.html: 2 links
/top_ten.php: 2 links

Home has 41 links
About has 4 links

One of the most common stumbling blocks for people unfamiliar with Python is indentation. This code cannot just be copied, pasted into a text file, and passed to Python to run. You need to make sure the indentation (spacing) is right. I will post the code somewhere else and provide a link if this causes too much trouble.

Some readers got lost when I talked about matrices in the previous post. Linking relationships and similarly connected structures are conceptually and graphically represented as graphs. A graph is an interconnected structure made of nodes and edges. In our case, the pages are the nodes and the links are the edges. One of the most common ways to express a graph is with a matrix. Similar to an Excel sheet, it has rows and columns, where each square can be used to indicate that there is a relationship between the page in one column and the page in one row.

Matrices are great for this because one can use matrix operations to solve problems that would otherwise require a lot of memory and computing power. To create the matrix, we would number each unique page and each unique link. We would use the rows to represent the pages and the columns to represent the links. A 1 in a given position means there is a link between the two pages, and a 0 means there is no relationship. Using numbers for the rows and columns, and ones and zeros for the values, saves a lot of memory and makes the computation much more efficient. In the code I use the pages and links directly for clarity.

I hope this is not too confusing.

Update: I made a small change to include the incoming link count for each page.

In order to use the script, download Python from The script should run on Unix/Linux, Mac, and Windows, but I have only tested it on Linux.

1. Copy your log file to the directory where the script was saved.

2. Change the name of the log file (inside the quotes) in the line log = open('tripscan.actual_log') to the name of your log file.

3. On the command line, type python followed by the script’s filename, and you should see the report.

Mining your server log files

While top website analytics packages offer pretty much anything you might need to find actionable data to improve your site, there are situations where we need to dig deeper to identify vital information.

One such situation came to light in a post by randfish of SEOmoz. He writes about a common problem with enterprise-size websites: they have many pages with no or very few incoming links, and far fewer pages that get a lot of incoming links. He then discusses some approaches to alleviate the problem, suggesting primarily linking to link-poor pages from link-rich ones manually, or restructuring the website. I commented that this is a practical situation where one would want to use automation.

Log files are a goldmine of information about your website: links, clicks, search terms, errors, etc. In this case, they can be of great use in identifying the pages that are getting a lot of links and the ones that are getting very few. We can later use this information to link from the rich to the poor, by manual or automated means.

Here is a brief explanation on how this can be done.

Here is an actual log entry to my site in the extended log format: – – [29/May/2007:13:12:26 -0400] "GET /favicon.ico HTTP/1.1" 206 1406 "" "SurveyBot/2.3 (Whois Source)" "-"

First we need to parse the entries with a regex to extract the internal page (between GET and HTTP) and the referring page, which appears after the server status code and the page size; in this case, after 206 and 1406.

We then create two maps: one for the internal pages (page and page id) and another for the external incoming links (page and page id as well). After that we can create a matrix where we identify the linking relationships between the pages. For example, matrix[23][15] = 1 means there is a link from external page id 15 to internal page id 23. This matrix is commonly known in information retrieval as the adjacency matrix or hyperlink matrix. We want an implementation that can preferably be operated from disk, in order to scale to millions of link relationships.

Later we can walk the matrix and create reports identifying the link-rich pages (pages with many link relationships) and the link-poor pages (pages with few link relationships). We can set the threshold wherever it makes sense (e.g., pages with more or fewer than 10 incoming links).
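The report step could be sketched like this, with a hypothetical matrix and a threshold of 2 for brevity:

```python
# Sketch: walk a small, hypothetical adjacency matrix and split pages
# into link-rich and link-poor around a threshold (2 here for brevity).
pages = ['/', '/aboutus.html', '/cancun.php']
matrix = [            # row i: ones for external pages linking to pages[i]
    [1, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
]
THRESHOLD = 2

link_rich, link_poor = [], []
for (i, row) in enumerate(matrix):
    (link_rich if sum(row) >= THRESHOLD else link_poor).append(pages[i])

print('link-rich: ' + ', '.join(link_rich))
print('link-poor: ' + ', '.join(link_poor))
```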

Why it’s good to mix your incoming link anchor text

I’ve been reading John Chow’s blog for a while and it is very interesting how he is getting a lot of reviews with the anchor text “make money online” in exchange for a link from his blog. He is ranking #2 in Google for the phrase “make money online.”

I know a lot of SEOs read John’s blog and are not alerting him to some potential problems with this approach. I like the guy and I think he deserves to know.

It is not a good idea to have most of your incoming links with the same anchor text. Especially if most links are pointing to the home page, and the rest of the pages don’t get any links, or very few of them do. Search engines, notably Google, flag this as an attempt to manipulate their results.

Nobody knows for sure how it works, but Google has proven in the past that they can detect this and act accordingly.

My advice is to request variations of the target phrase for the anchor text with each batch. For example: make money online free, making money online, make money at home online, work from home, etc. Use a keyword suggestion tool to get the variations and make sure you include synonyms too.

I would also require reviewers to include a link to their favorite post in the review. This way the rest of the pages will get links too, and the link profile will look more natural.

This is documented on other sites. Please check: Case #2

Your competitor is your best friend

As I mentioned earlier, for me success is about what, how, and work.  This is my simple formula.

Anywhere my customers or potential customers express their problems and frustrations is a place for me to dig out opportunities. Forums, blogs, mailing lists, newsgroups, etc. Your what should be driven by your customers’ needs.

Most critical for success is how we do it. What sets us apart? What is our UVP? This is where following your best competitors closely pays off.

Nobody is perfect.  There is always a better way to do things or at least to appeal to another audience.

My approach is not to simply copy what my competitors are doing.  This is the easiest path, but it is very difficult to stand out by just being another XYZ.

I prefer to look at my competitor’s solutions as their prescribed answer to customers’ specific problems.  The key here is that what needs solving is the customer’s problem, and there is rarely a single solution.  My solution is how I would solve it better leveraging my strengths.

The harder to get the link, the more valuable it is

Links that are too easy or relatively easy to get do not help much in getting traffic or authority for search engine rankings.

If your link is placed on a page where there are several hundred links competing for attention, it is less likely that potential visitors will click than if the page only has a few dozen links.

The value of your link source is in direct relation to how selective that source is when placing links on the page and how much traffic the source gets.  The value also declines with the number of links on the page.

Google is understood to use algorithms to measure the importance and quality of each page. PageRank was invented by Google’s founders and is used to measure the absolute importance of a page. The TrustRank algorithm describes a technique for identifying trustworthy, quality pages. We cannot tell for sure to what extent Google uses this algorithm, if at all, or at least the publicly known version. What we can say, based on observation, is that they definitely do not treat all links equally, and they do not pass authority to your page from all of your link sources.
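For the curious, here is a toy power-iteration sketch in the spirit of the original PageRank paper. The three-page link graph is made up, and this is only an illustration of how pages accumulate different amounts of authority, not how Google actually computes anything:

```python
# Sketch: a toy PageRank power iteration on a made-up three-page graph,
# only to illustrate that pages (and thus their links) carry different
# amounts of authority. Not how Google actually computes anything.
links = {          # page -> pages it links out to
    'A': ['B', 'C'],
    'B': ['C'],
    'C': ['A'],
}
pages = sorted(links)
d = 0.85           # damping factor from the original PageRank paper
rank = dict((p, 1.0 / len(pages)) for p in pages)

for _ in range(50):
    new = dict((p, (1 - d) / len(pages)) for p in pages)
    for (page, outlinks) in links.items():
        share = d * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share
    rank = new

for p in pages:
    print(p + ': ' + str(round(rank[p], 3)))
```

Note that C, which receives two links, ends up with more authority than B, which receives one from the same source: the links are not worth the same.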

Success on a $100 budget?

Patrick Saxon has asked top names in the SEO industry a very useful question. What most have missed is that Patrick has actually answered the question himself by writing the article.

First, he created a very useful piece of content, and second, he has received a large number of authority links from his peers.

He recently won a conference pass to SMX in Seattle from Aaron Wall, and he frequently comments and writes posts in the Youmoz section of SEOmoz. I can only see him moving up. Congratulations, Patrick, on this cleverly created linkbait!

What would I do with $100-$500 if I had to start over again?  I hope I am allowed to keep my knowledge and experience and at least have the means to support myself for several months.

Give and you shall receive.

I would choose a topic I know a lot about and am passionate about, and invest the money in a domain name and in creating useful content. If I created the content myself, I would pay a professional to make it look better. I would host the content on a hosted blog such as or

After 20 or so posts, I would use them as the source for an ebook to be sold from the website.

To build buzz I would leverage social media sites and I would start helping and offering suggestions to others in popular forums and blogs.  Readership will build up.

Patrick has pretty much done most of this. My only suggestion to him is to find or create a useful product for his audience. If he decides to stick with AdSense, I would definitely move those ads above the fold! Check the AdSense guidelines for better placement.

