Archive for May, 2007

LongTailMiner v0.1 alpha – find long tail keywords nobody thought about

I’m really enjoying this blogging thing! Every comment I get from my readers sparks a new idea that I rush to put into practice.

My reader, Andrea, mentioned she parses log files to mine for keywords as well. That is an excellent idea.

I decided to put that idea into code and here is a new tool to mine for long tail keywords.

To make really good use of it, I would set up a PPC campaign in Google with a “head keyword” in broad match, bidding at the minimum possible. Make sure your ads maintain a good click-through rate (over 0.5%) to avoid getting disabled. Run it for a week or two (preferably more) and you will have a good number of search referrals and “long tail keywords” that people are actually looking for. You can later create good content pages that include those keywords. In most cases, long tail keywords are really easy to rank for with on-page optimization alone.

I will probably write a Youmoz entry with more detailed instructions on how to take advantage of this. That way I can get more people to try it and gather really valuable feedback.

Here is the Python code:

#!/usr/bin/python

# LongTailMiner.py v0.1 alpha by Hamlet Batista 2007

import re
from urlparse import urlparse
from cgi import parse_qsl

# match successful (2xx) requests whose referrer is a major search engine
p = r'[^"]+"GET\s([^\s]+)[^"]+"\s2[^"]+"([^"]+(?:google|yahoo|msn|ask)[^"]+)"'

log = open('tripscan.actual_log')
lines = log.readlines()

keywords = set()

for line in lines:
    m = re.search(p, line)
    if m:
        (internal, link) = m.groups()
        elements = urlparse(link)
        if elements[4]:  # check to see if there is a query string
            params = parse_qsl(elements[4])  # break the query string into (keyword, value) pairs
            for (k, v) in params:
                if k == 'p' or k == 'q':  # top search engines use p or q for the keywords
                    keywords.add(elements[1] + " - " + v)

# print the report
for k in keywords:
    print k

Here is the output:

search.sympatico.msn.ca – best places to vacation in april
http://www.ask.com – help find a cheap vacation package anywhere
http://www.ask.com – new york vacation package deals
search.yahoo.com – vegas vacation packages
search.yahoo.com – what is the best beaches to stay in jamaica
search.yahoo.com – outrageous hawaii vacation packages
http://www.google.se – “paris in 5 days” versailles
search.sympatico.msn.ca – Vacation Package Deals
search.yahoo.com – vacationpackage
search.yahoo.com – vacation packages
search.msn.com – 10 best places for vacation
in.search.yahoo.com – vacation package
search.yahoo.com – best places to vacation in june/july
search.sympatico.msn.ca – best travel deals for june
search.msn.com – last minute caribean deals
search.yahoo.com – package vacation
http://www.google.com – Tripscan
search.sympatico.msn.ca – best places to vacation in June
search.sympatico.msn.ca – best places to travel in october
search.yahoo.com – vacation package
search.msn.com – caribean vacation
search.msn.com – Best Caribean vacation
http://www.ask.com – Cheap Vacation Package
search.sympatico.msn.ca – CANYON RANCH IN LENNOX
search.sympatico.msn.ca – find vacation packages
ca.search.yahoo.com – Hawaii all inclusive Vacation Packages
search.yahoo.com – california vacation ideas
search.yahoo.com – vacaton package
ie.search.msn.com – caribean vacation
search.yahoo.com – all inclusive package deals from New York to Cancun
search.yahoo.com – best places to explore
search.msn.com – caribean vacation island packages
search.yahoo.com – vancation package
search.yahoo.com – puerto vallarta nude resorts
http://www.ask.com – all inclusive vacation places
search.yahoo.com – vacation packge
search.yahoo.com – vacation package
http://www.ask.com – caribean deals
search.msn.com – best hotels in caribean
search.msn.com – the best caribean vacation
http://www.google.com – related:www.exectourtravel.com/
ca.search.yahoo.com – vacation packages

This is just scratching the surface. One improvement we can make is to identify the landing pages the keywords lead to, so we can make sure visitors are finding what they want.
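Here is a minimal sketch of that improvement, reusing the regex and log file from the script above. It keeps the internal page the pattern already captures (which LongTailMiner discards) and groups the keywords by the page they landed on; the script name and report format are just my own illustration.

#!/usr/bin/python

# LandingPageMiner.py - a sketch extending LongTailMiner to group keywords
# by landing page (hypothetical name; same log format as above)

import re
from urlparse import urlparse
from cgi import parse_qsl

p = r'[^"]+"GET\s([^\s]+)[^"]+"\s2[^"]+"([^"]+(?:google|yahoo|msn|ask)[^"]+)"'

landing_pages = {}  # internal page -> set of keywords that led visitors to it

for line in open('tripscan.actual_log'):
    m = re.search(p, line)
    if not m:
        continue
    (internal, link) = m.groups()
    elements = urlparse(link)
    if not elements[4]:  # no query string means no keywords to mine
        continue
    for (k, v) in parse_qsl(elements[4]):
        if k == 'p' or k == 'q':
            landing_pages.setdefault(internal, set()).add(v)

# print the keywords grouped by the landing page they lead to
for page in landing_pages.keys():
    print page
    for keyword in landing_pages[page]:
        print "\t" + keyword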

Usage

In order to use the script, you need to download Python from http://www.python.org. The script should run on Unix/Linux, Mac, and Windows, but I have only tested it on Linux.

1. Copy your log file to the directory where you saved the script.

2. Change the name of the log file (inside the quotes) in the line log = open('tripscan.actual_log') to the name of your log file.

3. On the command line, type python LongTailMiner.py and you should see the report.

LinkingHood v0.1 alpha

As I promised one of my readers, here is the first version of the code to mine log files for linking-relationship information.

I named it LinkingHood as the intention is to take link juice from the link-rich pages and give it to the link-poor ones.

I wrote it in Python for clarity (I love Python 🙂). I had been working on a more advanced approach involving matrices and linear algebra, but some of the feedback on that article gave birth to a new idea: to make it easier to explain, I decided to use a simpler approach. This code is primarily an illustration; it does everything in memory and is extremely inefficient in its current form. To scale to sites with 10,000 or more pages, it would definitely need to be rewritten to use matrices and linear-algebra operations. (More about that in a later post.)

I simply used a dictionary of sets: the keys are the internal pages, and the sets hold the links pointing to those pages. I tested it with my tripscan.com log file and included the results of a test run.

Here is the script:

#!/usr/bin/python

# LinkingHood v0.1 alpha by Hamlet Batista 2007

import re

relationships = {}

# match successful (2xx) requests; capture the internal page and the referrer
p = r'[^"]+"GET\s([^\s]+)[^"]+"\s2[^"]+"([^"]+)"'

log = open('tripscan.actual_log')
lines = log.readlines()

for line in lines:
    m = re.search(p, line)
    if m:
        (internal_page, external_link) = m.groups()
        # skip static files and query-string requests
        if re.search(r'\.css|\.js|\.gif|\.jpg|\.swf|\?', internal_page):
            continue
        if internal_page not in relationships:
            relationships[internal_page] = set()
        # skip search engine referrers; we only want linking pages
        if re.search(r'yahoo|google|msn|live|ask', external_link):
            continue
        relationships[internal_page].add(external_link)

print "Tripscan internal pages:"
for page in relationships.keys():
    print "\t" + page + ": " + str(len(relationships[page])) + " links"

home = relationships['/']
about = relationships['/aboutus.html']

print 'Home has ' + str(len(home)) + ' links'
for link in home:
    print '\t' + link

print 'About has ' + str(len(about)) + ' links'
for link in about:
    print '\t' + link

Here are the results from the run:

Tripscan internal pages:
/orlando.php: 2 links
/directory/money_and_finance.html: 3 links
/contact.php: 2 links
/favicon.ico: 3 links
/lasvegas.php: 2 links
/directory/services.html: 2 links
/index.php: 2 links
/directory/travel.html: 1 links
/charleston.php: 2 links
/sunburst.php: 2 links
/cancun.php: 2 links
/blank.php: 5 links
/london.php: 2 links
/discount_travel.php: 2 links
/santodomingo.php: 2 links
/directory/internet.html: 2 links
/phoenix.php: 2 links
/: 41 links
/paris.php: 2 links
/sanfrancisco.php: 2 links
/directory/drugs_and_pharmacy.html: 2 links
/honolulu.php: 2 links
/chicago.php: 2 links
/directory/general.html: 1 links
/directory/fun.html: 2 links
/sitemap.php: 2 links
/hiltongrand.php: 2 links
//: 1 links
/directory/travel2.html: 2 links
/directory/home_business.html: 1 links
/losangeles.php: 2 links
/directory/misc.html: 1 links
/jamaica.php: 2 links
/aruba.php: 2 links
/best_spa.php: 2 links
/amsterdam.php: 2 links
/puertovallarta.php: 3 links
/barcelona.php: 2 links
/newyork.php: 2 links
/submit_link.php: 2 links
/11thhour.php: 2 links
/directory/services2.html: 2 links
/neworleans.php: 2 links
/toronto.php: 2 links
/rome.php: 2 links
/directory/: 2 links
/aboutus.html: 4 links
/directory/other_resources.html: 2 links
/top_ten.php: 2 links

Home has 41 links
http://www.directorypanel.com/detail/link-3571.html
http://www.campwalden.ca/web/travel14.htm
http://chiangmai.discount-thailand-hotel.net/chmresources/travel_resources-page17.php
http://hamletbatista.com/2007/05/29/mining-you-server-log-files/
http://www.the-happy-side.com/link_description.php?cat_id=1
http://www.popularaffiliate.com/travel.html
http://www.kingbloom.com
http://energytable.com/links/shopping.html
http://hamletbatista.com/page/2/
http://www.garyknight.com/links/vacations6.html
http://www.nicepakistan.com/directory/index.php?c=14
http://www.linkdirectory.com/Travel___Vacation/Destinations/
http://www.realestateingrandrapids.com/links/recreation.html
http://whois.domaintools.com/tripscan.com

http://www.abccoachhire.co.uk
http://www.1americamall.com/index.php?c=22&s=201
http://www.littlemarketstreet.com/links/travel3.html
http://www.uddsprinting.com/travellinks.html
http://uddsprinting.com/travellinks.html
http://www.whois.sc/tripscan.com
http://www.cheap-air-travel-fares.info/resources9.html
http://www.siteinclusion.com/directory?logic=or&maximum=&term=mexico+vacation+central&sr=20&pp=20&cp=2
http://www.tripscan.com
http://www.goodsearch.com/Search.aspx?Keywords=vacation+packages&page=4
http://www.vts.net/links/travel3.html
http://hamletbatista.com/
http://www.search-the-world.com/search/search.php/search::cat/category::25/page::42/hpp::20/
http://linkcentre.com/search/?keyword=travel&page=4&flag=
http://www.patclarkconversions.com/links/travel2.html
http://www.goodsearch.com/Search.aspx?Keywords=www.tripscan.com&Source=mozillaplugin
http://hamletbatista.com/tag/link-building/
http://www.datingshare.com/sharelinks/travel.html
http://www.tripscan.com/directory/
http://www.webdigity.com/ws/
http://www.tripscan.com/
http://www.ottosuch.de/
http://www1.tripscan.com/hotel-deals/10015639-hotrate.html
http://www.link-exchange.ws/link-exchange/index.php?action=displaycat&catid=27&page=12&perpage=15&page=13&perpage=15
http://www.bargaintraveleurope.com/Travel_Links.htm
http://www.weboart.com/links/recreation-sports-travel.html
About has 4 links
http://www.tripscan.com
http://res99.lmdeals.com/config.html?in_origination_key=371&in_pd_key=329&SRC=10015639&SRC_AID=none&in_package_key=5034225&in_offering_key=1578846&in_slipclick=main_result&SRC=10015639&SRC_AID=none

http://www.tripscan.com/

One of the most common stumbling blocks for people unfamiliar with Python is indentation. This code cannot simply be copied, pasted into a text file, and passed to Python to run; you need to make sure the indentation (spacing) is right. I will post the code somewhere else and provide a link if this causes too much trouble.

Some readers got lost when I talked about matrices in the previous post. Linking relationships and similarly connected structures are conceptually and graphically represented as graphs. A graph is an interconnected structure that has nodes and edges; in our case, the links are the edges and the pages are the nodes. One of the most common ways to express a graph is with a matrix. Similar to an Excel sheet, it has rows and columns, where a cell can be used to indicate that there is a relationship between the page in its row and the page in its column.

Matrices are great for this because one can use matrix operations to solve problems that would otherwise require a lot of memory and computing power. To create the matrix, we number each unique page and each unique link, using the rows to represent the pages and the columns to represent the links. A 1 in a given position means there is a link between the two pages; a 0 means there is no relationship. Using numbers for the rows and columns, and ones and zeros for the values, saves a lot of memory and makes the computation much more efficient. In the code I use the pages and links directly for clarity.
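To make the idea concrete, here is a toy sketch with made-up pages and linking sites rather than the real tripscan data; rows are internal pages, columns are external linking pages, and a 1 marks a link:

# A toy adjacency matrix with hypothetical data.
# Rows represent internal pages, columns represent external linking pages;
# a 1 means the linking page links to the internal page, a 0 means it does not.

pages = {0: '/', 1: '/aboutus.html'}  # internal page ids (made up)
links = {0: 'siteA.com', 1: 'siteB.com', 2: 'siteC.com'}  # linking page ids (made up)

matrix = [
    [1, 1, 1],  # '/' is linked from all three sites
    [1, 0, 0],  # '/aboutus.html' is linked only from siteA.com
]

# the incoming link count for a page is simply the sum of its row
for i in pages.keys():
    print pages[i] + ": " + str(sum(matrix[i])) + " links"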

I hope this is not too confusing.

Update: I made a small change to include the incoming link count for each page.

In order to use the script, download Python from http://www.python.org. The script should run on Unix/Linux, Mac, and Windows, but I have only tested it on Linux.

1. Copy your log file to the directory where the script was saved.

2. Change the name of the log file (inside the quotes) in the line log = open('tripscan.actual_log') to the name of your log file.

3. On the command line, type python LinkingHood.py and you should see the report.

Mining your server log files

While top website analytics packages offer pretty much anything you might need to find actionable data to improve your site, there are situations where we need to dig deeper to identify vital information.

One such situation came to light in a post by randfish of Seomoz.org. He writes about a problem with most enterprise-size websites: they have many pages with no or very few incoming links, and far fewer pages that get a lot of incoming links. He later discusses some approaches to alleviate the problem, suggesting primarily linking to link-poor pages from link-rich ones manually, or restructuring the website. I commented that this is a practical situation where one would want to use automation.

Log files are a goldmine of information about your website: links, clicks, search terms, errors, etc. In this case, they can be of great use in identifying the pages that are getting a lot of links and the ones that are getting very few. We can later use this information to link from the rich to the poor, by manual or automated means.

Here is a brief explanation of how this can be done.

Here is an actual log entry from my site tripscan.com in the extended log format: 64.246.161.30 - - [29/May/2007:13:12:26 -0400] "GET /favicon.ico HTTP/1.1" 206 1406 "http://www.whois.sc/tripscan.com" "SurveyBot/2.3 (Whois Source)" "-"

First we need to parse the entries with a regex to extract the internal page (between GET and HTTP) and the linking page, which appears after the server status code and the page size; in this case, after 206 and 1406.
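To illustrate, here is a quick sketch that applies a simplified variant of the scripts' regex to the sample entry above; the four captured groups are the internal page, the status code, the page size, and the linking page:

#!/usr/bin/python

# Apply the parsing idea to the sample log entry above

import re

entry = '64.246.161.30 - - [29/May/2007:13:12:26 -0400] "GET /favicon.ico HTTP/1.1" 206 1406 "http://www.whois.sc/tripscan.com" "SurveyBot/2.3 (Whois Source)" "-"'

# capture: internal page, status code, page size, and the linking (referring) page
m = re.search(r'"GET\s([^\s]+)[^"]+"\s(\d+)\s(\d+)\s"([^"]+)"', entry)

print m.groups()
# ('/favicon.ico', '206', '1406', 'http://www.whois.sc/tripscan.com')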

We then create two maps: one from each internal page to a page id, and another from each external linking page to a page id. After that we can create a matrix that records the linking relationships between the pages. For example, matrix[23][15] = 1 means there is a link from external page id 15 to internal page id 23. This matrix is commonly known in information retrieval as the adjacency matrix or hyperlink matrix. We would want an implementation that can preferably operate from disk, in order to scale to millions of link relationships.

Later we can walk the matrix and create reports identifying the link-rich pages (those with many link relationships) and the link-poor pages (those with few). We can define the threshold wherever it makes sense (e.g. pages with more or fewer than 10 incoming links).
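Here is a minimal in-memory sketch of that pipeline, assuming the same log file and regex as LinkingHood above; the threshold of 10 is just the example figure mentioned, and a disk-based sparse matrix implementation would replace the plain dictionary at real scale.

#!/usr/bin/python

# A sketch of the full pipeline: page/link id maps, a sparse adjacency matrix,
# and a link-rich/link-poor report. Everything is in memory; a disk-based
# sparse matrix would be needed to scale to millions of link relationships.

import re

p = r'[^"]+"GET\s([^\s]+)[^"]+"\s2[^"]+"([^"]+)"'

page_ids = {}  # internal page -> page id
link_ids = {}  # external linking page -> link id
matrix = {}    # sparse adjacency matrix: (page id, link id) -> 1

for line in open('tripscan.actual_log'):
    m = re.search(p, line)
    if not m:
        continue
    (internal, external) = m.groups()
    if internal not in page_ids:
        page_ids[internal] = len(page_ids)
    if external not in link_ids:
        link_ids[external] = len(link_ids)
    matrix[(page_ids[internal], link_ids[external])] = 1

# walk the matrix and count the incoming links for each internal page
counts = {}
for (page_id, link_id) in matrix.keys():
    counts[page_id] = counts.get(page_id, 0) + 1

threshold = 10  # the example threshold mentioned above
for page in page_ids.keys():
    n = counts.get(page_ids[page], 0)
    label = "link-rich" if n >= threshold else "link-poor"
    print page + ": " + str(n) + " links (" + label + ")"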

Why is it good to mix your incoming link anchor text?

I’ve been reading John Chow’s blog for a while and it is very interesting how he is getting a lot of reviews with the anchor text “make money online” in exchange for a link from his blog. He is ranking #2 in Google for the phrase “make money online.”

I know a lot of SEOs read John’s blog and are not alerting him to some potential problems with this approach. I like the guy and I think he deserves to know.

It is not a good idea to have most of your incoming links carry the same anchor text, especially if most of those links point to the home page while the rest of the pages get few or no links. Search engines, notably Google, flag this as an attempt to manipulate their results.

Nobody knows for sure how it works, but Google has proven in the past that they can detect this and act accordingly.

My advice is to request variations of the target phrase for the anchor text with each batch. For example: make money online free, making money online, make money at home online, work from home, etc. Use a keyword suggestion tool to get the variations and make sure you include synonyms too.

I would also require reviewers to include a link to their favorite post in the review. This way the rest of the pages will get links too, and the link profile will look more natural.

This is documented on other sites. Please check:

http://www.marketingpilgrim.com/2007/01/google-defuses-googlebombs-does-this-change-link-building-practices.html

http://www.linkbuildingblog.com/2007/04/how_not_to_buil.html

http://diagnostics.googlerankings.com/anchor-text-link.html (Case #2)

http://www.webmasterworld.com/forum30/29269.htm

http://www.seobook.com/archives/000894.shtml

Your competitor is your best friend

As I mentioned earlier, for me success is about the what, the how, and the work.  This is my simple formula.

Anywhere my customers or potential customers express their problems and frustrations is a place for me to dig out opportunities: forums, blogs, mailing lists, newsgroups, etc.  Your what should be driven by your customers’ needs.

Most critical for success is how we do it.  What sets us apart?  What is our UVP (unique value proposition)?  This is where following your best competitors closely pays off.

Nobody is perfect.  There is always a better way to do things or at least to appeal to another audience.

My approach is not to simply copy what my competitors are doing.  This is the easiest path, but it is very difficult to stand out by just being another XYZ.

I prefer to look at my competitors’ solutions as their prescribed answers to customers’ specific problems.  The key here is that what needs solving is the customer’s problem, and there is rarely a single solution.  My solution is how I would solve it better, leveraging my strengths.

The harder the link is to get, the more valuable it is

Links that are too easy to get do not help much in getting traffic or authority for search engine rankings.

If your link is placed on a page where there are several hundred links competing for attention, it is less likely that potential visitors will click than if the page only has a few dozen links.

The value of your link source is in direct relation to how selective that source is when placing links on the page and how much traffic the source gets.  The value also declines with the number of links on the page.

Google is understood to use algorithms to measure the importance and quality of each page.  PageRank, invented by Google’s founders, measures the absolute importance of a page.  The TrustRank algorithm describes a technique for identifying trustworthy, quality pages.  We cannot tell for sure to what extent Google uses this algorithm, if at all, or at least the publicly known version of it.  What we can say, based on observation, is that Google definitely does not treat all links equally and does not pass authority to your page from all of your link sources.

Success on a $100 budget?

Patrick Saxon from Seoish.com has asked top names in the SEO industry a very useful question.  What most have missed is that Patrick has actually answered the question himself by writing the article.

First, he created a very useful piece of content, and second, he has received a large number of authority links from his peers.

He recently won a conference pass to SMX in Seattle from Aaron Wall, and he frequently comments and writes posts in the Youmoz section of Seomoz.org.  I can only see him moving up.  Congratulations, Patrick, on this cleverly created linkbait!

What would I do with $100-$500 if I had to start over again?  I would hope to keep my knowledge and experience and at least have the means to support myself for several months.

Give and you shall receive.

I would choose a topic I know a lot about and am passionate about, and invest the money in a domain name and in creating useful content.  If I created the content myself, I would pay a professional to make it look better.  I would host the content on a hosted blog such as wordpress.com or blogger.com.

After 20 or so posts, I would use them as the source for an ebook to be sold from the website.

To build buzz, I would leverage social media sites and start helping and offering suggestions to others in popular forums and blogs.  Readership will build up.

Patrick has pretty much done most of this.  My only suggestion to him is to find or create a useful product for his audience.  If he decides to stick with AdSense, I would definitely move those ads above the fold!  Check the AdSense guidelines for better placement.