Are you looking at your website from the perspective of a search engine spider? If not,
you may be missing out on valuable information that can help your website rank better on
search engines and provide a better experience for users. Let’s look at a few tools to help
SEOs see websites as search engine spiders do, what SEOs should be analyzing, and
how pages can become more spider- and user-friendly.
Search engine spiders (a program or automated
script that visits pages and makes a copy of the
source code for later processing and indexing by the
search engine) don’t always enter a site from the
home/index page. In fact, every page is a possible
entry point. For this reason, it is essential for each page
to link to an HTML or XML sitemap, as this
enables spiders to find and crawl all available pages.
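As a minimal sketch, a site-wide footer link is one simple way to expose a sitemap from every possible entry point. The /sitemap.html path below is an illustrative convention, not a required name:

```html
<!-- Site-wide footer: gives a spider a crawl path no matter which
     page it entered on. The file name is a convention, not a rule. -->
<div id="footer">
  <a href="/sitemap.html">Site Map</a>
</div>
```

An XML sitemap (commonly /sitemap.xml) serves the same purpose for spiders that support it.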
A search engine spider does not “see” your
website the way a Web surfer does. Instead, it
sees only the text and code that make up the page. If
you have looked at source code before, you know
it is virtually indecipherable to the untrained eye.
Thankfully, there are a couple of tools
for viewing source code during SEO analysis. One
method is to use a text-only Web browser like Lynx.
Since the Lynx browser only displays text, it is
helpful in checking the usability of websites on
older browsers. And, most importantly, it does not
display tables, frames, images, or video,
so it loads pages very fast and gives you
that spider perspective. What you find
might be surprising.
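One quick way to get that perspective, assuming Lynx is installed, is to dump a page as plain text from the command line with lynx -dump http://www.example.com. The hypothetical markup below (the animateHero function and file names are made up for illustration) shows what survives a text-only view:

```html
<h1>Acme Widgets</h1>                      <!-- kept: heading text is read -->
<a href="/widgets.html">Browse widgets</a> <!-- kept: anchor text and URL -->
<img src="hero.jpg" alt="Red widget">      <!-- only the alt text survives -->
<script>animateHero();</script>            <!-- ignored: scripts never run -->
<embed src="nav.swf">                      <!-- ignored: Flash is invisible -->
```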
An important factor in search
engines’ algorithms is the order in
which they index text and links —
those within the body of a page are
given more priority. Text and links within
sidebars, footers, and headers may be
given less. Therefore, Google may not
be giving as much weight to site-wide
links in sidebars as it does to a text link
within the body of the page content. As such,
a deep analysis of how your pages
are seen by spiders is a very valuable exercise.
Get Indexed Quickly: It is important to note that before you can see how
search engines view your website, spiders must be
able to find your pages. Instead of manually submitting
your site directly to search engine indices, the best
way to get pages crawled and indexed is to procure
a link from an existing page. Get a direct link from
another website or get listed in a directory.
Let’s take a look at a few things you can do right
now to make sure your pages are properly indexed.
Keep page sizes under 100KB; MSN-Live and
Ask, for example, do not index body text beyond 102KB.
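One common way to trim page weight, sketched below with illustrative file paths: move styling and scripting into external files, so the HTML source a spider downloads stays small while browsers cache the rest.

```html
<!-- Instead of large inline <style> and <script> blocks, reference
     external files; the indexable HTML shrinks and the assets are
     cached across pages. File paths here are examples only. -->
<head>
  <link rel="stylesheet" type="text/css" href="/css/site.css">
  <script type="text/javascript" src="/js/site.js"></script>
</head>
```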
Make sure header tags and content appear near
the top of the page, and that anything you don’t
want indexed does not appear there. SEOs should also use
CSS to feed spiders text, links, and images in the
order they should be crawled. CSS allows for the positioning of text and images
wherever you want on a page. Just because you want
your Flash navigation at the top of the page and
your text link navigation at the bottom doesn’t
mean you must arrange them that way within your HTML
code. You can list your text link navigation (with
keyword-rich text) at the top of your source code
and your image/Flash navigation at the bottom,
then position both with CSS.
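A minimal sketch of that technique, with made-up id names and file paths: the keyword-rich text links come first in the source for spiders, while CSS places the Flash banner at the top of the rendered page for visitors.

```html
<html>
<head>
<style type="text/css">
  body      { margin: 0; padding-top: 90px; }   /* reserve room for the banner */
  #flashnav { position: absolute; top: 0; left: 0;
              width: 100%; height: 90px; }      /* rendered first visually */
  /* #textnav needs no special rules: first in source, below the banner on screen */
</style>
</head>
<body>
  <!-- Spiders encounter this keyword-rich navigation first -->
  <ul id="textnav">
    <li><a href="/red-widgets.html">Red Widgets</a></li>
    <li><a href="/blue-widgets.html">Blue Widgets</a></li>
  </ul>
  <!-- The Flash navigation sits last in the source -->
  <div id="flashnav"><embed src="nav.swf" width="100%" height="90"></div>
</body>
</html>
```

The same trick applies to any block whose on-screen position differs from the order you want spiders to read it in.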
Is this all overkill? Perhaps. But in competitive
keyword arenas where websites and SEOs are
looking for any edge they can get, this may be
what pushes a website from the fringes of page
two on Google to a solid spot on page one. That
in itself is reason enough to think like a search engine spider.
About the Author:
Dante A. Monteverde is a Search Strategist specializing in Search Engine
Optimization. He founded Spider Bait SEO in 1996 and has over 10 years of SEO experience.