How Search Engines Work
Since this is a new blog about web visibility, I thought I would kick things off with a bit of basic information about the driving force of the SEO industry, the search engine.
The World Wide Web (WWW), or ‘the web’ as it’s more commonly known, consists of gazillions of interconnected computers called servers. Stored on these servers are squillions of web pages and files, in the same way we store files on our own personal computers. The web pages and files for each website are stored in one or more folders (directories) on these servers.
Search engines work by sending out a robot (web crawler), an automated software program that crawls the web collecting information. The robots crawl from web page to web page, and from website to website, by following links. They retrieve the content they crawl (web pages and related files such as images, PDFs and Word documents), which is then added to the search engine database (index). Search engine marketers commonly refer to this as ‘indexing’.
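To make that concrete, here’s a toy sketch of the crawl-and-index loop in Python. The URLs and pages are made up, and the “web” is just an in-memory dictionary standing in for real HTTP fetches, but the mechanics are the same: fetch a page, store it in the index, extract its links, and follow them.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# A tiny in-memory "web": URL -> HTML (hypothetical pages for illustration).
PAGES = {
    "http://example.com/":        '<a href="/about">About</a> <a href="/contact">Contact</a>',
    "http://example.com/about":   '<a href="/">Home</a>',
    "http://example.com/contact": "",
}

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

def crawl(start_url):
    """Breadth-first crawl: fetch a page, index it, then follow its links."""
    index = {}            # our stand-in for the search engine database
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        if url in index or url not in PAGES:
            continue      # already indexed, or off our toy web
        html = PAGES[url] # a real crawler would fetch over HTTP here
        index[url] = html # "indexing": store the retrieved content
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            queue.append(urljoin(url, link))  # resolve relative links
    return index

index = crawl("http://example.com/")
print(sorted(index))  # all three pages, discovered purely by following links
```

Starting from the home page alone, the crawler discovers the other two pages only because links lead to them, which is exactly why linking matters for getting content indexed.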
Each search engine has an algorithm, a set of rules by which it determines the value, validity and relevance of the content it has retrieved.
When a web user types in a search term (keyword) at that search engine, the content the search engine considers most pertinent to the search query, based upon its algorithm, will rank highest in the search engine results pages (SERPs).
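A real ranking algorithm weighs hundreds of signals, but a deliberately crude sketch shows the idea: given an index and a search term, score each page by relevance (here, just how often the term appears, a toy stand-in for a real algorithm) and return the matches highest-scoring first. The index contents below are invented for illustration.

```python
# A hypothetical index: URL -> page text.
index = {
    "http://example.com/widgets": "Widgets for sale. Our widgets are the best widgets.",
    "http://example.com/about":   "About our widget company.",
    "http://example.com/blog":    "Gardening tips and recipes.",
}

def rank(index, query):
    """Return matching URLs ordered by a crude relevance score:
    how many times the query term appears in the page text."""
    term = query.lower()
    scored = [(page.lower().count(term), url) for url, page in index.items()]
    return [url for score, url in sorted(scored, reverse=True) if score > 0]

results = rank(index, "widget")
print(results)  # widgets page first (3 mentions), about page second (1), blog absent
```

The blog page never appears in the results at all, which mirrors the point above: content the engine doesn’t consider relevant to the query simply isn’t shown.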
The search engine robots aren’t geniuses and can only do what they’ve been programmed to do. They’re also not human, and historically have not been able to do many of the things that humans can, e.g. see images, view Flash movies, or interact with web pages that require clicking a button or filling in a form.
A website can be built in ways that can either promote or inhibit crawling by a search engine robot. If a robot can’t navigate a website easily, it can’t access all the content. In turn, this means that the search engine will not index the content. If the content isn’t in the search engine database, there is no way for a web user searching at that search engine to ever see those web pages come up in the search results.
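Here’s a small illustration of that. The two hypothetical HTML snippets below point at the same page, one via a plain link and one via a JavaScript button. A simple robot that only reads `<a href>` tags finds the first and never sees the second, so the page behind the button would go unindexed.

```python
from html.parser import HTMLParser

# Two hypothetical snippets pointing at the same page.
crawlable_html  = '<a href="/products.html">Products</a>'
javascript_html = '<button onclick="window.location=\'/products.html\'">Products</button>'

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, as a simple robot would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

print(extract_links(crawlable_html))   # ['/products.html']
print(extract_links(javascript_html))  # [] -- the robot never "clicks" the button
```

A human visitor reaches the products page either way; the robot only reaches it through the plain link.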
Ideally, to promote crawling and indexing of all of a website’s content, the website should be built to web standards using search engine friendly coding. If that content is also optimized for the search engines by adding target search terms in the appropriate places, then the search engine robots will be able to identify and evaluate the website content and rank it for its target search terms (keywords). This gives your web pages a better chance of being seen by the web user searching for your topic, products or services at that search engine.
Knowing the basics of how search engines work is not only the first step toward good web visibility; it’s also essential to approaching website search engine optimization properly.
Thanks to Jeff Jones Illustration for the artwork :-)