Search Engine Optimization (SEO)
What is a search engine and how does it work?
Understanding how search engines work can help your business use SEO to reach potential customers.
What is a search engine?
Search engines allow users to search for content on the Internet using keywords. Although the market is dominated by a few, there are many search engines that people can use. When a user enters a query into a search engine, a search engine results page (SERP) is returned, which ranks the pages found in order of relevance. The way this ranking is performed differs between search engines.
Search engines often change their algorithms (the programs that rank the results) to improve the user experience. Their goal is to understand how users search and give them the best answer to their query. This means prioritizing the most relevant, highest-quality pages.
How do search engines work?
Most search engines work in three key steps:
Crawling: search engines use programs, called spiders, bots, or crawlers, to scan the Internet. They may only visit a site every few days, so your content may appear out of date until they crawl your website again.
Indexing: the search engine tries to understand and categorize the content of a web page using keywords. Following SEO best practices helps the search engine understand your content so it can rank for the right search queries.
Ranking: search results are ranked based on a number of factors, which can include keyword density, page speed, and links. The search engine's goal is to give the user the most relevant result.
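The three steps above can be sketched in miniature. The toy program below is an illustration only, not how any real search engine is implemented: the `pages` dictionary stands in for the live web, indexing builds a simple inverted index of keywords, and ranking scores pages by keyword density (one of the factors mentioned above).

```python
# A toy sketch of the crawl -> index -> rank pipeline described above.
# The "pages" dict stands in for the live web; a real crawler would fetch
# URLs over HTTP and follow links between pages.
import re
from collections import Counter

pages = {
    "example.com/cakes": "Chocolate cake recipes. Easy chocolate cake for beginners.",
    "example.com/bikes": "Mountain bike reviews and bike maintenance tips.",
}

def crawl(seed_pages):
    """'Crawling': visit each known page and download its content."""
    return {url: text for url, text in seed_pages.items()}

def index(downloaded):
    """'Indexing': map each keyword to the set of pages containing it."""
    inverted = {}
    for url, text in downloaded.items():
        for word in set(re.findall(r"[a-z]+", text.lower())):
            inverted.setdefault(word, set()).add(url)
    return inverted

def rank(query, inverted, downloaded):
    """'Ranking': score matching pages by keyword density (one toy factor)."""
    terms = re.findall(r"[a-z]+", query.lower())
    scores = Counter()
    for term in terms:
        for url in inverted.get(term, ()):
            words = re.findall(r"[a-z]+", downloaded[url].lower())
            scores[url] += words.count(term) / len(words)
    return [url for url, _ in scores.most_common()]

downloaded = crawl(pages)
inverted = index(downloaded)
print(rank("chocolate cake", inverted, downloaded))  # the cakes page ranks first
```

A real engine combines hundreds of signals rather than keyword density alone, but the shape of the pipeline is the same.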
Although most search engines offer advice on how to improve your page ranking, their exact algorithms are closely guarded and change frequently to prevent abuse. But by following search engine optimization (SEO) best practices, you can help ensure that:
Search engines can easily crawl your website. You can also ask them to crawl for new content.
Your content is indexed for the right keywords so it can appear in relevant searches.
Your content has a better chance of ranking high in the SERP.
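One common way to ask search engines to crawl new content is to publish a sitemap, an XML file listing your pages and when they last changed. The sketch below generates a minimal sitemap following the sitemaps.org format; the URLs and dates are made-up placeholders.

```python
# A minimal sketch of generating a sitemap.xml file -- a standard way to
# tell search engines which pages exist and when they last changed.
# The URLs and dates below are invented placeholders.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for loc, lastmod in [
    ("https://www.example.com/", "2024-01-15"),
    ("https://www.example.com/blog/new-post", "2024-01-20"),
]:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc        # the page's address
    ET.SubElement(url, "lastmod").text = lastmod  # last modification date

sitemap = ET.tostring(urlset, encoding="unicode")
print(sitemap)
```

The resulting file is typically uploaded to the site root and submitted through each search engine's webmaster tools.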
Directory search engines
Some niche search engines function as directories for specific types of content, meaning they only show results for content that has been added manually; they don't scan the Internet. SEO tactics can still help you rank high for relevant queries within these directory search engines. See the types of search engines below.
Rich Media Search Results
Universal or “combined” search is how search engines present different types of content to users in search results. In addition to the traditional text page results, the SERP will also show multimedia content such as images, videos, maps, articles and shopping pages.
Having different types of content on your website, such as an instructional video on how to use your product or a blog, could impact your chances of appearing on results pages and your ranking.
You can use “structured data” on your website to help search engines understand and display specific types of content. Structured data is code added to your HTML markup. With it, information such as review ratings, images, addresses and phone numbers can be displayed directly on the search engine results page.
Types of search engines
Crawler-based search engines use a “spider” or “crawler” to search the Internet. The crawler visits individual web pages, extracts their keywords, and then adds the pages to the search engine's database. Google and Yahoo are examples of crawler-based search engines.
The advantages of crawler-based engines are:
They contain a large number of pages.
Easy to use.
Familiarity. Most people who search the internet are familiar with Google.
Crawler-based engines have several drawbacks:
They can return too much information.
It's easy to fool the crawler. Websites can contain hidden data that misrepresents what a page is about, so a search result for Descartes could lead you to a porn site.
Page rank can be manipulated. While search engine companies frown on the practice, there are ways to improve a page's position in the results list.
Directories are human-powered search engines. A website is submitted to the directory, and editors must approve it for inclusion. The Open Directory Project and the Internet Public Library are examples of directories.
Directories have advantages of their own:
Each page is reviewed for relevance and content before being included. This means no more surprise porn sites.
Sometimes fewer results mean finding what you need faster.
They also have drawbacks:
Unfamiliar layout and format.
A delay between a website's creation and its inclusion in the directory.
More obscure searches may return poor results.
Hybrids are a combination of crawlers and directories. Sometimes a search lets you choose between searching the web or a directory; other times you may receive both human-generated results and crawler results for the same search. In that case, the human results usually appear first.
Meta search engines query several other search engines at the same time and combine the results into one list. Because meta search engines return more results, relevance and quality can sometimes suffer. Dogpile and Clusty are examples of meta search engines.
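The merging step a meta search engine performs can be sketched very simply. In the toy example below the two "engines" are hard-coded stand-ins for real backends; the merge interleaves their ranked lists round-robin and de-duplicates by URL.

```python
# A toy sketch of what a meta search engine does: fan a query out to several
# engines, then merge the ranked lists into one, de-duplicated by URL.
# The two "engines" here are hard-coded stand-ins for real backends.
def engine_a(query):
    return ["example.com/a", "example.com/shared", "example.com/b"]

def engine_b(query):
    return ["example.com/shared", "example.com/c"]

def meta_search(query, engines):
    """Interleave results round-robin, keeping the first occurrence of a URL."""
    lists = [engine(query) for engine in engines]
    merged, seen = [], set()
    for rank in range(max(len(l) for l in lists)):
        for results in lists:
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append(results[rank])
    return merged

print(meta_search("anything", [engine_a, engine_b]))
```

Real meta search engines use more sophisticated rank-fusion schemes, but the trade-off is the same as described above: more results, with no single engine's relevance model in control.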
Search engines are an extremely powerful way to promote your website online. Think of them as your silent PR firm, quietly working in the background. Numerous studies have shown that between 40% and 80% of users found what they were looking for by using a search engine.
According to Search Engine Watch (http://www.searchenginewatch.com), 625 million searches are performed every day!
The best thing about search engines is that they bring targeted traffic to your website. These visitors are already motivated to buy, because they were searching for what you offer.
With proper website optimization, search engines can consistently deliver your site to their audience.
Crawler-based search engines
Crawler-based search engines use automated software programs to survey and categorize web pages. The programs that search engines use to access your web pages are called “spiders”, “crawlers”, “robots” or “bots”.
A spider finds a web page, downloads it, and analyzes the information presented on it. This process is fully automated. The web page is then added to the search engine's database. When a user performs a search, the search engine checks its database of web pages for the keywords the user searched for and presents a list of link results.
The results (the list of suggested links) are listed in order of how “close” they are, as determined by the bot, to what the user wants to find online.
Crawler-based search engines constantly search the Internet for new web pages and update their information database with these new or changed pages.
Some examples of crawler-based search engines are:
Ask Jeeves (www.ask.com)
A “directory” uses human editors who decide which category a site belongs to; they place websites into specific categories in the directory's database. Human editors review the website thoroughly and classify it, based on the information they find, using a set of predefined rules.
There are two main directories at the time of writing:
Yahoo Directory (www.yahoo.com)
Open Directory (www.dmoz.org)
Note: Since late 2002, Yahoo has been providing search results using crawler-based technology as well as its own directory.
Hybrid search engines
Hybrid search engines use a combination of crawler-based results and directory results. More and more search engines these days are moving towards a hybrid-based model. Examples of hybrid search engines are:
Meta search engines
Meta search engines take the results from all other search engines and combine them into one large list. Examples of meta search engines include:
Specialized search engines
Specialized search engines have been developed to meet the needs of specific areas. There are many specialized search engines, including:
Yahoo Shopping (www.shopping.yahoo.com)
Domain name search
Free parking (www.freeparking.co.nz)
Free software and shareware search
CNET Download.com (www.download.com)