Tuesday, July 14, 2009

What is Search Engine Optimization?

SEO, short for search engine optimization, is the process of increasing the number of visitors to a web site by ranking high in the results of a search engine. The higher a web site ranks in the results of a search, the greater the chance that the site will be visited by a user. It is common practice for internet users not to click through pages and pages of search results, so where a site ranks in a search is essential for directing more traffic toward the site.

SEO helps to ensure that a site is accessible to a search engine and improves the chances that the site will be found by the search engine.



HOW WEB SEARCH ENGINES WORK:

Search engines are the key to finding specific information on the vast expanse of the World Wide Web. Without sophisticated search engines, it would be virtually impossible to locate anything on the web without knowing a specific URL. But do you know how search engines work? And do you know what makes some search engines more effective than others?

When people use the term search engine in relation to the web, they are usually referring to the actual search forms that search through databases of HTML documents, initially gathered by a robot.

There are basically three types of search engines: those that are powered by robots (called crawlers, ants, or spiders); those that are powered by human submission; and those that are a hybrid of the two.
Crawler-based search engines are those that use automated software agents (called crawlers) that visit a web site, read the information on the actual site, read the site's meta tags, and also follow the links that the site connects to, performing indexing on all linked web sites as well. The crawler returns all that information to a central depository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed; the frequency with which this happens is determined by the administrators of the search engine.
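To picture what a crawler does, here is a minimal sketch of that visit-read-follow loop in Python. Everything in it (the function names, the page limit, the breadth-first queue) is an illustrative assumption, not how any particular engine actually works; real crawlers also respect robots.txt, throttle their requests, and run at massive scale.

```python
# A minimal, hypothetical sketch of a crawler's fetch-parse-follow loop.
# All names and limits here are illustrative only.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndMetaParser(HTMLParser):
    """Collects <a href> links and <meta> tags from one page."""
    def __init__(self):
        super().__init__()
        self.links, self.meta = [], {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")

def crawl(seed_url, max_pages=10):
    """Visit pages breadth-first, returning {url: (meta, html)} for indexing."""
    queue, seen, depository = [seed_url], set(), {}
    while queue and len(depository) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page; skip it
        parser = LinkAndMetaParser()
        parser.feed(html)
        depository[url] = (parser.meta, html)  # hand off to the indexer
        queue.extend(urljoin(url, link) for link in parser.links)
    return depository
```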

Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.





In both cases, when you query a search engine to locate information, you're actually searching through the index that the search engine has created; you are not actually searching the web. These indices are giant databases of information that is collected, stored, and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links.
Since the search results are based on the index, if the index hasn't been updated since a web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated.
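To make that concrete, here is a toy version of the kind of inverted index an engine searches instead of the live web. The tiny document set and function name are made up for illustration; a dead page keeps showing up in results simply because its entry is still in the stored index.

```python
# A toy inverted index: queries search this stored structure, not the live web.
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of URLs whose stored copy contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

pages = {
    "http://example.com/a": "search engines index the web",
    "http://example.com/b": "a dead page stays in the index until recrawled",
}
index = build_index(pages)
print(index["index"])  # both URLs, even if one page has since gone dead
```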

So why will the same search on different search engines produce different results? Part of the answer is that not all indices are going to be exactly the same; it depends on what the spiders find or what the humans submitted. But more important, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for.

One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming more sophisticated in its attempts to discourage what is known as keyword stuffing, or spamdexing.
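As an illustration, here is one naive way frequency and location could feed a relevance score: count how often the keyword appears, and weight matches in the title more heavily than matches in the body. Every weight below is an invented assumption, not any real engine's formula, and it is exactly this kind of crude counting that keyword stuffing tries to game.

```python
# A deliberately naive relevance scorer: count keyword occurrences and give
# extra weight to matches in the title. The weights are invented.
def relevance(keyword, title, body, title_weight=5.0):
    keyword = keyword.lower()
    score = body.lower().split().count(keyword)          # frequency in body
    score += title_weight * title.lower().split().count(keyword)  # location bonus
    return score

print(relevance("seo", "SEO basics", "seo helps a site rank. seo matters"))  # 7.0
```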

Another common element that algorithms analyze is the way that pages link to other pages on the web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming sophisticated enough to ignore keyword stuffing, it is also becoming more savvy to web masters who build artificial links into their sites in order to build an artificial ranking.
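The best-known example of this link-analysis idea is Google's PageRank. The sketch below is a stripped-down power-iteration version run on a made-up three-page link graph; the damping factor 0.85 is the textbook default, and nothing here reflects any engine's production ranking code.

```python
# A stripped-down PageRank-style importance score computed by power
# iteration over a tiny, made-up link graph.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page shares its rank equally among the pages it links to.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}  # who links to whom
print(pagerank(links))  # "a" scores highest: it has the most inbound links
```

This is also why artificial link-building works on naive rankers: every extra inbound link pumps rank into a page, which is exactly the behavior modern engines try to detect and discount.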



DID YOU KNOW......

The first tool for searching the internet, created in 1990, was called "Archie". It downloaded directory listings of all files located on public anonymous FTP servers, creating a searchable database of filenames. A year later, "Gopher" was created; it indexed plain text documents. "Veronica" and "Jughead" came along to search Gopher's index systems. The first actual Web search engine was developed by Matthew Gray in 1993 and was called "Wandex".
