
Wednesday, March 18, 2009

SEO Expertise

Originally, the first commercial search engines were directories, such as Yahoo! and Galaxy, and as such, the technology of the sites in their index was, by and large, not a real issue, except for aesthetic and site-value considerations. However, with the advent of the major early spider-based search engines, such as Lycos, AltaVista and Inktomi, the ability of "robots" to examine websites became a major consideration. Robots, also known as spiders, are the software some search engines use to examine the content of websites and report their findings back to the search engine's database. Search results are then ranked according to an algorithm that attaches certain priorities to aspects of that database, and orders the sites in the search engine results pages accordingly.

However, there are many aspects of site coding that can present barriers to search engine robots. Many of the robots were written around the time of the first few search engines, and so their handling of HTML is in many ways stuck in the mid-to-late 1990s. Many have difficulty parsing HTML features that webmasters take for granted, such as framesets, embedded tables, image links and maps, and JavaScript/DHTML. Although some robots have evolved well, Googlebot (the robot used by Google) being a prominent example of one that moves with the times, there are just as many that have not really changed in the six years or so that they have been in existence. This means that an SEO company must have a full understanding of how complicated site coding can present barriers to search engine robots, and also of how site coding can present opportunities to improve rankings through simple technical changes to the site.
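As a rough illustration of the kind of barrier involved, the sketch below (in Python, with invented page markup) extracts links the way a very basic robot might: it only follows ordinary href attributes, so a link that works purely through JavaScript never makes it into the crawl queue. This is a simplified assumption for illustration, not how any particular search engine's robot actually behaves.

    from html.parser import HTMLParser

    # Hypothetical page fragment: one plain anchor and one link that only
    # navigates via JavaScript -- a pattern a simple robot cannot follow.
    PAGE = """
    <a href="/products.html">Products</a>
    <a href="#" onclick="window.location='/contact.html'">Contact</a>
    """

    class LinkExtractor(HTMLParser):
        """Collects href values the way a very basic spider might."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href") or ""
                # A naive robot skips fragments and javascript: pseudo-links.
                if href and not href.startswith(("#", "javascript:")):
                    self.links.append(href)

    parser = LinkExtractor()
    parser.feed(PAGE)
    print(parser.links)  # ['/products.html'] -- the JavaScript-only link is invisible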

However, it should be remembered that site coding should not be abused to artificially inflate rankings, as this will be treated as spam by search engines and may cause the site to be penalized or banned by them.

One complex and fairly cunning way of fooling search engines is a technique commonly called cloaking. It involves recognizing site visitors by their user agent (browser or robot name) or by their IP address, which allows you to present pages specifically optimized for particular search engines, meaning that each engine can be targeted independently. In principle this sounds like quite a good idea, but it is clearly open to abuse, and has been banned by many search engines as a result. Google, for example, takes the line that what its spider indexes must be what the users of your site will see. IP and user agent detection has been used in the past to fool robots into thinking they were indexing popular sites such as Hotmail or Microsoft, when in fact they were indexing pornographic sites. This is clearly not in the interests of search engines, and it is easy to see their point of view.
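To make the mechanics concrete, here is a minimal sketch of how a cloaking script might branch on the requesting user agent or IP address. The robot names, IP address and page contents are invented for illustration only; as noted above, search engines treat this practice as spam.

    # Hypothetical lists of robot user agents and crawler IPs -- illustration only.
    ROBOT_AGENTS = ("googlebot", "slurp", "msnbot")
    ROBOT_IPS = {"192.0.2.10"}

    def choose_page(user_agent: str, remote_ip: str) -> str:
        """Serve an 'optimized' page to robots and the normal page to everyone else."""
        agent = user_agent.lower()
        if remote_ip in ROBOT_IPS or any(bot in agent for bot in ROBOT_AGENTS):
            return "<html><body>Keyword-heavy page shown only to robots</body></html>"
        return "<html><body>Normal page shown to human visitors</body></html>"

    # A request identifying itself as Googlebot receives the cloaked page.
    print(choose_page("Mozilla/5.0 (compatible; Googlebot/2.1)", "203.0.113.7"))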


posted by jarabni @ 12:00 AM

