Cloaking is a black hat SEO technique in which the content presented to the search engine spider differs from the content presented to the user.
How does this happen?
The content is delivered based on the IP address or the User-Agent HTTP header. When a visitor is identified as a search engine spider, the server-side script delivers a different version of the webpage. This content is not present on the visible page.
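The server-side logic can be sketched roughly as follows. This is a minimal illustration only, not a real implementation: the bot signatures and the page bodies are assumptions made for the example.

```python
# Minimal sketch of server-side User-Agent cloaking (illustrative only;
# the bot-signature list and page contents are assumptions for the example).
BOT_SIGNATURES = ("googlebot", "bingbot", "slurp")

def select_page(user_agent: str) -> str:
    """Return a crawler-targeted page if the User-Agent looks like a
    search engine spider, otherwise the normal page shown to humans."""
    ua = user_agent.lower()
    if any(sig in ua for sig in BOT_SIGNATURES):
        return "keyword-stuffed page served to crawlers"
    return "normal page served to human visitors"

print(select_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(select_page("Mozilla/5.0 (Windows NT 10.0) Firefox/45.0"))
```

The key point is that the same URL returns two different documents depending on who asks, which is exactly what search engine guidelines prohibit.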
Why do people use cloaking?
They want to deceive the search engines in order to boost a website's rankings for certain keywords.
What are the different forms of cloaking? (source: http://info.webtoolhub.com/kb-a24-what-is-cloaking-in-seo-types-of-cloaking-methods.aspx)
IP address Cloaking – presents different content based on the visitor's IP address. e.g. Search engines with certain IP addresses will be shown one version of a web page, while all other IP addresses will be shown another version.
User-Agent Cloaking – different versions of a website are displayed based on the User-Agent header. e.g. Search engines and users with different web browsers are served different content for the same web page.
HTTP_REFERER Header Cloaking – if a user is coming from a certain website (e.g. clicking a link from search results or another site), they will be presented with a different version of the website based on the HTTP_REFERER header value.
HTTP Accept-Language Header Cloaking – may be used to show different versions of a website based on the user's browser language, without offering them any option to choose the language themselves.
JavaScript Cloaking – users with JavaScript enabled browsers are shown one version while users with JavaScript turned off (like search engines) are shown another version of a website.
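The header-based forms above all follow the same pattern: inspect a request header and branch on its value. A hedged sketch of how HTTP_REFERER and Accept-Language cloaking might be combined (the header values, variant names, and matching rules here are assumptions for illustration):

```python
# Illustrative sketch of header-based cloaking: dispatch on the Referer
# and Accept-Language request headers. Variant strings are placeholders.
def select_variant(headers: dict) -> str:
    """Pick a page variant from request headers (illustration only)."""
    referer = headers.get("Referer", "")
    lang = headers.get("Accept-Language", "en")
    # HTTP_REFERER cloaking: visitors arriving from Google search results
    # get a different page than direct visitors.
    if "google." in referer:
        return "variant for visitors arriving from Google search results"
    # Accept-Language cloaking: force a language with no user choice.
    if lang.split(",")[0].strip().lower().startswith("de"):
        return "German-language variant"
    return "default variant"

print(select_variant({"Referer": "https://www.google.com/search?q=example"}))
print(select_variant({"Accept-Language": "de-DE,de;q=0.9"}))
print(select_variant({}))
```

JavaScript cloaking works the other way around: the branch runs in the browser rather than on the server, so clients that do not execute JavaScript (historically, most crawlers) never see the redirected or rewritten content.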
Matt Cutts announced that Google is going to investigate more cloaking issues in the first quarter of 2011. So cloakers, beware…