A lot of people are starting to use social bookmarking tools as a means of SEO (increasing traffic to your website)…tagging your own blog posts gets you double the exposure…
Particletree points this out with an experiment that used del.icio.us to bookmark webpages instead of waiting for the site to be crawled by Google…read the post for the results.
Here is more from this post:
“I think the reason del.icio.us is so successful at bringing the appropriate audience to good material is because they track the changing web by using people to calculate what is essentially page rank. They get access to decent fuzzy logic for a fraction of the cost and the democracy of the system allows anyone to get their idea of what deserves face-time into the system almost immediately.”
The differences for SEO:
- Crawling vs. pinging
Search engines like Google take longer to index new content; Google Sitemaps is their solution to overcome this issue.
Pinging services enable the World Live Web (or the Changing Web): you can keep track of new additions to the web as they happen.
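To make the pinging idea concrete, here's a minimal sketch in Python of the standard `weblogUpdates.ping` XML-RPC call that pinging services accept (the blog name and URL are made-up examples). It only builds the request body rather than sending it over the network:

```python
import xmlrpc.client

def build_ping_request(blog_name, blog_url):
    """Build the XML-RPC body for a weblogUpdates.ping call.

    Pinging services use this call to learn about new content immediately,
    instead of waiting for a crawler to rediscover the page.
    """
    return xmlrpc.client.dumps((blog_name, blog_url),
                               methodname="weblogUpdates.ping")

body = build_ping_request("My Example Blog", "http://example.com/")
# The body is a <methodCall> XML document naming the ping method and the blog.
```

To actually notify a service you would POST this body to its XML-RPC endpoint; most blog platforms do this automatically on publish.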
- Social bookmarking and blog categories vs. PageRank
The web community chooses the importance of a page, not an algorithm
…and once a page has been tagged, it is visible and shared around, and can be re-tagged by many people applying many tags (according to their way of seeing the world, so to speak - hopefully a popular and/or accurate tag), increasing its exposure and maybe making it visible on the front page (most popular tags).
Users track other users’ accounts (called an inbox in del.icio.us) on a daily basis because they like the stuff those users bookmark, or they track a general tag itself (also at the user level) to keep up with the latest content bookmarked under that tag.
Now if you bookmark a page for SEO reasons, someone tracking the tag you used will come across your page; they might then tag your page with another tag, and someone tracking that tag will see it, while at the same time people tracking someone’s whole account, or a tag within an account, will see it as a new inclusion.
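That propagation path can be sketched as a toy model (every name and subscription below is invented for illustration): users subscribe either to a tag or to another user's whole account, and a single bookmark is "seen" by everyone whose subscription it matches:

```python
# Toy model of how one bookmark spreads through tag and account subscriptions.
# All names and subscriptions are invented for illustration.

tag_watchers = {          # people tracking a tag
    "seo": {"alice"},
    "folksonomy": {"bob", "carol"},
}
account_watchers = {      # people tracking a user's whole account (an "inbox")
    "dave": {"erin"},
}

def audience(owner, tags):
    """Return everyone who sees a bookmark posted by `owner` with `tags`."""
    seen = set(account_watchers.get(owner, set()))
    for tag in tags:
        seen |= tag_watchers.get(tag, set())
    seen.discard(owner)   # the owner doesn't count as audience
    return seen

# dave bookmarks his own page for SEO and tags it "seo":
first_wave = audience("dave", ["seo"])       # alice via the tag, erin via the inbox
# carol re-tags it as "folksonomy", widening the audience further:
second_wave = audience("carol", ["folksonomy"])  # bob sees it via the new tag
```

Each re-tagging adds a fresh set of watchers, which is exactly the compounding exposure described above.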
So people will come across your page without even looking for it (serendipity)…as folksonomies are largely about discovery (see the end of this post).
In contrast, with Google you will come across the page only if you are searching for something specific.
Although people do set up Google Alerts, or repeat daily searches with their favourite search terms…so they may come across your page this way too, but again: will your page be ranked highly enough before people stop clicking through the results?
Two engines that use tagging instead of PageRank are Technorati Tags (covering the blogosphere) and Gataga (covering the folkosphere – or whatever it’s called?…I guess the tagosphere would be both of these combined).
NOTE: see my post on Gataga searching in many fields and searching in just the tag field…according to my trials and observations.
These tools cover a portion of the web that is savvy about current awareness, so it’s a good place to be if you want traffic to your site.
- Tags vs. free-text search
What subject tags try to achieve is to filter through the noise in search results (often called the “gray web”).
Website authors tried this themselves in the earlier days, but it failed because of spam issues; now it’s different, as users are tagging the pages (multiple heads are better than one when defining the aboutness of something)…see more here.
In Google you see results according to your search terms coupled with the algorithm (that decides on the ranking of results).
So there may be more relevant results than you think, but you can’t see them because they may be hit 2,620 or hit 100,265…you’re just not going to scroll through that many results (so this part of the open web is not invisible, it’s just hidden)…see more here.
So your only solution for bringing all the relevant hits to the top is to improve your search terms: make them more precise, use Boolean operators, etc…
In social bookmarking tools, pages are found according to a tag or subject term, as opposed to a free keyword search – or both, e.g. Zniff (even though this covers just one bookmarking tool).
…soon these social bookmarking tools will have lots and lots of results per tag (will the same problem emerge even at the subject level, let alone the free-text level?).
At the moment this is alleviated by searching for more specific tags or combining tags.
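Combining tags to narrow results is essentially set intersection. A minimal sketch, with made-up pages and tags:

```python
# Sketch of narrowing results by combining tags: intersect the set of
# pages carrying each tag. All URLs and tags are made-up examples.

pages_by_tag = {
    "seo":       {"a.example/post1", "b.example/tips", "c.example/rant"},
    "tagging":   {"a.example/post1", "c.example/rant"},
    "delicious": {"a.example/post1"},
}

def search_tags(*tags):
    """Pages bookmarked with ALL of the given tags."""
    sets = [pages_by_tag.get(t, set()) for t in tags]
    return set.intersection(*sets) if sets else set()

broad = search_tags("seo")                          # three hits
narrow = search_tags("seo", "tagging", "delicious")  # one very specific hit
```

Each extra tag can only shrink the result set, which is why combining tags restores precision as the per-tag result lists grow.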
Also, social bookmarking tools have a popular-tags home page, which increases visibility.
So, the more your webpage/post is seen, the better its chances of being blogged about, increasing your traffic again.
Social bookmarking tools don’t index the whole web, only the pages people choose to tag (it’s a selective version of the web).
But this isn’t a problem for SEO, as you can just bookmark your own page, and away it goes…
Of course there is way more to SEO, but this is just about one aspect of SEO in relation to traditional search compared to social bookmarks.
This post digresses into ideas of the semantic web, where blogs and webpages mark up their content with more fields, so to speak, e.g. date, author, subject, review, job listing…this way anyone can aggregate the content and share it; structured blogging is a foray into semantic blogging.
In the end, is Google (PageRank) suited to the lay person’s searching needs, or would a subject-fielded search be more appropriate?
I guess the argument is about the accuracy of indexing the aboutness of a page…if the page is not indexed in the way the searcher expects, they may think it doesn’t exist; that’s why free-text search is the safe option.
So the problem is: we are getting too many results
…we need more context for better precision
…but the question is: who defines the context?
(tag/subject name and bookmarking the right items within this tag)
…I guess this is now a combination of the user (social bookmarks) and the author (blogs)
…another question is of controlled vocabularies
(that’s out the window in a non-domain-specific, multi-discipline environment with millions of contributors - the content is already uncontrollable, let alone trying to control the labelling of it all…labelling brings context, which is what we want, but we can’t control the labels…well, we have no choice, and who says controlled labels would help people search in context better than a user-defined/free-tagging system?).
…as mentioned, the user may have more of a chance of finding something with free-text than by jumping from tag to tag trying to locate it (although when you find the right tag, you will hopefully find lots of quality items, compared to free-text, where you are competing with a lot of noise).
But then again, maybe free-text is for finding and tags are for discovery…maybe they shouldn’t be compared as one or the other, as they are slightly different tools.
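The finding-vs-discovery contrast shows up even in a tiny sketch: free-text matches any page whose text contains the word (more recall, more noise), while a tag lookup returns only what someone deliberately labelled (the corpus below is invented):

```python
# Toy contrast between free-text search and tag lookup.
# The corpus is invented for illustration.

corpus = {
    "p1": {"text": "a python tutorial for beginners", "tags": {"python", "tutorial"}},
    "p2": {"text": "my pet python escaped again",     "tags": {"pets"}},
    "p3": {"text": "notes on snakes in mythology",    "tags": {"python"}},
}

def free_text(query):
    """Pages whose text contains the query string (high recall, some noise)."""
    return {pid for pid, page in corpus.items() if query in page["text"]}

def by_tag(tag):
    """Pages a human chose to label with the tag (curated, may surprise you)."""
    return {pid for pid, page in corpus.items() if tag in page["tags"]}

text_hits = free_text("python")  # p1 and p2: the pet page is noise
tag_hits = by_tag("python")      # p1 and p3: only what taggers chose to label
```

Neither result set contains the other, which is the point: one tool retrieves, the other surfaces a tagger's judgement.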
Do we need to educate users on search terms and syntax techniques?
…or should we define webpages by user-defined tags?
(well this is happening anyway)
But then if someone searches a tag (or tags) and doesn’t find what they are looking for, they may quit, when the item they were looking for was sitting under another tag.
The search experience needs to be intuitive.
I think our current answer is to use a bit of both.
Whether the tags become part of the ranking (Zniff), or the results page shows the tags applied to each hit (see the comment on this post).
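One simple way tags could feed into ranking, as suggested above, is to add a bookmark-count boost to a plain text-match score. A hypothetical sketch (the scoring formula, the 0.5 weight, and the pages are my own invented illustration, not Zniff’s actual algorithm):

```python
# Hypothetical ranking sketch: combine a crude text-match score with the
# number of times a page has been bookmarked under the query tag.
# The 0.5 weight per bookmark is an arbitrary illustrative choice.

pages = {
    "a.example": {"text": "seo tips and seo tricks", "bookmarks": {"seo": 4}},
    "b.example": {"text": "seo basics",              "bookmarks": {}},
}

def score(page, term):
    text_hits = page["text"].split().count(term)        # naive term frequency
    tag_boost = 0.5 * page["bookmarks"].get(term, 0)    # social signal
    return text_hits + tag_boost

ranked = sorted(pages, key=lambda p: score(pages[p], "seo"), reverse=True)
# a.example outranks b.example: more mentions plus the tag boost
```

The same idea works in the other direction too: show the raw tags next to each hit and let the searcher judge the social signal themselves.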