Having people tag images by hand is an onerous task. Shenoy and Tan of Microsoft Research developed a way to tag images automatically by reading people's brain scans while they view images. Participants did not even have to consciously try to tag an image; they merely had to observe it passively.
People naturally group information by topic and remember relationships between important things, like a person and the company where she works. Enabling computers to grasp these same concepts, however, has been the subject of long-standing research. Recently, that research has focused on the Semantic Web, but a European endeavor called the Nepomuk Project will soon bring the effort to the PC in the form of a "semantic desktop."