This week, Twitter users began posting photos of themselves tagged with an odd assortment of labels, many of them deeply troubling. Some people were labeled as debtors or rape suspects, and some not just as Black but as ‘negro’, among other slurs.
The app behind this trend is called ImageNet Roulette. Created by researcher Kate Crawford and artist Trevor Paglen, it is intended to highlight the hazards of feeding flawed data into artificial intelligence. The app draws on ImageNet, a database of 14 million images built to train deep learning systems, the technology behind everything from self-driving cars to facial recognition.
The algorithm behind the Roulette tool is trained on images from ImageNet, which sorts people into 2,395 categories, ranging from ‘Uzbek’ to ‘slattern’. The point of the tool is to demonstrate just how inaccurate and biased an AI trained on such data can be.
ImageNet has its roots in the mid-1980s, in a Princeton project called WordNet, which organized words hierarchically by meaning. ImageNet's developers set out to build the same kind of hierarchy for images, reasoning that it would be a powerful tool for teaching AI to identify and categorize objects.
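To make the idea of a WordNet-style hierarchy concrete, here is a minimal sketch in Python. The terms and parent links below are hypothetical examples, not real WordNet or ImageNet data; real WordNet links word senses to broader "hypernym" senses in much the same parent-pointer fashion.

```python
# Hypothetical WordNet-style hierarchy: each term maps to its hypernym
# (its broader parent category). These entries are illustrative only.
HYPERNYMS = {
    "beagle": "dog",
    "dog": "canine",
    "canine": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
}

def hypernym_chain(term):
    """Walk from a term up through ever-broader categories to the root."""
    chain = [term]
    while term in HYPERNYMS:
        term = HYPERNYMS[term]
        chain.append(term)
    return chain

print(" -> ".join(hypernym_chain("beagle")))
# beagle -> dog -> canine -> carnivore -> mammal -> animal
```

ImageNet's insight was to attach photographs to nodes in a tree like this, so a model could learn that an image of a beagle is also an image of a dog, a mammal, and an animal. The trouble arises when the tree's categories for people encode judgments and slurs rather than neutral descriptions.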