Cognitive Bias Captcha


While we don't necessarily have a complete explanation for why we humans have cognitive biases, it is undeniable that we have them. One interpretation is that they are a bunch of shortcuts or heuristics left over from the evolution of a reptilian brain into our own. Whatever the reason, their prevalence in humans makes them a difference from machines that can be exploited to tell the two apart.

Requirement for generating a CAPTCHA: a trapdoor function. It must be easy to generate the challenge but hard to recover the answer from the generated artefact. Here we map some object to a cloud, or some similarly fuzzy image. Many different objects could plausibly produce similar-looking clouds, so mapping a cloud back to the specific image it came from is much harder.

The Test:

Given an irregular shape, like a cloud, and asked to resolve it into a specific object from a set of options, test takers are bound to show significant variance. The testee is asked, "Which of these does this shape most resemble?" and given a set of options. The tester has a specific option in mind, although there is no objectively wrong answer. The tester assumes the testee is human and tries to subtly guide them to the expected answer by exploiting cognitive biases.

Which cognitive biases to use?

Pareidolia: the tendency to incorrectly perceive a stimulus as an object, pattern or meaning known to the observer, such as seeing shapes in clouds. Applied as a projective test, this gives us the Rorschach test. The idea is that, left to their own devices, people's pareidolia manifests as identifying objects based on their individual experiences. There is existing literature on pareidolia in machines.

Anchoring effect and availability bias: individuals depend too heavily on an initial piece of information (they anchor on it) and tend to use that information in making the decision. This can be used to guide pareidolia.

Salience bias: humans are more likely to remember or focus on striking objects than duller ones.
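The test above boils down to a small data structure: a cloud image, a set of option labels, and the tester's expected answer. A minimal sketch, with illustrative field names (nothing here is prescribed by the design):

```python
from dataclasses import dataclass

@dataclass
class CloudCaptcha:
    # Path or id of the cloud-styled image shown to the testee (hypothetical field)
    cloud_image: str
    # All option labels shown to the testee
    options: list
    # The option the tester subtly guides the testee towards; since no answer
    # is objectively wrong, this is the *expected* answer, not the only sane one
    expected: str

    def check(self, answer: str) -> bool:
        """A human steered by the embedded cues should usually pick `expected`."""
        return answer == self.expected

challenge = CloudCaptcha("clouds/jellyfish_042.png",
                         ["jellyfish", "tree", "car", "dog"],
                         expected="jellyfish")
```

The interesting part of the scheme is not this structure but how `expected` gets communicated subconsciously, which is what the implementation steps below address.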

Implementation:

Step 1: Cloud-image pairs

Approach 1: Use the neural networks currently used to convert pictures into art of different styles (arbitrary neural style transfer). Train it such that, given any image, the output is a 'cloud'-styled image of that content. A proof-of-concept implementation of this works, but is very slow.

[Figure: content image, style image, output on my system, and output from an online implementation.]
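At the core of style transfer is a style loss built from Gram matrices of feature maps: the optimiser pushes the output image's feature correlations towards those of the style (cloud) image. A minimal pure-Python sketch of that computation, using tiny synthetic feature maps rather than real CNN activations:

```python
# Sketch of the Gram-matrix style loss used in neural style transfer.
# `features` stands in for a CNN feature map: C channels, each flattened
# to a list of N activations. Real implementations do this on GPU tensors.

def gram_matrix(features):
    """Return the C x C matrix G[i][j] = sum_k features[i][k] * features[j][k]."""
    C = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(C)]
            for i in range(C)]

def style_loss(feats_out, feats_style):
    """Sum of squared differences between the two Gram matrices."""
    g1, g2 = gram_matrix(feats_out), gram_matrix(feats_style)
    return sum((g1[i][j] - g2[i][j]) ** 2
               for i in range(len(g1)) for j in range(len(g1)))

# Identical feature statistics give zero style loss: the output already
# "looks like" the cloud style as far as this loss is concerned.
cloud_style_feats = [[1.0, 0.0], [0.0, 1.0]]
output_feats      = [[1.0, 0.0], [0.0, 1.0]]
```

This is only the style half of the objective; the full transfer also keeps a content loss against the source image, which is why the output remains recognisable as that object at all.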

Approach 2: Existing repositories of cloud images exist. A semi-labelled (and fairly messy) dataset can be found on Reddit, and there are cloud repos built for various other purposes. The problem, of course, is that the dataset for this would need to be labelled by hand. But suppose we have a labelled dataset of image-cloud or name-cloud pairs. No ML is needed at serving time, since generating a test is just picking a pair from the list to show each user. Useful as a proof of concept, but at large scale, manually creating datasets is not really workable.

Additionally, once that is done, can a model be trained to take an image and find the cloud most similar to it? As with a trapdoor function, this direction must be easy (quick) while the reverse remains hard.
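Serving a challenge from such a hand-labelled dataset is then just sampling: pick a labelled cloud, pad its label with random distractor names, shuffle. A sketch, assuming a hypothetical name-to-cloud-images mapping (the dataset contents below are invented):

```python
import random

# Hypothetical hand-labelled dataset: object name -> cloud images that were
# generated from (or strongly resemble) that object.
DATASET = {
    "jellyfish": ["clouds/jf_01.png", "clouds/jf_02.png"],
    "tree":      ["clouds/tree_01.png"],
    "car":       ["clouds/car_01.png"],
    "dog":       ["clouds/dog_01.png"],
    "castle":    ["clouds/castle_01.png"],
}

def make_challenge(n_options=4, rng=random):
    """Pick one labelled cloud and mix its label with random distractors."""
    answer = rng.choice(sorted(DATASET))
    cloud = rng.choice(DATASET[answer])
    distractors = rng.sample([k for k in DATASET if k != answer], n_options - 1)
    options = distractors + [answer]
    rng.shuffle(options)
    return cloud, options, answer

cloud, options, answer = make_challenge(rng=random.Random(0))
```

Note that nothing here is hard to reverse on its own; the one-way property has to come from the cloud images themselves, not from this selection step.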

Step 2: Using subconscious cues to guide the user

The guiding is done by showing the original image alongside pictures of all the other options, by showing a video, or by using visual cues:

1) Show the user an image or a video containing many different objects, with the relevant object in bright colours, in a prominent position, or appearing many times. This object is noticed via the salience effect, registered subconsciously via the anchoring effect, and comes to bear through pareidolia when the user tries to identify the cloud.

2) Use word or sound associations to make the user think of the answer (rhymes with the answer, colour cues, similar-sounding words, or context clues).

3) Show the user the original image as well as images for the other options, where the original fits the cloud much better. This approach is obviously the easiest, but it is also easy to circumvent by any classification algorithm, or simply by reversing the style and content images in the arbitrary style transfer, since the algorithm is effectively open source.

4) Use just the names and hope that, statistically, if one 'correct' answer and three random distractors are shown, only the correct option will fit the image, and pure pareidolia will be enough to steer the user to an answer. This feels like wishful thinking, but could work with a large enough set to pull from.
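Option 1 above amounts to constructing a priming scene in which the target object is over-represented and highlighted relative to filler objects. A toy sketch of that construction (the object names and the boolean "salient" flag are illustrative, not part of any real renderer):

```python
import random

def build_priming_scene(target, fillers, target_copies=3, rng=random):
    """Return a shuffled list of (object, salient) placements for a priming image.

    The target appears `target_copies` times and is flagged salient (bright
    colour / prominent position in the rendered scene); each filler appears
    once, unhighlighted. Repetition and salience anchor the target so that
    pareidolia later pulls the user towards it when naming the cloud.
    """
    scene = [(target, True) for _ in range(target_copies)]
    scene += [(obj, False) for obj in fillers]
    rng.shuffle(scene)
    return scene

scene = build_priming_scene("jellyfish", ["tree", "car", "dog"],
                            rng=random.Random(1))
```

How strongly this biases real users is an empirical question; the sketch only shows that generating the cue side of the test is cheap compared to generating the clouds.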

Potential Vulnerabilities:

Image recognition: object recognition will obviously classify the given image as a cloud, but other labels will also appear in its output. Check whether, in the general case, the label with the second-highest probability is likely to be the original image.

Image similarity: given the options, can an image search let an attacker identify the right option via a similarity score against the others? For example, when tried here, the output compared to the original jellyfish image gave a much closer similarity score than the output compared to an actual cloud. This can be combatted by not giving images for the options, or by tweaking the test, but the possibility of similar attacks remains, though they are likely to be computationally expensive.
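The similarity attack can be sketched with something as crude as an average hash: downscale each image to greyscale, threshold pixels against the mean, and pick the option whose hash is closest (by Hamming distance) to the cloud's. The 4x4 "images" below are synthetic stand-ins for real downscaled pictures, chosen so the cloud keeps the rough bright/dark layout of the jellyfish:

```python
def average_hash(img):
    """img: 2D list of greyscale values. Bit i is 1 iff pixel i > image mean."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def best_match(cloud, option_images):
    """Attacker: return the option whose image hash is closest to the cloud's."""
    target = average_hash(cloud)
    return min(option_images,
               key=lambda name: hamming(target, average_hash(option_images[name])))

# Synthetic stand-ins: the cloud shares the jellyfish's bright/dark layout,
# while the tree image is roughly its inverse.
cloud     = [[200, 210, 40, 30], [190, 220, 35, 45],
             [50, 40, 180, 200], [45, 55, 210, 190]]
jellyfish = [[255, 240, 10, 20], [250, 230, 15, 5],
             [20, 10, 240, 255], [5, 15, 230, 250]]
tree      = [[10, 20, 240, 250], [15, 5, 230, 255],
             [255, 240, 20, 10], [250, 230, 5, 15]]
option_images = {"jellyfish": jellyfish, "tree": tree}
```

A style-transferred cloud that preserves the content layout this well is exactly what makes the attack cheap, which argues for not showing option images at all.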

