There’s no denying that one of Facebook’s core competencies is image sharing. It’s why the platform is riddled with memes, why image-based ads are everywhere, why videos autoplay every few seconds in our news feeds, why we have display pictures, why we have selfies… the list goes on. Images get a message across quickly, and you’re far more likely to consume an image than a block of text. Knowing the value Facebook places on imagery as a cornerstone of its content, I thought I’d share some insights on the code that governs images within Facebook’s news feed. Here’s what I found:
Every image is analysed to see what’s in it
Every single image on your news feed has been analysed by Facebook. Within each image’s code there is an ‘alt’ attribute that begins with “Image may contain: ”, then lists what is present in the photo. If you have a look at the following image of me and a mate, you can see that within the code it says:
Image may contain: 2 people, sunglasses, beard and hat
Here’s another example that picked up a lot more detail:
Image may contain: 2 people, people smiling, people sitting, sky, outdoor, water and nature.
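If you want to see this for yourself, you can pull the alt text straight out of a page’s markup. Here’s a minimal sketch using Python’s standard `html.parser` module – the markup snippet is made up to mimic what you’d find in the news feed’s HTML:

```python
from html.parser import HTMLParser

class AltTextExtractor(HTMLParser):
    """Collects the alt text of every <img> tag that carries an
    'Image may contain' description."""
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt", "")
            if alt.startswith("Image may contain"):
                self.alts.append(alt)

# Hypothetical markup, mimicking an image element in the feed
snippet = '<img src="photo.jpg" alt="Image may contain: 2 people, sunglasses, beard and hat">'
parser = AltTextExtractor()
parser.feed(snippet)
print(parser.alts[0])  # Image may contain: 2 people, sunglasses, beard and hat
```

On a real page you’d feed the whole document in; the parser fires once per `<img>` tag and skips any image without a recognised description.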
Facebook achieves this by analysing every image the moment you click ‘Upload’. The analysis is performed by a machine learning algorithm – a neural network – that breaks down the photo data, identifies which previously seen images it is visually similar to, and then folds the new image back into the system as another example it can learn from. In essence, this is the core of the ‘learning’ side of machine learning. Similar technology sits behind Google Images’ reverse image search and Facebook’s own photo-tagging feature, and all of these services only get better as more images are uploaded.
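The “looks visually similar to” step can be pictured as comparing feature vectors: each image is reduced to a list of numbers, and the new upload is matched against images whose contents are already known. A toy sketch – the vectors and labels here are entirely made up, and real systems use vectors with thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "features" for two already-labelled photos
known = {
    "beach": [0.9, 0.1, 0.8, 0.2],
    "forest": [0.1, 0.9, 0.2, 0.7],
}
new_photo = [0.85, 0.15, 0.75, 0.25]  # features of the freshly uploaded image

# Label the new photo after whatever it most closely resembles
best_label = max(known, key=lambda label: cosine_similarity(new_photo, known[label]))
print(best_label)  # beach
```

Once labelled, the new photo’s features can be added to `known`, which is the sense in which the system improves with every upload.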
As far as Facebook’s version is concerned, the labels look very high level – the most detailed one I came across was ‘trees’. Each keyword is produced by the machine learning algorithm and assigned a confidence level – i.e. the machine is 91% sure that the image contains a tree. I suspect there is a lot more detail behind these images, but only the high-level, high-confidence labels are surfaced to ensure accuracy.
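That filtering step is easy to picture in code. Here’s a hedged sketch – the 0.90 cut-off and the classifier output are my assumptions, not Facebook’s actual numbers – of turning raw predictions into the alt text you see in the feed:

```python
def build_alt_text(predictions, threshold=0.90):
    """Keep only labels the model is confident about, formatted in the
    'X, Y and Z' style Facebook uses. The 0.90 cut-off is a guess."""
    confident = [label for label, score in predictions if score >= threshold]
    if len(confident) > 1:
        body = ", ".join(confident[:-1]) + " and " + confident[-1]
    else:
        body = "".join(confident)
    return "Image may contain: " + body

# Hypothetical classifier output: (label, confidence) pairs
predictions = [
    ("tree", 0.91),
    ("sky", 0.97),
    ("outdoor", 0.95),
    ("bicycle", 0.42),  # too uncertain, so it gets dropped
]
print(build_alt_text(predictions))  # Image may contain: tree, sky and outdoor
```

A design choice like this trades recall for precision: the machine almost certainly noticed more than it reports, but a wrong label read aloud by a screen reader is worse than a missing one.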
The great thing about the “Image may contain” text is that it complies with web accessibility standards – so vision-impaired people using screen readers can scroll through their feed and know what kind of images are appearing before them. Web accessibility is incredibly important: it ensures that everyone can access content on the internet and that the web stays inclusive.
As a user, if I start getting targeted with a whole heap of ads and content simply because of my image data, I’m going to be a little worried. It doesn’t necessarily have to happen on Facebook’s platform, either – it could come through targeted advertising elsewhere (e.g. if Facebook were to sell this image analysis data). If I start to see things like “We’ve noticed you haven’t been on a beach holiday in the last 2 years! Click here to book one” or “What happened to your favourite sunnies? Replace them here for half price”, I wouldn’t know whether to feel impressed or worried. These are all very tangible options for Facebook to pursue, and they speak to the broader argument about data ownership, data analysis, and the extent to which corporations can mine your data.
I’m pretty on the fence about this one – let me know what you think below.