Opinion / What machine vision means for creativity

  • Karl Marsden
If you’re still using Facebook after the Cambridge Analytica scandal, then you will have recently been asked to review its facial recognition policy. This isn’t because Facebook is introducing facial recognition to its platform – it’s been using the technology for years to suggest tags – it’s just been in a wee bit of trouble for doing so without anyone’s permission.
Facial recognition is made possible by machine vision, the artificially intelligent systems that allow computers to see and analyse visual information, much as humans do. It’s the same technology that powers image and object recognition. These processes can happen independently and have become much more reliable in recent years – which is why you no longer get tag suggestions such as ‘Is this John Smith?’ on photos of your sister.
When I say they’ve ‘become much more reliable’, what I actually mean is that computers are now better at recognising things than humans are. Baidu’s Minwa supercomputer surpassed human performance at recognising different breeds of dog back in 2015 (its error rate was 4.58%, compared to our 5%). And Alipay’s Smile to Pay technology, which uses facial recognition to process hands-free payments, was deemed dependable enough to have its first trial run at KFC last year – no number of wigs, decoy customers or layers of makeup could fool it.
This increased reliability has the potential to do wonders for efficiency. Just as Facebook could automatically tag people in photos, companies could automate quality control in factories or – as KFC did with its Smile to Pay trial – streamline the in-store experience.
It can also boost business success. We spoke to Anastasia Leng, founder and CEO of startup Picasso Labs, which uses machine vision and artificial intelligence to analyse images and videos being used by brands to determine which visuals work best. ‘We’ve done social, we do display advertising, we’ve done web, we can do email, we’ve even talked to some companies about doing print,’ said Leng. ‘If it’s an image or piece of content, and there’s some way of measuring its performance, then our system will work pretty much anywhere.’
Each of Picasso Labs’ clients gets access to their own Google Analytics-style dashboard, which not only shows them what images they’re using and which ones are performing best, but also gives tips based on patterns in the brand’s image data. A fashion brand, for example, might be told that images featuring shoes on their own, without people, get 20% more engagement than images of people wearing shoes.
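To make the idea concrete, here is a toy sketch of the kind of pattern-mining such a dashboard might perform: group a brand’s images by their visual attributes and compare average engagement per attribute. Everything here – the data, the field names, the tags – is invented for illustration; it is not Picasso Labs’ actual system or API.

```python
# Illustrative sketch only: averaging engagement per image attribute,
# the simplest version of the pattern-surfacing described above.
# All records and tag names below are hypothetical.
from collections import defaultdict


def average_engagement_by_tag(images):
    """Return the mean engagement score for each tag across all images."""
    totals = defaultdict(lambda: [0.0, 0])  # tag -> [sum, count]
    for img in images:
        for tag in img["tags"]:
            totals[tag][0] += img["engagement"]
            totals[tag][1] += 1
    return {tag: total / count for tag, (total, count) in totals.items()}


# Toy data for a fashion brand: shoes alone vs. shoes worn by a person
images = [
    {"tags": ["shoes"], "engagement": 120},
    {"tags": ["shoes"], "engagement": 132},
    {"tags": ["shoes", "person"], "engagement": 105},
    {"tags": ["shoes", "person"], "engagement": 95},
]

print(average_engagement_by_tag(images))
```

A real system would, of course, derive the tags automatically with machine vision and work over far larger datasets, but the underlying report – ‘images with attribute X outperform images with attribute Y’ – has this basic shape.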
Leng isn’t trying to replace creativity with data; she’s trying to streamline the creative process. Picasso Labs, and other machine vision-based technologies, can actually make creative ideas more effective. ‘The right creative director will take these insights and essentially put them on steroids to create a creative execution that will really resonate,’ Leng said. ‘The best creative directors that we’ve worked with will also use this data to help push their ideas through.’ So, instead of having to alter an idea when anyone and everyone decides they don’t quite like it, creatives can say: ‘You might not like it, but it really doesn’t matter because we’ve got data that says the users do.’
Disney has also been working to use data to better its creative output. The media and entertainment company worked with scientists at Caltech to build a system that used infrared cameras to analyse the expressions of people watching its films (in a cinema-cum-focus group). The data gathered could then be used to rejig the content to perform better and, with just 10 minutes’ worth of data, the system could accurately predict how an audience member would react to upcoming scenes.
This kind of in-depth consumer insight is invaluable. Synaps Labs is using similar technology to upgrade out-of-home advertising with supercharged targeting, and Microsoft is already working with brands to incorporate machine vision technology into their products. ‘We’ve been working with large auto manufacturers who are thinking about putting driver-facing cameras in their cars,’ said Andy Hickl, Microsoft’s principal group program manager for cognitive services. ‘They’re interested in really creating an empathetic user experience, so if I get in the car and I’m sweaty and out of breath, the car could change the climate control to cool me off. Or, if I’m getting in the car and I’m out of control or angry, it could look at ways to augment my driving and keep me safe. And that means using computer vision to understand the driver’s facial geometry.’
The benefits of implementing machine vision tech are staggering, but they also come with important privacy considerations. In her research, which used Facebook profile pictures to determine personality traits, Cristina Segalin, postdoctoral scholar at Caltech, found that machine vision can give very specific insights into people’s lives. ‘When it comes to personal data there are still a lot of issues,’ she said. ‘Machine vision technology might infer traits or characteristics about a person that they weren’t aiming to share – so you might be sharing more information than you think.’
This is something that brands must keep in mind. Even with GDPR just around the corner, asking people to re-opt in to handing over their data isn’t enough. If the data being gathered isn’t treated respectfully, or there isn’t enough transparency around the insights being generated, trust will eventually disappear completely. I declined Facebook’s request to use facial recognition on my account, and I expect I wasn’t alone. Machine vision has the potential to strengthen and streamline creativity, so don’t follow Zuckerberg’s example or you might end up screwing yourself out of a major competitive advantage.
If you're interested in knowing more about how businesses and agencies can make use of machine vision technology, look out for the in-depth feature in Contagious 55 (out in June).