AI: the robots are coming

Greenwich Design

Will a robot take my job? With AI and machine learning advancements, it’s a fair question for those of us working in the marketing and technology sectors. Type it into Google and you’ll even find a website that provides an answer for you. I’m informed that my line of business is not in jeopardy – graphic designers are ‘totally safe’ apparently, with only an 8% probability of automation, so the creatives amongst you can breathe a sigh of relief.

As the managing director of a design agency, I like to think that advances in technology have improved our daily lives, reducing the tedious grunt work and freeing up designers’ time for creative thinking and exploration. Technology has also allowed us to work faster and smarter. It wasn’t so long ago that if we mocked up a piece of packaging for a client and they didn’t like the colour, it would be a day’s work to re-do it; nowadays it takes seconds. Technology allows us to experiment more and offer clients a broader choice of options that just wouldn’t have been viable if we were doing everything by hand.

Companies are already incorporating AI and machine learning into software – Adobe is leading the way with its AI platform, Sensei – and although many of us are not yet realising the full potential of these tools on a day-to-day basis, on the whole designers feel confident they will enhance the creative experience rather than detract from it. That’s according to a study commissioned by Adobe last year, which also found that “creatives are not worried about these technologies taking over their jobs because creativity, they believe, is profoundly human”.

One of the criticisms of ‘Portrait of Edmond de Belamy’, the painting that was created entirely by AI and sold at Christie’s last year, was that it lacked feeling. The brainchild of the art collective Obvious, the work was created using AI to scan thousands of existing images and then use the data to generate a new painting. The Guardian’s art critic, Jonathan Jones, suggested that up close, it was clear the artwork was a series of dots rather than a skilfully painted masterpiece; the eyes lacked the depth and sense of emotion that a painting by a human artist can instil. Nonetheless, it sold for £337,000, suggesting there’s a demand for AI art. My feeling is that this is purely a novelty, for as much as a computer algorithm can recreate or mimic the style of an artist, it is only filling in the gaps for something that already exists. We’ve yet to see an original ‘masterpiece’ created by a computer.

Earlier this year, Engineered Arts announced they had created the first robot artist, Ai-Da. Fitted with a microchip in her eye and holding a pencil in her robotic hand, Ai-Da uses algorithms to accurately sketch or paint her subjects. What Ai-Da can’t do, however, is bring original thoughts, emotions, experiences – or any of the other personal aspects that add individuality to an artist’s work.

And it’s not just the visual arts that require human input – the same is true for music. The growing popularity of AI music systems like Amper and Jukedeck has given credibility to AI music composition. However, to create a truly amazing piece of music still requires human involvement. Just as the ‘Portrait of Edmond de Belamy’ was created using neural networks that learn by examining existing images, music can be created by analysing thousands of musical scores, allowing AI to compose tunes in a similar style.

That may cut it for elevator music, but can it really make a hit? Talking to BBC Music recently about whether AI could create a number one, American singer-songwriter Taryn Southern explained that while AI-created music is a fantastic starting point for her work, it is the elements that she adds, adapts and changes that make the end result a success. A computer may well be able to analyse every song on the planet, but it can’t recreate the certain je ne sais quoi that icons from Frank Sinatra to Lady Gaga possess. I don’t believe a computer can figure out how to generate that intangible essence – star quality – no matter how many examples we feed into it.

And that brings me back to how AI affects creatives working in the marketing industry. Just like stardom, the elements that go into creating an iconic brand are intangible; the essence of a strong brand is unique, the combination of everything that touches the company, from where it sits in the marketplace to its product or service to its staff. Certainly, you could feed a neural network the most iconic brands in history, and I’m sure it would come up with an algorithm that determines what ingredients make a successful brand – but would it be authentic? I don’t believe it would, because it’s based on learned assumptions rather than living, breathing examples of how a brand instinctively behaves.

Accessible design software made it easy for anyone to create a half-decent logo, and I remember designers being concerned for their livelihoods when sites like Fiverr started to pop up offering logos for next to nothing. Rather than take away jobs, it has actually helped clients appreciate the value of good design and the fact that a brand’s visual identity is about far more than its logo. Understanding the true essence of a brand requires human strategy and analysis around its ethos, values and unique selling points in order to define its creative identity.

As far as I’m concerned, the same applies to AI. While robots may be able to carry out perfectly acceptable levels of creative work based on learned behaviour, there’s no need for humans to hang up our hats yet. Freed from the shackles of repetitive, time-consuming design work, we’ll have more time for experimentation. And perhaps, in a roundabout way, computers will spawn the next Picasso – by allowing artists more freedom to embrace creativity over drudgery.

This article first appeared on Net Imperative.
