By David Jenkin
Learning social thinking
From the progress being made in building AI through deep learning on social media platforms, it’s tempting to announce that the age of the sentient machine is here. There has been a flurry of activity in this sector in recent months, including Facebook launching its own AI research lab, and Google, LinkedIn and Pinterest each acquiring AI specialist firms in order to enhance their services.
Brennan White, the CEO and co-founder of Cortex, a Boston-based company that develops AI for social media marketing, says: “The data available on social media is a great starting point for training computer models. Social data formed the basis of our original models.”
He explains that AI is already being applied to content and strategy, acting as a specialised team member. “We use AI to both design and maintain optimal content deployment models for brands, agencies and publishers, and to determine what content should be created for each social channel.” He says that we can expect to see much more of this in time.
Bad influences
Microsoft went a step further than purely practical applications with Tay, a chatbot that learned from and interacted with Twitter users for the purpose of entertainment. Unfortunately, Tay turned into a bit of an embarrassment after she started spouting racist and anti-Semitic hate speech.
Peter Lee, corporate vice president of Microsoft Research, said in a statement following Tay’s deactivation, “It’s through increased interaction where we expected to learn more and for the AI to get better and better. The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay.”
After reading some of Tay’s tweets, especially those inciting violence, one might share Elon Musk’s concerns when he warned that AI could be more dangerous than nuclear weapons.
Sarah Perez writes for TechCrunch, “While technology is neither good nor evil, engineers have a responsibility to make sure it’s not designed in a way that will reflect back the worst of humanity. For something like Tay, you can’t skip the part about teaching a bot what ‘not’ to say.”
Where to from here?
Musk’s concerns about AI centre on the possibility that we could be creating a technology that will ultimately grow beyond our control. “Hope we’re not just the biological boot loader for digital super-intelligence. Unfortunately, that is increasingly probable,” he tweeted in August 2014.
White, on the other hand, was dismissive of Musk’s remarks. He explained that, firstly, most AI companies aren’t working with human-like intelligence (AGI) but rather specialised intelligences, and secondly, that “computers have been smarter than humans in certain and specific ways of thinking for years”.
Even for the distant future, White isn’t worried. “The idea that HAL* won't open the pod-bay doors is intriguing and frightening and ultimately possible, but we will have other AI systems watching HAL for crazy decisions just like we have error checks in our current tech.” He added that although he respected Microsoft’s open approach to AI in not restricting Tay with too many rules, more attention should have been given to social and ethical parameters.
Musk, however, is not willing to leave it all in the hands of others. In December 2015 he co-founded OpenAI, a non-profit artificial intelligence research company with the stated goal of “advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
*HAL 9000 was the fictional artificial intelligence in Arthur C. Clarke's Space Odyssey series which famously turned on its human colleagues.
How do you think social AI will change our lives? Let us know in the comments below.