Accenture Fjord: By 2020 human-machine interaction will outnumber human-to-human conversation, and emotional intelligence is the future of AI

Compiled from: Emotional intelligence is the future of artificial intelligence: Fjord

http://www.zdnet.com/article/emotional-intelligence-is-the-future-of-artificial-intelligence-fjord/

Injecting human-like emotional capability into artificial intelligence (AI) will be the frontier in 2017 and beyond, Accenture Fjord said recently.

Bronwyn van der Merwe, who leads the firm's Australia and New Zealand business, believes the most successful AI systems will be those with a high degree of emotional intelligence, able to interact much as people do.

The concept of AI is not new, but van der Merwe says that in 2017 emotional intelligence will become the driving force behind the next generation of the technology, because people are drawn to human-machine interaction that feels like human-to-human conversation.

Building on the first generation of AI technology, emotional intelligence will improve AI's ability to grasp emotional input, while continually learning from that information to respond appropriately, in real time, the way a human would, van der Merwe explained.
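To make that idea concrete, below is a minimal, purely illustrative sketch of a chat agent that estimates the emotional tone of each message and adapts its reply as the conversation goes on. The names (EmpatheticBot, LEXICON) and the keyword-based scoring are assumptions for illustration only; nothing here describes how Fjord, Amazon, Microsoft, or Google actually build their systems, which would rely on trained emotion models rather than a hand-made word list.

```python
# Toy sketch of "understand emotional input, adapt, respond in real time".
# EmpatheticBot and LEXICON are hypothetical names used only for this example.

from dataclasses import dataclass, field

# Tiny illustrative sentiment lexicon: word -> weight (positive or negative).
LEXICON = {
    "love": 2, "great": 2, "thanks": 1, "good": 1,
    "bad": -1, "slow": -1, "hate": -2, "broken": -2, "angry": -2,
}


@dataclass
class EmpatheticBot:
    # Running record of the user's tone, so replies adapt over the conversation.
    history: list = field(default_factory=list)

    def score(self, message: str) -> int:
        """Rough sentiment estimate: sum the lexicon weights of the words present."""
        return sum(LEXICON.get(word.strip(".,!?").lower(), 0) for word in message.split())

    def reply(self, message: str) -> str:
        """Pick a reply style based on the average tone of the conversation so far."""
        self.history.append(self.score(message))
        mood = sum(self.history) / len(self.history)
        if mood < -0.5:
            return "I'm sorry this has been frustrating. Let me sort it out right away."
        if mood > 0.5:
            return "Glad to hear it! Is there anything else I can help with?"
        return "Thanks for the details. Here's what I can do next."


if __name__ == "__main__":
    bot = EmpatheticBot()
    for msg in ["My order arrived broken and I'm angry.", "Still waiting, this is really slow."]:
        print(bot.reply(msg))
```

The point of the sketch is the feedback loop: each reply depends not just on the latest message but on the accumulated tone of the exchange, which is the "learning and adapting in real time" the article refers to.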

Currently, 52 percent of consumers worldwide use AI-powered chat services or mobile apps each month, according to Accenture Fjord's report, and 62 percent of them say they are comfortable with an AI assistant responding to their queries.

With consumer interest in and demand for AI expected to keep growing rapidly, van der Merwe predicts that emotional intelligence will be the key differentiator separating great AI products from merely good ones. In her view, by 2020 human-machine interactions will outnumber human-to-human conversations, making emotional intelligence even more important.

Van der Merwe noted that Amazon, Microsoft, and Google are all hiring comedians and scriptwriters to give their AI products human-like personality and emotional capability.

With user engagement all but guaranteed once AI technology is deployed, companies must place a high priority on transparency and trust, telling users when they are talking to a machine and taking care not to blur that line, van der Merwe pointed out.

"Right now, I'm recommending that our clients experiment with this and use data to validate the responses," she said.

"My instinct is that it's better to be completely transparent, because that is how you build trust. If transparency isn't at the core of a solution, it risks creating a media storm and doing serious damage to the brand."

Building an AI that handles human emotion does not guarantee success, however. Microsoft's chat bot Tay, designed for 18- to 24-year-olds, went off the rails within just 16 hours of launch, spouting abusive, foul-mouthed remarks that touched on racism, pornography, and Nazism, full of discrimination, hatred, and prejudice.

Tay was developed by a team that included comedians, using public data and editorial content, and was also able to learn from human conversation in order to deliver personalised replies.

"Tay was a huge embarrassment for Microsoft, so it was shut down very quickly," van der Merwe said. "There are many ethical and moral questions surrounding AI."

Meanwhile, the debate over intelligent machines taking people's jobs goes on, but van der Merwe is certain that AI will not completely replace humans.

"We humans have contextual awareness and empathy, which AI currently struggles to match to any great degree. But we believe the companies that succeed in the future will be those that can build these human emotional capabilities into their AI technology," she asserted.

"Companies need to harness the strengths of both AI and humans and connect the two seamlessly; only then can they dramatically improve the user experience."

Still, van der Merwe advises that before moving into the new territory of emotional intelligence, companies think long and hard about AI's impact on society, jobs, and the environment.

 

Emotional intelligence is the future of artificial intelligence: Fjord

Those injecting human-like emotional capability into artificial intelligence will emerge as the front-runners in 2017 and beyond, according to Fjord.

By Asha McLean | February 20, 2017 — 22:31 GMT (06:31 GMT+08:00) | Topic: Innovation

The most successful artificial intelligence (AI) systems will be those comprising an emotional intelligence almost indistinguishable from human-to-human interaction, according to Bronwyn van der Merwe, group director at Fjord Australia and New Zealand — Accenture Interactive’s design and innovation arm.

While the concept of AI is not new, in 2017 van der Merwe expects emotional intelligence to emerge as the driving force behind what she called the next generation in AI, as humans will be drawn to human-like interaction.

Speaking with ZDNet, van der Merwe explained that building on the first phase of AI technology, emotional intelligence enhances AI’s ability to understand emotional input, and continually adapt to and learn from information to provide human-like responses in real time.

Currently, 52 percent of consumers globally interact via AI-powered live chats or mobile apps on a monthly basis, Fjord reported, with 62 percent claiming that they are comfortable with an AI-powered assistant responding to their query.

With consumer appetite for AI expected to continue to grow at a rapid pace, van der Merwe predicts emotional intelligence will be the critical differentiator separating the great from the good in AI products, especially given that by 2020 she expects the average person to have more conversations with chat bots than with human staff.

“People are probably going to be more drawn into engaging with chat bots and AI that has personality,” she said. “We’re seeing this already … it’s a companion and it’s something people can engage with.”

Van der Merwe explained that Amazon, Microsoft, and Google are hiring comedians and script writers in a bid to harness the human-like aspect of AI by building personality into their technologies.

With audience engagement somewhat guaranteed out of necessity when it comes to employing AI technology, van der Merwe said companies will have to focus very heavily on transparency and trust, telling customers when they start speaking with a machine and being careful not to blur the lines.

“Right now, my recommendation to our clients is that we need to experiment with this … and we need to get data to validate our response,” she said.

“My intuition is that it’s better to be completely transparent so that you are building the trust, because I think if you build solutions that don’t have transparency at their core, you risk unintended consequences that could create a media storm and a backlash of a brand.”

An AI capable of human emotion is not a guaranteed win, however, with van der Merwe pointing to the public relations nightmare that was Microsoft’s Tay.

Microsoft announced in March last year that it was testing a new chat bot, Tay.ai, that was aimed primarily at 18- to 24-year-olds in the US. After a brief 16-hour Twitter rampage, Microsoft suspended Tay for spouting inflammatory and racist opinions.

Tay was designed by the tech giant to use a combination of public data and editorial developed by staff, including comedians. But, as an AI bot, Tay also used people’s chats to train it to deliver a personalised response.

“Much to Microsoft’s embarrassment, they had to shut it down very quickly,” van der Merwe said. “There’s a real, big question around ethics and how you build the morality into AI.”

While the debate over machines displacing workers has been discussed at length, van der Merwe is certain AI won’t ever completely replace human beings.

“As human beings, we have contextual understanding and we have empathy, and right now there isn’t a lot of that built into AI. We do believe that in the future, the companies that are going to succeed will be those that can build into their technology that kind of understanding,” she said.

“[Organisations need to] harness the strengths of AI and human beings and deliver those things seamlessly to a user in order to deliver a great customer experience.”

As organisations enter into the new territory that is emotional intelligence, van der Merwe recommends a long and hard think about AI’s impact on society, jobs, and the environment.
