Abstract
Several studies have tested chatbots' ability to emulate human conversation, but few have evaluated these systems' general knowledge. In this study, we asked two chatbots (Mitsuku and Tutor) and a digital assistant (Cortana) a series of questions and compared their answers with those of 67 humans. Results showed that while Tutor and Cortana performed poorly, the accuracy of Mitsuku was not significantly different from that of the humans. As expected, the chatbots and Cortana answered factual questions more accurately than abstract questions.
Recommended Citation
Park, Mina; Aiken, Milam; and Vanjani, Mahesh (2018) "Evaluating the Knowledge of Conversational Agents," Southwestern Business Administration Journal: Vol. 17, Iss. 1, Article 3. Available at: https://digitalscholarship.tsu.edu/sbaj/vol17/iss1/3