Several studies have tested chatbots for their ability to emulate human conversation, but few have evaluated the systems' general knowledge. In this study, we posed a series of questions to two chatbots (Mitsuku and Tutor) and a digital assistant (Cortana) and compared their answers with those of 67 human respondents. Results showed that while Tutor and Cortana performed poorly, the accuracy of Mitsuku was not significantly different from that of the humans. As expected, the chatbots and Cortana answered factual questions more accurately than abstract questions.
Park, Mina; Aiken, Milam; and Vanjani, Mahesh. "Evaluating the Knowledge of Conversational Agents," Southwestern Business Administration Journal: Vol. 17: Iss. 1, Article 3.
Available at: https://digitalscholarship.tsu.edu/sbaj/vol17/iss1/3