Research in the International Journal of Technology Enhanced Learning has looked at the potential and limitations of automated systems designed to improve one’s public speaking skills. The work reveals new opportunities and obstacles for the future of communication training, but shows that there is a long way to go before computers can pass judgement on a human performance.
Oral presentation automated feedback (OPAF) systems use cameras, microphones, and algorithms to monitor a speaker's performance. They can evaluate multiple aspects of a presentation, including speech clarity, tone, body language, and overall structure. In principle, such systems offer an accessible alternative to traditional methods like self-practice, video review, group workshops, or one-to-one coaching, all of which demand more time, effort, and human resources. Public speaking, a skill often linked to professional advancement, academic success, and effective collaboration, stands to benefit from such scalable support, the research suggests.
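To give a flavour of what such a system does under the hood, here is a minimal sketch of one delivery metric an OPAF-style tool might compute from a transcript. The filler-word list, pacing thresholds, and function name are illustrative assumptions, not details from the paper.

```python
# A toy delivery-metric checker in the spirit of an OPAF system.
# The filler-word list and pacing thresholds below are illustrative
# assumptions, not values taken from the study.

FILLER_WORDS = {"um", "uh", "er", "like", "basically"}

def delivery_feedback(transcript: str, duration_seconds: float) -> list[str]:
    """Return simple feedback on speaking pace and filler-word use."""
    words = transcript.lower().split()
    wpm = len(words) / (duration_seconds / 60)            # words per minute
    filler_ratio = sum(w in FILLER_WORDS for w in words) / max(len(words), 1)

    feedback = []
    if wpm > 170:
        feedback.append(f"Pace is fast ({wpm:.0f} wpm); try slowing down.")
    elif wpm < 110:
        feedback.append(f"Pace is slow ({wpm:.0f} wpm); try picking it up.")
    if filler_ratio > 0.03:
        feedback.append(f"Filler words make up {filler_ratio:.0%} of speech.")
    return feedback or ["Delivery metrics look fine."]

print(delivery_feedback("um so basically the results show steady growth", 10.0))
```

Real systems layer many such signals from audio, video, and text on top of machine-learned models, which is precisely where the gaps discussed below start to show.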
That said, OPAFs themselves are largely still in training mode and have so far been of little more than experimental, academic interest. This new work offers the first systematic attempt to assess the field comprehensively, examining both the features offered by existing systems and the ways those systems are evaluated. The researchers conducted expert interviews and a detailed literature review, identifying 83 functional features and 12 additional elements deemed essential for an effective OPAF. These include alignment between verbal and non-verbal cues, personalized guidance tailored to the learner's level, and structured recommendations on content organization.
The analysis of 14 existing OPAFs revealed a striking gap between design and implementation. On average, systems incorporated only 16% of the identified features. Particularly underdeveloped were adaptive feedback mechanisms (those that adjust guidance based on a speaker's performance) and tools that check consistency between verbal delivery and body language. Structured support for content organization was also largely missing. The findings suggest that while some systems can support isolated elements of public speaking, no current solution addresses the full spectrum of skills necessary for meaningful improvement.
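As a rough, back-of-envelope illustration of that coverage figure (the per-system counts below are invented, and treating the 83 functional features plus 12 essential elements as one 95-item checklist is our reading, not something stated in the summary):

```python
# Hypothetical illustration of the feature-coverage calculation.
# TOTAL_FEATURES combines the study's 83 functional features and
# 12 essential elements; the per-system counts are made up.

TOTAL_FEATURES = 83 + 12

implemented = [18, 12, 22, 9, 15]          # features per (made-up) system
coverage = [n / TOTAL_FEATURES for n in implemented]

for i, c in enumerate(coverage, start=1):
    print(f"System {i}: {c:.0%} of identified features implemented")

print(f"Mean coverage: {sum(coverage) / len(coverage):.0%}")  # ~16%, as reported
```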
It is perhaps not surprising that computers cannot yet assess the very human qualities demanded by public speaking in any of its forms. This is especially true given that the systems reviewed do not seem even to cover much of what one would expect from an assessment of a person's performance. There is, the research suggests, a lot still to be done to develop effective OPAFs that can understand the nuanced demands of public speaking. Such systems would have the potential to support not only individual skill-building but also the cultivation of more effective communicators in professional, academic, and social contexts.
Hummel, S., Schneider, J., Mouhammad, N., Klemke, R. and Di Mitri, D. (2025) ‘Enhancing presentation skills: key technical features of automated feedback systems – a systematic feature analysis’, Int. J. Technology Enhanced Learning, Vol. 17, No. 6, pp.1–25.