Wednesday, April 20, 2011

Paper Reading # 24: Usage Patterns and Latent Semantic Analyses for Task Goal Inference of Multimodal User Interactions

Comments:
Comment 1
Comment 2 

Reference Information:
Title: Usage Patterns and Latent Semantic Analyses for Task Goal Inference of Multimodal User Interactions
Authors: Pui-Yu Hui, Wai-Kit Lo, Helen M. Meng
Venue: IUI (International Conference on Intelligent User Interfaces)

Summary:
In this paper, the authors present a method for finding patterns that relate task goals to multimodal user interactions. They assigned users tasks and recorded the speech and pen gestures the users produced while completing them. From this data they built a matrix of co-occurrences between tasks, spoken terms, and pen gestures, and applied latent semantic analysis to uncover the patterns in it. Using this approach, they found they could infer the intended task from the gestures alone with 99% accuracy.
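To make the matrix idea more concrete, here is a minimal sketch of latent-semantic-style task inference. The feature names, tasks, and counts are all hypothetical stand-ins, not the paper's actual data or features; the paper's real matrices and vocabulary differ.

```python
# A minimal sketch of LSA-style task inference on made-up data.
import numpy as np

# Hypothetical counts: rows are interaction features (spoken words / pen
# gesture types), columns are task goals. Entry (i, j) counts how often
# feature i appeared while users performed task j.
features = ["word:restaurant", "word:route", "gesture:point", "gesture:circle"]
tasks = ["find_place", "get_directions"]
M = np.array([
    [8.0, 1.0],   # "restaurant" mostly spoken during find_place
    [1.0, 9.0],   # "route" mostly spoken during get_directions
    [5.0, 4.0],   # pointing occurs in both tasks
    [7.0, 2.0],   # circling a map region mostly during find_place
])

# Latent semantic analysis: truncated SVD of the feature-by-task matrix.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2  # number of latent dimensions to keep
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

def infer_task(feature_counts):
    """Fold a new interaction's feature counts into the latent space and
    return the task whose latent representation is most similar."""
    q = np.asarray(feature_counts, dtype=float)
    q_latent = (q @ U_k) / s_k   # project the query into the latent space
    task_latent = Vt_k.T         # one row per task
    sims = task_latent @ q_latent / (
        np.linalg.norm(task_latent, axis=1) * np.linalg.norm(q_latent))
    return tasks[int(np.argmax(sims))]

# A new interaction: the user says "route" twice and points once.
print(infer_task([0, 2, 1, 0]))  # -> get_directions
```

The cosine similarity in the latent space is one common way to do this kind of matching; whether the authors score candidates exactly this way I am not sure, but the overall decompose-then-compare structure is what the paper's title suggests.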

Discussion:
This was by far the least enjoyable paper I have read. The entire paper is severely complex, and every explanation it offers is even more complex than the thing it is supposed to explain. I had to read the paper several times just to figure out what the authors were doing, and even now I am 99% sure that I am mistaken about what I think they did. If they did what I think they did, however, recognizing tasks purely from gestures could make developers more aware of how users naturally interact with multimodal devices.

2 comments:

  1. I think that this is an interesting idea if it is as you said, but I agree that it seems very convoluted and confusing. Maybe it could be used in some predictive program.

  2. Based on the graphic you posted, it looks like a good idea. I like the way it accepts input not bounded by language constraints.
