

Help Me Help You:

Designing Support for

Person-Product Collaboration

Dissertation

for the degree of doctor

at the Technische Universiteit Delft,

on the authority of the Rector Magnificus, prof. dr. ir. J.T. Fokkema,

chairman of the Board for Doctorates,

to be defended in public on 22 June 2004 at 11:00

by

Elyon A. M. DEKOVEN

Master of Arts in Mathematics, University of California

born in Media, PA, USA


This dissertation has been approved by the promotor:

Prof. dr. J. Aasman

Assistant promotor (toegevoegd promotor):

Dr. D.V. Keyson

Composition of the doctoral committee:

Rector Magnificus, chairman
Prof. dr. J. Aasman, Technische Universiteit Delft, promotor
Dr. D.V. Keyson, Technische Universiteit Delft, assistant promotor
Prof. dr. H.C. Bunt, Universiteit van Tilburg
Prof. dr. H. de Ridder, Technische Universiteit Delft
Prof. dr. M.A. Neerincx, Technische Universiteit Delft
Dr. G. van der Veer, Vrije Universiteit Amsterdam
Dr. C. Sidner, Mitsubishi Electric Research Labs, USA
Prof. dr. ir. F.W. Jansen, Technische Universiteit Delft, reserve member

ISBN 90-9018231-4

© Copyright 2004 Elyon A.M. DeKoven

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission from the author.


Help me help you
Help me
Wouldn't that be cool
(Holly Valance)

If I am not for myself, who is for me? And when I am for myself, what am I? And if not now, when?
(Pirkei Avos)

Cover: SenSay image based on original Thermy design, Patent (Dutch) 1014792, (International) PCT/NL01/00257, by Keyson, D.V., Freudenthal, A., de Hoogh, M.P.A.J. and DeKoven, E.A.M. (2000)


Acknowledgements  9

1 Introduction  15
1.1 On the usability of programmable products  17
1.2 On the usability of smarter programmable products  21
1.3 Towards collaborative person-product planning  24
1.4 Supporting person-product collaboration: From ambiguity to productive uncertainty  27
1.5 Thesis overview  34

2 Supporting person-product collaboration via three tiers of interaction  37
2.1 Introduction  37
2.2 Three tiers of support for collaborative planning  40
2.3 The three tiers in Thermy  47
2.4 Design concerns for the three tiers in the SenSay  52
2.5 Related work  62
2.6 Discussion and next steps  66

3 Evaluating collaborative interfaces from task models: A case study  69
3.1 Introduction  69
3.2 Measuring task support  71
3.3 Case study  81
3.4 Results  90
3.5 Discussion  94
3.6 Conclusion and next steps  97

4 Participatory design of collaborative interaction for programmable appliances  99
4.1 Introduction  99
4.2 Collaborative interaction prototyping platform (CIPP)  102
4.3 Two tools for modeling collaborative dialogue  109
4.4 Towards a method for using CIPP  112
4.5 Related work: Tool support for designing collaborative interaction  117
4.6 Discussion and next steps  119

5 Shifting focus of attention in person-product collaboration: An exploratory study in SenSay design  123
5.1 Introduction  123
5.2 Case study  128
5.3 Results  141
5.4 Discussion and next steps  146

6 From collage to Collagen: A step towards collaborative interaction design  151
6.1 Towards collaborative interaction design  151
6.2 Scenario descriptions and elaboration (SX)  153
6.3 Participatory task modeling (SX → TMX)  155
6.4 Implementation (TMX → IX)  157
6.5 Evaluation (IX → EX)  158
6.6 Iterate (EX → SX+1)  159
6.7 Discussion  160

7 Conclusions and next steps  163
7.1 Introduction  163
7.2 Generating the SenSay: Modeling the tasks  164
7.3 Structuring the SenSay within the product interface: What goes where?  164
7.4 Presenting the SenSay: How to see what to say  165
7.5 On agent style in SenSay design: Saying what you see  166
7.6 In conclusion: What next?  168

Summary  171
Samenvatting (Dutch summary)  177
Appendix A  Thermy: A programmable thermostat interface  183
Appendix B  Task model for Thermy  189
References  191


The progress of science depends on people working together, over extended periods of time. In many ways, this thesis is the result of many collaborative dialogues and activities, some spanning more than four years, and some lasting just a few weeks.

My supervisors, David Keyson and Jans Aasman, have provided ongoing assistance and guidance throughout my time in Delft. Thank you for your patience while I struggled with formulating my research questions. Your advice, comments and criticisms of my writing helped me clarify both the vision underlying this thesis and the expression of that vision in my writing. David, thank you for taking so much of your time to help me get this thesis and my papers done. Thank you for being available so often to answer questions and help think things through. Thank you for reading and re-reading and reading again my various attempts at getting my thoughts in order, and helping craft some of the crucial bits. Your name is on so many of my papers for a reason. Jans, thank you for your willingness to help, your guidance, and your supporting words. Thank you for helping me focus, and for seeing this project through to the end. Thanks also to Huib de Ridder for supporting the project through its last stages of birth, and helping optimize the project plans to actually get it done in a reasonable amount of time.

I would like to particularly acknowledge the contributions to my thesis made by the Collagen group at MERL: Charles Rich, Candace Sidner and Neal Lesh. The team, individually and collectively, made significant contributions to my project. Thank you, Chuck, for the many discussions we had, your patience in explaining (and explaining again) how Collagen works, providing insights into my research, and helping me with the coding. Thank you, Candy, for your patience in helping me understand SharedPlans and discourse theory, your insights on what my contributions to dialogue design research could be, and your prompt and thorough answers to my questions, and comments on my writing. Thank you, Neal, for your insights on how plan recognition and generation works, and your ongoing attempts to help me build things with Collagen. Thanks to all three of you for your hospitality, and your ongoing support and respect for me and my project.


Special thanks are also due to my officemate and paranimf, Marc de Hoogh. The majority of the work in this thesis was directly touched by your tireless efforts on my behalf and on behalf of my project. I enjoyed our walks, our talks and our co-coding sessions. Thanks for putting up with my advice, my songs and my jokes (such as they were), and thanks for letting me keep the window open.

The discussions I have had with Harry Bunt and his Dialogue Club at Tilburg University were always interesting, insightful and educational. I enjoyed the exchange of ideas we had during our meetings and your poignant questions. Harry, you and your research helped me understand some of the issues underlying my own work, and helped shape the way I grounded my research. Thanks also for taking the time to comment on some of my articles, helping me make them better.

My meetings with Don Bouwhuis were thought-provoking and constructive. You asked great questions and gave thoughtful answers. I have greatly enjoyed our discussions, and benefited from your literature leads and research advice.

The conversations I had with Gerrit van der Veer about tasks and task modeling were invaluable. Thank you for your comments on my papers and for pointing me in some good directions. I also enjoyed the semi-regular meetings between your research group and ours; the exchange of research goals and ideas was inspiring and informative. In Robbert-Jan Beun I found a tutor and a colleague. I enjoyed and benefited from our talks about the results of my first experiment. These discussions gave me good insights into the relevance of dialogue theory to my work, and clues as to how my work could contribute to the field.

I also had the pleasure of talking about dialogue with Emiel Krahmer, and working on words with Johan Hoorn. Their educated and experienced insights into issues of text and communication were helpful at different stages of my project.

While we only had one extended conversation, Jeff Rickel gave me some keen insights into discourse theory, issues in tutorial design and using Collagen. I was fortunate to have met him, but I am sorry I will not have the chance to work with him further. My condolences to his family.


The membership of the Intelligence in Products Group changed over time, but there was always a sense that each individual was committed to the group’s success. Thanks Adinda for your timely comments on my Thermy design thoughts, my writing and my experiments, and all your efforts with designing and testing Thermy. Thanks also to Marco, Marijke, and Martijn, as well as past colleagues Bregtje and Karin. It was fun building the lab together, and I learned a lot from sharing insights and questions about our different research efforts and approaches.

Thanks to Arnold Vermeeren, my colleague and my teacher. I really enjoyed working with you on research projects – I’m glad you joined in when you did. Thanks also for keeping your door open, and for being there so often to provide help and answers to my assorted questions. Thanks also to the lunchtime ‘play’ group, including Agnes, Arnold, Els, Marco, Tjamme and Xi, for helping me stay a little more balanced.

Thanks to Jouke Verlinden for talking shop with me. I enjoyed our meetings, both planned and happenstance, both on and off the train. Thanks also to Jon Restrepo for going Dutch with me. It was nice to have a fellow buitenlander around to share resources and knowledge about the new world around us.

When an ‘ambitious’ group of new Ph.D. students started at IO to do research on different aspects of smart products, there was no working definition of what it meant to be ‘smart’. For a while, the SmartProducts discussion group, both online and face-to-face, provided the only existing forum for exploring the term. Thanks to the group members, including Amina, Erik, Serge, and Stephan. Unlike many meetings, our discussions were actually productive, resulting in several complementary student projects.

While I did not get to sit in the Studio Lab, I am honored to have been considered a member. Thanks to those that were there, who provided me with friendship, help, and community, including Aadjan, Aldo, Annemieke, Carlita, Caroline, Daniel, Els, Ianus, Onno, Paul, Pieter, Pieter-Jan, Rob, Ronald, and Thomas, as well as the lab’s ‘extension’ in Eindhoven: Kees, Tom, Stephan, and Joep. Particular thanks to Corrie van der Lelie, for all your help with design, layout, and general creative input on my thesis and other materials; to Gert Pasman, for your example and advice, which helped smooth the flow of the end-game of my project; and to Rene van Egmond for help with statistics.

Thanks to the many around the department who provided such useful support for my research. Thanks to the ‘lit lady’, Jarmila Kopecka, for the stream of literature updates. Your pointers more than made up for the lack of students for my projects. Thanks to Agnes Tan, who provided useful feedback on early experiment designs. Thanks to Henri Christiaans for finding such good students for me. Thanks to Theo Boersema and Ans Koenderink-van Doorn for help with eye-tracking literature. And thanks to Rolf den Otter for the good sounds.

Technology-based research requires ongoing help from those good with technology. Thanks to the ‘guys downstairs’: Arend Harteveld, Kees Jorens, Henk Lok, and Fred Steenbruggen. In addition to your readiness and ability to help, I appreciated your constant friendliness. While working on this thesis, I had the opportunity to work with a number of students and student groups on explorative design research projects. In particular, thanks to Erwin Wolf for working with me on developing a design approach and the Build-Design-Evaluate diagram; thanks to Tanja Veldhoen for careful and continuous assistance at getting my experiment going; and thanks to Xi Zeng, for helping me understand some of the SenSay design challenges, and for providing me with another way to see the Thermy task model. Thanks also to all the students with whom I had the pleasure of working together, for giving me more insight into doing design research together, including (among others): Jan-Eidse, Mieke, Wiljan and Lisette, Brendan and Hein, Tamara and Rui, Merijn and Fleur, Abboy, Sabine and Wendy, as well as the 2001-2002 TWAIO class.

Thanks to Prof. van der Burg, for helping me with my math homework. Thank you for taking so much time to tutor me, and to take me further in the book. Thanks also to my Dutch teacher Margot Sarton, who helped make it more pleasant to live and work in Holland.

Thank you Carla, Martine, and Thea of the secretariat of the Industrial Design department at TU Delft. Thank you for all your help with the thousands of details necessary for working at TU Delft, and your patience with me as I struggled to learn the forms and processes of the university.


Additional thanks are due to particular people for helping on particular chapters:

o Chapter 2: An earlier version of the four SenSay design issues arose out of discussions with Xi Zeng, Harry Bunt and Arnold Vermeeren while collaborating on a research project.

o Chapter 3: Thanks to Marc de Hoogh for his programming efforts, and Rui Medeiros-Santos for his assistance in data collection and analysis. Thanks are also due to Harry Bunt, Candace Sidner and Robbert-Jan Beun for their insightful comments on earlier versions of this chapter.

o Chapter 4: Thanks to Marc de Hoogh for all his assistance on designing and writing the code for the CIPP, and to Tanja Veldhoen for checking my code and supporting the approach.

o Chapter 5: Thanks to Tanja Veldhoen, for making the experiment materials (some of them multiple times), for helping with the translation work, and for running many of the subjects through the experiment. Thanks to Marc de Hoogh, for getting it all to work (well enough), and for filling in for me at crucial moments. Thanks also to Imke Helmes, for your assistance with the subjects and the data.

Getting to the ‘meet’ of the matter, this thesis is about collaboration, about working together, about helping each other to be better together than they are apart. These things I learned from my family. Thanks to my parents, for your beautiful model of partnership and parenthood. My father taught me throughout my life the importance of home and heart, and the intrinsic value of productive fun. This thesis is a tribute to your technographic efforts at helping the world help itself. Thanks for your valuable comments on my visions and revisions, and for helping me find good pictures and format some of my diagrams. My mother helped me appreciate the world around me. Thank you for showing me the colors in a lake, and for teaching me about seeing beauty in light and valuing life in all its variety (except maybe fleas). Thank you for your art, including the Vermeer-like woman’s face, the painted arrows, and the hand-drawn GUI that appear in this thesis.

My wife, my eishes chayil, continues to help me learn about what it takes to really be a good partner. Thank you for helping me continue to move up, while at the same time keeping me grounded with your


fine sense of fun. Thank you for making the house feel like home, thanks for giving me the time and space to do my work, even when that meant not being there for you, and thanks for being my friend. Thanks also to Maya and Reina for being such good agents to model. Growing up with you, I have learned a lot about being a guide, a playmate, and a partner.

Most importantly, B”H, thanks for giving me ongoing inspiration, motivation and strength to stick with my studies.

My apologies for names I have left off these abbreviated lists; please know I am grateful for your contributions to my project. To everyone, thank you for helping me.


My alarm clock: an example of something we are used to programming. I like that it is easy to press snooze (the long bar on top), but that it requires pressing two buttons at the same time (one on the right-hand side and one on the left) to turn off the alarm.

This central panel for Honeywell’s Hometronic home organizer can be used to control home heating, lighting and security, once it has been properly programmed.

The Electrolux Trilobite robot vacuum cleaner has to know to go around feet without asking.

1.1 ON THE USABILITY OF PROGRAMMABLE PRODUCTS
1.2 ON THE USABILITY OF SMARTER PROGRAMMABLE PRODUCTS
1.3 TOWARDS COLLABORATIVE PERSON-PRODUCT PLANNING
1.4 SUPPORTING PERSON-PRODUCT COLLABORATION: FROM AMBIGUITY TO PRODUCTIVE UNCERTAINTY
1.5 THESIS OVERVIEW

1 INTRODUCTION

We use programmable appliances, such as VCRs and thermostats, to get things done, every day. Like pressing snooze on an alarm clock, some are so familiar to us we use them without stopping to think about how we do it.

But there are other programmable appliances that we can’t use very well. We may not even know what they could do, or what they could really do, if we could only figure out how to use them.

We may ask ourselves, “Why can’t the product just help me get this done?” Especially when what we are doing is tedious, repetitive, or difficult, the product should do its utmost to help us get it together to get it done better.

It better be nice about it, too. We wouldn’t want to be bothered every few seconds for some irrelevant suggestion it has. We would rather the product acted like a proper collaborative partner, sharing the burden and working with us on achieving our goals. In fact, we would like our partnership to be productive, that is, telling our products what we want to do and how to do it should help us get things done better. But appliances can’t really hold good conversations, now, can they?

Technologies exist, such as speech recognition, which can help us communicate with our products more freely than before. But no product can understand everything – we still have to know the limits of what we can and cannot say. Other technologies, such as pattern recognition and plan recognition, can help the product know more about what we are trying to do. A product could use such smarts to suggest specific things we could do or say next.

Mattel’s My Interactive Pooh talks with the player about playing the computer game.

Even smart programmable products cannot know everything we want. They need help from us to be sure about what it is we want to do. People and products working together is a form of collaboration. Over the past few decades, there have been many improvements in supporting human-human collaboration, including both technological advancements (e.g. the fax machine, the internet, mobile telephones, sticky notes and projectors) and methodological advancements (e.g. Technography, by DeKoven, 1990). However, little is known about how to design interfaces for home appliances that support person-product collaboration.

This thesis documents an exploration into designing programmable appliance interfaces (PAI, pronounced ‘pie’) that support person-product collaboration. The specific design challenge discussed in this thesis is how to develop a collaborative PAI (CoPAI) that capitalizes on complementary strengths of the person and the product: the product’s ability to interpret user actions and goals and make efficient plans to achieve them, and the person’s communication skills and knowledge of what he or she wants to do.

The research question in this thesis is how to design a CoPAI based on a known theory of collaborative communication. Products designed according to the communication patterns modeled in the theory should be familiar to the people who use the products, and thus easier to use. Moreover, by looking at person-product interaction in terms of collaborative discourse theory, results from individual design studies should be readily generalizable to different types of products and people. While not directly addressed in this thesis, it should also be possible to reflect from study results back onto the theory, providing more insight into the theory and nature of collaboration. It is not immediately obvious how collaborative behavior in a product will contribute to product usefulness: Will it help people develop more efficient strategies for using the product? Will they like it? Would they feel like they are collaborating with the product? In addition to design exploration and evaluation studies, this thesis also takes up the challenge to create a user-centered process for modeling user tasks and strategies, and sharing this task knowledge with the designer, developer, people and product. By addressing these questions and challenges, this thesis contributes to the development of an integrated approach towards the design, implementation and evaluation of collaborative appliances.

Supporting person-product collaboration: Technography at work

The controls for our central water heater. The manual on how to use these three knobs was more than 10 pages long. My wife figured out how to use our washing machine, even without a manual.

This chapter outlines some of the challenges in designing such interfaces, describes the theory of collaboration underlying the design and research efforts, and gives an overview of the dissertation structure.

Let’s begin.

1.1 On the usability of programmable products

Over the past few decades, there has been a tremendous increase in the use of embedded actuators, sensors and software in everyday home products, such as microwave ovens, vacuum cleaners, VCRs, and thermostats. Such products are called ‘programmable’ because they give the user control over a myriad of product features through setting sequences of preferences, typically to occur for some specific amount of time, at some point in the near future. In order to use the product, the user creates (possibly reusable) programs, or selects among preset programs, such as ‘pizza’ or ‘popcorn’ on a microwave oven.

Most current programmable appliance interfaces (PAI) provide access to programming capabilities through an interface with many buttons, knobs, switches and screens. Typically, these are visually arranged in groups of nearly identical items, distinguished from each other perhaps only by a textual and/or iconic label. In order to use the product, people have to translate what they want to do into one of these words or pictures.


Figure 1: Getting something done with a programmable appliance requires figuring out what the appliance can do, which of those things apply to what we want to do, and then making the right choices at the right time.

It was not easy to figure out how to program our VCR.

For example, in order to program my VCR, first I have to change to channel 10 on the television, and then press the ‘menu’ button on the remote. This brings up a menu of six labeled icons, one of which is ‘program’. To select it, first I have to use the arrow buttons on the remote to move around the screen, and then press the ‘Ok’ button on the remote. Now there is a table on the screen. After a moment of thought, it makes sense – separate columns for day, start time, end time, etc., separate rows for multiple programs. After some experimentation, I find out the ‘program’ column in the table actually refers to the channel to record (that is, the channels as the VCR knows them, which is different than the way the TV knows them). Now the final step: I have to turn off the VCR in order for the timer recording to take effect! At least the VCR is nice enough to tell me this by putting three lines of text on the screen telling me what to do next.

The VCR example above demonstrates the difficulties people might have in converting what they see of the product into what they want to do (see Figure 1). In order to record a show, I had to find buttons and menu items labeled ‘Program’, ‘Menu’, ‘Ok’, and ‘Stand By’, along with buttons with arrow pictures and other graphical icons, and read a bunch of on-screen text.

My old watch. It offered a large range of functions with just two buttons and a knob. I forget how I found out that there were additional subtasks (for most modes), accessible only by pulling the knob out one or two positions.

It gets worse. Even discovering the set of words/actions the product understands can be difficult. Many programmable appliances are designed to fit into a small space, such as on the kitchen counter, in the palm of a hand, or on a wrist. In order to simplify the interface without reducing functionality, some of the controls may support more than one function. An example sits on my wrist. The backlight button on my digital watch is also used to reset the stopwatch function as well as marking laps (it’s called ‘split’). One button is used to switch between multiple modes, then another series of button-presses to access the right function within that mode, and then a turn of the knob to choose a value for the function.

Our thermostat has one knob for selecting one temperature for the whole house.

Some thermostats support different temperatures in different rooms at different times and days during the week, requiring people to make a lot of settings to make the house comfortable.

In comparison to conventional non-programmable appliances, using programmable appliances often requires the user to execute more actions to achieve the same basic goals. For example, controlling home heating with a conventional thermostat requires simply turning a dial. Controlling the heat with a programmable thermostat can require first entering a certain programming mode, then choosing the right day and time, and finally selecting a temperature (or set of temperatures). When using programmable appliances, people are often overloaded with complex interaction sequences and confronted with a high threshold of entry to usage. Simply put, people might not even know where to begin.

Product designers have traditionally focused on the product's physical form (including textual labels and icons), onscreen text and printed manuals to convey the product's functionality and usage. However, many of the features in current consumer products are due to embedded hardware and software – aspects of the product generally invisible to the user. This is what Hummels (2000) calls ‘withdrawal of the machinery’ and Norman (1998) calls ‘invisible computing’. The system functionality added by computational power tends to be hidden, arising only during usage and through experience. People are often not aware of what the product can and cannot do.


Interfaces on a current microwave oven, a combination microwave/convection oven, and a washing machine. Buttons are labelled with images and text to communicate what each button means. Pressing a button is like ‘saying’ the associated label to the product.

Design research has shown that PAIs with many buttons and multi-functional controls, though often called ‘smart’ or ‘intelligent’, are difficult for people to use, particularly for the elderly (Freudenthal, 1999), and are held to be generally less engaging (Hummels, 2000). There is plenty of anecdotal evidence indicating that even highly educated people are not motivated to program their thermostats, VCRs or microwaves. For example, one thermostat manufacturer reported (in a personal communication) that, of those programmable thermostats returned for repair, 80% are still in their original factory settings. The programming capabilities of these thermostats were barely used! Even though the product may be called ‘smart’, people are often unable to figure out how to communicate with it, or they are just not interested in trying.

Moran (1981) describes four levels of analysis (what he calls a ‘representational scheme’) for the design of interactive products that can help provide some insight into the challenges here:

o TASK LEVEL: a non-formal structure that “represents the purpose of a system by enumerating the tasks the system is intended to accomplish”,
o SEMANTIC LEVEL: a set of entities and operations that are “useful for accomplishing the user’s tasks”, giving an abstract definition of product functional capability,
o SYNTACTIC LEVEL: the “language structure” of the interface (similar to syntax in human languages), and
o INTERACTION LEVEL: the specific sequences of operators and interface


Figure 2: Interaction with task-aware multimodal programmable products.

Compared to earlier, non-programmable versions, programmable products have roughly the same task structure. For example, like its non-programmable cousin (just a knob on the wall), a programmable thermostat is still primarily for managing temperature comfort and energy expenses. However, programmable products are much richer in terms of the semantic and syntactic levels, since there are so many more entities, operators, and commands possible. Current programmable product interfaces support many possible sequences of interface actions for the same underlying Task Level, which tends to make them difficult to learn how to use. The interaction design challenges come from trying to help people figure out which commands apply to their current task, and then choosing the right one to get it done well.

1.2 On the usability of smarter programmable products

People have problems telling their products what they want to do. With technologies existing today, it is possible to increase the communicative capabilities of programmable appliances (see Figure 2 and compare with Figure 1). In particular, increased system ability to guess the user’s intentions and suggest the next step the user should take could reduce the amount of user effort necessary to find the right product features to complete a task. Furthermore, additional input modalities, such as natural language supported by speech recognition, could provide the user with more flexible access to product functionality.

Consider, for example, a programmable home thermostat that could understand a fair amount of normal (heating-related) spoken Dutch, and could build up a history of user-preferred temperature settings and actual room temperatures. Through spoken commands, it is in principle possible to tell the thermostat to make a number of changes in one statement, such as lowering the temperature in one room while raising it in another at certain times of the week, or freely express higher-level goals, such as saving energy subject to specific constraints. Moreover, through pattern recognition, a thermostat could compare the history of user preferences to observed thermodynamic patterns in the house, and then figure out a good way to maintain home comfort while minimizing energy expense. Through plan recognition and generation, it could figure out whether the user wants to save more energy or be more comfortable, and, combining this with what it knows about thermodynamics and user preferences, make a good plan to optimize the heating schedule. It could then autonomously make the internal and external changes necessary to carry out this plan, for example by opening or closing vents to optimize the heating flow through the house, or offer suggestions for what should be done.

Thermy in use (see Appendix A). Photo: Piet Musters / Delft Integraal.

Technologies such as speech recognition, pattern recognition and planning hold the potential to ease the challenges of getting things done with a programmable product. Nevertheless, more technology does not automatically generate additional usability. In fact, such flexible interaction styles might increase, rather than decrease, the challenges people face in figuring out how to use a product, in at least three ways.

First of all, even though research has shown that the use of multiple modalities can increase recognition rates (e.g. Oviatt, 1999), speech recognition is not perfect – even people have problems understanding each other sometimes. The problem is to get people to say things the system can understand, and at the same time get the system to understand a broader range of things people might say. Some efforts to improve the usability of person-product dialogue have focused on designing the agent’s utterances to model or guide the user to making well-formed commands (i.e. commands the system can recognize). For instance, there is research on guidelines for designing spoken prompts (Hansen et al., 1996; Yankelovich, 1996; Gamm and Haeb-Umbach, 1995; Zoltan-Ford, 1991), and research on optimizing other aspects of the agent’s dialogue strategies, such as through automatic learning models (Walker, 2000). Another direction to improve person-product dialogue usability is to make it easier for people to figure out what they can say, such as by providing a set of ‘universally intuitive’ expressions (Rosenfeld et al., 2001). However, products, like people, will only be able to understand a subset of normal human languages. People will still require help in understanding the range of things the product could understand, and in determining what they should say at any given moment. Without proper support in using speech, people may restrict themselves to simple short commands, possibly limited to the words and objects they can see in the product interface, or they may choose not to talk at all. If only we knew more about what we could say to our products…

Second, autonomous action, as in the thermostat example above, reduces the amount of control the user has over the functioning of the device. For instance, the user may not agree with system-detected patterns and may not want them automated. Without proper feedback from the system, the user may not even be aware the system took the actions. A lack of control, or even the feeling of being out of control, can cause significant user frustration. As Hummels (2000) says:

“Strangely, these days we are often the servants again. This time, we are the servants of machines which dictate to us how they should be used. If we do not know how to use them, we are to blame, because a machine is intelligent and we are dumb.” (p. 1.13)

However, we might not feel this way if the system automatically did something useful, like saving energy without decreasing our comfort. If only the product knew more about what we wanted to do….

Third, increased system intelligence can engender overblown expectations, as Norman (1997) describes:

“Speech recognition has this problem: develop a system that recognizes words of speech and people assume it has full language understanding, which is not at all the same thing. Have a system act as if it has its own goals and intelligence, and one expects full knowledge and understanding of human goals.” (p. 52)

Such a mismatch between what people expect a product to understand and do, compared with what the product actually can understand and do, can lead to miscommunications between the person and the product, and in general, unsuccessful attempts by the person to use the product.

People need help understanding how to apply a product to their current task. At the same time, in order to better help the user, the product needs help in figuring out what the user wants to do, and how they want it done. If only the person and product could work better together…


1.3 Towards collaborative person-product planning

To summarize the discussion so far, in spite of the potential conveniences PAIs can offer, current PAI designs present a number of obstacles to being used well. In particular, it can be difficult for people to figure out:

o What the product can do and what it can understand at any given moment
o Which of the things it can understand applies to a specific user goal
o How to use the product interface in an efficient way

As this list shows, it is not enough simply to build computational intelligence. Rather, the product should help people make and execute plans for using the product to get things done well. As (Ortiz and Grosz, 2002) say:

“People should be able to communicate in terms of the work or effects they want to accomplish, rather than being required to tell a system the specific steps it must take to satisfy their needs.”

People should feel supported in reaching their goals, feel confident in using this support, and feel fully in control, or at least feel they have the ability to assert control whenever desired. At the same time, people should not have to work too hard to determine how the features of the product apply to their goals, and should not be unnecessarily burdened with product capabilities or details much beyond their expertise or needs.

Figure 3: A model of person-product collaboration, in which both can contribute to the same task. Arrows indicate observable discourse acts, such as spoken utterances and actions.

Figure 3 shows a model of task-oriented communication between a person and a product. Inside the product, the diagram distinguishes an agent and an interface. An agent is someone or something you talk to in order to help you get something done. In Figure 3, the agent inside a product is a collection of algorithms and data models responsible for maintaining the dialogue with people. The interface is where the dialogue between the person and the agent takes place. The product is what holds it all together. The distinction is technically somewhat arbitrary; people may not even notice the difference, depending on the design. In fact, the rest of this thesis uses the terms product, agent and interface interchangeably when there is no chance of significant ambiguity. Nevertheless, by making this separation, it is possible to talk about designing a product interface to support collaborative dialogue between a person and an agent concerning using the product towards getting something done.
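To make the separation just described slightly more concrete, the sketch below shows a product that holds together an agent (the algorithms and data models that maintain the dialogue) and an interface (where the observable discourse acts are exchanged). It is only an illustration of the decomposition in Figure 3, not an architecture proposed in this thesis or taken from Collagen; all of the names (DiscourseAct, Agent, ProductInterface, and the toy heating agent) are invented for this example.

```java
// Minimal sketch of the Figure 3 decomposition: a product holds together an
// agent (dialogue logic) and an interface (where discourse acts are exchanged).
// All names are illustrative.
import java.util.ArrayDeque;
import java.util.Queue;

public class PersonProductSketch {

    /** An observable discourse act: a spoken utterance or an action in the interface. */
    record DiscourseAct(String actor, String content) { }

    /** The agent: the part of the product that maintains the dialogue and decides replies. */
    interface Agent {
        DiscourseAct respondTo(DiscourseAct act);
    }

    /** The interface: the place where person and agent exchange discourse acts. */
    static class ProductInterface {
        private final Agent agent;
        private final Queue<DiscourseAct> history = new ArrayDeque<>();

        ProductInterface(Agent agent) { this.agent = agent; }

        /** The person acts through the interface; the agent observes the act and replies. */
        DiscourseAct personSays(String content) {
            DiscourseAct act = new DiscourseAct("person", content);
            history.add(act);
            DiscourseAct reply = agent.respondTo(act);
            history.add(reply);
            return reply;
        }
    }

    public static void main(String[] args) {
        // A toy heating agent that only acknowledges comfort complaints.
        Agent heatingAgent = act -> act.content().contains("cold")
                ? new DiscourseAct("agent", "Shall I raise the living room to 21 degrees?")
                : new DiscourseAct("agent", "Okay.");

        // The product is what holds agent and interface together.
        ProductInterface thermostat = new ProductInterface(heatingAgent);
        System.out.println(thermostat.personSays("I am cold").content());
    }
}
```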

“Collaboration is a process in which two or more participants coordinate their actions towards achieving shared goals.” (Rich, Sidner and Lesh, 2001)

In addition to communicating about what they have done, in a ‘good’ collaboration the participants find it worthwhile to spend effort making sure they agree about what they are trying to do, how to do it, and what to do next. As (Roth et al., 1997) point out: “A recurrent finding in studies of human-human interaction is the importance of participants having a common understanding of the situation and of each other’s intentions and actions.” In other words, a primary component of collaborative dialogue is developing and maintaining a shared plan.

Figure 4: SharedPlans theory models collaborative dialogue in terms of three related tiers (Intentional State, Attentional State and Linguistic Structure) based on a hierarchy of tasks (discourse goals) and plans.

In order to make products that can collaboratively plan with people, a number of researchers have turned to the SharedPlans formalization of collaborative dialogue (Grosz and Sidner, 1986 and 1990; Grosz and Kraus, 1993 and 1996; Lochbaum, 1994). Unlike the guidelines for agent utterance design mentioned above that focus on single actions and utterances, SharedPlans is concerned with interpreting an ongoing task-oriented dialogue. Successful collaboration, according to the theory, requires shared beliefs about what needs to be done, how it will get done, and who will do it, as well as about what each participant is capable of, what they intend to do, and their commitment to getting it done. SharedPlans can be used to make a product that can plan with people by interpreting dialogue events, such as button clicks and verbal utterances, in terms of a task model¹. As the dialogue progresses, every dialogue event is identified with a specific task in the model and interpreted in terms of how it contributes to the completion of the task, resulting in updating the participants’ individual and shared beliefs about the status of the plan.

The SharedPlans model combines three essential components of collaborative dialogue (see Figure 4): intentional state (beliefs and intentions of the participants), attentional state (the participants’ current focus of attention), and linguistic structure (how the discourse acts fit together). Most design analysis techniques such as (Moran, 1981; Card et al., 1983; Constantine and Lockwood, 1999) utilize a model of intentions (but not ‘intentional state’), such as a task model, to document what people would want to do, and how they could do it with the product interface. In contrast, model-based interaction engineering techniques (e.g. Szekely et al., 1995; Johnson et al., 1995) use task models to track shifting user intentions over time, to determine what the product should do or say next. Dialogue interpretation and dialogue management techniques (e.g. Bunt, 2000; Oviatt et al., 1998; Traum and Dillenbourg, 1996; Sparks et al., 1994) improve on model-based techniques by relating the user’s current intentions to linguistic and attentional components of the interaction. These two latter components are particularly useful for automatically detecting ambiguity and repairing miscommunications.
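As a reading aid, the sketch below caricatures these three components as plain data: per-task flags stand in for intentional state, a focus stack stands in for attentional state, and a task tree plus the history of observed events stands in for the linguistic structure. Interpreting a dialogue event then means attaching the event to a task and updating those records. This is a deliberately naive restatement of the description above, under invented names; it is not the SharedPlans formalism and not how Collagen actually represents discourse state.

```java
// Toy restatement of the three tiers described above: a task hierarchy with
// per-task intentional-state flags, a focus stack for attentional state, and
// a history of observed discourse events. Illustrative only.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SharedPlanSketch {

    static class Task {
        final String name;
        final List<Task> subtasks = new ArrayList<>();
        boolean mutuallyAgreed;   // intentional state: both parties are committed to it
        boolean completed;        // updated as contributing events are observed
        Task(String name) { this.name = name; }
        Task add(Task subtask) { subtasks.add(subtask); return this; }
    }

    final Task root;                                   // the task (plan) hierarchy
    final Deque<Task> focusStack = new ArrayDeque<>(); // attentional state
    final List<String> history = new ArrayList<>();    // observed discourse events

    SharedPlanSketch(Task root) {
        this.root = root;
        focusStack.push(root);
    }

    /** Interpret a dialogue event as contributing to a task, and update the state. */
    void interpret(String event, Task contributesTo) {
        history.add(event);
        contributesTo.mutuallyAgreed = true;  // both parties now treat the task as shared
        contributesTo.completed = true;
        if (focusStack.peek() == contributesTo) {
            focusStack.pop();                 // attention shifts back to the parent task
        }
    }

    public static void main(String[] args) {
        Task chooseRoom = new Task("choose room");
        Task chooseTemp = new Task("choose temperature");
        Task setHeating = new Task("set evening heating").add(chooseRoom).add(chooseTemp);

        SharedPlanSketch state = new SharedPlanSketch(setHeating);
        state.focusStack.push(chooseRoom);    // the dialogue turns to choosing a room
        state.interpret("person says 'the living room'", chooseRoom);
        System.out.println("In focus now: " + state.focusStack.peek().name);
    }
}
```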

Compared to other models of collaborative dialogue, SharedPlans is unique in its unification of empirical evidence of how people communicate when they work together, with theory and rigorous models of collaborative planning. As Terveen (1995) says of SharedPlans, “A fundamental difference of this work from earlier plan recognition work is that it allows for collaboration in the planning process itself.” In particular, SharedPlans incorporates participants’ commitment to a plan, coordinates between individual and shared beliefs and plans, and significantly, can handle situations when the participants do not have a completely worked-out plan for getting things done (Grosz and Kraus, 1996). Since SharedPlans captures shifts in plans and beliefs at different levels over time, not just a primitive level of interface actions, it is well suited for making products that can work with people on iteratively refining plans and carrying them out.

¹ Computational linguistics literature tends to talk about discourse goals; design literature tends to talk about user tasks. Both fields are concerned with what people want to do with the system and how they get it done. The relationship between goals and tasks is discussed further in Chapter 3.

Some researchers have looked to SharedPlans as a specification of how the product should think (Rich et al., 2001), and others have used SharedPlans as a guide for the design of the product implementation (Babaian et al., 2002; Ortiz and Grosz, 2002). Such efforts take what Terveen (1995) called the Human Emulation approach, which “looks first to develop formal models of human-human collaboration, usually focusing on collaboration in language, then to apply the models to human-computer collaboration.” The general assumption in this approach is that designing interaction based on familiar rules of collaborative discourse will lead to products that are easier to learn and use. Moreover, it is easier to generalize design principles when working with formal models such as SharedPlans. As Grosz and Shieber (1997) say of their work: “Our research aims to provide the scientific and technological base for a new paradigm, one that enables the principled design of multi-modal dialogue-supporting interfaces.” This same goal is echoed throughout the SharedPlans literature. However, there has been little examination of how to use SharedPlans to guide practical interaction design. There is a gap between existing design tools and techniques, and constructing agents that can think in terms of SharedPlans. There are also few existing examples of SharedPlans-compatible products to evaluate. What should a SharedPlans-compatible agent be designed to do?

1.4 Supporting person-product collaboration: From ambiguity to productive uncertainty

To summarize the above discussion, current designs of programmable appliance interfaces (PAIs) can present people with significant challenges when using the appliance towards achieving their goals. The technology exists to make products that can collaborate with people on making plans to overcome these usage challenges, but little is known about how to actually design such interfaces.

The design challenge explored in this thesis is how to design programmable appliance interfaces (PAIs) that support useful, usable and enjoyable person-product collaboration. In particular, this thesis explores how to design a PAI to facilitate the person and product in establishing, developing and carrying out plans necessary for achieving commonly held goals.

The central design challenge explored in this thesis is how to design interfaces for programmable appliances to support productive person-product planning.

The design work discussed in this thesis is predicated on the Help Me Help You² principle:

We use products to help us get things done better - we could eat with our hands but a fork is less messy. The phrase 'providing help' in interface design has historically meant an expert machine tutoring someone on the right way to use the machine or explaining what something in the interface means. The phrase 'help me help you' implies more: a collaborative dialogue of near-equals, where each participant has to make sure all participants are on the same page, i.e. that they have a common commitment to achieving shared goals, using mutually agreed-upon plans, given group beliefs (for a rigorous definition of these notions, see Grosz and Kraus, 1996). The HMHY principle states that, in order to help the product better help the user, a product interface must support and persuade the user to clearly state what it is they want.

Grosz and Shieber (1997) echo this view. They describe traditional interface design as encouraging a master-slave relationship, and then go on to state: “Little attention has been given to the middle ground, or semi-automatic systems, in which the interface would be a means for the computer and its user to work together on solving some problem.” In this way, the product interface provides a medium of exchange to support the person and product in developing, sharing and carrying out plans to get things done, together.

Over the past decade there have been a number of attempts to develop interfaces that adapt to the user’s current task, to help the user figure out what to do next. Efforts such as (Kuhme et al., 1993; Puerta and Eisenstein, 1999; Szekely et al., 1996; Johnson et al., 1995) provide model-based engineering approaches for developing task-adaptive assistance. However, these efforts do not build from existing theory of communication, and have little support for ongoing collaborative person-product planning. Moreover, they offer tools, but little concrete guidance for interface or interaction design. In a similar vein, most dialogue design research, such as (Walker, 2000; Hansen et al., 1996; Yankelovich, 1996; Gamm and Haeb-Umbach, 1995), offers good advice for designing understandable system prompts, but not within the framework of a theory of collaborative planning. While there is much known on how to support effective human-human collaboration (e.g. de Vreede, 1997; DeKoven, 1990), such efforts are carried out without reference to a specific theory of collaborative planning, and the results of such efforts have not been directly applied to person-product collaboration.

² This phrase has been previously used in at least one movie (“Jerry Maguire”, Tristar Pictures, 1996), and at least one song (“Help Me Help You”, Holly Valance, 2002).

Rather than tackling the engineering and theoretical challenges that lie ahead for building effective HMHY-compliant collaborative products, this thesis is concerned with developing practical methods and design insights based on an established model of collaborative dialogue. Specifically, the research question addressed in this thesis is how to design interfaces to support person-product planning dialogues, in terms of the SharedPlans model of collaboration.

In addition to the discussion in the preceding section about the strengths of SharedPlans, there are two main reasons why SharedPlans is a good basis for design research. First, SharedPlans theory models collaborative dialogue in terms of a hierarchy of discourse goals (see Figure 4). Designers have been talking about hierarchical task modeling for years (e.g. Annett and Duncan, 1967; Shepherd, 2001). While goals and tasks are not exactly the same³, the opportunity for synergy is significant. Second, SharedPlans theory has recently become a practical foundation for interaction design studies via the Collagen (Rich, Sidner and Lesh, 2001) middleware for making collaborative agents.

³ See Chapter 3 for more discussion on the relationship of tasks and goals.

The research question addressed in this thesis is how to design interfaces to support person-product planning dialogues, in terms of the SharedPlans model of collaboration.


From a design perspective, one strength of Collagen is that it manages person-product interaction, but does not decide the actions the agent should take next. While Collagen does provide some default agent behaviors, the intention of the Collagen creators is that these behaviors should be overridden based on additional decision-making routines appropriate to domain-specific or user-specific information. Collagen thus provides for principled interaction design, without dictating how the interaction needs to be designed.

Another strength of Collagen is its marriage of plan-based discourse theory with a robust plan recognition algorithm. Without plan recognition, the product always has to be told what the user is now doing. With planning capabilities, the product can try to figure it out for itself. Plan-based agents are software entities whose behavior is based on task models and the ability to determine next actions based on those models. A plan-based agent using keyhole plan recognition compares observed user actions to a task model to figure out the user’s current intentions, even without explicit confirmation from the user. Based on its interpretation of the purpose of the user’s action, it updates its internal picture of what the user is doing, called the discourse (or task) state. Through a process of plan generation, it can then determine efficient plans for achieving the user’s goals, and offer the user task-specific advice, such as things the user may want to do or say next. As discussed in (Lesh, Rich and Sidner, 1999), plan recognition is intractable in the general case. Plan recognition in Collagen is robust in that it uses the collaborative focus of attention and clarification questions to limit the search process.⁴ Moreover, as they say, “Unlike the ‘classical’ definition of plan recognition, which requires reasoning over complete and correct plans, our recognizer is only required to incrementally extend a given plan.” This latter aspect is crucial to enabling synergy with the task models in current design approaches, which are generally not complete and correct in the sense of specifying every possible task situation (i.e. what are known as ‘full causal models’). While other task model-based agent development platforms exist (e.g. Szekely et al., 1996; Puerta and Eisenstein, 1999), Collagen is the only one built upon a theory of collaborative dialogue, designed to be a reusable component of an agent’s intelligence.

⁴ This works because of the assumption in SharedPlans that collaborators want their actions to be observed and their intentions to be recognized, to limit the amount of communicative effort required to maintain the collaboration.

Collagen was developed by Charles Rich, Candace Sidner and Neal Lesh of the Mitsubishi Electric Research Labs (MERL), located in Cambridge, MA, USA. Collagen (Rich et al., 2001) is Java middleware for making collaborative interface agents, partially implementing SharedPlans theory. It is the only existing collaborative discourse manager designed as a reusable component, to enhance an existing application. Collagen relies on the theory of collaborative discourse structure proposed by Grosz and Sidner (1986, 1990), and on the artificial language for collaborative negotiation (Sidner, 1994), which is used to code dialogue moves in Collagen. Collagen-enabled agents have the ability to reason over incomplete plans by taking advantage of features of collaborative planning, such as the focus of attention (Lesh et al., 2001).
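To give a feel for the keyhole plan recognition just described, the sketch below matches a single observed interface action against a toy task model for a thermostat and returns every goal that the action could be extending. It is only a naive illustration of relating observed actions to a task model; it is not the Collagen recognizer or its API, and the task model, goal names and method names are all invented.

```java
// Naive illustration of keyhole plan recognition: match an observed action
// against the steps of a small task model and return every goal the action
// could be contributing to. Illustrative only.
import java.util.ArrayList;
import java.util.List;

public class KeyholeRecognitionSketch {

    /** A goal and the primitive interface actions that can contribute to it. */
    record Goal(String name, List<String> contributingActions) { }

    // A toy task model for a programmable thermostat.
    static final List<Goal> TASK_MODEL = List.of(
            new Goal("be comfortable now", List.of("turn dial down", "turn dial up")),
            new Goal("save energy", List.of("turn dial down", "program night setback")),
            new Goal("program weekly schedule", List.of("program night setback", "pick day")));

    /** Return every goal in the model that the observed action could extend. */
    static List<Goal> recognize(String observedAction) {
        List<Goal> candidates = new ArrayList<>();
        for (Goal goal : TASK_MODEL) {
            if (goal.contributingActions().contains(observedAction)) {
                candidates.add(goal);
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        // The user turns the heat down; the action fits more than one goal.
        for (Goal goal : recognize("turn dial down")) {
            System.out.println("Could be contributing to: " + goal.name());
        }
    }
}
```

Run on the action 'turn dial down', the sketch returns two candidate goals, which is exactly the kind of uncertainty the next paragraphs turn to.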

Unlike interaction designs resulting from conventional user-centered design approaches, interaction designs resulting from plan-based approaches are able to incorporate an arbitrary number of goals and similar paths to achieving the same goal. Plan-based agents can consider different paths when the user does something inconsistent with the current plan to find other possible interpretations, and in this way be flexible with respect to shifts in the user’s goals. However, using keyhole plan recognition alone (even under the assumption that the collaborators want their intentions to be recognized) is insufficient to precisely pinpoint the user’s intentions. For example, just because I start a document with a greeting (e.g. ‘Dear Julie’) does not have to mean that I am writing a letter. The product can rarely be completely sure of its interpretations. Through plan recognition, a product might need to handle two kinds of task ambiguity⁵ (see the sketch after this list):

o Up-Ambiguity: An action is performed that can contribute to multiple user goals, i.e. there is ambiguity about the intentions that motivated the observed action. For example, when you turn down the heat in the house, how could your thermostat know whether you were uncomfortable, or trying to save energy?

o Down-Ambiguity: Given a single interpretation of an observed action, there could be many appropriate subsequent actions. That is, there is ambiguity (uncertainty) about what to do next. For example, with a programmable thermostat, there could be many ways to save energy while maintaining comfort, such as turning the heat up or down in only certain rooms or at certain times or days of the week.
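Faced with several candidate interpretations (up-ambiguity) and several reasonable continuations (down-ambiguity), an agent has to choose between guessing, asking and waiting. The sketch below writes one such decision rule down in the simplest possible form; the thresholds and the three-way outcome are assumptions made purely for illustration, not a policy proposed in this thesis or implemented in Collagen or any other system mentioned here.

```java
// Toy decision rule for the two kinds of task ambiguity described above:
// guess when one interpretation stands out, ask when a short question would
// settle it, and otherwise wait for more evidence. Thresholds are arbitrary.
import java.util.List;

public class AmbiguityPolicySketch {

    enum Decision { GUESS, ASK, WAIT }

    static Decision decide(List<String> interpretations, List<String> nextSteps) {
        if (interpretations.size() == 1 && nextSteps.size() <= 2) {
            return Decision.GUESS;  // little up- or down-ambiguity: act on it
        }
        if (interpretations.size() <= 3) {
            return Decision.ASK;    // a short clarification question is cheap
        }
        return Decision.WAIT;       // too unclear: watch what the user does next
    }

    public static void main(String[] args) {
        // The user turned the heat down: comfort, or saving energy? (up-ambiguity)
        List<String> interpretations = List.of("be comfortable now", "save energy");
        // Either way, several continuations remain possible. (down-ambiguity)
        List<String> nextSteps =
                List.of("lower one room", "lower the whole house", "program a night setback");

        System.out.println(decide(interpretations, nextSteps));  // prints ASK
    }
}
```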

Incorrectly interpreting the user’s intentions can lead to significant user frustration. As Kuhme et al. (1993) say: “the provision of guidance has to be designed very carefully since wrong assumptions about the appropriateness of items can cause fatal problems. Obviously, misleading the user would be even worse than no guidance at all.” In any case of ambiguity, an interactive agent must decide whether to choose (i.e. guess) one ‘best’ interpretation, ask the user for more information, or simply wait until it is more sure of what the user actually wants.

⁵ Neal Lesh first brought the distinction between these two types of ambiguity to my attention in personal communications.

Lesh, Rich and Sidner (2001) discuss the tradeoffs between guessing the user’s intentions and asking the user what they want to do. In particular, they note that:

“The right balance [between guessing and asking] depends, in part, on how often typical users unexpectedly shift their task focus and how often this intention is verbally communicated to the agent.”
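One simple way to strike such a balance is a threshold-based policy over the recognizer’s confidence in its candidate interpretations. The sketch below is purely illustrative: the names and thresholds are hypothetical placeholders, not a method proposed by Lesh, Rich and Sidner, and in practice the thresholds would have to be tuned to exactly the factors they mention.

    // Hypothetical sketch of a guess/ask/wait policy over the two best
    // scoring interpretations. The numeric thresholds are placeholders.
    class InterpretationPolicy {

        enum Decision { GUESS_BEST, ASK_USER, WAIT }

        static Decision decide(double bestScore, double runnerUpScore,
                               boolean userOpenToInterruption) {
            double margin = bestScore - runnerUpScore;
            if (bestScore > 0.9 && margin > 0.3) {
                return Decision.GUESS_BEST;   // confident enough to act on the best guess
            }
            if (userOpenToInterruption && margin < 0.1) {
                return Decision.ASK_USER;     // too close to call; ask a clarifying question
            }
            return Decision.WAIT;             // keep observing and remain available
        }
    }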

People need to make a commitment to their own success by making their goals as explicit as possible to their products, and by making sure the product is thinking about and doing the right thing. At the very least, people and their products need to explain to each other what they are thinking about, what they are trying to do, and where they are confused. People have to try to work with their products.

The product interface is, by definition, the only way to mediate this dialogue, and to provide a shared frame of reference concerning current goals, plans and next steps. As Shneiderman (1998) writes:

“Agent promoters believe that the computer can ascertain the users’ intentions automatically, or can take action based on a vague statement of goals. I doubt that user intentions are so easily determined or that vague statements are usually effective. However, if users can specify what they want with comprehensible actions selected from a visual display, then they can often and rapidly accomplish their goals while preserving their sense of control and accomplishment.” (p. 213)

In other words, interfaces (graphical or physical, or any perceptible interaction modality) can facilitate person-product collaboration by giving people cues and clues about how to express their goals to products, and which ones to express at any given time. It is a design challenge to determine the set of ‘comprehensible actions’, and to figure out how to present them to people in a useful and usable interface.

Consider, for example, the pictures in Figure 5. These examples demonstrate a number of current trends in designing interfaces for supporting person-product communication. The similarity among these efforts is the use of an adaptive list or menu of statements the user could make next. They differ in their approach to generating, presenting and organizing the set. Some provide a flat list of commands (e.g. the ‘What can I say’ list in ViaVoice, IBM Corp., 2003), others adapt to user models and task models (e.g. Kuhme et al., 1993), and others to the current discourse state (e.g. Sidner and Forlines, 2002).

Figure 5: Three examples of graphical interfaces designed to support person-product collaboration, even when the product is not sure about what the person wants to do next. Clockwise from upper left: Paperclip agent from Microsoft Office XP, Thermy thermostat described in Appendix A, and part of a VCR interface described in (Sidner and Forlines, 2002).

In these examples, the product is aware of more than one useful or appropriate course of action. By visualizing the agent’s ‘confusion’, such interfaces turn ambiguity into opportunity, what could be called productive uncertainty. Such conversational back channels can support person-product communication even at times when the user would not want to be interrupted with a question. The agent is not forced to guess or ask. At the same time, the user can “specify what they want with comprehensible actions selected from a visual display.”
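A minimal sketch of how such a back channel could be generated from the agent’s candidate interpretations is given below. The names are hypothetical and do not correspond to the API of any of the systems shown in Figure 5; the point is only that the agent can rank and expose a few of its alternatives instead of being forced to guess or ask.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch: turning candidate interpretations into a
    // short, ranked "things you could say or do next" list.
    class NextStepsMenu {

        record Candidate(String utterance, double relevance) {}

        // Keep only the few most relevant items so the list stays readable.
        static List<String> build(List<Candidate> candidates, int maxItems) {
            List<Candidate> sorted = new ArrayList<>(candidates);
            sorted.sort(Comparator.comparingDouble(Candidate::relevance).reversed());
            List<String> menu = new ArrayList<>();
            for (Candidate c : sorted.subList(0, Math.min(maxItems, sorted.size()))) {
                menu.add(c.utterance());
            }
            return menu;
        }
    }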

The design research in this thesis centers on exploring Help Me Help You (HMHY) interfaces, to obtain more examples and insights on the usefulness and usability of such interaction styles. Textual lists such as those in Figure 5 require people to be able to read, present an additional information load on the user, and still may not cover what the user wants to do or say. As with any words in the interface, the elements of these lists need to be carefully designed to correspond with the way people would naturally express them. The designer needs to know what kinds of things the user expects the system to handle, what the user would really want to do, and how the user would expect or want to discuss these things with a product. Designing such interfaces requires some research.

1.5 Thesis overview

Figure 6: Primary stages of task-centered iterative design, and the foci of this thesis.

Clearly, the interfaces shown in Figure 5 may not be appropriate for every programmable product, every task, or every user. Moreover, making products more intelligent carries the potential problem of engendering high expectations in the product user. As intimated by Norman (1997) (see quotation above), it is reasonable to assume that uncertainty will beget uncertainty, that is, there will be some interaction effects between supporting ambiguity and the user’s expectations for such support. In particular, the more capable the product is in terms of understanding what the user wants to do (even if the user doesn’t know), the more the user may come to expect from the product, in terms of both functionality and communication capabilities. The product may become less usable or desirable due to an increased mismatch between user expectations and product ability. In order to explore the efficacy of the HMHY approach, there is a need for:

1. exploration of novel designs for supporting person-product collaboration,

2. tools and techniques for rapid implementation of design prototypes,

3. identification of critical measures of design success, and

4. an iterative, user-centered approach to designing, building and testing HMHY interfaces.


The rest of the thesis is structured around these four issues, towards developing practical insight into designing programmable appliance interfaces that support person-product collaboration.

1.5.1 Design

One of the aims in this thesis is to discuss different interface designs in terms of collaborative discourse theory. Towards this end, the relationship between SharedPlans discourse theory and the design goals in this thesis is discussed in more detail in Chapter 2. Three tiers of collaborative dialogue support are established and exemplified through the design of a multimodal, programmable home thermostat. The chapter identifies key issues for design of HMHY interfaces, and proposes an initial set of rules for generating HMHY interfaces from task models based on Collagen’s implementation of SharedPlans theory. The results of an initial usability study demonstrate the usefulness of the three-tier approach for discussing and analyzing the design of collaborative dialogue support.

Chapter 5 describes a test of the rules proposed in Chapter 2. The study examines the degree to which the resulting interface supports people in using the product well, and how they feel about using it. The study also discusses the utility of supporting people in making otherwise unexpected focus shifts, graphically and through speech. A number of usability and usefulness tradeoffs are identified. In particular, participants often seem to ignore the product’s suggestions, even when these offer the most efficient way to get a task done. Results from a small follow-up eye-tracking study further demonstrate that small graphical changes can increase the amount of attention people pay to the suggestions. This chapter thus demonstrates the need for further research in the design of HMHY interfaces.

1.5.2 Evaluate

In order to design good interfaces, it is necessary to have a way of establishing the quality of a particular design. Chapter 3 gives a technique for evaluation of the success of an interface in terms of supporting person-product collaboration. The technique is based on comparing the embedded task model to the observed interaction history. This provides a means of identifying critical points in the dialogue, at which either the task model is wrong, or the dialogue requires further redesign. The application of the measures is demonstrated in a study using an interface similar to current programmable thermostats, i.e. without explicit support for planning dialogues, comparing the same basic interface with and without speech recognition.
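As a rough illustration of the kind of comparison involved (worked out properly in Chapter 3), the hypothetical sketch below walks through an observed interaction history and flags the events that the embedded task model did not expect; all names here are illustrative only.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: flag events in the interaction history where
    // the observed action matches nothing the task model expected.
    class CriticalPointFinder {

        interface TaskModel {
            // Actions the model considers valid in the given dialogue state.
            List<String> expectedActions(String dialogueState);
        }

        record Event(String dialogueState, String observedAction) {}

        static List<Event> findCriticalPoints(TaskModel model, List<Event> history) {
            List<Event> critical = new ArrayList<>();
            for (Event e : history) {
                if (!model.expectedActions(e.dialogueState()).contains(e.observedAction())) {
                    // Either the task model is wrong here, or the dialogue
                    // design needs revisiting at this point.
                    critical.add(e);
                }
            }
            return critical;
        }
    }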

1.5.3 Build

As mentioned above, designing HMHY interfaces requires iterative user-centered design and testing. Current technologies for developing this type of interaction can be difficult for designers to use, and are thus not well suited for the earlier stages of design. To support design exploration, there is a need for tools that allow rapid prototyping of different design ideas. At the same time, interaction design efforts should be easy to translate into more powerful collaborative agent implementation platforms, such as Collagen. To address this gap, Chapter 4 describes a new platform, tool and technique supporting participatory design of collaborative interaction, demonstrates the technique in a small design study, and discusses how the design results can be translated into Collagen. Results of this design study were used in the study reported in Chapter 5.

1.5.4 Iterate

Chapters 2 through 5 present theoretical, technical and empirical results towards the design research goals of this thesis. Chapter 6 combines the work of the rest of the thesis, and describes an iterative, participatory approach for designing product interfaces that support person-product collaboration. The approach collages existing design methods and tools towards sharing task knowledge at different product design and development stages.

The thesis concludes with a review of its contributions, and suggests some important issues for future design research.


2.1 INTRODUCTION
2.2 THREE TIERS OF SUPPORT FOR COLLABORATIVE PLANNING
2.3 THE THREE TIERS IN THERMY
2.4 DESIGN CONCERNS FOR THE THREE TIERS IN THE SENSAY
2.5 RELATED WORK

2 SUPPORTING PERSON-PRODUCT COLLABORATION VIA THREE TIERS OF INTERACTION

Abstract

This chapter presents a pragmatic model towards systematic design and evaluation of programmable household appliances, in terms of collaborative dialogue theory. In this way, this chapter provides the conceptual machinery to support the design and research goals of this thesis. The model consists of three tiers of communicative actions: (a) primitive task actions and objects, such as low-level button presses and identifying user preferences, (b) task navigation, for communicating about higher-level user goals and processes, and (c) task control and dialogue control, for managing the direction and flow of the collaboration between the user and the product. In a usability study involving an initial prototype design based on the three tiers, age effects were found in relation to how each of the tiers was utilized. In particular, older users tended to utilize the higher-level dialogue support effectively and judged it favorably, but used task control and dialogue control in unexpected ways. In contrast, younger users tended to be facile with all three tiers, and often preferred using lower-level interaction styles.

The chapter also outlines the challenges involved in designing for the three tiers. Central to the approach taken here is a modular task navigator, called the Some Things to Say (SenSay) menu, which simultaneously provides the user with feedback about what the system thinks the user is doing, and feedforward about some useful things to say or do next. The word some in the SenSay name indicates that not every possibly useful item can be usably contained in the list; it is a design decision as to what to include and exclude. The three-tier model and the SenSay design issues raised in this chapter serve as the basis for the research presented in this thesis.

2.1 Introduction

People communicate when they work together to get something done. As stated in (Rich, Sidner and Lesh, 2001):


“Participants in a collaboration derive benefit by pooling their talents and resources to achieve common goals. However, collaboration also has its costs. When people collaborate, they must usually communicate and expend mental effort to ensure that their actions are coordinated.”

This chapter presents an approach to reducing the ‘costs’ of communicating to products, by designing interfaces to support person-product collaboration, based on collaborative discourse theory. The assumption here is that, in general, interaction design based on familiar rules and patterns of collaborative dialogue, as modeled in collaborative discourse theory, should be easier and more pleasant to use than interaction designs resulting from an ad-hoc approach. The design approach discussed in this chapter is a first step towards answering the research question of this thesis, namely to develop a design approach for programmable household appliances based on collaborative discourse theory.

As discussed in Chapter 1, current programmable products can do a lot, and, as a result of current interface designs, there is a lot that people might need to say or do to them to reach their goals. However, it can be difficult for people to i) understand what the product can do and how to get the product to do those things, ii) figure out possible plans to achieve a particular goal with the product, and then iii) choose one thing to say or do next (see Figure 1)6. All interface design efforts are concerned with helping people with step (i). In addition, an interface designed to support person-product collaboration should also help people with step (ii), i.e. making and changing their plans to achieve their goals with the product, as well as with step (iii), i.e. choosing the next step that best completes those plans.

6 Compare with the first three of the ‘Seven stages of action’ in (Norman, 1988, p. 53).

Figure 1: Figuring out how to get something done when using a programmable appliance involves figuring out the best way to use the product interface to do what we want to do. This involves planning and making choices.
