Safeguarding user privacy in the context of personal digital assistants


Robin Cohen
School of Computer Science
University of Waterloo
Waterloo, Ontario, Canada N2L 3G1
rcohen@uwaterloo.ca

Abstract

In this paper, we discuss the need to safeguard the privacy of information about users in the context of intelligent agents: personal digital assistants designed by artificial intelligence researchers to carry out personalized tasks on behalf of their users. In environments where user models are constructed and employed in order to provide individualized service, there are opportunities to better inform users of the information that has been modeled and is being used when assistance is provided. One challenge, however, is to construct interaction with users that is not overly bothersome and does not interrupt the important processing that the agents are carrying out on behalf of the users. We comment on how users can be better informed about the use of their private information, with a strategy that accounts for minimizing the degree of interruption to the users. We conclude with some comments on the growing proliferation of personal agents and multi-agent communities, proposing greater focus on considerations of privacy by both computer science researchers and users.

Introduction

Intelligent agents are being designed within artificial intelligence to provide personalized services to users, for applications as diverse as assisting with the processing of e-mail (Litman et al. 98; Fleming 98; Maes and Kozierok 93), meeting scheduling (Cesta and D'Aloisi 98; Tambe et al. 02), finding websites of interest (Lieberman 95), recommending products for purchase (Burke 99) and purchasing in electronic marketplaces (Tran and Cohen 02; Kephart 00). These agents make use of information about the user's preferences, goals and needs in order to deliver customized service. Users typically provide this information through questionnaires or interviews before the agent is launched (external acquisition of user models, as in Kass and Finin 88), though information may also be inferred on the basis of actions taken by the user (internal acquisition of user models, as in Kass and Finin 88). For example, in order to recommend websites of possible interest to a user, Lieberman (Lieberman 95) models websites previously visited by the user.

The question we are interested in exploring is: are users aware of the extent to which their privacy is being compromised by these intelligent agents? Are there ways of ensuring that users are better informed of any use of private information, in a way that still ensures the smooth operation of the assisting agents?

We will propose that interactions with users to inform them of privacy considerations take into account a model of the current dialogue with the user, to ensure that users are not bothered unduly. In doing so, we aim to promote better communication with users regarding the release of private information.

Considerations of Bother and Coherent Interaction

In our work on designing more interactive interface agents (Fleming and Cohen 99), we comment on the need for intelligent agents to be sensitive to the degree of bother imposed on their users. A bother factor is calculated and used to determine whether or not to interact with a user when providing the requested digital assistance. Although the user initially sets the factor to reflect the degree to which he or she is willing to be bothered during the agent's processing, the user may adjust the factor once interactions have taken place. In addition, the agent may determine that the user is less tolerant of interruption if several interactions have already taken place.

Our current work attempts to characterize the circumstances under which an autonomous system will decide to initiate interaction with a user. The main principle is to interact only if the benefits of interaction outweigh the costs (Fleming and Cohen 01). Bothering the user is one of the possible costs of interacting, so the user's willingness to interact must be represented as part of the user model. We are investigating a formula that adjusts bother over time, so that recent interruptions carry more weight than interruptions in the distant past when determining how bothersome it would be to interact with the user at this point in time. In essence, a calculation of the bother factor for reasoning about the costs of interaction makes use not only of a predetermined willingness to interact indicated by the user, but also of an evaluation of how bothersome the dialogue with the user has been thus far.
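For illustration only, the following Python sketch shows one way such a calculation could be realized. The exponential decay, the half-life parameter, the cost function and the names (BotherModel, should_interact) are assumptions of the sketch, not the formula under investigation in our work.

import time

class BotherModel:
    """Illustrative sketch of a bother factor with time-decayed interruptions.

    The formula is assumed, not taken from the paper: each past interruption
    contributes less as it ages, following an exponential decay.
    """

    def __init__(self, base_willingness, half_life_seconds=3600.0):
        # base_willingness: user-specified tolerance for interruption, in [0, 1]
        self.base_willingness = base_willingness
        self.half_life = half_life_seconds
        self.interruptions = []  # timestamps of past interruptions

    def record_interruption(self, timestamp=None):
        self.interruptions.append(time.time() if timestamp is None else timestamp)

    def current_bother(self, now=None):
        # Accumulated bother: recent interruptions weigh more than older ones.
        now = time.time() if now is None else now
        return sum(0.5 ** ((now - t) / self.half_life) for t in self.interruptions)

    def cost_of_interacting(self, now=None):
        # Less willing users incur a higher cost per unit of accumulated bother.
        return (1.0 - self.base_willingness) * (1.0 + self.current_bother(now))

def should_interact(expected_benefit, bother_model):
    # Interact only when the expected benefit outweighs the bother cost.
    return expected_benefit > bother_model.cost_of_interacting()

In this sketch, an agent would call record_interruption each time it initiates dialogue and consult should_interact before the next interruption, so that a burst of recent interactions raises the cost of interacting again.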

Towards Unobtrusive Interactions with Users Regarding Privacy

With intelligent agents reasoning about not bothering their users unduly, what can be done to adjust these agents to better inform their users about challenges to the privacy of their personal information? Certainly every agent will know the information it uses in order to make decisions about its problem solving. In addition, the agent will know whether this information is in fact an indication of the user's personal preferences, as this is typically represented within a user model in a class labelled as user preferences or user goals (Kobsa and Wahlster 89).

There appear to be two main choices for what to do with this information, if users are to be informed. The first suggestion follows a proposal included in (Fleming 98): to incorporate an additional window at the bottom of the screen in order to provide output for the user, possibly accompanied by an auditory signal. While the main screen would continue to provide the output from the problem solving for the user, the window at the bottom would appear somewhat unobtrusively. The user would also have the option to block the additional window from the display.

A second option would be to inform the user of information that is about to be used, before it is used, to allow any override to take place earlier. In both cases, there are various options for the information to be provided to the user. A generic message indicating that user modeling information is about to be employed could be sent; this would help to further protect the specific information, since it would not be broadcast again through the network. The general category of information could also be described. For example, a message may indicate, "Your general buying preferences, as indicated by your buying preference survey, are being used now" or "Your buying preferences are being transmitted to the seller" or "Your buying preferences are being shared with other buyers in the marketplace".

How do these two options compare? The second method provides greater privacy protection, but results in a delay in performing tasks for users. The first offers less protection but allows agents to perform more autonomously. Both of them put the onus on the user to address any concerns; the default action will be to continue processing, making use of the personal information.

Currently, many websites do post warnings to users about the security of the information they have just provided, asking users to confirm that it is acceptable for this information to be used. This warning, however, does not typically identify what information is being used or to whom that information may be distributed. A useful extension to the "bottom window" plan outlined above is to indicate clearly what information is being used, and where.
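To make this concrete, the following minimal Python sketch shows one way a notice of this kind could be composed and delivered under either option. The names (PrivacyNotice, Disposition, handle_use_of_personal_info) and the use of console output as a stand-in for the bottom window are illustrative assumptions of the sketch, not an existing implementation.

from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    NOTIFY_AFTER = "notify"      # option 1: unobtrusive notice while processing continues
    CONFIRM_BEFORE = "confirm"   # option 2: ask before the information is used

@dataclass
class PrivacyNotice:
    category: str    # general category only, e.g. "buying preferences"
    recipient: str   # where the information is going, e.g. "the seller"
    purpose: str     # why it is being used

    def render(self):
        # Only the category and destination are shown; the specific modeled
        # values are never re-broadcast through the network.
        return (f"Your {self.category} are being shared with {self.recipient} "
                f"in order to {self.purpose}.")

def handle_use_of_personal_info(notice, disposition, window_enabled=True, ask_user=None):
    # Sketch of the two options discussed above (all names are illustrative).
    if disposition is Disposition.CONFIRM_BEFORE:
        # Default action is to proceed if the user does not respond.
        return ask_user(notice.render()) if ask_user else True
    if window_enabled:  # the user may block the bottom window entirely
        print("[privacy] " + notice.render())
    return True

Note that in this sketch only the category and destination of the information are rendered, which is consistent with the goal of informing the user about what information is being used where, without exposing the information itself once more.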

The Need for Privacy Communication in the Context of Intelligent Agents

We envisage that it will be important for users to be provided with this kind of additional information in busy environments where personal information provided long ago may be used in several possible capacities. For example, user buying preferences may be used and reused in several electronic marketplace auctions (Sandholm and Suri 00) and user meeting preferences may be constantly reused in group scheduling activities (Tambe et al. 02).

When a user adopts a personal buying agent, for the purposes of receiving advice about purchases (or even to allow purchases to be made on his or her behalf), it may be unclear whether the agent is then authorized to release this information to third parties, in an effort to optimize its search for products for the user. As mentioned, some buying agent strategies rely on advice from neighbourhoods of similar users (e.g. Sung and Yuan 01). In addition, recommender systems that rely on collaborative filtering in order to suggest products are most definitely designed to make use of information about the purchases made by other users (Burke 99). Although these systems do not need to reveal the identity of the users whose behaviour is employed to provide advice for the current customer (through a kind of similarity matching), it is still the case that actions taken by these users will generally be conveyed to other users. One can imagine that in a very small community, it may then be possible for some users to unwittingly discern information about others. Thus, when a user fills out a questionnaire or even simply makes a purchase in an electronic commerce setting, it may be the case that this information is no longer private.
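To illustrate the point, the following small Python sketch shows an identity-free, neighbourhood-based recommendation of the sort described above. The function name, the purchase-set representation, and the minimum-support threshold (one possible mitigation against leakage in small communities) are our own illustrative assumptions, not the mechanism of any cited system.

from collections import Counter

def recommend_from_neighbours(target_history, other_histories, k=3, min_support=2):
    """Illustrative sketch of identity-free collaborative filtering.

    other_histories is a list of purchase sets belonging to anonymous users;
    only their overlap with the target user's history is used, so no
    identities need to be revealed. min_support suppresses recommendations
    backed by very few neighbours, which could otherwise let a user in a
    small community infer another individual's purchases.
    """
    # Rank anonymous users by overlap with the target user's purchases.
    scored = sorted(other_histories,
                    key=lambda history: len(history & target_history),
                    reverse=True)
    neighbours = scored[:k]

    # Count candidate items across neighbours, excluding items already owned.
    counts = Counter(item for history in neighbours
                     for item in history - target_history)
    return [item for item, n in counts.most_common() if n >= min_support]

Even in this anonymized form, the returned recommendations are derived from other users' purchases, which is precisely why a user's actions in such a marketplace may no longer be private in any strong sense.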

Recommendations for the Future

Our position is that users with personal digital assistants need to be better informed of possible compromises to the privacy of their personal information. We see this being increasingly important with the move towards multi-agent systems and collaborative information sharing in societies populated by user agents.

We also believe that in spite of the fact that users want agents to act autonomously, and therefore not to interact unduly, it is possible to provide for communication from the agent to the user in a way that does not generate excessive bother for the user. This is accomplished first of all by specifying a bother factor, to model the willingness of the user to be interrupted. In addition, once it is determined that communication with the user may be beneficial, it is possible to channel that interaction into an unobtrusive part of the display and to allow the user to control whether that information continues to be displayed or not.

We have also briefly discussed some strategies for informing users about the use of their private information without broadcasting once more the details of the private information that is currently being used.

As such, we claim that one important concern in the challenge to protect the privacy of users' personal information is determining when interaction with users should take place, within the more general context of problem solving for these users. This consideration should be addressed in conjunction with decisions about what information to present and how to present it. We believe that careful consideration of the design of effective interactions with users will allow for useful and important conversations with these users regarding privacy.

References

Burke, R.; "Integrating knowledge-based and collaborative filtering recommender systems"; Proceedings of AAAI workshop on AI in electronic commerce; 1999.

Cesta, A. and D'Aloisi, D.; "Mixed-initiative issues in an agent-based meeting scheduler"; User Modeling and User-Adapted Interaction, Vol. 9, No. 1-2; 1998.

Fleming, M.; "Designing more interactive interface agents"; M.Math thesis, Computer Science, University of Waterloo; 1998.

Fleming, M. and Cohen, R.; "User modeling and the design of more interactive interfaces"; Proceedings of User Modeling 99; 1999.

Fleming, M. and Cohen, R.; "A user modeling approach to determining system initiative in mixed-initiative AI systems"; Proceedings of User Modeling Conference 2001; 2001.

Kass, R. and Finin, T.; "Modeling the user in natural language systems"; Computational Linguistics Vol.14 No.3; 1988.

Kephart, J.; "Economic incentives for information agents"; Proceedings of Cooperative Information Systems Workshop IV; 2000.

Kobsa, A. and Wahlster, W.; "User modeling in dialog systems", in User Models in Dialog Systems, A. Kobsa and W. Wahlster, eds.; Springer-Verlag; 1989.

Lieberman, H.; "Letizia: an agent that assists in web browsing"; Proceedings of IJCAI95; 1995.

Litman, D., Pan, S. and Walker, M.; "Evaluating response strategies in a web-based spoken dialogue agent"; Proceedings of the Association for Computational Linguistics Conference; 1998.

Maes, P. and Kozierok, R.; "Learning interface agents"; in Proceedings of AAAI93; 1993.

Sandholm, T. and Suri, S.; "Improved algorithms for optimal winner determination in combinatorial auctions and generalizations"; Proceedings of AAAI00; 2000.

Sung, H. and Yuan, S.; "A learning-enabled integrative trust model for e-markets"; Proceedings of Agents01 workshop on deception, fraud and trust in agent systems; 2001.

Tambe, M., Scerri, P. and Pynadath, D.; "Adjustable autonomy: from theory to implementation"; Proceedings of AAAI02 Workshop on Autonomy, Delegation and Control: from Inter-agent to Groups; 2002.

Tran, T. and Cohen, R.; "A learning algorithm for buying and selling agents in electronic marketplaces"; Proceedings of AI02 conference; 2002.