
12 Caveats, Considerations, and the Way Ahead

Ten years ago, if you’d told me (Dorian) that technology would get involved in things as personal as mindfulness, happiness, and wellbeing, I would have covered my ears and said, “Make it go away.” In fact, that’s kind of what I did when Rafael first proposed the antecedents of this book. Wasn’t technology already part of the problem? Didn’t it already get in the way of happiness and intrude far too much into our humanity? People are losing touch with nature, with the tactile, the alive, the private, the unmediated, and they’re forgetting how to focus, introspect, or engage in reflection that isn’t subsequently performed as a status update. We seem to be increasingly disconnected from the present moment, from the raw reality of the physical world and from our friends and families because of technology. And now we want it to invade the few spaces we have left to keep us sane? Despite the fact that I’m far from a Luddite and have worked as a digital designer since the 1990s, I have to admit that these thoughts have often run through my head.

However, as time passed, I began to consider a few things. First, technology already was intruding into these areas, and if that intrusion was in fact detrimental, there should be a way of proving it and changing it. Second, perhaps it wasn’t technology per se, but the way we’d been designing and using it that was the problem, and perhaps we could do better.

In the past few years, we’ve had the privilege of coming across researchers and practitioners doing absolutely inspiring work. Some of them have had the incredible generosity to share their ideas with us for this book. Examples of technology that saves lives through education, cultivates compassion through role play, enriches lives by offering new forms of creation and meaning making, connects grandparents with grandchildren, allows those with disabilities to be empowered as never before, curbs anxiety, reduces depression, and opens the door to positive change (examples of deep connection, engagement, and meaning that were never possible before) have made me see that technology is not the problem any more than it is the solution. It can support wonderfully life-affirming things or perpetuate our own destructive habits, and which direction it goes will always be up to us: the people who design and consume it.

Nevertheless, we will always maintain that not everything in our lives should be mediated by technology. There will be times when the best thing for someone’s wellbeing will be to shut down her computer and go on a retreat, tend to some vegetables in the garden, play a game of real soccer on real grass, or have a get-together with friends without the presence of mobile phones. There will be times at work when, in order to focus on your writing, stimulate creative thinking, or connect meaningfully with the people around you, you will be better served by the absence of devices, by a pencil and paper, by hands and faces. We tend to get caught up in the idea that ubiquity and pervasiveness are inevitable and desirable, without stopping to question those assumptions. Maybe a vast increase in embedded technologies will make everyone happier, but can we assume that it will? A march toward ubiquity could just as plausibly infringe not only on our privacy, but also on our autonomy and wellbeing. In other words, is there a point at which it no longer makes sense to continue to add digital technology to everything? Do the benefits outweigh the costs, and are we appropriately measuring the costs?

As technologists, we have to learn how to stand by the radical notion that adding technology won’t always make experience better and that the answer to making something better won’t always be technology. Once we can do this, we can focus our efforts and resources on those places where it really can make a significant difference.

In this final chapter, we take a moment to honor the complexity of human experience and propose a humble, balanced approach to technology development. We attempt to “out” or shine an early spotlight on some of the various pitfalls and misuses that may quietly emerge as we seek to design for wellbeing. We have elected to bring attention to these issues not because we have clever solutions to them, but because they are open questions that we believe should not be neglected in our consideration, research, and ongoing debate moving forward. We close with a decidedly pragmatic look at a way forward for positive computing, both as a field of research and as an area of development that will help us to shape the future of our world.

Humans as Complex and Inconsistent Creatures
As technologists, we are keenly aware of how enthusiastic our community can be about technology intervention. When a new algorithm, technique, or design innovation is released, we have a tendency to want to apply it to anything and everything. We all have found ourselves asking how we might apply a newfound skill set, an approach we’re familiar with, or our current abilities to something new. With the best of intentions, we carry around a solution in search of a need. This will always be part of the way that technology and research move forward. But we are also keenly aware of the downsides to this approach. When technologies are imposed on users, when inappropriate solutions are imposed on problems, new problems arise.

As we have alluded to previously, there is a tension between (a) the reality of humans, their minds, and their wellbeing as things that are incredibly complex, variable, and individual, and (b) the need for designers and developers to seek out generalizable models and manageable goals and to operationalize abstract notions into concrete solutions. Some level of reductionism will always apply. The danger is obvious: we can oversimplify and end up doing harm rather than good.

Take, for example, the quantifying of self. Whereas person A is inspired by the progress she sees in her tracked weight loss and exercise regimen, person B is angry that her scale is telling her she’s not good enough and eats more sweets to compensate, and person C uses the tracking to reinforce already unhealthy attitudes and fuel excessive weight loss, self-criticism, and pathology. Humans are complicated. Many admonish friends or family for overindulging in cigarettes, calories, or beer, thinking that reproach is a sensible thing to do, when really it’s fueling rebellious behavior, shame, or depression, which leads to more of the problem. Humans are complicated.

We have already alluded to what psychologists call “ironic processes” and the mind’s ability to go against its own intentions. If I tell you that for the next 30 seconds, you should not think about food, you’ll think about it more.

The fact that some people will always use technology in unintended ways is not, we believe, a reason not to develop it. However, as is true in other design contexts, being aware of edge cases and extreme users should help inform the way we design, distribute, and contextualize our innovations and prevent us from falling into the trap of oversimplifying our models of human thinking. Moreover, our measures for how well a technology is working and for whom it is most usefully targeted need to include evaluation over the longer term and employ multiple measures. Because people are complicated.

Privacy and Security
At the time we were writing this book, secret documents from the National Security Agency were leaked that revealed how personal communications by individuals around the world are recorded and data mined by security agencies and by the consulting companies that work for them, who allege they are doing so for security reasons. With no transparency, it is impossible to say what is legitimate. Whereas police officers need a warrant to search your house, those with access to our private conversations and activity traces thus far require no warrant to search them. Even if this process were transparent and justified, that the data exist and are entrusted to private companies means they’re subject to commercial misuse and corruption internally as well as to data theft and terrorism externally.

Closer to our everyday lives is the way these data are already legally mined for commercial purposes. Most of us realize that private companies such as Facebook, Google, and Microsoft mine the data we provide (via web searches, social media interaction, etc.), so they can use what they learn about us to sell us products more effectively, which is why they can provide services for free.

As free-software evangelist Richard Stallman would say, these services are free as in beer, not free as in freedom. Product marketing involves attempts at behavior change, which is why marketers are among those who employ behavior design and persuasive strategies. Advertisements, by nature, exploit our ways of thinking and feeling in order to convince us to act or buy. In exactly the same way, these mined data could be used to inform propaganda to incite subtle changes that may even be in line with what is understood to improve our health, our wealth, or even our psychological wellbeing. It is our responsibility to remain critical of any behavior change incited subconsciously (that is, without transparency) and with the use of personal data. We should always question the motivations and ethical drivers behind the organizations and governments to which we entrust knowledge about our lives.

Our position is that positive computing shouldn’t be used as an excuse to further encroach on our privacy (in other words, much can be done with the data already collected transparently by software systems) and that we need to be wary of rigid prescriptions for change. We should support transparency, autonomy, and conscious engagement in any effort for positive change. We hope some of the examples given in this book have reflected this notion, and we hope that designers and researchers will remain sensitive to these issues as we work to appropriately manage and keep control of our privacy, security, and autonomy as data spread out and into proprietary clouds.

Replacing Humans: Livelihood and Wellbeing
At a recent research summit, one of the talks included a graph evaluating the accuracy of computer vision techniques: humans, 98 percent; algorithm, 93.5 percent. “We are only 4.5 percent behind!” the speaker exclaimed enthusiastically, ironically positioning us all in the algorithm column rather than in the human column. Her talk is just a reflection of the larger view that those of us in computer science and engineering often take, competing with our own species as a way of setting targets for progress.

Most of us harbor hopes that technology will continue to surpass our skills, thereby serving and augmenting our capabilities as it has so many times before. After all, it is by applying technology that we are now able to travel at unnatural speeds, send messages over thousands of miles in nanoseconds, and fly over oceans. It is thanks to engineering efforts that a 100-ton truck can be driven into an Australian mine (where most humans dare not go owing to the blistering heat, collapsing rock, deadly snakes, and hairy spiders) by someone sitting in the comfort of an air-conditioned office in Sydney. Those of us who spent childhood hours watching The Jetsons dreamed of having a Rosie to sweep and cook and one of those conveyor belts that gets you showered and dressed in the morning. Technologies have long been designed to do those things we have done ineffectively, too expensively, or unenthusiastically.

But the trade-offs are not always straightforward. Almost 20 years ago, I (Rafael) worked as a consultant for a government project training young and unemployed Argentineans in a trade. I’ll never forget the glowing look of pride on the faces of supermarket checkout trainees when they were praised for their newly acquired customer-service or procedural skills. They were gaining autonomy and competence. It is opportunities like these, the ones available to a wide group of people, that are increasingly replaced by machines.

Likewise, the driverless car, which we mentioned previously, no doubt will open new worlds of opportunity for some, but the millions more whose working lives depend entirely on transporting people and freight in taxis, buses, and trucks across countries and continents are facing a devastating threat to their livelihoods.

Of course, technology has always led to shifts in livelihoods, and some would argue these new technologies will decrease certain types of work but increase knowledge jobs. However, it’s hard to imagine that trade-off will be one to one. If the point of new technology is often to reduce costs (not simply move those costs around), can we really hope to replace one million trade jobs with one million “knowledge” positions? Perhaps, but the argument also makes a value assumption that an office-based knowledge job is superior in everyone’s view to other kinds of work. Driving a truck is not like working in the life-threatening depths of a mine. Many truckers value the autonomy and tranquility of their job on the road. Assuming that everyone has the secret wish (and appropriate capacity) to “upgrade” their profession feels like the insular view of a tech industry little connected to the people affected by its progress. Although it’s true that over history we have been able to adapt to these changes and have generally ended up better off, there is no reason to assume that this advantage will continue without limit.

In fact, Erik Brynjolfsson at MIT’s Sloan School of Management has identified an emerging paradox: “Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs” (Rotman, 2013). In their book Race against the Machine, Brynjolfsson and coauthor Andrew McAfee (2011) argue that this technological revolution, driven by advances in artificial intelligence and increased computing power, rather than eliminating blue-collar jobs, as has happened in the past, is now eliminating white-collar jobs, such as clerical work, accounting, and those that generally support the middle class. This has shrunk the middle class and increased economic inequality, something shown to decrease psychological wellbeing.

There are other reasons why replacing humans would be foolhardy, and we have mentioned some of them with regard to technologies for mental health. There are things people are better at, in both obvious and quantifiable ways, but also in less obvious qualitative ways, because human presence and connection are important too. Sure, we might be able to get on without a hello from the banker, local advice from a taxi driver, or a chat with the grocer, but at what point will this loss of human contact begin to disconnect and isolate us? Connection to others, to a community, and to a sense of common humanity has been shown to be critical to happiness, compassion, self-compassion, and many of the other aspects of wellbeing we have discussed in this book. To what extent do we prioritize efficiency, convenience, and profit for a few at the expense of opportunity, autonomy, and wellbeing for many?

A full discussion of this issue is, of course, far beyond the scope of this volume, but we bring it up because it’s relevant to wellbeing. After all, building a technology that causes some measurable increase of wellbeing to users but contributes to the loss of livelihood for many more farther down the chain just might negate its positive effect. So social and economic impact is something researchers in positive computing should take into account. The issue is not so much about saying that no one should build machines that replace humans, but about questioning our assumptions about benefit and value. It is about using these considerations to help guide our focus and our resources toward work that can have benefit to wellbeing from a holistic perspective.

Who’s in Control? Autonomy, Competence, and Empowerment
In a stirring scene from the biopic Ray (Taylor Hackford, 2004), musician Ray Charles as a very little boy falls to the ground, apparently helpless. Already completely blind, he instinctively cries out to his mother to come and help him. In this poignant moment, his mother, who stands only a few feet away, makes a powerful and courageous decision not to help him. Forced to help himself, he soon finds his way out of his predicament, and this moment of transition symbolizes the beginning of an independence?stubborn in the face of both physical disability and racism?that would be critical to his success in life.

All parents must navigate a border between providing support and encouraging the development of independence in children, and the issue is all the more complex at those various times in our lives, whether young or old, injured or ill, when we require assistance or adaptation in the face of change. Technology has the ability to assist in ways that either support or undermine our autonomy.

A current project we are involved in provides a uniquely illustrative example: How might technology be used to support young people with chronic illness in the transition from pediatric to regular care? For children growing up with chronic illnesses such as diabetes or cystic fibrosis, parents will be responsible for taking them to appointments, tracking medication compliance, and otherwise managing the illness as caretakers. However, there comes a time when these children become adults and must take full responsibility for, and full control over, their own care if they are able. Although it would be fairly straightforward to develop a medication reminder service or an automated doctor-appointment app for the mobile phone, perhaps with contingent rewards such as badges or virtual currencies to motivate compliance, such features may be convenient in the short term, but they do not address the core issue, which is developing and supporting autonomy.

In contrast, by looking at ways in which we can support these young adults in integrating self-care actions with their goals and values, visualizing the impact of their decisions, encouraging a growing sense of competence and autonomy, perhaps in concert with a fading scaffold of practical support, we can begin to think more deeply not just about assisting, compensating, or training in the limited carrot-and-stick sense, but about respecting, empowering, and building opportunities for self-determination.

Because autonomy is a key component of wellbeing, the core of self-determination theory (SDT), and validated cross-culturally, making a concerted effort to respect user autonomy and design for it is all the more critical to positive computing. Unlike with other factors such as self-awareness and compassion, design decisions about autonomy are automatically embedded into all the technology we create.

Batya Friedman (1996) brought attention to user autonomy in the context of her seminal paper on VSD: “Autonomy is protected when users are given control over the right things at the right time. Of course, the hard work of design is to decide these what’s and when’s.” She goes on to identify aspects of systems that can promote or undermine user autonomy, including system capability, system complexity, misrepresentation of the system, and system fluidity. Though this article was written almost 20 years ago now, we seem to persist in undervaluing the importance of autonomy, both at the software task level and at higher levels of human experience.

More recently, in The Design of Future Things Don Norman (2007) discusses at length the problems associated with “inappropriate automation” and points to the multifaceted nature of autonomy and automation issues: “Automation is a system issue, changing the way work is done, restructuring jobs, shifting the required tasks from one portion of the population to another and, in many cases, eliminating the need for some functions and adding others.” Assisting, helping, compensating, removing tasks, meeting needs: these are generally thought to be good things, but a closer look reveals that the design decisions we make on behalf of meeting these goals resonate in unexpected ways. In attempting to solve a problem technically, simplify a task, or compensate for one isolated need, we can lose track of the larger ecosystem at the expense of human agency and wellbeing.

When everything is automated, users have no capacity to adapt or customize the technology or to do parts of tasks themselves, in their own way, and they can be left feeling helpless, especially when the technology breaks down. In this view, users must adapt to the technology rather than be able to adapt the technology to themselves.

Most recently, Yvonne Rogers and Gary Marsden (2013) shed critical light on the history of HCI approaches to design for helping “those in need.” They contend that in HCI we are frequently guilty of taking a third-person view that reflects an unequal partnership. In designing to compensate or “augment frailty,” well-meaning designers who identify needs from the perspective of their own contexts perpetuate an “asymmetrical relationship between those who have and those who have not, underlying an uneasy dependency between those who need and those who can help.”

Speaking from extensive experience designing assisted-living technology, and based on work for the developing world, Rogers and Marsden propose a new approach and a new rhetoric of empowerment that seeks to engage users in creating technology for themselves on their own terms. They advise that we invest more effort in design tool kits and design education to empower these users to be the creators of their own solutions. What Rogers and Marsden suggest is really the ultimate in autonomy. As researchers and developers in positive computing, we should always be questioning our approaches, lest we attempt to cultivate one aspect of wellbeing at the expense of another.

Who’s in Control? Motives, Power, and Paternalism
There are at least two loci of control in the making of any technology: the user and the designer. Behind the designer, or design team, is an organization, a project, or a group of stakeholders with their own motives, values, and goals. Judgments made by designers and organizations about what should go into a technology and how it should be designed will always be shaped by how they expect to profit and by their personal and cultural perspectives. Given the impact that technologies can have on people’s behaviors, on their ways of thinking and feeling, it is important to make these motives and values as explicit as possible. Designers should be candid about their motivations; they won’t always be fully aware of them, so they should engage in practices that help them to become conscious of these motives.

This is true even when developers are government or nonprofit organizations. Even when intentions are utterly blameless and totally laudable, the values and expectations underlying the team have an influence. What will funding agencies expect to see in terms of outcomes? What do designers know about the user groups? As is evident in reactions to persuasive computing and nudging, many have valid fears about the makers of technology using it to manipulate people. Whether it’s persuading for marketing or paternalistic care in the case of government-led wellbeing campaigns, there will be an ongoing tension between support and control. Sensitive as we are in the United States to the values and influence of government, only a small number of us question the values and influence of technology makers, simply because these makers largely share the same culture as we do. But many more outside of the United States are keenly aware of the fact that a relatively small number of wealthy Americans in Silicon Valley determine the design, distribution, and sale of digital technology based on their own cultural and socioeconomic context and set of values. It’s important to acknowledge this concern about cultural imperialism.

We must seek to be aware of the assumptions we make and the biases we carry and the fact that we base so many decisions on personal experience or limited testing. We also need to humbly acknowledge that US culture currently has disproportionate influence globally and that it is therefore everyone’s responsibility in every country in our increasingly globalized society to not be insular, to acknowledge our biases, and to seek to think and act more broadly and empathically. How exactly? Fortunately, over the past 10 years, researchers in VSD have been developing approaches for making designer and user values explicit. For all these reasons and more, we strongly support the application of VSD to positive-computing work.

Well-Washing: Research versus the Bandwagon
Transparency with regard to motives and values will also help us to discriminate genuine cross-disciplinary research-based work from projects with weaker foundations and those largely hijacking a notion of wellbeing for profit or bandwagon effects. This bandwagon phenomenon is nothing new. We have watched genuine environmental concern and innovative green design precede a rush of “greenwashing”: a marketing strategy that combines unfounded environmental claims and deceptive design to exploit the public interest in living sustainably. Overuse of the words natural and organic without certification, and even the superfluous addition of safer ingredients (such as baking soda in corrosive cleaners), are used to falsely convey to customers that a product is somehow safer, healthier, or more sustainable. It’s easy to see how wholesome concerns can be exploited for profit, and wellbeing is not immune to such exploitation.

The public’s embrace of the new science of wellbeing has led to an unsurprising explosion of products and promises you might call “well-washing.” Dorian’s father, who worked in advertising for many years, often said, “If you want to sell me something, it has to make me money, get me fed, or get me laid” (with corresponding unsavory hand gestures that we fortunately can’t replicate in a book). Interestingly, nowadays we can add “make me happy” to the list. It sounds wholesome enough, but the trouble is that companies that sell everything from shampoo to Coca-Cola are hijacking notions of wellbeing to sell products. The irony in the case of Coca-Cola is particularly thick: a company associated with both human rights abuses against workers and a role in the obesity epidemic has, at the time of writing, a section on its website that quotes Aristotle and Gandhi on “happiness.” It’s interesting that it cites wellbeing experts who can’t object. The section carries on by “blinding with science” and providing a list of tips to be happy, a few of which are well founded, and one of which, of course, is to drink the company’s product.

How then do we “keep it real”? Obviously there is no easy answer, but acknowledging motives will be critical. Just because we are privileging an apparently altruistic goal in working in the area of positive computing, that doesn’t mean there won’t always be other pressures, values, and ideas about how to pursue that goal.

As always, science?in combination with a healthy skepticism?is our greatest ally in the battle against snake-oil salesmen. We can’t ensure the scientific method is upheld at the grocery store or in web marketing, but when it comes to our own work, we can require evidence-based approaches and protocols. In addition to evidence-based practice, perhaps our greatest prophylactics are multidisciplinary collaboration, multidimensional evaluation, and, of course, that design stalwart, iteration. In summary, we must:

  • Be honest and explicit about our motives and values.
  • Demand research integrity and scientific method.
  • Ensure multidisciplinary collaboration.
  • Employ multidimensional evaluations.
  • Take an iterative approach that allows adaptation based on evaluation.

Wellbeing and Culture
Evidence thus far shows that the factors of wellbeing we have discussed in this book are beneficial to all humans across cultures. For example, there is no evidence that cultivating compassion or practicing mindfulness will help people on one continent but not on another. In addition, there is evidence for the cross-cultural importance of factors such as autonomy to human psychological wellbeing. However, the extent to which cultivating each is beneficial and, even more so, the extent to which various strategies for wellbeing will be most effective are influenced by culture.

For example, Nancy Sin and Sonja Lyubomirsky (2009) found that the most commonly used positive-psychology interventions (PPIs), which were developed by psychologists in Western countries, were most effective for people in those countries. “Members of individualist cultures, whose values and cultural perspectives are highly supportive of the pursuit of individual happiness, have been found to benefit more from PPIs than members of collectivist cultures. As a result, clinicians are advised to consider a client’s cultural background, as well as his or her unique inclinations, when implementing PPIs. For instance, a client from a collectivist culture may experience greater boosts in wellbeing when practicing prosocial and other-focused activities (e.g., performing acts of kindness, writing a letter of gratitude), compared with individual-focused activities (e.g., reflecting on personal strengths).”

We also can’t help but wonder whether those in individualist cultures would benefit more from other-focused practices in the longer term, and vice versa for those in collectivist cultures benefiting from individual-focused practices. The point is that culture does have some effect on how wellbeing factors are communicated, combined, and manifested.

Aside from strategy development, culture may influence how scales of measurement are designed and communicated. Researchers and psychologists working to evaluate and improve wellbeing among Tibetan torture survivors (Elsass, Carlsson, & Husum, 2010) found that although their Western methods were helpful, the Tibetans found Western conceptualizations of emotions and wellbeing to be unsophisticated. Specifically, the authors report that Tibetan leaders in interview “questioned the validity of our western rating scales and explained that our results might be influenced by the Tibetan culture, which among other things can be characterized as having a view and articulation of suffering much more complex than the units of our study’s rating scales.”

Culture as a mediator of wellbeing and cross-cultural validation of instruments constitute ongoing work in social psychology and other fields. Although there is plenty of evidence for a degree of universality when it comes to what contributes to wellbeing, evidence also tells us that there is great value in seeking out cross-cultural collaborations in our work. We still have much to learn from each other. (See the work of Ed Diener, Shigehiro Oishi, and their colleagues for more insight into how culture and wellbeing interact [e.g., Diener & Diener, 2009; Schimmack, Radhakrishnan, Oishi, Dzokoto, & Ahadi, 2002].)

Balance and the Mean: When More Isn’t Better
Most factors of wellbeing, wholesome though they may be, can, like oxygen and water, still be overconsumed. When it comes to positive emotions, reflection, engagement, and so on, there are situations in which more ceases to improve wellbeing. It’s important that we take this into account, lest we carry on down a naive if convenient path that assumes anything we do to increase any wellbeing factor can only be good. This middle path is evident in Richard Davidson’s (2012) framework of emotional style, which includes six neurophysiologically distinguished bipolar scales pertaining to traits such as resilience, self-awareness, and attention. Each of these traits in excess can have negative consequences.

Of course, since we’re talking about positive states, traits, and practices, it seems reasonable to assume it’s much harder to overdo them (as with drinking water) than it is to overdo neutral behaviors (eating food) or negative behaviors (drinking alcohol). Therefore, the overriding message for technologists seems to be one of generous moderation or even cautious indulgence. It would probably be difficult to increase compassion to any detrimental point, but certainly in fostering empathy we might inadvertently foster distress. It’s easy to imagine how practices for self-awareness could lead to self-absorption and self-centeredness, and research has examined how engagement and flow can spill over into addiction.

Defining the tipping points (and methods for detecting these points) will be an ongoing area for investigation. In the meantime, we can tread more safely in the following ways:

  • Monitor the longer-term effects of our interventions.
  • Embrace an iterative process that evaluates and adapts to feedback and changes in user behavior.
  • Consider ways in which support for increasing one factor can be kept in balance by support for another. For example, extrapolating from research findings in psychology, we may find that efforts to foster other-focused factors such as compassion balance out self-focused factors such as self-awareness. Mindfulness may balance out reflection and goal setting, and mindfulness of negative emotion may balance out support for positive emotion.
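The first of these safeguards, monitoring longer-term effects, can be caricatured in code. The sketch below is purely illustrative and not a method from this book: it assumes a hypothetical app that logs a usage level alongside a validated wellbeing score, and it flags the level beyond which further increases stop paying off, a crude stand-in for the tipping points mentioned above. Real evaluation would require validated scales and proper statistics, not this toy heuristic.

```python
# Purely illustrative sketch: flag the point at which "more" of a
# wellbeing factor stops improving a tracked wellbeing measure.
# All names and numbers here are hypothetical.

def diminishing_returns(factor_levels, wellbeing_scores, min_gain=0.0):
    """Return the first factor level beyond which wellbeing no longer
    improves by more than min_gain per step, or None if gains persist."""
    pairs = sorted(zip(factor_levels, wellbeing_scores))
    for (f0, w0), (f1, w1) in zip(pairs, pairs[1:]):
        if (w1 - w0) <= min_gain:
            return f0  # gains flattened or reversed after this level
    return None

# Hypothetical data: engagement helps up to a point, then reverses.
levels = [1, 2, 3, 4, 5]            # e.g., hours/week of app use
scores = [5.0, 6.0, 6.8, 6.8, 6.1]  # e.g., mean life-satisfaction rating
print(diminishing_returns(levels, scores))  # -> 3
```

Even a heuristic this simple illustrates the design stance: instead of assuming a monotonic "more is better" relationship, the system watches for the plateau and treats it as a signal to rebalance rather than escalate.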

The Ecology of Wellbeing: Taking a Holistic View
Although through technology we can only ever address a thin sliver of the things that influence people’s wellbeing, if our overarching goal is to work on behalf of a social good, we must also consider social responsibility, or the ecology of wellbeing that exists around our development work. For example, if we are promoting hardware and devices, then we should consider a device’s full life cycle and periphery of effects to ensure it is congruent with wellbeing goals. After all, what good is a wellness device made in a sweatshop? To this end, we envision multidisciplinary projects that connect different levels of wellbeing-related efforts. For example, a mobile-device project might combine positive-computing design methods and value-sensitive design with sustainable industrial design and approaches to HCI for development. Sure, this is ambitious considering we have yet to get slavery out of the computer supply chain,1 but then we did get to the moon, and when it comes to the wellbeing of our species and planet, we think it’s safe to say failure is not an option. A movement toward holistic multidisciplinary approaches holds the greatest promise for promoting flourishing from all its angles.

We’ve presented just a few of the ethical dilemmas and issues requiring care that are associated with doing research on technology for wellbeing, but, naturally, others will have occurred to you as you read through these pages. Each of us, from the unique context of our own professional backgrounds and personal experiences, will be positioned to bring new issues to light. We urge you to do so. Enthusiasm is essential to the energy that drives discovery, but when unbridled it can threaten the credibility of its cause. Happily, the many examples of inspiring work to which we’ve had the privilege of referring in this book are just the tip of the iceberg; we look forward to the growth of this enthusiastic and broadminded research community driven by a desire to make a better future.

The Way Ahead
People talk a great deal about disruption. In our profession, disruption is proof you’ve had an impact: proof that your work mattered and that you’ve wielded power over the masses. Disruption also tends to be a bit of a cash cow, which adds to its appeal. However, we suspect that the greatest successes in positive computing probably won’t be ushered into the world screaming “Game change!” from every social media corner. They won’t necessarily involve a tsunami of sudden consumption. They might, instead, arise gradually over time as a result of the persistent and passionate work of many, quietly making reparations in the background, slowly changing the shape of experience, softening corners, removing problems until users forget they ever existed, and adding opportunities for kindness, for attention, and for connection until users forget they weren’t there before.

But, of course, even quiet revolutions need funding. The research and academic work we have focused on over the chapters in this book are only one slice of a big pie. For a field of positive computing to flourish and be relevant, we need a broad ecology of stakeholders that includes not only designers, developers, and scientists, but also the policymakers, investors, managers, and entrepreneurs who will bring positive-computing visions to life for the public.

So in place of a long-winded, if heartfelt, rainbow pep talk of a conclusion, we close with a no-nonsense proposition: How do we fund our work in positive computing and, critically, how do we do so without compromising on human-centeredness?

Informed by a range of commercial and noncommercial projects that already exist, we introduce here a number of funding models and pathways to economically sustainable positive-computing development that we have witnessed in action today. We discuss each briefly, along with some of its advantages and challenges, in the hope that an overview will help lightly grease the pathway to action for those who want to get this party started (or see it grow).

Funding Positive Computing
Positive Computing as a Science

Until recently, the most significant group of “stakeholders” for projects relating to positive computing has been researchers at universities or at publicly funded research organizations that study the overlaps between technology, psychology, and wellness. Ongoing work has occurred in psychology as Internet-based mental health promotion, as positive technologies in cyberpsychology, and as a wealth of work within HCI under various umbrellas (see, for example, the “mind and spirit” sessions at the Computer-Human Interaction Conference, the workshops on interaction design for emotional wellbeing, and special journal issues on related themes). Each of these groups is influenced by its unique research interests and culture.

Research is by far the best suited to forming a solid foundation for wellbeing technology because research work must be validated via established measures, academic peer review, and replication. This process ensures that, over time, we can accumulate knowledge about design factors that positively influence wellbeing for different groups of people. Academic and scientific research, relatively free from commercial interests, has the advantage of reliability and is well suited to higher-level questions and basic research.

However, the process is a slow and expensive one. Researchers at universities and research organizations are generally funded by national agencies for a specific amount of time (usually one to four years) to run a specific study. More than a year can pass between the point at which an idea is fleshed out and a project launches. Funding-agency policy often requires adherence to the original plan, yet implementation plans entailing technology can easily become outmoded in six months’ time. As such, these kinds of projects are best suited to exploring aspects more persistent or universal than technological implementation details usually are.

Positive Computing as a Commercial Industry

When it comes to innovating at a faster pace, with finer granularity, and in ways that are focused on immediate application, industry has the advantage. Commercial products for personal development have been popular in the West since at least the 1970s, particularly in the form of self-help books. The validation of such behavioral bibliotherapies was patchy back in the 1970s (Glasgow & Rosen, 1978), and it would seem that not much evaluation has occurred since then. Mobile technology has taken the concept digital, and many of the apps we have already mentioned, from large companies and from a significant number of start-ups, are commercializing products aimed at helping people track and improve their physical and sometimes emotional wellness. According to a report by the Pew Research Center (Fox & Duggan, 2013), 69 percent of US adults track their (or a loved one’s) health, 21 percent of them using some form of technology (e.g., smartphones and sensors). There seems to be sufficient evidence of a profitable interest in these types of technologies, both among consumers and among the companies investing in them.

Of course, the allure of wellbeing technology as a potential goldmine is not without its dangers. Among today’s most familiar models of technology product is the “free” service. In this scenario, users are not so much the customers as the “products”: advertisers and businesses able to profit from users’ profile information pay for the service. At the end of the day, a company needs to generate revenues and so “monetize” its users.

The monetization of users is rather different from the relationship between, say, a therapist and his clients or a doctor and her patients. Both of these relationships, and the motives that drive them, are better understood by those involved and are even regulated by legislation in order to protect patients from malpractice. If technology companies get involved in our wellbeing, who will protect us from profit-based misconduct? Are there viable models for a positive-computing company, or will positive computing in industry simply fade into more forms of marketing?

Positive Computing as a Public Good

Government organizations, as well as nonprofit organizations funded by private or public means, occupy a unique position somewhere in between research and commercial venture. They are often capable of moving at commercial speed, and they develop and test specific, on-the-ground applications intended for a wide audience, but they don’t have the same conflict of interest that arises from a tension between revenue making and human needs. By way of example, the Young and Well Cooperative Research Centre, an $80 million Australian initiative that we mentioned in previous chapters, has goals that bridge applied research and dissemination of the innovation produced. It is jointly funded by the Australian government, nonprofits, for-profit companies, and universities.

As another example in this sector, the Inspire Foundation, a nonprofit organization dedicated to the prevention of mental ill-health and the promotion of mental health among young people, designs and develops a variety of technologies for its work. With branches in the United States, Ireland, and Australia, it is funded through donations and sponsorship. Its user-experience teams partner with clinical and research psychologists and psychiatrists and engage in participatory codesign with the young men and women who are its users. Critical as this sector is to moving efforts forward, progress would surely be limited if positive-computing projects could be pursued only through nonprofit- or government-funded means. Enter social business.

Positive Computing as Social Enterprise: The Fourth Sector

The past few years have seen the emergence of an exciting new development within business, one that responds to the need for sustainable, self-funding businesses that don’t treat revenue optimization as their overriding priority. The social enterprise is described by Nobel Peace Prize winner and social business advocate Muhammad Yunus as “a non-dividend company created to solve a social problem. Like an NGO, it has a social mission, but like a business, it generates its own revenues to cover its costs. While investors may recoup their investment, all further profits are reinvested into the same or other social businesses.”2

At the time of writing, the Social Enterprise Alliance3 boasts a membership of more than 900 social enterprises, service providers, investors, corporations, public servants, academics, and researchers. According to a recent article in The Guardian, “Social enterprises are now part of the fabric of British life. Up and down the country they are tackling social problems and improving communities, people’s life chances and the environment, and then reinvesting their profits back into the business or the local community. They are a growing, exciting and vibrant part of the mix in today’s jobs market.”

Examples include everything from innovating in the recycling industry to preserving local history and culture, providing early childhood education, and solving hard problems in disadvantaged communities. Even large corporations are contributing by helping these enterprises get off the ground and gain momentum. And there is a certification body to monitor the integrity of the social enterprise promise.4

A recent Wikipedia definition of the term social enterprise makes the link to positive computing explicit: “A social enterprise is an organization that applies commercial strategies to maximize improvements in human and environmental wellbeing, rather than maximising profits for external shareholders.”5 This growing sector of industry seems not only long overdue and eminently sensible but also particularly well suited as a funding model for work in positive computing. The crossover model combines the potential agility and self-reliance of a business with the applied, community-driven approach of a nonprofit, along with freedom from many of the burdens and conflicts of interest posed both by profit-driven business and by the nonprofit need to continually seek funding. No endeavor is free from interests and biases, but in a decade or two we may look back and find that progress in positive computing owes much to the social enterprise.

Benefits of Wellbeing
In addition to the conditions and factors of wellbeing, it’s also worthwhile to look at its consequences, not only for their own sake, but because they can be important tools for justifying positive-computing initiatives. The list of follow-on benefits can be helpful in winning support for positive computing from those for whom greater happiness is not reason enough to make or fund change.

There are myriad positive follow-on effects to increased wellbeing that appear in other aspects of a person’s overall life experience, and although some of these effects are wholly intuitive, others may surprise.

For example, among the more familiar is evidence that happiness seems to boost the immune system and thus reduces the incidence of illness (O’Leary, 1990). Conversely, stress increases the chance of illness over both the short and the long term. As such, work in wellbeing can be viewed as a kind of preventative health-care measure in addition to an overall aid to treatment.

More surprising, perhaps, is the finding that developing emotional resilience in school children, in addition to increasing their wellbeing, improves their academic performance (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011), a consequence that might very well help motivate stakeholders to invest more time and funding into wellbeing efforts in education.

Similarly, many workplaces invest in wellbeing programs based on the understanding that happier workers perform better on the job and engage better with their team. This relates to the research that shows positive emotions increase creativity and open-ended thinking.

It has also been acknowledged that wellbeing promotion prevents mental health crises that contribute to societal problems such as crime and substance abuse. Supporting psychological wellbeing allows people to reach their potential in all walks of life, and the “side effects” can be impressive.

The Positive-Computing Project
As a researcher/practitioner team, we are keenly aware that both a scientific underpinning and a realistic set of professional best practices are necessary for positive-computing projects to thrive. But how do we move these two sides of the equation toward a common goal to produce a sustainable product?

One challenge is the different work processes and time scales that exist between research and practice, and we have presented them in figure 12.1. Whereas scientists are accustomed to the process on the left-hand side, practitioners tend to follow processes like the one on the right.

Figure 12.1: The differing work processes and time scales of research (left) and practice (right).
We have made an effort to highlight the commonalities between the two, but the differences remain substantial. As mentioned earlier, a scientific project is likely to be scoped a number of years in advance. Needless to say, predicting the nature of available technologies that far ahead is terribly difficult, if possible at all.

In contrast, those working for a company or nonprofit organization will have project timelines measured in months at most, and the greatest challenge is often having enough time to gain an up-to-date understanding of the area and then to measure the project’s actual impact over the longer term. Measuring impact alone, if one includes follow-up evaluations at 3, 6, or 12 months, will often call for a period of time longer than the length of many projects themselves.

Therefore, positive computing will likely require new approaches that allow developers to measure the impact of new technologies more accurately as well as new approaches to funding that allow researchers greater flexibility to adapt to technology change. Perhaps there is a sweet spot to be found through cross-sector collaboration.

Some of the organizations we work with include in their project proposals both traditional user research deliverables such as personas and participatory design findings and positive-computing variables such as initial wellbeing evaluations and a related psychology literature review to support the project design.

Despite the challenges, an increasing number of projects continues to emerge across multiple sectors. We highlight some of these projects, together with emerging work in multiple fields, at positivecomputing.org.

. . .

One thing unique to humans is our relentless tendency to reshape and redesign our world. We don’t leave things well enough alone. We continually optimize. We have used this incredible ingenuity for both dramatically good and not so good results. Interest in positive computing seems to reflect a growing collective consensus that the impact we have as a species has not been as socially clever or insightful as it needs to be to improve things in the long term. Instead, we have sometimes confused money with happiness and “biggering” with bettering (as Dr. Seuss put it in The Lorax), and we have neglected the bigger picture, the broader community, and the planet itself (a.k.a. “everything”). As a result, we seem now to be conceding that to make optimum progress for optimum sustained happiness and with minimum disaster and suffering, we’ll need to turn to principles of sustainable wellbeing for ourselves, our communities, and our planet. Upon these efforts rests the future of technology.

Buddhists have a core practice called metta that involves actively wishing that all beings be well and happy. We asked one Buddhist monk how we could genuinely wish happiness for even our greatest enemies, even the really terrible people, such as mass murderers and torturers. He pointed out that if these people were genuinely well and happy, they would not be terrible. The idea that things such as violence and cruelty, in addition to other types of suffering such as anxiety or depression, are born of ill-being seems to make a study of wellbeing a clear imperative, a necessary contribution to the solving of all human-derived social problems, and a complement to so many of the other endeavors in this direction. Perhaps by attending to wellbeing through technology, we are in some small way spreading this intention, that all beings everywhere be well and happy, and in so doing, we are improving (one iteration at a time) the way we as humans change our world.

Notes
1. Find out how many slaves work for you at slaveryfootprint.org.
2. From Muhammad Yunus’s social business website (yunussb.com).
3. See www.se-alliance.org.
4. For this certification body, see socialEnterpriseMark.uk.
5. “Social Enterprise,” Wikipedia, http://en.wikipedia.org/wiki/Social_enterprise, accessed February 10, 2014.

References

  • Brynjolfsson, E., & McAfee, A. (2011). Race against the machine. Lexington, MA: Digital Frontier Press.
  • Corporates can ensure social enterprises mean business. (2013). The Guardian, August 7. Retrieved from http://www.theguardian.com/social-enterprise-network/2013/aug/07/big-businesses-support-social-enterprises.
  • Davidson, R. J., & Begley, S. (2012). The emotional life of your brain: How its unique patterns affect the way you think, feel, and live, and how you can change them. New York: New American Library.
  • Diener, E., & Diener, M. (2009). Cross-cultural correlates of life satisfaction and self-esteem. In E. Diener (Ed.), Culture and well-being: Collected works of Ed Diener (vol. 38, pp. 71-91). The Hague: Springer.
  • Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405-432.
  • Elsass, P., Carlsson, J., & Husum, K. (2010). Spiritualitet som coping hos tibetanske torturoverlevere (Spirituality as coping in Tibetan torture survivors). Ugeskrift for Laeger, 172(2), 137-140.
  • Fox, S., & Duggan, M. (2013). Tracking for health. Washington, DC: Pew Research Center’s Internet & American Life Project.
  • Friedman, B. (1996). Value-sensitive design. Interactions, 3(6), 16-23.
  • Glasgow, R. E., & Rosen, G. M. (1978). Behavioral bibliotherapy: A review of self-help behavior therapy manuals. Psychological Bulletin, 85(1), 1-23.
  • Norman, D. A. (2007). The design of future things. Philadelphia: Perseus Books.
  • O’Leary, A. (1990). Stress, emotion, and human immune function. Psychological Bulletin, 108(3), 363-382.
  • Rogers, Y., & Marsden, G. (2013). Does he take sugar? Moving beyond the rhetoric of compassion. Interactions, 20(4), 48-57.
  • Rotman, D. (2013). How technology is destroying jobs. MIT Technology Review, 28-35.
  • Schimmack, U., Radhakrishnan, P., Oishi, S., Dzokoto, V., & Ahadi, S. (2002). Culture, personality, and subjective well-being: Integrating process models of life satisfaction. Journal of Personality and Social Psychology, 82(4), 582-593.
  • Sin, N. L., & Lyubomirsky, S. (2009). Enhancing well-being and alleviating depressive symptoms with positive psychology interventions. Journal of Clinical Psychology, 65(5), 467-487.
