Check this out: Research methods and Pokémon Go

Maybe the Pokémon Go craze has already peaked, but that does not mean we should stop thinking about what might be learned from it. What might Pokémon Go tell us about e.g. offline-online integration, in-game and through-game affordances, or socialisation-through-gamification? And, importantly, how might we study Pokémon Go as such?


The methodological opportunities and challenges that Pokémon Go poses have been addressed by Clark & Clark (2016), who conclude that we may begin to understand the supercomplexity of the social intervention that is Pokémon Go by using mixed methods creatively.

Mixed Methods

The Journal of Mixed Methods Research defines mixed methods as “research in which the investigator collects and analyses data, integrates the findings, and draws inferences using both qualitative and quantitative approaches or methods in a single study or program of inquiry”.

In many texts on mixed methods, this type of research is presented as a way to make peace between two “adversaries”: the supporters of quantitative vs. the supporters of qualitative research. The argument is that during the last century these “adversaries” have engaged in a so-called “paradigm war”. On one side are the quantitative purists who articulate assumptions about research in line with what we often label positivist philosophy: social observations should be treated as entities in much the same way that physical scientists treat physical phenomena, and the observer is separate from the entities that are subject to observation (Johnson and Onwuegbuzie, 2004). Here, any scientific inquiry should be objective, with the aim of making time- and context-free generalizations, where real causes of scientific outcomes can be deemed reliable and valid (Gulbrandsen, 2012, p. 48). On the other side we have the qualitative purists who reject positivism and argue for a range of alternatives, such as constructivism, idealism, relativism, humanism, hermeneutics, or postmodernism. Though the anti-positivists differ among themselves in many respects, they all argue for the existence of multiple and constructed realities, as opposed to the singular reality of positivism. As such, they all argue that the observer and the observed cannot be separated, because the (subjective) observer is the only source of the ‘reality’ that is to be observed (Guba, 1990). Beyond this, they also share the stance that time- and context-free generalizations are neither desirable nor possible and that research is value-bound, making it impossible to differentiate causes from effects (Johnson and Onwuegbuzie, 2004).

During the 1990s a growing number of scholars started pointing out the inadequacy of the strict quantitative-qualitative division, arguing that the so-called “incompatibility thesis” (the claim that qualitative and quantitative research paradigms cannot and should not be mixed; Howe, 1988) is faulty. Instead, these scholars argue, there should be a third way, and they started promoting mixed methods research as a new research paradigm that could point in this third direction. In particular, they argue that although the two paradigms often portray themselves as opposites, they actually share basic agreements on several points (Phillips and Burbules, 2000); they both use empirical data to address research questions, they both aim to minimize confirmation bias and invalidity, and they both attempt to provide justifiable claims about human activities and the environments in which they unfold. The middle road, then, according to Johnson and Onwuegbuzie (2004), is to acknowledge that what appears objective can vary across individuals because what we observe is affected by our background knowledge, theories and experiences. Observation is, in other words, not a direct window into “reality”, and will thus not provide final proof. But this does not mean that all is relative; rather, what we obtain is probabilistic evidence.

So, why use mixed methods? Well, in short, because it allows you to overcome the shortcomings of the individual methods (qualitative and quantitative) and to break down the confines of traditional perspectives (Gulbrandsen, 2012, p. 48). First and foremost, by mixing methods you will be more likely to avoid the limitations of purely quantitative or qualitative studies. Quantitative studies are often criticized for not including context and for not providing the participants with a voice, and qualitative studies are often discounted for potential researcher biases, smaller sample sizes, and lack of generalizability (Miller et al., 2011). Mixed methods can include context and participants’ voices and still be neutral and generalizable. Second, mixed methods research makes triangulation possible (i.e. seeking convergence and confirmation of results from different methods studying the same phenomenon), hence also allowing the investigation to be informed by the findings from one method when utilizing the other.
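To make the logic of triangulation concrete, here is a minimal sketch in Python of how convergence between a quantitative and a qualitative strand might be checked. The participants, scores, codes and the agreement rule are all invented for the purpose of illustration; actual mixed methods integration is a matter of interpretation, not a single formula.

```python
# A minimal, illustrative sketch of methodological triangulation:
# do the quantitative results and the qualitative findings converge?
# All data, codes and thresholds below are invented for illustration.

# Quantitative strand: mean satisfaction scores per participant (1-5 scale).
survey_scores = {"p1": 4.5, "p2": 2.0, "p3": 4.0, "p4": 1.5}

# Qualitative strand: the dominant theme coded from each participant's interview.
interview_codes = {"p1": "positive", "p2": "negative", "p3": "positive", "p4": "negative"}

def strands_converge(score: float, code: str, cutoff: float = 3.0) -> bool:
    """Crude convergence rule: high scores should co-occur with positively
    coded interviews, low scores with negatively coded ones."""
    return (score >= cutoff) == (code == "positive")

agreement = [strands_converge(survey_scores[p], interview_codes[p]) for p in survey_scores]
print(f"Strands agree for {sum(agreement) / len(agreement):.0%} of participants")
```

In a sequential design, the same kind of comparison would simply be run after the first strand has informed the design of the second.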

Figure 1: A matrix of mixed methods design (Johnson and Onwuegbuzie, 2004, p. 22)

How to use mixed methods? Well, there are two basic approaches: concurrent and sequential. The first implies that you conduct the qualitative and the quantitative research simultaneously. The second implies that you first conduct one (e.g. the quantitative) and then, based on the findings from the first, conduct the other (e.g. the qualitative).

In a review of the field of mixed methods, Tashakkori and Creswell (2007, p. 208) found three dominant ways of formulating mixed methods research questions:

  1. In the first approach, researchers create separate quantitative and qualitative questions, followed by an explicit mixed methods question. For example, if a study involves concurrent quantitative and qualitative data collection, this type of mixed question could ask, “Do the quantitative results and the qualitative findings converge?”. If a study is more sequential, the question might be “How do the follow-up qualitative findings help explain the initial quantitative results?” or “How do qualitative results explain (expand on) the experimental outcomes?”
  2. In the second approach, researchers create an overarching mixed research question, which is later broken down into separate quantitative and qualitative subquestions to be answered in each strand or phase of the study. This is more frequent in concurrent studies than in sequential ones. The overarching question may be present only implicitly rather than explicitly stated. An example is Parmelee, Perkins, and Sayre’s (2007) study exploring “how and why the political ads of the 2004 presidential candidates failed to engage young adults”. The authors followed this implicitly stated question with three specific subquestions: “How does the interaction between audience-level and media-based framing contribute to college students’ interpretations of the messages found in political advertising?”, “To what extent do those interpretations match the framing found in the ads from the 2004 U.S. presidential election?” and “How can political ads be framed to better engage college students?”. As another example, in a concurrent design, a mixed methods question might be “What are the effects of Treatment X on the behaviors and perceptions of Groups A and B?” The component questions drawn from this overarching mixed question might then be “Are Groups A and B different on Variables Y and Z?” (the quantitative strand) and “What are the perceptions and constructions of participants in Groups A and B regarding Treatment X?” (the qualitative strand).
  3. In the third approach, researchers create research questions for each phase of a study as the study evolves. If the first phase is quantitative, its question would be framed as a quantitative question or hypothesis; if the second phase is qualitative, its question would be framed as a qualitative research question. This approach is found in sequential studies more often than in concurrent ones.

Blue Ocean Strategy

The concept of Blue Ocean Strategy was presented by W. Chan Kim and Renée Mauborgne, professors at INSEAD, in an article published in Harvard Business Review in 2004 and more thoroughly in their book Blue Ocean Strategy published in 2005.

Here’s a video they made, briefly explaining what it is all about:

In short, they argue that the global marketplace of the 21st century consists of red and blue oceans. Red oceans represent the known marketplace (meaning all current industries, companies etc.). Much in line with how Porter (as presented in chapter 2) describes the market, industry boundaries here are defined and accepted, and the competitive rules of the game are known. As such, an organization’s strategy in a red ocean is all about how to outperform its competitors, with the overall aim of increasing market share. Kim and Mauborgne argue that these oceans quickly get crowded, which in turn means that the prospects for profits and growth are reduced. To sustain themselves in the marketplace, practitioners of red ocean strategy hence focus on creating competitive advantage, most often by analysing what their competitors do and then aiming to do it better. Here, seizing a larger market share is understood as a zero-sum game, in which one organization’s gain is accomplished at the expense of another. Following the logic presented in chapter 2, cost and value are seen as trade-offs, and the organization hence has to choose either a cost or a differentiation position. The only way to win is to “attack and kill” the others; hence the term “red oceans”.

Blue oceans, by contrast, describe the unknown marketplace (meaning any industry, company, product line etc. that does not already exist): a marketplace void of competition, because there simply are no competitors. In such a market, demand is shaped and created by the supplier and no competitive rules are yet set, which means, according to Kim and Mauborgne, that there are lots of opportunities for growth and profit. Why? Because this market is without given boundaries or industry structure, allowing the organization (the supplier) to set the rules. As such, Kim and Mauborgne take what they call a reconstructionist view. That is, they argue that structure and market boundaries only exist in the minds of managers (they are constructions, not “God-given”) and are hence open to reconstruction. The idea is that by acknowledging the construction of the market, and hence not allowing oneself to be limited by that construction but rather reconstructing it, organizations can tap into undiscovered demand.

How? Well, Kim and Mauborgne argue that this can be done by shifting focus from supply to demand, from a sole focus on competition to a focus on innovation. In order to do so, organizations must simultaneously pursue the strategies of differentiation and low cost, otherwise conceived as mutually exclusive. In doing so, competition is rendered irrelevant, because the organization expands the demand side rather than the supply side, which in turn allows it to play a non-zero-sum game with high profit possibilities.

Though popular, the concept has received a fair share of criticism. For instance, in Holt and Cameron’s book Cultural Strategy (2010), it is argued that while Kim and Mauborgne present the concept as new, many of its elements have been presented and covered elsewhere (e.g. in the theory of Six Sigma). In addition, the case study on which the blue ocean theory is based has been problematized (both its method and its selection), as has Kim and Mauborgne’s failure to address strategic communication as a vital part of an organization’s success with innovation (see e.g. here, here, here and here). In short, though we find the name and the idea catchy (which also explains its popularity), we do not think it is entirely novel, nor do we think that it is developed enough to stand alone.

Theories of persuasion

When was the last time you were persuaded by someone? That is, made up your mind about something, changed your opinion on a matter or did one thing rather than another because of what was communicated to you? Our guess is that these questions turn out to be more difficult to answer than might be expected. Although we are constantly influenced by the flows of communication in which we engage, the exact moment and cause of persuasion usually eludes us. Was it a forceful argument, the authority of the communicator, the emotions stirred in us? Classical rhetoric suggests that persuasion arises from a combination of all of the above. These three forms of appeal are termed logos (appeal by reason), ethos (appeal by character) and pathos (appeal by emotion), respectively. Persuasion, the ancients tell us, arises if and when these three are combined in an appropriate manner, making a communicated utterance persuasive. This understanding of persuasion begins with the communicator and his or her intention to persuade; it sees persuasion as the planned effort on the part of the speaker to shape the message in such a way as to make it convincing. Having the intention to persuade someone and using all the means available, however, is not the same as succeeding in this endeavour. An utterance may be ever so beautifully crafted, its reasoning may be impeccable, the communicator may be just the right person to deliver the message – and yet the communication may fail utterly in having the desired effect on the audience. So, what is persuasion? Here are three possible answers.

First, we should not necessarily give up the classical mode of explanation just because actual efforts at persuading are not always effective. Aristotle, for instance, clearly saw that being able to ‘see the available means of persuasion’ is not the same as actually persuading; he was concerned with the crafting of the message, not with its actual effect. And in many ways this is still as good as it gets from the communicator’s point of view. We can try as best we may to analyse the situation, understand our audience and attune our reasoning and style of presentation to the situation at hand, but once the communication is out there, it is also out of our hands. This is the reasoning behind Lloyd F. Bitzer’s (1968) idea that rhetorical situations call for fitting responses. A rhetorical situation, as Bitzer defines it, consists of an exigence, an audience, and a number of constraints. The exigence is that which calls forth the intention to persuade, i.e. rhetorical discourse; it is an ‘imperfection marked by urgency’, something that ought to change and can be changed by means of communication. The audience is the group of people who are able to correct the imperfection; those who have the ability to make the necessary change and who are also open to being persuaded by the communicator to do so. The audience, then, is not anyone who might happen to stumble upon the communication, but only those individuals (or groups) who are or can become mobilized as mediators of change. Finally, constraints are all those elements of the situation that must be considered if the communication is to succeed; e.g. the audience’s prior knowledge about and attitude towards the topic at hand, the communicator’s personality and authority (in relation to the topic and the audience), other communicators who have similar or different opinions on the matter, and the circumstances in which the communication is to take place (the medium and the genre). The constraints, then, are many and varied; they can generally be divided into those aspects of the situation that the communicator has little or no chance of changing but must take into account (e.g. the procedure for making a decision, the opponents’ arguments, the general norms and values of the audience), and those that can be shaped directly by the communicator (e.g. through the selection of a certain argumentative strategy or the adoption of a particular communicative style). Bitzer’s final argument is that if and when a communicator analyses these three elements correctly, he or she will deliver a fitting response – that is, an utterance that holds persuasive potential.

However, Bitzer’s position has been heavily criticized for being both deterministic and functionalistic. Richard E. Vatz (1973) offers one of the earliest and most influential articulations of this critique. Vatz basically turns Bitzer’s argument on its head, stating that situations do not determine persuasive efforts, nor do such efforts function by being fitted to situations. Instead, it is persuasive efforts that create situations, establish exigences, call forth audiences. This is the second answer to the question of persuasion: it is the creation of meaning by communicators. Here, a main issue becomes the identification between the communicator and the audience; persuasion can (only) happen when there is common ground, when the communicator and the audience create meaning in similar ways. We can return to the classics for an explanation of this process. In the words of Cicero:

“If you wish to persuade me, you must think my thoughts, feel my feelings, and speak my words.” (Marcus Tullius Cicero)

This may still sound somewhat like a fitting (or rather, fitted) response, but the fit is now with an audience rather than with a situation. And audiences, as e.g. Edwin Black (1970) has argued, can also be shaped; they may even be constituted in and through communication (Charland, 1987). This second answer, then, views persuasion more as a process of creating common meaning and less as an intentional effort on the part of the speaker.

This takes us to a third possible answer, namely that persuasion is inherent to the process of communication rather than a property of speaker and/or audience. Again, we can find traces of this answer in classical rhetoric. Most notably, Gorgias saw speech as all-powerful, using the story of Helen in the Iliad as an example of how human beings can be overcome by communication:

…if persuasive discourse deceived her soul, it is not on that account difficult to defend her and absolve her of responsibility, thus: discourse is a great potentate, which by the smallest and most secret body accomplishes the most divine works; for it can stop fear and assuage pain and produce joy and make mercy abound.

Whereas persuasion in the Aristotelian sense is a rational exercise in finding the best reasons that may or may not convince an audience, Gorgias sees it as a passionate process; one in which the persuaded party becomes fully and unwillingly immersed. However, Gorgias seems to assume that the communicator is not passionately involved but is rather to blame for manipulating the audience’s emotions. This attitude both gave rhetoric a bad name for centuries and does not stand to reason: if communication were this powerful, how could communicators themselves avoid its force? Would not the manipulator be as open to manipulation as anyone else? Or, conversely, if one were able to manipulate, would that not also mean being able to see through other people’s manipulation?

A more appropriate answer, and one that takes all three options into account, then, is that persuasion is the process of bringing speakers, audiences and situations into being in such ways that common meanings are formed. This means that persuasion is both within and beyond the reach of speakers and audiences; it is a force that cannot be controlled entirely by either. Communicators, on the one hand, are not free to persuade as they intend. Audiences, on the other hand, cannot choose freely to remain unaffected by communication. Persuasion is both a driver and an outcome of the communicative process.

Neuromarketing

‘It s(m)ells like fresh bread’

Recent advances in the field of neuromarketing have raised awareness of the ways in which consumers can be influenced by sensory stimuli that they are not necessarily aware of – or that they react to before making cognitive sense of them. Such insights provide empirical backing for the theoretical premise of what has been labelled the ‘affective turn’ within the social sciences and the humanities (see Clough, 2008 for an overview): namely that, to simplify the point somewhat, ‘the skin is faster than the word’ (Massumi, 1995). We experience affective intensities before we can describe them as emotions – and we act on our affectively triggered instincts before we know, let alone can justify, what we do.

These points are not in themselves novel, but today marketers have more sophisticated means of putting them to use. For instance, a supermarket may dispense the smell of freshly baked homemade bread in its aisles to increase sales of its absolutely odourless, mass-produced toast. Or, even more cunningly, the supermarket could place its in-store bakery near the entrance so as to whet customers’ appetites, since hungry shoppers are heavy shoppers (Ashford, 2015).

In a broader sense, just as Marcel Proust famously was prompted ‘in search of lost time’ by eating a madeleine cake, the smell of bread may transport consumers to sweet memories of homely comfort. These may also, as we pass the bakery time and again, come to be associated with the store. And once the supermarket has caught the scent of money, why not move on to the other senses?

Neuromarketers have found that taste testing reduces customers’ sense of risk-taking, just as touch is often used to validate a product (e.g. adding weight to a product to indicate its sturdiness, seriousness or quality). Likewise, colour-coding (e.g. blue for trust, green for relaxation) and other visual stimuli (pictures of fresh fruit or models making eye contact) can influence our shopping behaviour, and, more generally, sounds (energetic music) can put us in the right mood (Genco, Pohlmann & Steidl, 2013).

Even if customers are not, or only vaguely, aware of all these sensory stimuli, the stimuli more likely than not shape each trip to the local supermarket decisively, just as they may be brought to bear, more generally, on our experience with brands (Lindstrom, 2005). Even brands that do not have the same intuitive link to the senses as supermarkets can profit greatly by working on and with the senses – just think of the crisp smell of a new pair of sneakers or that strangely satisfying sound of turning on a computer.

Alluring as it may be, neuromarketing is not unproblematic. First, there is the ethical issue. Do we really want marketers to be messing around at the liminal zones of our consciousness – and beyond? Second, neuromarketing may seem soundly based in scientific advances, and the combination of marketing tools and brain scans does provide impressive backing for claims of effectiveness. However, affect is not the same as effect. Or, in plainer terms, the route from stimulus to response is not as direct as the above account might suggest. While the model of decision-making that we espouse in Strategizing Communication firmly breaks with the idea of rational choice, we are equally uneasy with the ‘emotional determinism’ of neuromarketing. Decisions, we propose, are much more complicated processes in which sensory impulses do play a key part, but in which conscious cognition is also involved. The real potential, then, lies in finding ways of combining the two.

Models of budgeting

Return on investment (ROI) seems to be the nightmare of the strategic communicator. With financial executives constantly concerned that they are not getting enough bottom-line bang for the communicative buck, the burden of proof is often on the communications professional. Whilst the insecurity about what might be lost by not communicating can sometimes warrant expenditures in the here and now, harder evidence is usually needed to secure long-term funding.

The problem, then, is one of effect. It is often difficult to prove the (economic) effect of specific communication initiatives, but it is possible to establish a general connection between expenditures and profitability at the level of the over-all communication strategy. Let us look at this general connection before considering the available models for actually determining the right level of expenditures and, hence, establishing the communication budget.

Figure: Customer equity relative to expenditure

The marketing management scholars Roland T. Rust, Katherine N. Lemon and Valerie A. Zeithaml (2004) argue that there is a connection between a firm’s spending on and economic return from marketing efforts at the strategic level. They make the point by looking at customer equity relative to expenditure, showing that an increase in the over-all budget will also increase each customer’s lifetime value. Thus, they argue that to get a general idea of the ROI of marketing communications, we should not only look at increased sales, but take such issues as brand perception and brand loyalty into account as well. They apply the model to a set of empirical cases, showing that increased spending resulted in increased customer equity in each case.
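The underlying logic can be illustrated with the standard discounted customer lifetime value formula. The following Python sketch is not Rust, Lemon and Zeithaml’s actual model (which is considerably more elaborate); all figures are invented, and it merely shows how a campaign that lifts retention translates into increased customer equity.

```python
# Illustrative sketch of the customer-equity logic: customer equity is the sum
# of (discounted) customer lifetime values, so anything that raises retention
# or margin per customer raises equity. All figures below are invented.

def lifetime_value(margin: float, retention: float, discount: float,
                   horizon: int = 10) -> float:
    """Discounted lifetime value of a single customer:
    CLV = sum over t of margin * retention**t / (1 + discount)**t."""
    return sum(margin * retention**t / (1 + discount)**t for t in range(horizon))

# Before vs. after a hypothetical brand campaign that lifts retention.
base = lifetime_value(margin=100.0, retention=0.70, discount=0.10)
lift = lifetime_value(margin=100.0, retention=0.80, discount=0.10)

n_customers = 10_000
print(f"Customer equity before: {n_customers * base:,.0f}")
print(f"Customer equity after:  {n_customers * lift:,.0f}")
print(f"Gain to weigh against the campaign's cost: {n_customers * (lift - base):,.0f}")
```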

Strategic communication, obviously, is not equal to marketing communication, but given the inclusion of indirect effects relating to general brand value, we may assume that Rust, Lemon and Zeithaml’s argument applies to communications efforts more generally. However, we may also assume that the positive effect of increasing communication budgets does not go on indefinitely, but rather takes the shape of an S-curve: increased spending does not take immediate effect, but each increase will have a relatively large impact once the budget reaches a certain size. If one continues to spend more, however, the return will gradually peter out until one reaches the point at which the ROI of extra spending is zero or, indeed, negative.

Figure: The advertising S-curve

The exact saturation point is likely to be highly contextual and can probably only be located empirically, meaning that budgets should constantly be adjusted as organizational goals, market situations and other contextual factors change the demand for and/or restraints on communications efforts.
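One way to see the argument is to model the response to spending as a logistic function, which produces exactly such an S-curve. The following Python sketch uses purely illustrative parameters (the maximum response, steepness and midpoint are assumptions, not empirical estimates) and prints the marginal return of each additional block of spending:

```python
import math

# Sketch of the S-curve argument, assuming a logistic response to spending:
# response(s) = r_max / (1 + exp(-k * (s - midpoint))).
# r_max, k and midpoint are arbitrary illustrations, not empirical estimates.

def response(spend: float, r_max: float = 1_000_000.0,
             k: float = 0.00004, midpoint: float = 200_000.0) -> float:
    """Hypothetical return attributable to a given communication spend."""
    return r_max / (1 + math.exp(-k * (spend - midpoint)))

# Marginal return of the next 10,000 spent: small at first, large around the
# middle of the curve, and close to zero once the saturation point is passed.
for spend in range(0, 500_001, 100_000):
    marginal = response(spend + 10_000) - response(spend)
    print(f"spend {spend:>7,}: marginal return of the next 10,000 = {marginal:>9,.0f}")
```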

In the absence of a reliable and stable measure of the optimal communication budget, organizations have employed various models for establishing workable budgets. The Spanish professor of marketing J. Enrique Bigné (1995) provides a useful review of seven such methods:

  • The arbitrary approach
    • Setting the budget arbitrarily may not sound like the most strategic choice and, indeed, this method is not held in high esteem. However, in situations of great uncertainty it can be the only viable route. In such situations, arbitrary budgeting allows for maximum flexibility and adaptability, but it also means one will have to rely on ‘gut feelings’ rather than strict analysis, and it means effects are difficult to predict, let alone measure.
  • Affordability
    • Affordability is a slightly more sophisticated model than the arbitrary one in so far as spending is now judged against what the organization can actually afford. However, this means increased conservatism and inflexibility as focusing solely on what is currently affordable does not take into account what might be gained from increased investment in communication. If one only spends what one can afford, one may lose out on growth opportunities, but this may be the only viable route for start-ups and small companies until growth has actually set in.
  • Use of previous year’s budget
    • For established organizations in stable markets, using the previous year’s budget to establish the current one may be an attractive alternative to affordability. One already knows what is needed and that it is affordable. However, the assumption that the present (and future) will be like the past is constantly proven wrong in today’s communications landscape. Further, the model is not very useful if the organization sets new goals or when its market situation changes.
  • Percentage of sales
    • The percentage of sales method has long been the most common tool for establishing over-all budgets. This is an easy and reliable method for establishing the budget top-down. It is more flexible than the model of using the previous year’s budget, yet guards against over-spending. Still, the method is quite conservative, especially if the set percentage is based on the sales of the previous year. To allow for a change in strategic goals, one may set the percentage in relation to projected sales, but this incurs the risk of not reaching the new goals.
  • Competitive parity
    • The principle of this method is that one should spend as much on communication as one’s competitors. Rather than setting the budget relative to the previous year’s spending or based on a percentage of (previous or projected) sales, then, this method looks to the environment for an indication of adequate expenditure. Taking the actions of others into account is important, but competitive parity only works if one can find out what competitors are actually spending, if the competitors know what they are doing and if all actors in a market have the same objectives. It is quite unlikely that any, let alone all, of these criteria are ever fulfilled.
  • Share of voice
    • Share of voice also begins with an analysis of what competitors are doing, but sets goals relative, rather than equal to this. The starting point is a decision on how ‘loud’ the organization should be in comparison to other actors in a market, followed by an analysis of what it will take to get the desired share of voice. This method is problematic in two respects; first, share of voice does not equal market share and, second, in today’s media landscape it is increasingly difficult to control who gets to speak how much – and speaking is not the same as being heard. One can no longer ensure a share of voice through paid media exposure. Instead, one has to partake in a process, the costs of which are hard to set – and the effects of which are impossible to predict.
  • Objectives and tasks
    • This leaves us with the objectives and tasks method in which the budget is built bottom-up based on the specific communication tasks deemed to be necessary to reach set objectives. This is in many respects the most sophisticated model as it actually links the return of the communication with the investment needed. Thus, one may use the objectives and tasks method to determine the cost of initiatives aimed at, say, increasing sales and then calculate whether the return is larger than the investment. However, such estimates provide no guarantee that the tasks will actually fulfil the objectives. Further, if all tasks are to yield a return, the model becomes a restriction rather than a help as it limits communication initiatives to those that can be deemed directly profitable. Finally, establishing a full communication budget based on the objectives and tasks model is an extremely laborious process. In sum, this method is the most appropriate for campaigns and other identifiable communication initiatives, but it can hardly stand alone at the level of the communication strategy.

None of the existing models, then, is perfect, meaning that most organizations are likely to use a combination of two or more methods – and rightly so. First, some top-down tool for setting the general budget is necessary (e.g. the percentage of sales method); second, a bottom-up method of establishing the cost of specific initiatives is also needed (thus, objectives and tasks should be considered); third, taking the communication efforts of competitors into account is also important (meaning some notion of what is needed to gain the desired share of voice is required); fourth, other environmental factors could change the situation rapidly and must constantly be monitored (some degree of arbitrariness, then, must always be accepted).
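To make the contrast between top-down and bottom-up budgeting concrete, here is a small Python sketch comparing the percentage of sales method with the objectives and tasks method; every figure and task name is invented:

```python
# Illustrative contrast between a top-down and a bottom-up budgeting method.
# All figures and task names are invented; the point is the difference in logic.

# Percentage of sales (top-down): a fixed share of projected revenue.
projected_sales = 50_000_000.0
percentage = 0.02  # a hypothetical 2% norm
top_down_budget = projected_sales * percentage

# Objectives and tasks (bottom-up): cost out the tasks behind each objective.
tasks = {
    "launch campaign for product X": 450_000.0,
    "employer branding programme": 180_000.0,
    "crisis preparedness training": 90_000.0,
    "ongoing social media presence": 240_000.0,
}
bottom_up_budget = sum(tasks.values())

print(f"Top-down (2% of projected sales): {top_down_budget:,.0f}")
print(f"Bottom-up (objectives and tasks): {bottom_up_budget:,.0f}")
# A gap between the two figures is the starting point for negotiation: either
# the task list must be trimmed or the general frame must be argued upwards.
```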