Category Archives: Method

Check this out: Research methods on Pokémon Go

Maybe the Pokémon Go craze has already peaked, but that does not mean we should stop thinking about what might be learned from it. What might Pokémon Go tell us about, for example, offline-online integration, in-game and through-game affordances, or socialisation-through-gamification? And, importantly, how might we study Pokémon Go as such?


The issue of the methodological opportunities and challenges that Pokémon Go poses has been addressed by Clark & Clark (2016). They conclude that we may begin to understand the supercomplexity of the social intervention that is Pokémon Go by using mixed methods creatively.

Mixed Methods

The Journal of Mixed Methods Research defines mixed methods as “research in which the investigator collects and analyses data, integrates the findings, and draws inferences using both qualitative and quantitative approaches or methods in a single study or program of inquiry”.

In many texts on mixed methods, this type of research is presented as a way to make peace between two “adversaries”: the supporters of quantitative vs. the supporters of qualitative research. The argument is that during the last century these “adversaries” have engaged in a so-called “paradigm war”. On one side are the quantitative purists who articulate assumptions about research that are in line with what we often label positivist philosophy: social observations should be treated as entities in much the same way that physical scientists treat physical phenomena, and the observer is separate from the entities that are subject to observation (Johnson and Onwuegbuzie, 2004). Here, any scientific inquiry should be objective, with the aim of making time- and context-free generalizations, where real causes of scientific outcomes can be deemed reliable and valid (Gulbrandsen, 2012, p. 48). On the other side we have the qualitative purists who reject positivism and argue for a range of alternatives, such as constructivism, idealism, relativism, humanism, hermeneutics, or postmodernism. Though the anti-positivists differ among themselves in many respects, they all argue for the existence of multiple and constructed realities, as opposed to the singular reality of positivism. As such, they all argue that the observer and the observed cannot be separated because the (subjective) observer is the only source of the ‘reality’ that is to be observed (Guba, 1990). Beyond this, they also share the stance that time- and context-free generalizations are neither desirable nor possible and that research is value-bound, hence making it impossible to differentiate causes and effects (Johnson and Onwuegbuzie, 2004).

During the 1990s a growing number of scholars started pointing out the inadequacy of the strict quantitative-qualitative division, arguing that the so-called “incompatibility thesis” (the claim that qualitative and quantitative research paradigms cannot and should not be mixed) (Howe, 1988) is faulty. Instead, these scholars argue, there should be a third way, and they started promoting mixed methods research as a new research paradigm that could point in this third direction. In particular, they argue that although the two paradigms often portray themselves as opposites, they actually share basic agreements on several points (Phillips and Burbules, 2000): they both use empirical data to address research questions, they both aim to minimize confirmation bias and invalidity, and they both attempt to provide justifiable claims about human activities and the environments in which they unfold. The middle road, then, according to Johnson and Onwuegbuzie (2004), is to acknowledge that what appears objective can vary across individuals because what we observe is affected by our background knowledge, theories and experiences. Observation is, in other words, not a direct window into “reality”, and will thus not provide final proof. But this does not mean that all is relative; rather, what we obtain is probabilistic evidence.

So, why use mixed methods? Well, in short, because it allows you to overcome the shortcomings of the individual methods (qualitative and quantitative) and to break down the confines of traditional perspectives (Gulbrandsen, 2012, p. 48). First and foremost, by mixing methods you will be more likely to avoid the limitations of purely quantitative or qualitative studies. Quantitative studies are often criticized for not including context and for not providing the participants with a voice, and qualitative studies are often discounted for potential researcher biases, smaller sample sizes, and lack of generalizability (Miller et al., 2011). Mixed methods can include context and participants’ voices and still be neutral and generalizable. Second, mixed methods research makes triangulation possible (i.e. seeking convergence and confirmation of results from different methods studying the same phenomenon), hence also allowing the investigation to be informed by the findings from one method when utilizing the other.

[Figure 1: A matrix of mixed methods design (Johnson and Onwuegbuzie, 2004, p. 22)]

How to use mixed methods? Well, there are two basic approaches: concurrent or sequential. The first implies that you conduct both the qualitative and the quantitative research simultaneously. The second implies that you first conduct one (e.g. quantitative) and then, based on the findings from the first, conduct the second (e.g. qualitative).
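As a minimal illustration of the sequential approach, consider the following Python sketch, in which a quantitative phase (survey scores) informs the sampling for a qualitative follow-up (interviews). The data, names and threshold are all invented for the purpose of illustration.

```python
from statistics import mean, stdev

# Toy survey data from a (hypothetical) quantitative phase.
survey = {"anna": 2.1, "ben": 4.5, "carla": 4.8, "dev": 1.8, "eli": 3.2}

# Phase 1 (quantitative): describe the distribution of scores.
scores = list(survey.values())
m, s = mean(scores), stdev(scores)
print(f"mean={m:.2f}, sd={s:.2f}")

# Phase 2 (qualitative): purposively sample extreme cases for follow-up
# interviews, so the qualitative findings can help explain the
# quantitative results.
extremes = [name for name, score in survey.items() if abs(score - m) > s]
print("invite to interview:", extremes)
```

In a concurrent design, by contrast, both strands would be carried out independently and only brought together at the stage of interpretation.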

In a review of the field of mixed methods, Tashakkori and Creswell (2007, p. 208) found that there are three dominant ways of formulating research questions in mixed methods research.

  1. Here researchers create separate quantitative and qualitative questions, followed by an explicit mixed methods question. For example, if a study involves concurrent quantitative and qualitative data collection, this type of mixed question could ask, “Do the quantitative results and the qualitative findings converge?”. If a study is more sequential, the question might be “How do the follow-up qualitative findings help explain the initial quantitative results?” or “How do qualitative results explain (expand on) the experimental outcomes?”
  2. Here researchers create an overarching mixed research question, which is later broken down into separate quantitative and qualitative subquestions to answer in each strand or phase of the study. This is more frequent in concurrent studies than in sequential ones. Although this overarching question might be implicitly present, sometimes it is not explicitly stated. An example is Parmelee, Perkins, and Sayre’s (2007) study exploring “how and why the political ads of the 2004 presidential candidates failed to engage young adults”. The authors followed this implicitly stated question with three specific subquestions: “How does the interaction between audience-level and media-based framing contribute to college students’ interpretations of the messages found in political advertising?”, “To what extent do those interpretations match the framing found in the ads from the 2004 U.S. presidential election?” and “How can political ads be framed to better engage college students?”. As another example, in a concurrent design, a mixed methods question might be “What are the effects of Treatment X on the behaviors and perceptions of Groups A and B?” Consequently, the component questions that are drawn from the overarching mixed question might be “Are Groups A and B different on Variables Y and Z?” (the quantitative strand) and “What are the perceptions and constructions of participants in Groups A and B regarding Treatment X?” (the qualitative strand).
  3. Here researchers create research questions for each phase of a study as the study evolves. If the first phase is a quantitative phase, the question would be framed as a quantitative question or hypothesis. If the second phase is qualitative, the question for that phase would be framed as a qualitative research question. This is found in sequential studies more than in concurrent studies.

Neuromarketing

‘It s(m)ells like fresh bread’

Recent advances in the field of neuromarketing have raised awareness of the ways in which consumers can be influenced by sensory stimuli that they are not necessarily aware of – or that they react to before making cognitive sense of them. Such insights provide empirical backing to the theoretical premise of what has been labelled the ‘affective turn’ within the social sciences and the humanities (see Clough, 2008 for an overview). Namely that, to simplify the point somewhat, ‘the skin is faster than the word’ (Massumi, 1995). We experience affective intensities before we can describe them as emotions – and we act on our affectively triggered instincts before we know, let alone can justify, what we do.

These points are not in themselves novel, but today marketers have more sophisticated means of putting them to use. For instance, a supermarket may dispense the smell of freshly baked homemade bread in its aisles to increase sales of its absolutely odourless, mass-produced toast. Or, even more cunningly, the supermarket could place its in-store bakery near the entrance so as to whet customers’ appetites, since hungry shoppers are heavy shoppers (Ashford, 2015).

In a broader sense, just as Marcel Proust famously was prompted ‘in search of lost time’ by eating a madeleine cake, the smell of bread may transport consumers to sweet memories of homely comfort. These may also, as we pass the bakery time and again, come to be associated with the store. And once the supermarket has caught the scent of money, why not move on to the other senses?

Neuromarketers have found that taste testing reduces customers’ sense of risk-taking, just as touch is often used to validate a product (e.g. adding weight to a product to indicate its sturdiness, seriousness or quality). Likewise, colour-coding (e.g. blue for trust, green for relaxation) and other visual stimuli (pictures of fresh fruit or models making eye-contact) can influence our shopping behaviour, and, more generally, sounds (energetic music) can put us in the right mood (Genco, Pohlmann & Steidl, 2013).

Even if customers are not, or only vaguely, aware of all these sensory stimuli, they more likely than not shape each trip to the local supermarket decisively, just as they may be brought to bear, more generally, on our experience with brands (Lindstrom, 2005). Even brands that do not have the same intuitive link to the senses as supermarkets can profit greatly by working on and with the senses – just think of the crisp smell of a new pair of sneakers or that strangely satisfying sound of turning on a computer.

Alluring as it may be, neuromarketing is not unproblematic. First, there is the ethical issue. Do we really want marketers to be messing around at the liminal zones of our consciousness – and beyond? Second, neuromarketing may seem soundly based in scientific advances, and the combination of marketing tools and brain scans does provide impressive backing for claims of effectiveness. However, affect is not the same as effect. Or, in plainer terms, the route from stimulus to response is not as direct as the above account might suggest. While the model of decision-making that we espouse in Strategizing Communication firmly breaks with the idea of rational choice, we are equally uneasy with the ‘emotional determinism’ of neuromarketing. Decisions, we propose, are much more complicated processes in which sensory impulses do play a key part, but in which conscious cognition is also involved. The real potential, then, lies in finding ways of combining the two.

Models of budgeting

Return on investment (ROI) seems to be the nightmare of the strategic communicator. With financial executives constantly concerned that they are not getting enough bottom-line bang for the communicative buck, the burden of proof is often on the communications professional. Whilst uncertainty about what might be lost by not communicating can sometimes warrant expenditures in the here-and-now, harder evidence is usually needed to secure long-term funding.

The problem, then, is one of effect. It is often difficult to prove the (economic) effect of specific communication initiatives, but it is possible to establish a general connection between expenditures and profitability at the level of the over-all communication strategy. Let us look at this general connection before considering the available models for actually determining the right level of expenditures and, hence, establishing the communication budget.

[Figure: customer equity]

The marketing management scholars Roland T. Rust, Katherine N. Lemon and Valerie A. Zeithaml (2004) argue that there is a connection between a firm’s spending on and economic return from marketing efforts at the strategic level. They prove the point by looking at customer equity relative to expenditure, showing that an increase in the over-all budget will also increase each customer’s lifetime value. Thus, they argue that to get a general idea of the ROI of marketing communications, we should not only look at increased sales, but take such issues as brand perception and brand loyalty into account as well. They apply the model to a set of empirical cases, proving that increased spending resulted in increased customer equity in each case.
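To make the notion of lifetime value concrete, a standard discounted formulation (a generic textbook version, not necessarily Rust, Lemon and Zeithaml’s exact specification) is

$$\mathrm{CLV} = \sum_{t=0}^{T} \frac{m \cdot r^{t}}{(1+i)^{t}},$$

where $m$ is the margin a customer generates per period, $r$ the retention rate, $i$ the discount rate and $T$ the time horizon. Customer equity is then the sum of the lifetime values of the firm’s current and future customers.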

Strategic communication, obviously, is not equal to marketing communication, but given the inclusion of indirect effects relating to general brand value, we may assume that Rust, Lemon and Zeithaml’s argument applies to communications efforts more generally. However, we may also assume that the positive effect of increasing communication budgets does not go on indefinitely, but rather takes the shape of an S-curve, where increased spending does not take immediate effect, but where each increase will have a relatively large impact once the budget is of a certain size. If one continues to spend more, however, the return will gradually peter out until one reaches the point at which the ROI of extra spending will be zero or, indeed, negative.
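As a minimal sketch of this reasoning, assume (purely for illustration) that the response to spending follows a logistic curve; the functional form, the parameters and all figures below are invented, and the actual curve can only be estimated empirically.

```python
import numpy as np

# Hypothetical S-shaped response of communication return to spending,
# modelled as a logistic curve (an assumption for illustration only).
def expected_return(spend, cap=10_000_000, midpoint=2_000_000, steepness=1e-6):
    """Expected return (e.g. added customer equity) for a given spend."""
    return cap / (1 + np.exp(-steepness * (spend - midpoint)))

spend = np.linspace(0, 6_000_000, 601)
marginal = np.gradient(expected_return(spend), spend)  # return per extra unit spent

# The saturation point discussed above: the spend level at which an
# extra unit of spending no longer returns more than it costs.
saturation = spend[np.argmax(marginal <= 1.0)]
print(f"Extra spending stops paying for itself at roughly {saturation:,.0f}")
```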

[Figure: the advertising response S-curve]

The exact saturation point is likely to be highly contextual and can probably only be located empirically, meaning that budgets should constantly be adjusted as organizational goals, market situations and other contextual factors change the demand for and/or restraints on communications efforts.

In the absence of a reliable and stable measure of the optimal communication budget, organizations have employed various models for establishing workable budgets. The Spanish professor of marketing J. Enrique Bigné (1995) provides a useful review of seven such methods (a toy numerical comparison of some of them follows after the list):

  • The arbitrary approach
    • Setting the budget arbitrarily may not sound like the most strategic choice and, indeed, this method is not held in high esteem. However, in situations of great uncertainty it can be the only viable route. In such situations, arbitrary budgeting allows for maximum flexibility and adaptability, but it also means one will have to rely on ‘gut feelings’ rather than strict analysis, and it means effects are difficult to predict, let alone measure.
  • Affordability
    • Affordability is a slightly more sophisticated model than the arbitrary one in so far as spending is now judged against what the organization can actually afford. However, this means increased conservatism and inflexibility as focusing solely on what is currently affordable does not take into account what might be gained from increased investment in communication. If one only spends what one can afford, one may lose out on growth opportunities, but this may be the only viable route for start-ups and small companies until growth has actually set in.
  • Use of previous year’s budget
    • For established organizations in stable markets, using the previous year’s budget to establish the current one may be an attractive alternative to affordability. One already knows what is needed and that it is affordable. However, the assumption that the present (and future) will be like the past is constantly proven wrong in today’s communications landscape. Further, the model is not very useful if the organization sets new goals or when its market situation changes.
  • Percentage of sales
    • The percentage of sales method has long been the most common tool for establishing over-all budgets. This is an easy and reliable method for establishing the budget top-down. It is more flexible than the model of using the previous year’s budget, yet guards against over-spending. Still, the method is quite conservative, especially if the set percentage is based on the sales of the previous year. To allow for a change in strategic goals, one may set the percentage in relation to projected sales, but this incurs the risk of not reaching the new goals.
  • Competitive parity
    • The principle of this method is that one should spend as much on communication as one’s competitors. Rather than setting the budget relative to the previous year’s spending or based on a percentage of (previous or projected) sales, then, this method looks to the environment for an indication of adequate expenditure. Taking the actions of others into account is important, but competitive parity only works if one can find out what competitors are actually spending, if the competitors know what they are doing and if all actors in a market have the same objectives. It is quite unlikely that any, let alone all, of these criteria are ever fulfilled.
  • Share of voice
    • Share of voice also begins with an analysis of what competitors are doing, but sets goals relative to, rather than equal to, this. The starting point is a decision on how ‘loud’ the organization should be in comparison to other actors in a market, followed by an analysis of what it will take to get the desired share of voice. This method is problematic in two respects: first, share of voice does not equal market share and, second, in today’s media landscape it is increasingly difficult to control who gets to speak how much – and speaking is not the same as being heard. One can no longer ensure a share of voice through paid media exposure. Instead, one has to partake in a process, the costs of which are hard to set – and the effects of which are impossible to predict.
  • Objectives and tasks
    • This leaves us with the objectives and tasks method in which the budget is built bottom-up based on the specific communication tasks deemed to be necessary to reach set objectives. This is in many respects the most sophisticated model as it actually links the return of the communication with the investment needed. Thus, one may use the objectives and tasks method to determine the cost of initiatives aimed at, say, increasing sales and then calculate whether the return is larger than the investment. However, such estimates provide no guarantee that the tasks will actually fulfil the objectives. Further, if all tasks are to yield a return, the model becomes a restriction rather than a help as it limits communication initiatives to those that can be deemed directly profitable. Finally, establishing a full communication budget based on the objectives and tasks model is an extremely laborious process. In sum, this method is the most appropriate for campaigns and other identifiable communication initiatives, but it can hardly stand alone at the level of the communication strategy.
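As announced above, here is a toy numerical comparison of some of these methods in Python; every figure (sales, competitor spend, task costs, the 3% rate) is an invented assumption, and the point is merely how differently the methods can land for one and the same organization.

```python
# Hypothetical inputs for one and the same organization.
last_year_budget = 1_200_000
projected_sales = 50_000_000
competitor_budgets = [900_000, 1_500_000, 2_100_000]
planned_tasks = {
    "product launch campaign": 400_000,
    "investor relations programme": 250_000,
    "employer branding": 150_000,
}

budgets = {
    # Use of previous year's budget: simply roll it over.
    "previous year": last_year_budget,
    # Percentage of sales: a set share (here 3%) of projected sales.
    "percentage of sales": 0.03 * projected_sales,
    # Competitive parity: match the average competitor spend.
    "competitive parity": sum(competitor_budgets) / len(competitor_budgets),
    # Objectives and tasks: sum the cost of the planned initiatives.
    "objectives and tasks": sum(planned_tasks.values()),
}

for method, budget in budgets.items():
    print(f"{method:>22}: {budget:>12,.0f}")
```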

None of the existing models, then, are perfect, meaning that most organizations are likely to use a combination of two or more methods – and rightly so. First, some top-down tool for setting the general budget is necessary (e.g. through the percentage of sales methods); second, a bottom-up method of establishing the cost of specific initiatives is also needed (thus, objectives and tasks should be considered); third, taking the communication efforts of competitors into account is also important (meaning some notion of what is needed to gain the desired share of voice is required); fourth, other environmental factors could change the situation rapidly and must constantly be monitored (some degree of arbitrariness, then, must always be accepted).

Data-mining

Encyclopædia Britannica defines data mining as “knowledge discovery [through] the process of discovering interesting and useful patterns and relationships in large volumes of data. The field combines tools from statistics and artificial intelligence with database management to analyse large digital collections, known as data sets.” Or, put slightly differently, the term describes the process of extracting insights and knowledge from large data sets, that is, the processing of information that is available and can be extracted (mined) from, for example, social media platforms.
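As a minimal illustration of the kind of pattern discovery involved, the following Python sketch counts co-occurring hashtags in a handful of invented social media posts and reports the support of each frequent pair; the data and the threshold are purely illustrative.

```python
from collections import Counter
from itertools import combinations

# A handful of invented posts, each reduced to its set of hashtags.
posts = [
    {"#pokemongo", "#AR", "#gaming"},
    {"#pokemongo", "#gaming"},
    {"#AR", "#marketing"},
    {"#pokemongo", "#AR", "#marketing"},
]

# Count how often each pair of hashtags occurs together.
pair_counts = Counter()
for tags in posts:
    for pair in combinations(sorted(tags), 2):
        pair_counts[pair] += 1

# Report the 'support' of each pair: the share of posts containing both.
min_support = 0.5
for pair, count in pair_counts.items():
    support = count / len(posts)
    if support >= min_support:
        print(pair, f"support={support:.2f}")
```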

We will soon provide you with more on this topic, but until then, check out this introductory article by Chen et al. (1996). Also, check out SIGKDD, the Association for Computing Machinery’s Special Interest Group on Knowledge Discovery and Data Mining, and its publications on data mining.

SAP studies

By Ursula Plesner

It should come as no surprise that Strategy-as-Practice is not just a set of principles for doing research, but has also produced countless empirical studies of practice. They range from detailed conversation analytic and ethnomethodological studies to longitudinal approaches.

At one end of the continuum, Dalvir Samra-Fredericks is a proponent of close-up observation of strategists’ talk-based interactional routines. To go beyond the prescriptive strategy schools and arrive at an understanding of how strategists ‘think, behave and feel’, she suggests doing ethnographic studies with a focus on talk. In a study of a manufacturing company (Samra-Fredericks, 2003), she presents a very fine-grained analysis of strategists’ real-time deployment of relational-rhetorical skills and links these micro episodes to strategic outcomes on a macro level. Her analysis documents the moments where one strategist succeeds at creating the foundation for strategic directions – in specific moments, the strategist shapes the attention of others and creates the facts on the basis of which they act. The strategist does this through question and query, through the display of appropriate emotion, and through the use of metaphors and history. Samra-Fredericks analyzes interruptions, choice of words, tone of voice, and other elements of talk, and argues that all these types of linguistic evidence document the way that strategy is shaped through persuasion.

Another strand of Strategy-as-Practice research has looked into how strategy tools (for instance, concepts or models) are used by practitioners. In strategic management, tools are developed and applied to ensure competitive advantage, but from a Strategy-as-Practice perspective, it is more interesting to look at how these tools are used in practice (see e.g. Jarzabkowski & Kaplan, 2015 or Jarzabkowski, Spee & Smets, 2013). To take one example, Paroutis, Franco & Papadopoulos (2015) studied how managers interact visually with strategy tools during workshops. The researchers participated in a six-hour workshop and analyzed video data. They chose to analyze just one workshop in depth to closely examine group interactions – and the video method allowed them to study micro-behaviors and interactions that they consider key to understanding strategy practices. The aim of the specific workshop was to create a shared understanding of the organization’s strategic context, and to support this process, a particular tool was put to use: a computer system allowing participants to collectively create a ‘strategy map’ on a common screen, based on contributions from the individual laptops of each participant in the workshop. Examining the video material, the researchers first identified the strategic themes presented in the workshop and then examined the types of meaning negotiation and visual interaction associated with the themes. They observed how the tool could both constrain and enhance visual interactions during the workshop and used the conclusion of the study to argue for more attention to how workshop participants interact visually around tools in order to develop more reflexive strategy practices.

The focus on tools has also been extended beyond the single episode. In 2011, Sarah Kaplan published an article with the title ‘Strategy and PowerPoint: An Inquiry into the Epistemic Culture and Machinery of Strategy Making’. As the title indicates, Kaplan studies strategy as linked to culture and knowledge practices, and in this particular article she reports on a study of how PowerPoint has become a dominant element in strategy practices. Kaplan carried out a large ethnographic study in a single organization. Over eight months, she observed daily project activities, conducted 80 interviews, observed team meetings, participated in teleconferences and gained access to emails. Although the goal of the study was relatively broad – understanding strategy making as knowledge production – PowerPoint emerged as a pressing theme. She observed how PowerPoint – as a technology and a genre – was able to mobilize conversation and knowledge production in specific ways. PowerPoint worked to structure conversations both during strategy meetings and outside them. Basically, PowerPoint created spaces for discussion, simply because strategists needed to use it in specific ways when they drafted and presented strategies. The fact that PowerPoint documents are modular meant that they allowed for recombinations and adjustments of various kinds of material, and the fact that they could be shared among a wide range of actors and edited by a document owner made them a central site for the negotiation of meaning. Another example of looking into a specific tool and its use over time can be found in Martin Giraudeau’s study of strategic plans in practice. Giraudeau shows that by examining strategic plans, i.e. opening them up, reading their contents and studying how business actors use them, it becomes possible to see them as specific visual and textual representations of contexts and strategies that in practice enhance strategic imagination (Giraudeau, 2008).

At the other end of the continuum, we see more macro-oriented, longitudinal studies of strategy-making. When the City of Sydney embarked on a strategy project resulting in the Sustainable Sydney 2030 report, Martin Kornberger and Stewart Clegg followed the strategy-making process over a two-year period, from 2006 to 2008. The researchers set out to investigate not only how strategy was practiced, but also what kind of knowledge it was based upon and which power effects it had. They analyzed written documents produced as part of the strategy process, they conducted interviews with the core team involved in the strategy-making process, and they attended public events, strategy workshops and strategy meetings. They analyzed texts, transcriptions and notes by posing the questions: ‘How are different forms of knowledge mobilized in the strategy process?’ and ‘What performative impact does strategy have?’ Their analysis details how a city administration learns the strategy lingo, how economic language becomes the dominant voice in practicing strategy, and how strategy mobilizes people by inspiring them to ‘think big’. The analysis illustrates that strategy is also an aesthetic phenomenon – a storytelling endeavor to create ‘big pictures’ that are more convincing than technocratic planning discourses. The study contributes knowledge about how strategy practice becomes performative over time through constituting particular subjects and objects. It offers a perspective on strategy as a sociopolitical practice aiming at mobilizing people, marshalling political will and legitimizing decisions (Kornberger & Clegg, 2011).

As we see, empirical studies from the Strategy-as-Practice tradition expose multiple aspects of doing strategy – often with a focus on either discursive interactions or interactions around material objects or conceptual tools. These are studied through various methods, which are often qualitative.

Netnography

By Julie Uldam

Drawing on ethnographic methods such as participant observation, netnography was coined as a methodological term by the American professor of marketing Robert Kozinets during his thesis work in the mid-1990s. Netnography has been most prominent in consumer and marketing research, examining consumer preferences as they are expressed on bulletin boards and social media platforms such as Twitter (Arvidsson and Caliandro, 2016; Kozinets, 2002, 2011). However, netnography has also been adopted in other fields such as media studies, where Postill and Pink (2012) have developed the approach so as to sensitise it to ‘digital socialities’ and the interplay between the online and offline in activists’ uses of social media platforms.

Netnography is arguably distinct from related digital methods such as digital ethnography and online participant observation in that it provides a particular framework for analysis (Snee et al., 2016; see Hine, 2000 for virtual ethnography as an example of another framework with particular procedures and focal points), including ethical reflections on covert and overt research (see Uldam and McCurdy for a discussion of covert and overt participant observation in online and offline contexts). The adaptation and development of netnography demonstrates the usefulness of the (developed) approach for uncovering the dynamics of interactions between different societal actors, facilitating research beyond the confines of media-centric approaches and a focus merely on technological affordances. These potentialities of netnography make it a useful approach for studying the role of digital media in strategic communication, especially when strategy is seen as an on-going process influenced by multiple actors, as in Gulbrandsen and Just’s perspective. However, further development of netnography is necessary in order to sensitise the approach to the analytics of the power relations that underpin the possibilities for different actors to influence communication, online and offline.