| title (string, 6-301 chars) | keywords (sequence, 0-23 items) | instruction (string, 73-261k chars) |
|---|---|---|
Unpacking the combined effects of job scope and supervisor support on in-role performance
|
[
"Supervisor support",
"In-role performance",
"Job characteristics theory",
"Job scope",
"Social support theory"
] |
Summarize the following paper into a structured abstract.
Introduction: Organizations across various sectors are not giving proper attention to the importance of social context in highly challenging work environments for their employees. The social context in highly challenging work settings plays a vital role in shaping employees' proficiencies and behaviors. Social context at work captures the interpersonal interactions that develop as employees perform their jobs, roles and tasks (Chou, 2015; Grant and Parker, 2009). The relational perspective of work design focuses on how the social aspects of a job combined with job roles and/or tasks advance an employee's understanding of the job and how this understanding can impact organizational outcomes (Grant and Parker, 2009). In this regard, social exchange theory is a very useful approach for integrating relational or social work context variables into the job characteristics model (JCM). The JCM posits that five core job characteristics make jobs intrinsically motivating and satisfying and encourage the achievement of high performance (Hackman and Oldham, 1980). Despite the popularity and continuous use of the JCM in the job enrichment literature, the model has received multiple criticisms from other scholars (Grant et al., 2010).
Theoretical background and hypotheses: The JCM framework of Hackman and Oldham (1975) includes the five job characteristics that influence work attitudes and behaviors: skill variety, task identity, task significance, autonomy and feedback. They also suggested that growth need strength (GNS), a trait associated with an internal growth and achievement orientation, acts as a moderator of the job characteristics/outcomes relationship. Although the JCM continues to enjoy strong support among scholars for the proposed direct relationship between core job characteristics and outcomes, there is mixed evidence confirming the moderating effects of GNS or of other related variables such as personality (e.g. Xie and Johns, 1995). We first briefly review the research on the relationship between job characteristics and job performance and then discuss the proposed moderating effects of supervisor support in shaping this relationship.
Method: Data collection and sample
Discussion: Overall, there were two hypotheses, and both were confirmed. More specifically, our results pertaining to the hypothesized relations can be summarized as follows: all predictions concerning the direct relationship of job scope with in-role performance and the interaction effect of job scope and supervisor support on in-role performance were confirmed.
Conclusion: This study proposed that the relational context of jobs, particularly in the form of social support by supervisors, can motivate employees to perform their duties more productively. It also highlights how the structure of an employee's work with the manifestation of social support plays a critical role in shaping the employee's relationships with their supervisors. This study highlights that both job design and in-role performance have a strong link with relational contexts that can motivate employees to achieve their stated targets. Thus, this research improves our understanding of how social context at the workplace can make a difference for employees and their organizations.
|
Family firms, board structure and firm performance: evidence from top Indian firms
|
[
"India",
"Family firms",
"Corporate governance",
"Firm performance"
] |
Summarize the following paper into a structured abstract.
1. Introduction: Family businesses are the most dominant among publicly traded firms across the world (Shleifer and Vishny, 1986; Burkart et al., 2003; Anderson and Reeb, 2003; La Porta et al., 1999). In Continental Europe, about 44 per cent of publicly held firms are family-controlled (Faccio and Lang, 2002). In the USA, equity ownership concentration is modest; among the Fortune 500 firms, around one-third are family firms (Anderson and Reeb, 2003). The concentration of ownership was found to be higher in other developed nations (Faccio and Lang, 2002; Franks and Mayer, 2001; Gorton and Schmid, 1996). Family businesses dominate many developing economies, with about two-thirds of the firms in Asian countries owned by families or individuals (Claessens et al., 2000). In India, around 60 per cent of the top 500 listed firms are family firms (Chakrabarti et al., 2008). These family firms hold large equity stakes and more often than not have family representation on the board of directors. Family equity stakes in Indian firms may be divided across individual holdings by promoters and their family members, privately held firms and cross-holdings from other listed group businesses. The control and influence exerted by family firms may lead to performance differences relative to non-family firms (Anderson and Reeb, 2003).
2. Potential benefits of family firms: Controlling blockholders such as families can have potential benefits and competitive advantages. The extant literature on family firms focuses mostly on the agency problem. The problem in widely held firms includes a limited ability to select reliable agents, monitor the selected agents and ensure performance (Fama and Jensen, 1983). Effective monitoring can improve firm performance and reduce agency costs (Fama and Jensen, 1983; Jensen and Meckling, 1976). The principal-agent conflict associated with widely held firms may be reduced by concentrated blockholders, specifically family blockholders, as they have a significant economic incentive to monitor the management (Demsetz and Lehn, 1985). Specifically, given their substantial intergenerational ownership stake and the fact that a majority of their wealth is invested in a single business, family firms have a strong incentive to monitor management. The lengthy involvement of family members in the business, in some cases spanning generations, permits them to move further along the learning curve. This superior knowledge allows the family members to monitor the managers better (Anderson and Reeb, 2003). Also, this long-term presence of family members in their firm leads to longer investment horizons than those of other shareholders and may provide an incentive to invest according to market rules (James, 1999). This willingness of family firms to invest in long-term projects leads to greater investment efficiency (Anderson and Reeb, 2003; James, 1999; Stein, 1988). Another advantage of the long-term presence of families is that external suppliers, dealers, lenders, etc., are more likely to have favorable dealings with the same governing bodies, owing to long-term dealings and reputation, than with non-family firms. This sustained presence also gives the family a strong incentive to maintain its reputation (Anderson and Reeb, 2003).
3. Potential costs of family firms: Family firms are also said to be fraught with nepotism, family disputes, capital restrictions, exploitation of minority shareholders and executive entrenchment, all of which adversely affect firm performance (Allen and Panian, 1982; Chandler, 1990; Faccio et al., 2001; Gomez-Mejia et al., 2001; Perez-Gonzalez, 2006; Schulze et al., 2001, 2003). Expropriation of minority shareholders by large shareholders may generate additional agency problems (Faccio et al., 2001). Concentrated shareholders, by virtue of their controlling position in the firm, may extract private benefits at the expense of minority shareholders (Burkart et al., 2003). Capital expenditure can be affected by the families' preference for special dividends (DeAngelo and DeAngelo, 2000). Employee productivity can also be adversely affected by family shareholders acting on their own behalf (Burkart et al., 1997). Family firms are found to show biases toward business ventures of other family members, resulting in suboptimal investment (Singell, 1997). Family shareholders tend to forgo profit maximization activities owing to their financial preferences, which often conflict with those of minority shareholders (Anderson and Reeb, 2003).
4. Research design: data, variables and analysis: 4.1 Sample
5. Findings and discussions: 5.1 Summary statistics
6. Conclusion: In this study, we analyze the interaction effect of family firms and board governance factors on firm performance in a sample of top publicly traded Indian firms. We contribute to the growing literature on family firms by providing a multi-year analysis of the influence of board structure on firm performance in family firms vis-a-vis non-family firms in the Indian context. In this study, we endeavor to expand our understanding of corporate governance and to shed light on the impact of the proportion of shareholding, family representative directors and having a professional CEO in large family firms in India. The result of the panel data analysis shows that the interaction variable (family firm x board score) has a statistically significant negative association with firm performance measured by both Tobin's q and ROE. This is consistent with the results of Garcia-Ramos and Garcia-Olalla (2011) in the European context. Our result suggests that the incremental effect of the board index score for family firms, relative to non-family firms, is negative for firm performance.
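To make the reported specification concrete, the sketch below illustrates, under stated assumptions, how an interaction of family-firm status with a board-governance score could be estimated on panel data; it is not the authors' code, and every column name (tobins_q, family_firm, board_score, size, year, firm_id) and the control set are hypothetical placeholders.

```python
# Illustrative sketch only: pooled OLS with a family x board-score interaction,
# year dummies and firm-clustered standard errors. All variable names are
# hypothetical placeholders, not the authors' dataset.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("indian_firms_panel.csv")  # hypothetical input file

model = smf.ols(
    "tobins_q ~ family_firm * board_score + size + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})

# The family_firm:board_score coefficient captures the incremental effect of
# board quality in family firms relative to non-family firms; a significant
# negative estimate would mirror the pattern the paper reports.
print(model.summary())
```

The same formula can be re-estimated with ROE as the dependent variable, since the paper reports the interaction for both performance measures.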
|
Sustainable organisational learning - a lite tool for implementing learning in enterprises
|
[
"Evaluation",
"Organizational learning",
"Competence development",
"Learning technology",
"Learning transfer"
] |
Summarize the following paper into a structured abstract.
Introduction: Unstable and dynamic societal and business environments are making it more imperative than ever that organisations have the ability to operate their learning processes in order to optimise competence development, performance and innovation (Brinkerhoff, 2006). Research-based knowledge on the subject has taught us that for enterprises to optimise the implementation of learning in practice, they need to take care of elements related to the learner, the learning situation and the organisational setting where new knowledge and/or competences are to be integrated and used.
Understandings of learning transfer: Today's learning research focuses on how to understand, design and make learning activities, formal as well as informal, an integral part of the existing organisational HRM and management system and practice. A classical theoretical understanding of learning in organisations and workplaces departs from an idea of learning as a matter of designing a learning and practice situation characterised by mutually shared or identical elements. This behavioural theory of transfer and knowledge sharing, later developed within a cognitive frame, typically settled on the common elements theory of shared traits or symbolic representations between the learning and practice environments. Later, Baldwin and Ford (1988) advanced a model that has had an immense influence on theoretical and empirical research contributions within the fields of adult learning and human resource studies. The distinction between three broad dimensions, the learner's characteristics, the design of the learning situation and the work environment settings, has guided research- and practice-oriented contributions within the organisational and workplace learning field for decades.
A lite tool for creating sustainable organisational learning: The tool studied and described in this paper (named SuperInsight) is an evaluation and development-oriented organisational learning technology devised with the aim of strengthening the sustainable integration and anchoring of new knowledge and/or competences in practice. The evaluation tool is deliberately designed as a lite version of an evaluation tool in order to avoid the bulky and highly bureaucratized (red tape) procedures we often experience when it comes to working with measurement and integration of learning activities in a practice setting. The purpose of the lite tool is twofold. First, the lite tool aims to reinforce enterprises' ability to keep track, on a running basis, of the impact of learning activities in practice based on real-time data. Second, the lite tool intends to support an organisational learning process among the participating enterprises based on continuous input and feedback on the quality of the learning process, combined with unique and specific insight into where to intervene, if needed, in order to optimise the learning transfer process and result.
Illustrations from a case study: Analysis of data from a case study in a large Danish enterprise in the telecommunications industry, conducting leadership and sales training, shows that participants in the red category are usually drowning in operations and do not have the opportunity to focus on implementing new knowledge or competences in a work context. Characteristic comments from participants in the red category are: "I will start to work with the new knowledge, when my department get more resources" or "We are firefighting all the time". Here, the role of the closest leader or accountable HR staff is to support the participant in prioritising their work tasks to facilitate the actual use of new knowledge or competences, if the organisation truly wants to be serious about its approach to organisational learning. The analysis shows that constraints on employing new knowledge and competences are related not to the actual course/training activity or to personal characteristics, but to the concrete work processes and tasks.
How to strengthen sustainable organisational learning - lessons learned and recommendations for practice: In this paper, we argue for a double effect from using and deploying the described lite evaluation tools, which connect more clearly to real-time learning processes than a classical "baseline-midline-endline" effect evaluation offers. We see in our case study a positive effect on the individual level in using new knowledge and/or competences in a work context. In addition, we see a positive effect on an organisational level. Enterprises that employ process evaluation tools with pulse measures learn to develop, create, integrate and share new knowledge and competences, thus creating a foundation for organisational learning through new routines and practices. In the following section, we will elaborate on lessons learned from our case study. Further, we will present recommendations for practice.
Conclusion: We have asked how enterprises are to arrange their learning processes in order to optimise the integration and creation of sustainable organisational learning. Based on a learning evaluation tool that makes a processual real-time evaluation of the implementation of new knowledge and/or competences, we explored from the case study data how this type of tool influences learning processes and results in organisations.
|
The Irish wine market: a market segmentation study
|
[
"Ireland",
"Wines",
"Market segmentation",
"Brands",
"Marketing"
] |
Summarize the following paper into a structured abstract.
Introduction: The Irish wine market has experienced unprecedented growth in the last 15 to 20 years. From 1990 to 2007, total wine sales in Ireland more than quadrupled, increasing from 1.7 to 7.6 million cases. In the 13 years between 1994 and 2007, wine's proportion of the Irish alcohol market more than doubled, from 8 per cent to 17.9 per cent (Wine Development Board, 2007). Growth in wine consumption is forecast to continue, with growth of 15 per cent expected by 2012 (Euromonitor, 2008). As the wine drinking culture in Ireland is relatively new, the segmentation of the market and brand positioning are in their infancy. Further study into segmentation is required to improve the profitability of the industry, and to develop choice and the accessibility of wine for Irish consumers. The specific purpose of the paper is to examine how the Irish wine market may be effectively segmented for improved brand positioning in Ireland. Thus, the paper aims to determine the key trends in the Irish wine market, examine the state of marketing in the wine industry, evaluate different approaches to segmenting the Irish wine market, and develop profiles of the resulting segments.
The Irish wine market: The Irish wine market has experienced remarkable growth, with the number of wine drinkers in Ireland doubling since 1990 and with over five times as much table wine being consumed in 2007 as was consumed in 1990 (WDB, 2007). The increased consumption of wine in Ireland over the last 15 years is attributed to the improved accessibility, affordability and branding of wine (Moran, 2002). To emphasise the significance of the growth in wine consumption in Ireland, the level of growth in wine buying is compared with growth in the overall food and beverage sector. Practically all wine bought in Ireland is imported. Between 2000 and 2004, wine sales (and therefore imports) increased by 56 per cent (WDB, 2004), while overall imports of the food, beverages and other animal products category increased by only 18 per cent in the same time frame (CSO, 2006). For a marketer assessing the Irish wine market, equally important to the growth in consumption is the huge shift in the type of wine preferred by the Irish wine drinker. Specifically, there is a notable shift towards New World wines, with diminishing preference for Old World wines (WDB, 2007). New World wines refer to wines from regions outside of Europe. Prominent New World wine producing regions include South Africa, Australia, New Zealand, Chile, Argentina and California. Old World countries refer to European countries with a long history of wine production, such as France, Italy, Germany and Spain (Fielden, 1994). Up to 1990, the majority of wine consumed in Ireland was Old World wine, which accounted for 94 per cent of the market (WDB, 2007). Since 1990, there has been a steady shift in demand towards New World wine, which in 1990 accounted for 6 per cent of the market and in 2007 held a 71 per cent market share (WDB, 2007). Historically, French wines were the market leader in the Irish wine market, but since 2001 Australia has held the largest market share, accounting for 26 per cent of wine consumed in Ireland (WDB, 2007). In 2006, the top ten wine brands in the Irish still light wine market accounted for nearly 25 per cent of total sales. These top ten wine brands and their respective brand shares are Jacob's Creek (3.2 per cent), Blossom Hill (2.7 per cent), Rosemount (2.6 per cent), E&J Gallo (2.5 per cent), Wolf Blass (2.5 per cent), Hardys (2.3 per cent), Concha Y Toro (2.3 per cent), Long Mountain (2.2 per cent), Santa Rita (2.1 per cent) and Carmen (1.9 per cent), all of which are from New World countries (Euromonitor, 2008). Understanding the shift in country of origin preference is important as it represents an important shift in preference for style, taste, brand, price and other wine variables. Testing country of origin preference is, therefore, an essential element in this Irish wine market study.
Marketing of wine: The production of wine is a specialised area, and the wine industry has traditionally adopted a production-focused mindset, with the complexities of viticulture and vinification having occupied the attention of specialists in the area (Thomas and Pickering, 2003). Bruwer et al. (2002) note the agricultural basis of the wine value chain, and the industry is often criticised as employing mass marketing campaigns (Gluckman, 1990; Spawton, 1991a; Hall and Winchester, 1999; Bruwer et al., 2002). According to Thomas and Pickering (2003), the marketing of wine is in its infancy, relative to the long history of wine making and wine drinking. Aggressive marketing was uncommon in the industry, with vineyard operators relying on the strength of their reputation to compete in the marketplace (Hall, 2004). In the early 1990s, an interest in branding emerged in the industry as a method of coping with changes in distribution and the growth of wine retailing. In 1991, the European Journal of Marketing dedicated an issue to the marketing of wine, where Spawton (1991b, c, d, e), through a series of articles, provides an insight into the state of wine marketing. This special issue served as an introduction to marketing for the industry, with the purpose of illustrating the advantage and necessity of developing a customer mindset. There is a realisation in the industry that its future is geared to meeting the expectations of the wine consumer. That has contributed to the growing importance of wine marketing within the industry (Spawton, 1991b, p. 6). Gluckman (1990), in a frequently cited article, presents a number of challenges facing the wine marketer in branding wines. Most notably, a move towards own-brand labelling by retailers reinforces the necessity for strong wine brands. This need for improved branding of wine has prompted the undertaking of research in wine markets. In an industry which is relatively new to marketing, and in the Irish wine market, which has seen tremendous growth and transformation, there is a need for greater understanding of the market dynamics. Market research in general, and market segmentation in particular, has a potentially pivotal role to play in assisting wine marketers to position their wine brands effectively.
Role of market segmentation: Weir (1960, p. 95, as cited in Yankelovich, 1964) provides the following description of what a market is, and more importantly, what it is not: "The market" is not a single, cohesive unit; it is a seething, disparate, pullulating, antagonistic, infinitely varied sea of differing human beings - every one of them as distinct from every other one as fingerprints; every one of them living in circumstances different in countless ways from those in which every other one of them is living. This description of a market is a colourful representation of popular marketing thought on the composition of markets in the late 1950s and 1960s. Smith (1956), in what is considered a landmark article (Reynolds, 1965; Haley, 1968; Wind, 1978; Green and Krieger, 1991; Lin, 2002), introduces a marketing strategy labelled market segmentation as an approach to competing successfully in the reality of an environment of imperfect competition. The original article by Smith (1956) introduces market segmentation as a strategy. Market segmentation strategy was considered an alternative to product differentiation strategy to deal with diversity in the market. While the initial representation of the market segmentation strategy is based in economic theory, market segmentation developed into one of the foremost concepts in marketing thought (Wind, 1978; Johnson et al., 1991; Lin, 2002). At a broad level, market segmentation provides a marketer with a clearer focus on customer needs, and thereby aids decision making for improved competitive advantage (McDonald and Dunbar, 1992; Croft, 1994; Kotler and Keller, 2003). While Croft (1994) highlights market segmentation as aiding decision making in general, Yankelovich (1964) specifies what exactly segmentation analysis can achieve. In identifying groups of customers with similar needs, a marketer has the information required to target the most profitable group with the most potential. With this knowledge a marketer can develop product lines and promotion activities, choose advertising media, advance positioning of offerings and improve timing of advertising to appeal to the segment of the market whose needs possess the greatest profit potential. A critical decision to be made in conducting segmentation research is choosing an appropriate segmentation base. A segmentation base is the criteria used to divide the defined market into groups of consumers with similarities. At the most basic level a market can be split up according to the profiles of the consumers. Variables such as demographics, geographic location of consumers and the socio-economic class to which they belong are considered profile segmentation bases. The behavioural segmentation category includes bases such as usage occasion, benefits sought, perceptions and beliefs, while the psychographic bases category includes lifestyle and personality variables as a means for identifying groups of consumers with similarities. The more abstract and less concrete the information required for the segmentation base, the more difficult it is to measure responses and their link with behaviour. In choosing a segmentation base for a wine market study there are a number of aspects of the market which need to be considered. Literature on wine consumer behaviour focuses on two areas: the factors influencing wine consumer behaviour, and the wine consumer's purchasing decision-making process.
An effective wine segmentation study would be one which aids understanding of these two areas and aids marketers in evaluating how stimuli, such as brand positioning strategies, influence wine choice. According to Bruwer et al. (2002), wine markets have been segmented using all the bases identified above. For the purpose of an Irish wine market segmentation study, behavioural segmentation with an involvement basis proves a suitable choice, as it is an approach which yields insight into consumer behaviour but is not overly difficult to measure (Lockshin et al., 2001). Employing a behavioural segmentation base allows the decision-making process and the influencing factors of the Irish wine consumer to be tested, and makes the process and the factors the basis for splitting up the market into meaningful and actionable segments. A key consideration in exploring the wine consumer decision-making process and consumers' evaluation of alternatives is that wine attributes represent intrinsic and extrinsic cues for the consumer. Sanchez and Gill (1998) illustrate how consumers have preferences according to the bundle of benefits they are seeking. The challenge in understanding these preferences is the large number of wine attributes which exist, and therefore the greater number of possible bundles of benefits that are present. Wine attributes include: brand name, producer, grape variety, blend of grape varieties, vintage, region of origin, price, label, bottle type, cork type, bottle size, colour of wine, style of wine and level of alcohol. Due to the large number of wine attributes, wine consumers have a wider range of considerations in making purchasing decisions. Examining the hierarchy of importance of wine attributes to Irish wine buyers is a central consideration in segmenting the market.
Methodology: The research design is primarily descriptive in nature, as similar investigations into other wine markets have been descriptive in design (Orth et al., 2005; Johnson et al., 1991; Hall, 2004; Bruwer, Li and Reid, 2002; Johnson, 2003; Thomas and Pickering, 2003). Due to the descriptive nature of this research, a quantitative approach to primary data collection is most suitable. Quantitative data is appropriate for determining and understanding the behaviours and characteristics of a large sample of wine drinkers. Specifically, survey data collection was undertaken, with a questionnaire administered through a personal interview. The questionnaire was two pages in length with 15 tick-box questions. The questionnaire posed questions to gather data on four topics: volume of usage, buying preference, product involvement, and demographic information. As an accurate sampling frame was unavailable for the population of the 1,451,000 wine drinkers in Ireland (WDB, 2004), non-probability sampling was undertaken. The sampling type was convenience sampling, as wine buyers were approached at the point of purchase. Convenience sampling has been employed in previous wine segmentation studies, namely Australian wine market research by Hall (2004) and Bruwer et al. (2002). To ensure the sample was as representative of the population as possible, a large sample size of 300 was chosen and the questionnaire was administered in a variety of outlets to gather information from wine drinkers with wide-ranging involvement levels. The fieldwork took place over three weeks in June 2006, in eight wine-selling outlets in Galway City and County. This approach is similar to other wine segmentation studies (Bruwer et al., 2000; Hall and Winchester, 1999) where the fieldwork was limited to one region of the market being researched. Research by Bruwer et al. (2000) in segmenting the Australian wine market using a wine-related lifestyle approach is based on fieldwork conducted in Adelaide, while Hall and Winchester's (1999) findings, confirming empirically segments in the Australian wine market, are derived from questionnaires administered in Melbourne. The fieldwork locations for this research consisted of four supermarkets, two off licences and two wine shops. The supermarkets and off licences were in both Galway City and Galway County locations, and the wine shops were situated in Galway City. In total, 316 questionnaires were collected, and after nine were removed for failing a screening question or being incomplete, 307 questionnaires were coded and inputted into SPSS for analysis.
Findings: There are two sets of findings resulting from the primary research: an overall wine sample analysis and a segment analysis. The overall wine sample is composed of 64 per cent female respondents and 36 per cent male respondents. In terms of age group, over 75 per cent of the sample is aged between 25 and 54 years. The consumer behaviour data reveals that the average Irish wine drinker buys seven bottles of wine per month, spending EUR10.57 a bottle, with an average monthly spend of EUR80. Wine is usually bought in a standard-size bottle of 75 cl (95 per cent) in either a supermarket (42 per cent) or off licence (35 per cent) and is mostly red wine (43 per cent). Wine is most frequently consumed when dining at home (50 per cent), followed by when dining out (20 per cent). The five most important product attributes when buying wine are: price per bottle, style of wine (e.g. fruity), region of origin (e.g. Burgundy) and brand name (e.g. Jacob's Creek). The most popular wine is Australian wine, followed by Chilean, French and South African wine. A comparison of the overall wine sample findings with the Wine Development Board's (2004) national statistics shows that the characteristics of the research sample are similar to the characteristics of the national market. The sample findings for country of origin preference are notably similar to the WDB (2004) statistics (see Figure 1). One exception is US wine, which is preferred by just 3 per cent of the sample but by 13 per cent in the national statistics. The similarities between the two sets of statistics suggest the sample data findings on consumer behaviour are representative of the population. Similarly, Figure 2 compares the age categories of the sample with the age categories of the WDB (2004) population. With the exception of the "65 or older" category, the age profiles of the two sets of data are similar. Discrepancies between the current research and the WDB (2004) findings may be explained by the time lapse between the two studies, or by the WDB research population being wine consumers, while the current research population was wine buyers. Segment analysis
Conclusion: The research examines how the Irish wine market can be effectively segmented to improve brand positioning. The increase in consumption of wine and the increased preference for wine from New World countries are key trends in the Irish wine market. Consumer behaviour, particularly involvement in wine purchases, and the importance of wine attributes are necessary considerations in a wine market segmentation study. In terms of relevance, substance and accessibility, a k-clustering segmentation design with a behavioural basis including an involvement variable proves to be an appropriate approach to segmenting the Irish wine market. The profiles of the three resulting segments (casual wine buyer, value-seeking wine buyer and wine traditionalist) are sizeable, accessible, relevant and actionable. The profiles developed as a result of the primary research provide wine marketers with an insight into Irish wine consumer behaviour. Specifically, marketers are provided with accessible and sizeable segments, with meaningful distinctions and similarities drawn between them. Brand positioning can be improved by ensuring the brand communicates and emphasises the product attributes which the targeted segments value most when choosing wine. The demographic information and the buyer behaviour data provide marketers with points of access to their target market. The involvement base, when used in conjunction with other behaviour variables, proves effective in producing sizeable, accessible and actionable segments. A limitation of adopting a behavioural basis in conducting the segmentation is the highly descriptive nature of the resulting data. Examining behaviours gives an insight into how consumers act, but fails to take into account the underlying motivations and rationale for consumer actions. The use of more complex segmentation bases, such as value systems and lifestyles, would yield a richer understanding of the Irish wine consumer. A second suggestion for future research is an empirically tested wine market behavioural segmentation study, to confirm the findings of this research at a national level. In answering the research question, the Irish wine market can be effectively segmented with a k-cluster design with a behavioural basis. Effectively segmenting the Irish wine market requires more than the involvement variable, and calls for other behavioural variables, including the importance of product attributes and country of origin preferences, to be included in the segmentation process.
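As an illustration of the k-cluster design named in this conclusion, here is a minimal sketch assuming a coded survey file and hypothetical behavioural variables (including an involvement score); it is not the study's original SPSS procedure, and the feature names are invented for illustration.

```python
# Illustrative sketch only: k-means segmentation on standardized behavioural
# variables, with k = 3 to mirror the three reported segments. Column names
# are hypothetical placeholders for coded questionnaire items.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

survey = pd.read_csv("wine_survey.csv")  # hypothetical coded survey data
features = ["involvement", "bottles_per_month", "spend_per_bottle",
            "attribute_price_importance", "attribute_origin_importance"]

X = StandardScaler().fit_transform(survey[features])
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
survey["segment"] = kmeans.labels_

# Profile each cluster on the behavioural variables (and demographics) to
# label it, e.g. casual wine buyer, value-seeking wine buyer, traditionalist.
print(survey.groupby("segment")[features].mean().round(2))
print(survey["segment"].value_counts())
```

Profiling the clusters against demographics and attribute-importance rankings is what turns the statistical output into the named, actionable segments described above.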
|
Consumers' utilization of reference prices: the moderating role of involvement
|
[
"Internal reference price",
"External reference price",
"Market‐based reference price",
"Involvement",
"Reference price utilization",
"Consumer behaviour",
"Prices"
] |
Summarize the following paper into a structured abstract.
__NO_TITLE__: There is ample empirical evidence in support of the important role that reference prices (both internal and external) play in determining consumers' evaluations of posted prices (for comprehensive reviews of the many operationalizations of reference price, see Briesch et al., 1997; Mazumdar et al., 2005). Collectively, the body of evidence strongly supports the conclusion that consumers rely on (internal and/or external) reference prices to make judgments about perceived value, perceived fairness and perceived expensiveness that, in turn, influence purchase intentions. The inescapable conclusion for practitioners is that they must pay attention to understanding, measuring and managing consumers' reference prices to ensure that their prices are evaluated favorably.
Motivation and scope: The motivation for this paper comes from recognizing two main gaps in the literature. First, despite multiple definitions of reference price, prior research has not adequately addressed heterogeneity in the types of reference prices that are evoked and the ways in which consumers might process available price information in relation to multiple reference prices. This paper considers how one specific source of heterogeneity, involvement, influences consumers' utilization of (multiple) reference prices. Research on consumer information processing (see Bettman, 1979) suggests that several individual differences (e.g. involvement with the product and/or the purchase) may influence the type of information considered and the extent to which consumers are willing to elaborate on available information. This research attempts to fill a gap in the literature by investigating the role of involvement in the context of reference prices. Second, there have been very few attempts to examine the inter-relationship between different reference prices and their combined influence on consumers' evaluations. Previous research (Chandrashekaran and Jagpal, 1995; Rajendran and Tellis, 1994; Mayhew and Winer, 1992) has examined the simultaneous and independent influences of internal and external references on consumers' choices. However, questions about the inter-relationships between different types of reference prices remain. This study attempts to extend current thinking to include sequential effects of reference prices on consumers' evaluations. It is generally established in the literature that consumers may utilize some external reference prices to adjust internal reference prices, which in turn affects evaluation. However, some research (e.g. Rajendran and Tellis, 1994) has also shown that consumers may utilize certain external reference prices in conjunction with internal reference price. Hence, there is a need to validate the process empirically. In addition, the interrelationship between specific internal and external reference prices, along with the role of involvement, has not been empirically investigated. In summary, this study proposes to extend the literature by attempting to answer two questions: (1) Do all consumers evaluate a given offer against the same set of reference prices? (2) Do all consumers utilize internal and external reference prices in the same way (i.e. using the same underlying process)?
Review of internal and external reference prices: When consumers evaluate retail prices, they may do so against several reference prices. Of these, some are internal and others are external. The term "internal reference price" (henceforth referred to as IRP) has generally been used to refer to price information held in consumers' long-term memories. For example, Winer (1986) proposed that consumers use past price information to form expectations of what the retail price is likely to be on the next purchase occasion. As described in the literature, these price expectations represent internally held reference prices, and they are based on past price information that must be retrieved from consumers' long-term memories during the purchase occasion. In fact, most empirical studies have based their measures of internal reference price on such a temporally generated construct (Kalwani et al., 1990; Mayhew and Winer, 1992; Rajendran and Tellis, 1994; Briesch et al., 1997). The literature offers compelling empirical evidence that such (memory-based) internal reference prices are crucial in predicting consumers' evaluations (of retail prices) and subsequent choice behavior. In contrast to internal reference price, external reference prices (henceforth referred to as ERP) refer to comparative (competitive) price information that is available in the purchase environment, i.e. prices that consumers are likely to encounter during the search process. Thus, consumers do not have to expend cognitive resources in trying to retrieve information on current market prices from their long-term memories. This information is likely to be obtained during the search process and utilized directly from working memory. Here, researchers have identified several possible references, including the normal (typically charged) market price (Urbany and Dickson, 1991) and the lowest market price (Biswas and Sherrell, 1993). These and other similar studies have shown that such market-based references are important predictors of consumers' choices. It is important to note the conceptual distinction between IRP and ERP. While IRP refers to price information that is stored in and retrieved from consumers' long-term memories, ERPs are derived from price cues that are either present or evoked at the time of purchase. Specifically, ERPs may represent consumers' beliefs about current market prices that may be formed during the search associated with the purchase process. Such beliefs may include the highest price, the lowest available price and the normal/average market price. Note, however, that there is likely to be significant heterogeneity (due in part to differing levels of involvement) in the extent to which consumers search for the lowest/best price. Such selective and differential exposure is likely to result in differences across individuals in their perceptions of ERPs.
Consumers' utilization of reference prices: It is well established that reference price is a complex, multi-faceted construct, and that reference prices are important in explaining consumers' brand choices (Briesch et al., 1997; Mazumdar et al., 2005). In addition, consumers are likely to use more than one type of reference price to evaluate an offer (Mayhew and Winer, 1992; Rajendran and Tellis, 1994; Chandrashekaran and Jagpal, 1995; Shirai, 2003). However, we know considerably less about the process by which consumers may combine/integrate multiple reference prices. Even less is known about individual differences in consumers' utilization of multiple reference prices. Prior research has acknowledged that different consumer groups (segments) are likely to evoke and use different sets of reference prices (Winer, 1986; Mazumdar and Papatla, 2000). Indeed, prior research has demonstrated that several factors lead to differences in the references used to evaluate a posted price. For example, reference price utilization and price perceptions have been linked to frequency of purchase (Thomas and Menon, 2007), product type (Chandrashekaran and Jagpal, 1995; Lowe and Alpert, 2007) and price consciousness (Alford and Biswas, 2002; Palazon and Delgado, 2009). In summary, prior research supports the claim that different segments may evoke and utilize different sets of reference prices, and suggests that it may not be appropriate to assume that all consumers utilize the same types (and number) of reference prices. As emphasized by Lowe and Alpert (2007) and by Mazumdar et al. (2005), despite some research on the topic, we do not know much about how consumers integrate multiple reference prices. More importantly, we need a better understanding of the factors that may determine which type(s) of reference price(s) are evoked and utilized when making price-related judgments. Along those lines, this research proposes to investigate how the level of involvement leads to differences in the number and types of reference prices that are used in the evaluation process. More specifically, it focuses on how consumers combine and utilize internal reference price (operationalized as expected price) and market-based external reference prices (operationalized as subjective knowledge of normal and lowest market prices)[1]. Consistent with established theory, this research acknowledges that consumers may adjust their internal reference prices based on their perceptions of market-based references. However, the conceptualization adopted here (explained later) also allows for consumers to evaluate posted prices directly against market-based (external) references. Regarding the relative impacts of the two types of reference prices on consumers' evaluations, there is some empirical evidence (Mayhew and Winer, 1992) suggesting that external reference prices may be more influential than internal reference prices. However, that conclusion might be premature because previous research has not paid attention to the moderating roles of individual-level variables, for example involvement, that are likely to influence how consumers choose to allocate their cognitive resources - this is likely to impact the relative emphasis that consumers place on the two types of reference prices when evaluating a posted price.
The role of consumers' involvement: Consumers' involvement with a product class has been shown to influence consumers' information processing strategy (Bettman, 1979; Bloch et al., 1986) by affecting the extent of search, the type of information sought, the relative importance of different types of information (e.g. price versus product attribute information), and the motivation to process available information. High involvement is generally associated with product knowledge (familiarity) and the motivation to search for and use relevant information. On the other hand, low involvement has been associated with low product/price knowledge (lack of familiarity), and the lack of motivation to engage in detailed processing of information.Although there is substantial research dealing with how involvement affects consumers' use of product-attribute information and on how involvement affects consumers' responses to marketing communications (advertising), research in the (reference) pricing context is relatively limited. Available evidence reveals that such variables as involvement, prior knowledge, and price consciousness moderate consumers' internal reference prices and their acceptance of retail prices. These variables have been found to affect consumers' internal standards (Kosenko and Rahtz, 1988; Lichtenstein et al., 1988; Biswas and Sherrell, 1993), the widths (i.e. upper and lower limits) of their price acceptance regions (Lichtenstein et al., 1988; Rao and Sieben, 1992), and, finally, their confidence in their reference price estimates (Biswas and Sherrell, 1993). However, we know less about how such factors are likely to influence the ways in which consumers may combine/utilize multiple reference prices. The current study focuses on one such factor, i.e. involvement.
Hypotheses: This research explores the notion that different reference price utilization strategies unfold for consumers with high and low involvement. The idea that involvement affects which decision route consumers are likely to follow is not new. Extant research on consumers' information processing has shown that involvement affects the way consumers acquire, store, retrieve, and use relevant information to make decisions (Bettman, 1979; Bloch et al., 1986). Compared to those who are not involved, involved consumers are more motivated to search for relevant information. They also possess better knowledge about products and prices than consumers who lack involvement. Therefore, these consumers are more likely to possess well-defined internal standards than low-involvement consumers (see also Chandrashekaran and Jagpal, 1995). Consequently, they are likely to be more comfortable evaluating retail prices against their internal (memory-based) reference prices. In contrast, less involved consumers are not as motivated as their involved counterparts to engage in extensive search behavior. In addition, they are not likely to engage in detailed processing of available (price) information. As a result, their internal standards are not likely to be well defined, and they are not confident about acting on this information (Chandrashekaran and Jagpal, 1995). Consistent with such theorizing, Biswas and Sherrell (1993) found that high-involvement consumers are indeed more confident of their internal reference price estimates than low-involvement consumers. Therefore, there is some empirical evidence to support the expectation that the level of involvement is likely to moderate consumers' utilization of reference prices. Consistent with the notion that consumers are likely to possess reference prices that are brand specific (Briesch et al., 1997), a brand's own previous prices may be considered to be the most relevant information on which to base a reference price for the brand. Thus, it may be reasonable to expect high-involvement consumers' evaluations to be most closely associated with a summary of a brand's past prices, i.e. the expected price construct. Briesch et al. (1997) empirically compared five different model formulations from the literature and confirmed that a model using a temporally generated, brand-specific reference price is the best predictor of consumer choice. However, the authors did not examine individual differences in the utilization of past price information. As discussed earlier, low-involvement individuals are by definition not equipped with the knowledge to evaluate price on the basis of relevant factors. Recall that these consumers are not expected to internalize (past) price information, and thus cannot be expected to have well-defined (reliable) price expectations. Therefore, these individuals are likely to evaluate a brand's current price on the basis of more readily accessible perceptions they may have formed of current market prices. In summary, high- and low-involvement consumers are expected to follow different processes in evaluating the same objective price information. Highly involved consumers base their evaluations on internally held beliefs. However, because they are exposed to market price information in the course of their search, it is likely that they utilize this information to update (fine-tune) their internal standards to be consistent with current market information.
In contrast, low-involvement consumers are not motivated to devote cognitive resources to storing price information in their long-term memories and, therefore, are less confident about their knowledge of past prices. Consequently, they are likely to rely on immediately available external cues (i.e. market-based reference prices) to make their evaluations (as opposed to first updating their internal reference prices and subsequently utilizing them to evaluate posted prices). More formally, it is expected that:
H1. High-involvement consumers evaluate retail prices only against their price expectations (i.e. internal reference price).
H2. High-involvement consumers utilize market-based external references only to update their internal beliefs (expected prices).
H3. Low-involvement consumers evaluate retail prices against one or more market-based (external) reference prices.
H4. Low-involvement consumers do not utilize internal reference prices (price expectations) in evaluating retail prices.
Figure 1 presents the hypothesized model of the processes by which market-based and truly internal reference prices affect consumers' evaluations.
Data collection: Two hundred undergraduate business majors attending a large Northeastern state university participated in this study. The data required for this study were collected in four stages, over a two-week period. The entire data collection task was designed in the form of an experiential exercise. Stage one: initial measures
Analysis: The first task was to split the sample into high and low groups based on their involvement with the product category (jeans). A median split of the distribution of PII scores (median involvement = 103) yielded high- and low-involvement groups of approximately equal sizes (n_high = 111 and n_low = 114). In addition, the mean involvement scores in the two groups (mean = 119.26 and SD = 10.50 in the high-involvement group versus mean = 83.03 and SD = 16.87 in the low-involvement group) were significantly different (t = 19.27, p < 0.001). Finally, an examination of the mean knowledge (familiarity) scores in the two involvement groups (mean familiarity = 5.99 and 4.75 for high- and low-involvement groups, respectively) confirmed the premise that, in this product category, high-involvement consumers are significantly more knowledgeable than low-involvement consumers (t = 7.91, p < 0.01). To test the hypotheses in this study empirically, correlation matrices in each group were analyzed using the structural equations methodology (interested readers may refer to Joreskog and Sorbom, 1988). Figure 1 shows the model structure specified in both groups. The conceptual model shown in Figure 1 allows for the possibility that market-based reference prices (lowest price and normal price) may affect evaluations directly (along with expected price), or indirectly by first influencing expected price, which, in turn, affects how the offer is evaluated. Therefore, the model enables us to investigate the process by which consumers evoke, combine and utilize internal and market-based reference prices. Recall that this is an improvement over Rajendran and Tellis (1994), who examined how both temporal and contextual reference prices influence consumers' evaluations simultaneously. In relation to Figure 1, the hypotheses presented here predict that: (a) γ11, γ12 and β21 > 0, and γ21 = γ22 = 0 in the high-involvement group; and (b) β21 = γ11 = γ12 = 0, and γ21 and γ22 > 0 in the low-involvement group. As discussed earlier, consumers' evaluations were measured using three scale items corresponding to their perceptions of the value of the offer, the attractiveness of the deal, and their willingness to buy the item at the stated price. It could be argued that the concept of perceived value is related to consumers' perceptions of acquisition utility (AU), while their perceptions of the deal are related to transaction utility (TU) as defined by Thaler (1985). According to Thaler, these are conceptually different from purchase intentions (behavioral responses). However, principal components factor analysis resulted in a single factor and did not reveal conceptual differences between the three measures.
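To show the estimation logic in code, the sketch below performs a median split on involvement and then estimates the recursive path model of Figure 1 equation by equation; because all variables are observed, OLS on standardized data within each group recovers the path coefficients. The file and column names (pii, expected_price, lowest_price, normal_price, evaluation) are hypothetical, and the original study used a structural equations package on correlation matrices rather than this shortcut.

```python
# Illustrative sketch only (not the original structural-equations run).
# Path model of Figure 1, estimated per involvement group:
#   expected_price ~ lowest_price + normal_price              (gamma11, gamma12)
#   evaluation ~ expected_price + lowest_price + normal_price (beta21, gamma21, gamma22)
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("reference_price_study.csv")  # hypothetical survey data
cols = ["expected_price", "lowest_price", "normal_price", "evaluation"]

# Median split on the involvement (PII) score.
data["group"] = (data["pii"] > data["pii"].median()).map({True: "high", False: "low"})

for name, grp in data.groupby("group"):
    z = (grp[cols] - grp[cols].mean()) / grp[cols].std()  # standardize within group
    eq1 = smf.ols("expected_price ~ lowest_price + normal_price", data=z).fit()
    eq2 = smf.ols("evaluation ~ expected_price + lowest_price + normal_price", data=z).fit()
    print(f"{name}-involvement group")
    print("  gamma11, gamma12:", eq1.params[["lowest_price", "normal_price"]].round(3).to_dict())
    print("  beta21, gamma21, gamma22:",
          eq2.params[["expected_price", "lowest_price", "normal_price"]].round(3).to_dict())
```

Under the paper's hypotheses, the high-involvement group should show a significant beta21 (and gamma12 in the first equation) with gamma21 and gamma22 near zero, while the low-involvement group should show the reverse pattern.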
Results: Table I compares the two involvement groups on their reference price estimates and evaluations of the offer. It is interesting to note that there is no significant difference in reference price estimates across the two involvement groups (this point is also discussed later in the manuscript). Furthermore, there are no major differences between the two groups in their evaluations of the offer, except that high-involvement consumers perceive the deal to be moderately more attractive than low-involvement consumers (p ≤ 0.10)[4]. However, one cannot conclude on the basis of these results alone that high- and low-involvement consumers are homogeneous, thus justifying pooled analyses. Although consumers' final evaluations are important, it is crucial that we examine the process (route) by which high- and low-involvement consumers arrive at the same destination. In addition, consumers' final evaluations are not central to this study. Rather, the primary objective of this study is to examine heterogeneity in consumers' utilization of market-based and truly internal reference prices to evaluate the same objective price information. Indeed, although consumers do not differ significantly in their final evaluations of the offer, this paper seeks to uncover heterogeneity in reference price utilization, i.e. the means by which high- and low-involvement consumers reach the end (their final evaluations). Table II shows the estimated models in each group along with several fit indices. It is clear that the specified model structure fits the data well in both groups (χ2 with six df = 9.59, p = 0.143 in the high-involvement group; and χ2 with six df = 6.09, p = 0.413 in the low-involvement group). In addition, the structural models explain a significant proportion of the variance (R2 = 0.51 and 0.53 in the high- and low-involvement groups, respectively). For both groups, the RMSEA (Steiger, 1985) and other fit indices indicate satisfactory model fit. All indicated paths are of the predicted sign, and are significant at α = 0.01. All non-significant effects, i.e. paths that are not significantly different from zero, are indicated as "NS". From Table II, it is clear that consumers who are highly involved with the product evaluate the retail price against a single internal reference price, expected price (β21 is significant at p ≤ 0.01), but do not utilize any market-based references to make evaluations (note that γ21 and γ22 are not significantly different from zero). These findings strongly support the hypothesis (H1) for consumers' utilization of truly internal reference prices under high involvement. H2 suggests that involved consumers are likely to use market-based references primarily to update their internal references. Consistent with this hypothesis, Table II shows that normal price (a market-based reference price) has a significant positive effect on the internal standards of involved consumers (γ12 = 0.727, significant at α = 0.01). This finding is consistent with the view that involved consumers process price information sequentially, i.e. an estimate of the normal/average market price is used to update (fine-tune) the internal standard, which, in turn, is used to evaluate the retail price. Contrary to expectation, these consumers do not utilize their perceptions of the lowest market price (γ11 is not significant). Thus, overall, H2 is partially supported. In contrast to high-involvement consumers, those in the low-involvement group do not use their internal standards to evaluate the retail price (β21 = 0).
As shown in Table II, these consumers base their evaluations of the retail price on market-based references (γ21=0.257 and γ22=0.211 are both significant at α=0.01). Thus, H3 and H4 are supported. In conjunction, these findings support the conclusion that low levels of involvement facilitate simultaneous utilization of several market-based reference prices to evaluate retail prices[5]. It is interesting to note that highly involved consumers utilize only normal price, but not their perceptions of lowest price, in formulating their price expectations. Clearly, further research is required to be able to offer a convincing explanation for this observation. One possible explanation is that these consumers may generally be more aware that the observed lowest price (a single data point) is likely to be a temporary promotional offer that does not reflect the "normal" price for the product/category. Consequently, these consumers may discount this information and rely to a greater extent on more comprehensive information, for example, an overall estimate of the normal/average price. Urbany and Dickson (1991) found that consumers' estimates of normal prices are reasonable surrogates for their reference prices. However, the authors did not investigate individual differences in consumers' perceptions of normal/average prices. It is likely that one segment (e.g. high-involvement consumers) has more accurate estimates of normal prices than another segment (e.g. low-involvement consumers). Hopefully, future research will investigate and shed some light on this and other similar issues dealing with individual differences in price perceptions and evaluation. In summary, the results reveal that although the two types of consumers (high- and low-involvement) may have similar reference prices and may even evaluate an offer similarly, they do so via distinct routes, i.e. by utilizing different types of reference prices. Specifically, whereas highly involved consumers rely solely on their internal standards to evaluate retail prices, those who are less involved with the product category utilize several market-based references to judge the attractiveness of retail prices. A general discussion of the results along with implications and limitations of the study follows.
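For readers who want to see how the Figure 1 path structure could be re-estimated per group with current open-source tools, the sketch below uses the Python semopy package as a stand-in for the original LISREL analysis; the variable names, and the assumption that raw per-group data (rather than correlation matrices) are available, are illustrative only.
```python
# Illustrative respecification of the hypothesized path model; the original
# study estimated it in LISREL, so this semopy sketch is only an approximation.
import semopy

# Structural paths: expected price is predicted by the two market-based
# references (gamma11, gamma12); evaluation is predicted by expected price
# (beta21) and, directly, by the market-based references (gamma21, gamma22).
PATH_MODEL = """
expected_price ~ lowest_price + normal_price
evaluation ~ expected_price + lowest_price + normal_price
"""

def fit_group_model(group_df):
    model = semopy.Model(PATH_MODEL)
    model.fit(group_df)                     # estimate paths for one involvement group
    estimates = model.inspect()             # parameter table with estimates and p-values
    fit_indices = semopy.calc_stats(model)  # chi-square, RMSEA and related fit indices
    return estimates, fit_indices
```
Comparing the estimated γ and β coefficients across the two involvement groups would then mirror the group-by-group comparison reported in Table II.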
Discussion: This study intended to draw attention to the heterogeneity in consumers' utilization of multiple reference prices. The results obtained here offer some evidence that involvement affects consumers' utilization of reference prices. Although involvement does not affect the levels of consumers' internal standards or their final evaluations, it plays a significant role in the way consumers utilize these standards to evaluate the same objective price information. Highly involved consumers, who are more knowledgeable about the product class, evaluate offers against a single internal standard (expected price), which supports previous researchers (e.g. Winer, 1986; Kalwani et al., 1990) in their use of expected price as a reference price to explain consumers' choices. However, low-involvement consumers do not use expected price to evaluate offers. Rather, they use two market-based references (lowest price and normal price) to determine the overall value of an offer. Thus, involvement affects the number and types of reference prices used in the evaluation process. The results are also consistent with information-processing theory (Bettman, 1979), which advocates that high involvement encourages a deeper and more complex processing strategy than low involvement, which instead supports less complex processing and the use of easily available information. Highly involved consumers retrieve price information stored in long-term memory and fine-tune this internal standard (expected price) using current market price information (estimated normal/average market price). Thus, high involvement supports a hierarchical model in which market-based and truly internal reference prices are used sequentially. This conclusion is consistent with the underlying process implied in the vast body of research on reference price formation. In contrast, low-involvement consumers do not possess the motivation to engage in such a complex evaluation process. They simply utilize the more readily available market-based references and evaluate offers in a single step. It is only appropriate to point out that some of the results obtained here are inconsistent with previous research findings and call for further investigation. For example, Lichtenstein et al. (1988) found that consumers' price acceptability level (i.e. internal reference price level) is positively related to involvement. However, no such differences were found here. One possibility is that the results are being moderated by consumers' knowledge in this category. Rao and Sieben (1992) found that the upper and lower limits of consumers' price-acceptance regions increase with knowledge up to a point, and then level off. It is likely that, given the nature of the product category used in this study (jeans), most consumers were familiar with the product. Indeed, an examination of the mean knowledge scores revealed that both high- and low-involvement consumers lie in the upper half of the scale (means for high- and low-involvement groups=5.99 and 4.75 on seven-point scales), which may account for the lack of difference in their reference prices. Furthermore, product knowledge was assessed using a single item that measured subjects' familiarity with the product category. Additional research is needed to investigate this issue further. Another possible limitation of this study is the omission of several internal and market-based reference prices (e.g. fair price and highest price) that have been mentioned in the literature.
It might be useful for future research to include more definitions of reference price to investigate whether the premise of this research holds. Finally, it may be useful to examine the roles of other moderating factors (e.g. gender, product experience, price consciousness, etc.). This research has important implications for both marketing research and practice. It is important for researchers to identify other sources of heterogeneity and their impact on consumers' construction and utilization of reference prices. The finding that high- and low-involvement consumers are different in the types of reference prices used and in the way they use this information to evaluate offers strongly supports the need for using different model structures to represent these sub-populations. From a practical standpoint, this study underscores the importance of including individual differences in strategies designed to affect consumers' perceptions of retail prices. For example, marketers might segment the market on the basis of reference price utilization and design different strategies to obtain optimal results. It is hoped that this study will stimulate further research in the area so that we may gain a better understanding of how consumers process and respond to retail pricing strategies. Such knowledge will undoubtedly be useful in designing more effective and efficient marketing strategies.
|
E-campaigning versus the Public Official Election Act in South Korea: Causes, consequences and implications of cyber-exile
|
[
"South Korea",
"Politics",
"Legislation",
"Elections",
"Internet",
"Social networking sites",
"E‐campaign",
"Election law",
"Cyber‐exile",
"YouTube",
"Network analysis"
] |
Summarize the following paper into structured abstract.
Introduction: In South Korea, restrictions on political speech surrounding elections are more stringent than in many other countries. The Public Official Election Act (hereinafter POEA, enacted in March 1994 as a result of the integration of four different election laws corresponding to different layers of public administration) contains a number of provisions prohibiting campaign activities that would be standard practice in most democratic countries. According to Article 59, for example, official campaigns are allowed only during a period from the day following the closing date of candidate registration to the eve of the election. This amounts to 23 days in the case of presidential elections, and 14 days for elections of legislators and local governors. Even within this brief designated period, campaigns are subject to tight protocols prescribed in Articles 58 ("Definition of election campaign"), 60 ("Persons barred from election campaign"), 68 ("Campaign sashes"), 92 ("Prohibition of election campaign using motion pictures, etc."), 98 ("Restriction on use of broadcast for election campaign"), 99 ("Prohibition of election campaign by internal broadcast, etc."), and 100 ("Prohibition of use of recorders, etc."), to name a few. Having a more direct effect on individual voters are Article 90 and Article 93, which ban the display or distribution of election-related paraphernalia in the 180 days prior to an election. Section 1 of Article 93 states that "no one shall distribute or display advertisements, letters, posters, photographs, documents, drawings, printed materials, audiotapes, videotapes or the like that convey endorsement of or opposition to a candidate or a party" during the 180 days. The already overarching scope of Article 93(1) has been further stretched since the National Election Commission (NEC) and the Public Prosecutors' Office started to apply it to the online context in 1996. Tracking down the author of an online source and enforcing this provision is relatively easy in practice in Korean cyberspace, in that all Korean users are legally bound to verify their real identities (by providing their resident ID numbers) when joining major online services. Consequently, numerous bulletin board entries, blog posts, viewer comments on news sites, and user-generated content on Web 2.0 platforms have resulted in legal ramifications ranging from fines to imprisonment. The 2007 presidential election
Literature review: when YouTube meets politics: Given the mainstream popularity of Web 2.0 services in recent years, current literature offers extensive discussion regarding the political implications of the increasing practice of digitally mediated social networking and content sharing. Specific interest has been shown in the question of whether such practice increases voter turnout or other forms of political participation in the traditional sense of the term. Research findings have, however, been mixed so far; while some consider that the use of Facebook and other social networking sites (SNSs) for political purposes is a significant predictor of general political participation (e.g. Vitak et al., 2011), others suggest that reliance on SNSs is not necessarily related to an increase in political participation (e.g. Baumgartner and Morris, 2010; Zhang et al., 2010). This broad discussion itself is inconclusive and in need of further exploration, but for the purposes of this paper, we decided to focus on political activity on and through YouTube. This commercially successful video-sharing site often features among SNSs in journalistic and scholarly accounts (boyd and Ellison, 2007), and this fact appropriately captures some, but not all, of its characteristics. It indeed provides a range of social networking features, such as "Comments", "Favorite", "Share" and "Subscriptions". However, it also stands apart from other SNSs, due to it being a unique hybrid form of social media and mass media. Despite the social networks that underlie or emerge from its use (Lange, 2007b; Rotman and Golbeck, 2011; Sifman, 2011), YouTube originally has the character of "a media distribution platform, kind of like television" (Burgess and Green, 2009, p. 3). Burgess and Green (2009, p. 3) argue that "YouTube's ascendancy has occurred amid a fog of uncertainty and contradiction around what it is actually for." It is, so the two authors' argument goes, a new breed of business, or an example of what Weinberger (2007) calls "meta businesses", where old media heavyweights and amateur content creators form a curious cohabitation (Burgess and Green, 2009, pp. 4-5). Given that its format induces user participation, YouTube along with other Web 2.0 models revitalized the early internet idealism for better democracy (Bruns, 2008; Jenkins, 2006; Weinberger, 2007). As Marwick (2007) also pointed out based on a content analysis of news coverage, YouTube has been portrayed in a rather celebratory tone in the mass media for its democratic (or democratizing) potential. This kind of portrayal has been supported by a handful of anecdotal cases. During the 2006 midterm elections in the USA, a YouTube-publicized gaffe, later called the "macaca moment", cost Republican Senator George Allen (Virginia) his re-election bid, despite his huge initial lead in the opinion polls over the Democratic challenger, Jim Webb. This result was attributed to a video clip showing Allen making racially discriminatory remarks, targeting one of Webb's volunteers, during a campaign tour. The incident quickly became a widely publicized issue once the footage was posted on YouTube, which effectively led to Allen's defeat (Sidarth, 2006). Since then, the term "macaca moment" has been used to refer to "high-profile candidate gaffes that are captured on YouTube, receive a cascade of citizen views and contribute to some substantial political impact" (Karpf, 2010).
A further example of such an incident took place in the 2007 Finnish national elections (Carlson and Strandberg, 2008, p. 171). The discussion of the political implications of YouTube intensified around the 2008 US presidential election because that was when Web 2.0 applications started to be incorporated into the mainstream campaign repertoire in the USA. Then-candidate Barack Obama received considerable media attention for his online presence, which outshone that of the rival candidate Hillary Clinton during the primaries and later that of his Republican opponent John McCain. Obama's campaign made heavy use of Facebook and YouTube to engage the attention of younger voters (Young, 2008), securing record campaign contributions mainly through online donations (Cooper, 2008; Carpenter, 2010). The 2008 election is widely considered as having been a pivotal moment in US campaign history. Carpenter (2010) and Ricke (2010), for example, pointed out that YouTube, particularly its joint projects with PBS ("Video Your Vote") and with CNN ("The CNN/YouTube Debates"), afforded more room than ever for ordinary voters to participate in the campaign process and consequently served as "an instrument of 'checks and balances'" (Carpenter, 2010, p. 223). Some other scholars focused on how individual candidates made use of YouTube to deliver their messages in this particular election. Church (2010) examined leadership discourse by analyzing YouTube clips featuring 16 candidates in the race, and suggested that given the emergence of what he termed "the postmodern constituency" (Church, 2010, p. 138), as well as the unfiltered nature of the medium (Church, 2010, p. 139), voters' focus shifted from candidates' political experience to their character. Duman and Locher (2008) examined Barack Obama and Hillary Clinton's YouTube campaigns and highlighted how the two presidential hopefuls attempted to create and uphold a conversational illusion through their videos. With regard to YouTube, another growing body of literature is concerned with the patterns of user interaction in this "viral marketing wonderland" (Burgess and Green, 2009, p. 3), and the methodological implications of exploring them. Three themes were identified for our research design. First, some studies have focused on what motivates users to share their videos on YouTube in the first place, ranging from perceived usefulness to interpersonal norms (e.g. Yang et al., 2010). Second, from a similar yet more specific perspective, some have established that YouTube is not only a source of information or entertainment for individual purposes, but also a new terrain for social interaction involving acts of "co-viewing" (Haridakis and Hanson, 2009), video responses (Adami, 2009), peer comments (Lange, 2007a; Jones and Schieffelin, 2009), and links to Facebook Walls (Robertson et al., 2010). In this sense, Chu (2009) went further to argue that YouTube plays a role as a "cultural public sphere". That said, others have taken a cautionary stance regarding YouTube's capacity to function as a public sphere. For example, Hess (2009) suggested there are certain obstacles to public deliberation on YouTube, such as the platform's dismissive and playful atmosphere. Moreover, Carpentier (2009) analyzed 16plus, a YouTube-like online platform provided by the north Belgian public broadcaster VRT, and suggested that the users may not be as appreciative of additional means of participation as expected.
Lange (2007a) found that video-based communication on YouTube is no less hostile than faceless text-based communication. Blitvich (2010) added that impolite comments, although sometimes strategically employed, are likely to result in polarization. Chadwick (2009) argued that deliberative processes and "thick" citizenship cannot be the sole yardstick for the assessment of the functioning of e-democracy in the Web 2.0 era. However, if YouTube proves to be an unviable site for discussing serious political issues, it is important to ask why that is so. Answers to this question would advance our understanding of its political potential and how to harness it.
Research questions: As established in the previous section, YouTube is a uniquely interesting environment for political communication. What merits further attention is how this global platform intersects with local political and cultural dynamics. This is an underexplored line of inquiry, especially in the existing literature, which has been predominantly based on incidents from Western polities. With the aim of contributing to filling this lacuna, we address in the present paper the following three questions: (1) What were the salient features of the discussion surrounding the YouTube clip, and how did that discussion develop during the campaign period? (2) What were the salient features of the discussion surrounding the Daum clip, and how did that discussion develop during the campaign period? (3) To what extent and how did the two discussions differ, and what lessons can be drawn from the differences (if any) in terms of harnessing the political energy of Web 2.0?
Research design: Data collection
Results: Patterns of interactions among users
Discussion: Q1. What were the salient features of the discussion surrounding the YouTube clip?
Conclusion and future research: The dialectics between global media and local contexts have indeed been well documented (e.g. Turner, 2005), and the internet has complicated the matter even further. Contrary to the general description of the internet as a supranational network of computers, what we have is, in boyd's (2006) words, "all sorts of local cultures connected through a global network, resulting in all sorts of ugly tensions." What is unique about the case studied in this paper is, however, that users turned to the global space in order to circumvent a local political conflict, not the other way around - hence the neologism "cyber-exile". During the 2007 presidential election, Korean voters used YouTube to share election-related information because the laws of their country prohibited them from doing so on domestic websites. YouTube provided them with a higher level of anonymity than Korean sites. This incident of cyber-exile illustrates the tension between internet-mediated grassroots political activity and the authorities' restrictive interpretation and application of existing laws - the POEA in this case - to curtail the activity (see also Lee, 2011). However, the discussion surrounding the YouTube video clip was brought "back" to Korean cyberspace. Korean users are generally known for having a strong preference for local services (Lee, 2009, p. 312), but more importantly, YouTube could not provide a suitable environment for this particular discussion. Dialogs within YouTube's comment facility, among Korean as well as non-Korean users, were often off topic, and in some instances, unexpectedly turned into "racist flaming". Zuckerman (2010) analyzed various tools for circumventing internet censorship worldwide and suggested that circumvention cannot be a long-term solution. His argument is focused largely on technical aspects, but it still leads to important questions of how, and to what extent, we should then go "beyond" circumvention. We, the authors of this paper, do not intend to advocate the "walled garden" model for online forums, as opposed to the wild of YouTube, nor do we wish to suggest there should be zero government intervention. Our findings indicate that users can circumvent local regulations via the internet and other digital communication technologies (and probably more will do so), but subsequent discussions are likely to become fragmented as a result. Despite the specifics of the case studied, the significance of the present paper lies in the fact that it epitomizes the tension between "old" laws and "new" media. Moreover, our findings clearly demonstrate that an innovative circumvention attempt on the users' part is not enough to harness the potential of online discussion for measured, sustained discourse on the issue at hand. This study has an important limitation. We conducted the analysis on a real-time basis as the event unfolded, and therefore the need to compare YouTube and Daum arose only during the later stages of data collection. As a result, a point-to-point comparison was not feasible, which invites further research. Noteworthy is that on December 29, 2011, the Constitutional Court of South Korea declared the unconstitutionality of the NEC's extended application of Article 93(1) of the POEA to social media, particularly Twitter.
With the next presidential election scheduled for December 2012, future research should examine the impact of this court decision on the long-awaited relaxation of restrictions on election-related online communication in the country.
Acknowledgements: An earlier version of this paper was presented at the Oxford Internet Institute's 2010 conference, and some of the findings (answering different research questions) were published in a Korean-language journal. The authors are grateful to Ae-Jie Bae for her assistance in data collection.
|
Reframing integration: Information marginalization and information resistance among migrant workers
|
[
"Intermediaries",
"Migrants",
"Integration",
"Information behaviour"
] |
Summarize the following paper into structured abstract.
Introduction: In this study, I present the findings from a qualitative investigation that explored the other side of the issue of integration of migrants; that is, the views and perceptions of information intermediaries working with migrants in Israel about the integration process. The notion of integration has been the focus of academic debate in recent years and has been defined in different ways (Ager and Strang, 2008; Cheung and Phillimore, 2014; Farach et al., 2015; Gilmartin and Migge, 2015; Harder et al., 2018; Phillimore, 2012). In its broadest sense, integration means the process or transition by which people who are relatively new to a country become part of society (Rudiger and Spencer, 2003). Integration is achieved when "people, whatever their background, live, work, learn and socialise together, based on shared rights, responsibilities and opportunities" (Ndofor-Tah et al., 2019) while keeping a measure of their original cultural identity (Threadgold and Court, 2005). Integration is being characterized today as a multidimensional and multidirectional (Harder et al., 2018) process that encompasses "access to resources and opportunities as well as social mixing" involving adjustments by everyone in society (Ndofor-Tah et al., 2019).
Theoretical direction and literature review: Social integration of migrants
Methodology: Population of the study
Findings: The content analysis of the interviews revealed three major themes: information marginalization includes data that describe the different factors that keep migrants at the social margins and the ways that this marginalization is reflected in their everyday life information seeking; information resistance includes data that describe the ways by which migrants hold off or rebuff accessing and receiving information; overcoming resistance includes data that reveal the ways by which migrants and social mediators try to overcome information resistance and ultimately information marginalization (see Table II).
Information marginalization: This theme has two main categories: elements of information marginalization that hinder migrants' social integration, and Ager and Strang's (2008) markers and means: three indicators of integration viewed through the perspective of information marginalization.
Elements of information marginalization: The categories comprised in this theme related to Ager and Strang's (2008) facilitators: language and cultural knowledge and safety and security. Findings from the content analysis showed that participants did not view these elements as facilitators of integration, but rather as factors in the migrants' lives that hinder their integration into Israeli society. This theme is comprised of four subcategories: lack of cultural knowledge, lack of language proficiency, living in an unsafe and insecure environment, and discrimination.
Information resistance: The second theme in this study is information resistance. Content analysis of the interviews revealed that migrants resist information as a defensive behavior in response to the unstable and unsafe situation they face in their host country and to their lack of cultural knowledge. This theme includes two subcategories: secrecy and disinformation.
Overcoming information resistance and marginalization: This theme consists of two categories that describe the efforts of social mediators to find new, relevant ways to communicate with migrants as well as the role that social connections play in overcoming information resistance and marginalization.
Discussion: This study presented a new and more nuanced understanding of the process of integration of migrants by re-examining Ager and Strang's (2008) framework from an informational perspective told by the intermediaries who work with these populations. Findings from the content analysis revealed that the process of integration is shaped by the inability of both migrants and the institutions in the host society to close the cultural and social gaps that ultimately result in information marginalization.
Conclusion: Phillimore (2012) wrote, "Integration implies the development of a sense of belonging in the host community, with some renegotiation of identity by both newcomers and hosts" (p. 3). Findings from the study showed that it is in this oftentimes failed renegotiation that information marginalization emerges. By allowing intermediaries to articulate their views and opinions, I was able to understand how personal, cultural and social factors impacted the lives of migrants, and distanced them from the sources of information and support they need to feel at home in their new country. Findings showed that for integration to be successful, it should be the result of an effort by both migrants and local institutions/structures to reach a middle ground of understanding and compromise. This study extends Gibson and Martin's (2019) notion of information marginalization to encompass both sides of the equation of marginalization and situates it in the social, economic and contextual conditions that create information poverty.
|
Narratives of (in)active ageing in poor deprived areas of Liverpool, UK
|
[
"United Kingdom",
"Liverpool",
"Elderly people",
"Social policy",
"Poverty",
"Deprivation",
"Active ageing",
"Narratives"
] |
Summarize the following paper into structured abstract.
Introduction: I see they've been killing cats again [...] "Better bring mine in, then."
Background to the case study: Demographic change has stimulated reviews of concepts of ageing, with "active" ageing emerging as an important focus (WHO, 2002; Walker, 2010; DESA, 2011). Activity in older age may be limited by ageism, poverty in its many guises including ill-health (Howse et al., 2011) and policy and resource constraints which inadequately support older people's welfare (Dean, 2012; Lymbery, 2012). Disparities between poverty and wealth prompt examination of social justice and fairness (Harvey, 1973; Rawls, 2001; Barry, 2005; Dorling, 2011). A minority of older people are wealthy; others rely solely on the state retirement pension; others have occupational pensions and some savings, but these may be eroding relative to living costs (HC, 2009, 2012). Internationally and nationally, the elderly are likely to be among societies' poorest (Price, 2006; Walker, 2009; McKee, 2010; DWP, 2012; DESA, 2011). Phillipson (1998) explores changing ideology and social policy relating to ageing post-1945, with ageing's demands becoming individualised in a postmodern setting and generational interdependence being questioned. The complexities of work-retirement transitions, age discrimination and the changing roles of older people in work and society are meriting critical attention (Phillipson, 2004c, a; Macnicol, 2005). Coleman et al. (1993) illustrate negative calculations of age-related costs and changing elder support social networks, including family. Factors converge: global recession; neo-liberal policies of out-sourcing welfare to the private sector and voluntarism; and post-modern discourses of choice and consumerism (Walker, 2005; Coote, 2009). In different cultural settings, older people may find such socio-political change difficult (Moffatt et al., 2011; Miles, 2009). Concepts of ageing
The study areas and people: Most participants were primarily dependent on the state retirement pension. Additionally, ageing in poor neighbourhoods confers multiple disadvantages (Scharf et al., 2003) and the study areas are among the most deprived in England (CLG, 2010b; LCC, 2004-2011). During the study period, 2002-2007, Anfield, Clubmoor, Everton, Speke-Garston[3] and Tuebrook-Stoneycroft were among the poorest wards in Liverpool on measures including household income and income deprivation affecting older people, both pointers to other deprivations. Despite improvement since 2003, significant areas in wards remain within the poorest 1-5 per cent in England (LCC, 2011): most (89 per cent) of Everton is within the poorest 1 per cent. High levels of poverty and deprivation persist in Liverpool, the once "proud second city of empire" (Belchem, 2006, p. 9). Table I illustrates the ranking of major cities in England on the Index of Multiple Deprivation (IMD) (LCC, 2011). Between 2007 and 2010, Liverpool remained the most deprived major city in England, in depth (concentration) and geographical extent of deprivation. Liverpool's overall weak IMD ranking suggests that low-ranking areas within the city are considerably deprived (Table II). Socio-economic polarity within Liverpool is also evident. Among the study wards and Church, a comparator ward, Everton ranks first (worst) in Liverpool on most measures. Within other study wards, such as Tuebrook-Stoneycroft, there are small areas of relative affluence but deprivation and ill-health are more evident; both have been associated with poverty (Howse et al., 2011). Years of male life expectancy range from Anfield (72.9) to more affluent Church (83.8), with Liverpool at 78.8 years and England, 82.0 years. Worklessness, rates of benefit claim, low incomes and weak educational performance suggest current and potential future deprivations.
The research methodology: Between 2002 and 2007, ACL commissioned studies in five deprived wards to inform policy-making. Liverpool PCT, part of the NHS, contributed informants. The work focussed on older people's needs, aspirations and barriers to active ageing. Participants and informants
Active ageing: participation in social and health activities: There are other factors, but lots of exclusion (from social and health activities) results from low incomes [...] though people may be too proud to admit it [...] income maximisation among older people is not only empowering, it can be a life saver. It means less anxiety and depression and more [...] fresh food, keeping warm, getting out [...] (Key informant). This observation was illustrated by comments such as: "keep-fit sessions cost a quid!" (widow aged 60); and "you've to spend £25 now on a week's shop to get free delivery!" (elderly couple). The problem here was not cash flow, but long-standing forced economy. The following sections explore participants' understanding of "active ageing", patterns in their health and social activities and the implications of the findings for social policy. Active ageing: meanings
The home and active ageing: Satisfaction with "home" was important to well-being and active ageing. It could be the springboard to outside engagement, a place from which to contact the outside world through reading, telephones, computers in some cases, and somewhere to welcome family and friends. The home could be the repository of memory, an essential feature of personal security and identity; indeed, as one participant commented, "my home is me". Home could signify loneliness, however, especially if mobility and health were difficult. It could be insecure within, if coping were problematic, and outside in troubled neighbourhoods. Across the wards, between 12 and 18 per cent of participants said they felt insecure in their homes, mainly through fear of burglary. Visits from family and friends were "cheaper than going out" but lack of maintenance, cleaning and adequate heating could constrain the reception of visitors. Home help services enabled many to remain in their homes, but there was fear of elder abuse and reluctance to allow help from strangers. Reaction to sheltered accommodation varied, depending mainly upon the previous home's characteristics, health, and whether a partner was alive. Loss of independence could sadden some, but for others, security and less "worry" compensated. The worries included burglary ("I can open my window now for fresh air"), organising budgets and home maintenance. Some older residents felt less lonely and happier. Others were ambivalent, liking the greater security, but disliking being observed, within others' structures of time, place and communal living. Some participants were currently homeless, primarily because of poverty and poor health. Homelessness was commonly preceded by intense difficulty, in relationships, mental and physical health and finance, although housing associations appeared generally more supportive than private landlords.
Conclusions: "(In)active ageing" relates first to the contrast between narratives of older person as burden in relation to our observations of the pivotal role of many older people in family survival, working part-time, providing family care, or both. Some participants joked about their "inactivity", then gave accounts of selective disengagement, from post-employment "busy-ness" and "day-time television", to re-engagement in following interests: pigeons, local history and gardening were three examples. The title also recognises the barriers to active ageing, significantly built from lifetime inequity, in families and neighbourhoods.Lie et al. (2009) comment that older people's volunteering cannot substitute for funded, sustained and coherent care systems. In the UK, the Government's Big Society project (2010) has been criticised on a number of grounds: as rhetoric and cover for spending cuts (Corbett and Walker, 2012); contributory to the dismantling of the welfare state (Bone, 2011); and illustrative of short-term-ism (Ware, 2012). The "Big Society" is a wholly inadequate response to the needs of the deprived neighbourhoods we have studied. Here, the two major barriers to active ageing are long-term ill-health and disability underpinned by poverty. As Table I has indicated, Liverpool's deprivation is severe and persistent, so measures to effect improvement need to be well-supported and long-term.
Our minimum agenda for concrete action in the UK would include: setting a national minimum income for active and healthy living (O'Sullivan and Ashton, 2011); ensuring that incomes at least meet this standard; implementing the Marmot proposals (2010) for a "fair society" with reductions in health inequity from the beginning of life to the end; restoring the link between the retirement pension and the Retail Price Index; establishing a National Care Service to address one of the major needs of older people and their carers; establishing a clear and effective focus within government for oversight and development of policy affecting older people; ensuring that urban and rural areas are "friendly" to older people (ILC, 2011), which also means that they would be friendly to all; and providing core funding for a range of opportunities in neighbourhoods for active participation by older people (Deeming, 2009). Lymbery (2012) points out that, especially in recession, an adequate governmental response to older people's needs is unlikely. We share that view. The conditions in the study areas are evidence of lack of social justice and the position is likely to deteriorate with further cuts in spending. Many informants were angered at the likely prospect of deterioration of already inadequate provisions in the wards, which is a fair response. In disseminating knowledge of Liverpool's situation, we hope to contribute to a growing body of knowledge and challenge about ageing in places characterised by these levels of poverty and deprivation.
|
Children's perceptions of obesity as explained by the common sense model of illness representation
|
[
"Qualitative research",
"Children (age groups)",
"Obesity",
"Individual perception"
] |
Summarize the following paper into structured abstract.
1. Introduction: Childhood obesity is a growing global health concern, with physical, emotional and social consequences frequently persisting into adulthood (Daniels et al., 2005; Must and Strauss, 1999; Doak et al., 2006; Reilly et al., 2003). Wang and Lobstein (2006) report that the obesity epidemic seems particularly prevalent among school age children. Accordingly, the World Health Organisation reports that an estimated 22 million children under five years old are currently obese, and that even developing countries are facing a similar problem. For example, in Thailand, the obesity prevalence rate for children aged five to 12 years old rose from 12.2 percent to 15-16 percent over the course of only two years (WHO, 2008). The rate of childhood obesity also seems to be on the increase, even in developed countries. For example, a WHO survey conducted in the USA revealed that obesity in children aged six to 11 years has more than doubled since the 1960s (WHO, 2008). In Australia, recent figures from the 2004 New South Wales Schools Physical Activity and Nutrition Survey (SPANS, Booth et al., 2006) indicate that 26 percent of boys and 24 percent of girls in the state of New South Wales aged between 5 and 16 years were overweight or obese. This is compared to the year 1985, when 11 percent of all young people aged 7 to 16 were overweight or obese (Booth et al., 2006). Childhood obesity has been shown to be associated with an increase in medical complications such as orthopaedic complications (Dietz, 1998), pulmonary consequences such as sleep apnea, asthma and exercise intolerance (Dietz, 1998; Ebbeling et al., 2002), and cardiovascular consequences, such as hypertension, dyslipidemia, chronic inflammation and blood clotting tendencies (Ebbeling et al., 2002). Children who are obese also face increased risks of non-alcoholic fatty liver disease (Daniels et al., 2005) and Type 2 diabetes (Daniels et al., 2005; Ebbeling et al., 2002). In addition to these physical consequences of obesity, children who are obese have also been shown to experience more psychosocial disturbances such as decreased social interaction, depression, impulse control problems and decreased perceived cognitive and athletic ability (Ells et al., 2006). Furthermore, Franklin et al. (2008) report a finding that obese boys and girls appeared to have lowered self-concepts, with girls in particular experiencing significantly lower perceived social acceptance than their normal weight counterparts (Franklin et al., 2008). Accordingly, normal weight children's attitudes towards obese children or obese silhouettes have been found to be negative (Bell and Morgan, 2000) and unfavourable (Staffieri, 1967). Research has shown that accurate and developmentally sensitive understandings of chronic illness in children are associated with enhanced satisfaction, an improved emotional state, a better quality of life and, most importantly, better compliance with treatment (Veldtman et al., 2000). For children this understanding is contingent on a number of factors such as developmental levels and illness experience. It is a widely replicated and supported view that children's perceptions of illness vary in both content and depth, particularly as a child's understanding of their illness cognitions appears to be linked to their developmental level (Bibace and Walsh, 1980).
Thus, attributions that children make about their own illness may be completely different from those of an adult or parent. For example, in the obesity/overweight literature, it has been found that adults understand a wide number of possible causes for obesity. These include junk food advertising during children's television viewing, genetic and endocrine factors, environmental factors, imbalance between energy taken into the body and energy expended, as well as increased calorie intake through diet and sedentary lifestyles (Saelens and Daniels, 2003). In an Australian study of lay persons' understanding of childhood obesity, Hardus et al. (2003) found that the public had similarly sophisticated perceptions of the causes and preventions of obesity. Over half of the adults surveyed endorsed a view that media promotion of unhealthy foods and over consumption of fast food were the main contributors to childhood obesity. Despite this, it appears that the only aspects of children's views of obesity that have been investigated are perceived causes. In a rare study examining childhood conceptions of the cause of obesity, Johnson et al. (1994) found that elementary school-aged children (in grades 1, 3 and 5) lacked much factual knowledge and held many misconceptions. For example, although children could identify "junk" foods as a cause, their definition of junk food included items like bread, pasta, potatoes, bananas and chicken. Many were confused about how fat, meat, salt and cholesterol led to obesity. In fact some thought that cholesterol was a type of food. Only 7 percent identified lack of exercise as a cause of obesity, though many had no idea how exercise kept one from getting fat, or how a sedentary lifestyle could make a person fat. Although 4 percent mentioned calories as a cause of obesity, children did not know what calories were. Other causes of obesity identified were drinking too much milk or water and mother-to-unborn-child transmission (Johnson et al., 1994). Thus, factual knowledge seemed to be limited, with children not fully comprehending the content of information, or how certain factors work to contribute to obesity. With regard to examining childhood conceptions of disease, an accepted, holistic and current framework is the common-sense model of illness representation (CSM; Leventhal et al., 1980). The CSM is primarily designed as an explanatory framework for the way people perceive and react to disease threat (Leventhal et al., 2003; Conner and Norman, 2005). It is hypothesised that these "lay views" of illness are formed along five dimensions, or interpretive schemas (Hagger and Orbell, 2003). The five dimensions of the CSM of illness representations are: (1) Identity: individual statements, beliefs, and labels for the illness; Leventhal et al. (2003) posit that this domain is critical as it joins feelings of vulnerability to general symptoms. (2) Cause: perceived causative agents of the disease, which may include genetics, lifestyles or infections. (3) Timeline: the perceived duration of the illness. (4) Consequences: the perceived impact that the illness has on overall lifestyle. (5) Control/cure: the perceived efficacy of treatment (Hagger and Orbell, 2003), as well as perceptions of how preventable, controllable or curable a condition is (Leventhal et al., 2003). Empirically, the CSM has been validated as being applicable to many illnesses, as it was developed primarily to apply to the health field (Leventhal et al., 2003).
When applied to children, the CSM forms a good alternative to the Piagetian framework which is frequently used in the study of children's perceptions of illness (Eiser, 1989). First, the CSM addresses more domains of illness than simply causative factors, a criticism made of applications of the Piagetian model (Eiser, 1989). Secondly, the CSM allows these five schema structures to be reshaped and to develop in breadth as new information is gained from experience and other sources (Leventhal et al., 2003), a notion especially pertinent to children as they age and become more sophisticated in their cognitions. Empirically, previous studies have found that the CSM can be used to effectively describe children's perceptions of illnesses. For example, Goldman et al. (1991) reported that responses given by children aged four to six about the common cold tapped into the five domains articulated by the CSM. In addition, Paterson et al. (1999) also discovered that seven- to 14-year-olds' discussions on the common cold and asthma could also be conceptualised in terms of the illness representations model.
2. Aims: The aim of the current study is to examine children's understandings of obesity, exploring childhood perceptions of the identity, cause, timeline, consequences and cures/control of obesity.
3. Method: The current study recruited a total of 33 children: 12 aged seven to eight (36.4 percent), 13 aged nine to ten (39.4 percent) and eight aged 11-12 (24.2 percent). Age groups correspond to primary school grades 4, 5 and 6 respectively and are the groups for which childhood obesity has shown the greatest increases over time (Behn and Ur, 2006; Wang and Lobstein, 2006). There were 20 boys in the sample. The final sample comprised nine overweight/obese children (27.3 percent) and 24 normal weight children (72.7 percent). Children were recruited through the Catholic Schools system as well as from a group seeking appetite awareness training in Sydney, Australia. Children in this latter group were included in the study only if their body mass indices fell within the childhood "overweight" or "obese" range defined by Cole et al. (2000). Children were interviewed using a semi-structured interview protocol which was designed to reflect the five dimensions of the CSM. Ethics approval for this study was granted by the University Human Ethics committee as well as the CEO of an Archdiocese of Catholic Schools in Sydney, Australia. The final sample is shown in graphical form in Figure 1. 3.1 Materials
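As a hedged illustration of the screening rule mentioned above, the sketch below classifies a child's BMI against Cole et al. (2000)-style cut-offs; the cut-off values, the ages covered and the helper name are placeholders, since the published cut-offs vary by age and sex and are not reproduced here.
```python
# Hypothetical BMI screen in the spirit of the inclusion criterion; the
# cut-off values below are illustrative placeholders, not the published ones.
EXAMPLE_CUTOFFS = {
    # (age_years, sex): (overweight_bmi, obese_bmi)
    (10, "male"): (19.8, 24.0),
    (10, "female"): (19.9, 24.1),
}

def classify_weight_status(weight_kg: float, height_m: float, age_years: int, sex: str):
    bmi = weight_kg / (height_m ** 2)
    overweight_cut, obese_cut = EXAMPLE_CUTOFFS[(age_years, sex)]
    if bmi >= obese_cut:
        return bmi, "obese"
    if bmi >= overweight_cut:
        return bmi, "overweight"
    return bmi, "normal weight"

# Example: a hypothetical 10-year-old boy weighing 40 kg at 1.40 m
print(classify_weight_status(40, 1.40, 10, "male"))  # -> (about 20.4, "overweight")
```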
4. Results: 4.1 Identity
5. Discussion: In the current study, children of primary school age were interviewed on their views of obesity as guided by the five dimensions of Leventhal et al.'s (1980) common sense model of illness representations. In general, it was found that children were quite knowledgeable in all areas. The most obvious identity feature of obesity depicted by the children was a large stomach. If the negative connotations of stereotyping are ignored, this is encouraging as children may be referring to a large waist circumference, which is associated with the most health complications in obesity (Wardle et al., 2008). However, the other identity features of obesity listed by the children were also primarily negative in nature. Such findings are consistent with previous research that has found negative societal attitudes towards overweight people among adults (Bell and Morgan, 2000; Pingitore et al., 1994; King et al., 2006; Hebl and Xu, 2001) and children (Hill and Silver, 1995; Staffieri, 1967). Children correctly identified consumption of junk food, overeating and non-engagement in activity and exercise as prominent causes of obesity, reminiscent of findings by Eiser et al. (1983) whereby children were able to identify being "healthy" as exercising, having a balanced diet and being energetic in general. Almost half of all children interviewed did not mention sedentary behaviour as a cause of obesity. A similar finding was reported by Johnson et al. (1994). This finding has also been reflected in a recent study conducted by the NSW Schools Physical Activity and Nutrition Survey (SPANS) (2004) (see Booth et al., 2006). The study reported that despite there being a recent increase in school-aged children fulfilling recommended exercise requirements over the year 2004-2005, there was also a corresponding increase in children undertaking sedentary activities such as television, videos/DVDs, and electronic and computer games (Booth et al., 2006). This generally poor understanding of sedentary behaviours is of particular interest as sedentary living is a major focus of recent media campaigns by the NSW government to decrease the childhood obesity rate. Perhaps such interventions need to directly address how sedentary behaviours lead to obesity, and encourage children to identify their own patterns of sedentary behaviours. The majority of children in the current sample stated that the timeline or duration of obesity was reliant on people undertaking positive health behaviours. This could also be interpreted as reflective of an internal locus of control whereby the person is seen as being responsible for their own weight loss. As such, it may be that children believe that people who either do not engage in these positive behaviours or fail to lose weight despite them have "done the wrong thing". On the basis of their understanding of the causes of obesity, it appears that children believe the duration is short if people take appropriate action. This internal attribution of weight loss may mean that they fail to capture the multifactorial complexity of the issue. It was noted that normal weight children were more adept than their overweight counterparts at detecting the most severe consequences of obesity. First, it may be possible that the parents of overweight children protect them from such negative consequences, as it is understandable that parents would aim to minimise distress in their children.
Second, it may also be that overweight children and their parents underestimate the true risks of obesity, as found by Jeffery et al. (2005). Lastly, the cures/control section of the interview yielded the finding that most children endorsed the idea that exercise was a potent cure for obesity. This finding may be seen as reflecting a general sense of an internal locus of control, in that it is seen as a person's own responsibility to carry out this behaviour to lose weight, and that exercise, being a vigorous, visible activity, is the most obvious behaviour to contribute to this. However, this is also limiting in that exercise is not the only component contributing to healthy lifestyles, and needs to be undertaken in conjunction with other positive health behaviours. Hence, the multiple components of obesity and overweight also seem to be underappreciated by children in the current study. Further interventions should make clear the multifactorial complexity of obesity to children, and further highlight the nature of sedentary behaviours in maintaining this condition. The current study was limited in that most of the sample came from the Catholic education system in Sydney's eastern suburbs, which are more affluent in terms of socioeconomic status; this impacts on the study's external validity. Nonetheless, recent figures from the Australian Bureau of Statistics (www.abs.gov.au) indicate that obesity and overweight actually tend to be more prevalent in low income households and living areas of greatest disadvantage. Future research in this area should therefore aim to recruit participants from areas of lower socio-economic status. In addition, a broader sample could also be obtained from public education settings. On a related note, the treatment-seeking sample in the current study also presents limitations in the sense that results may be biased and not representative of the wider population. In this respect, wider sampling in a number of areas would be a further avenue to consider in future studies. In conclusion, this study has qualitatively examined children's views on facets of obesity as pertinent to the common sense model of illness representations. Results found that children do indeed have well developed perceptions of the identity, cause, timeline, consequences and control/cure of obesity, although these perceptions show some qualitative deviations from "expert" or "adult" understandings of these facets. Future obesity prevention interventions should take these childhood perceptions into account when considering improved adherence to regimes.
|
The hegemonic gender order in politics
|
[
"Gender",
"Discourse analysis",
"Politics",
"Critical feminist perspective"
] |
Summarize the following paper into structured abstract.
Introduction: Despite the ongoing increase of women in the top positions of hierarchy, they continue to be underrepresented in politics, occupying 19.5 percent of seats worldwide, 22.8 percent in Europe, 22.6 percent in the Americas and 42.0 percent in Nordic countries (Inter-Parliamentary Union, 2017). In 2017, Italy ranks 44th out of 193 in the world classification compiled by the Inter-Parliamentary Union (2017), with Italian women's participation in the res publica at between 28 and 31 percent. Although the number of women elected to central institutions has increased compared to the past, in Italy women are still underrepresented in politics (Massa, 2013), and the number of women politicians decreases up the hierarchy of power (Fornengo and Guadagnini, 1999; Bonomi et al., 2013). Every president of the Italian Republic has been a man, and only 3 of the 14 people who have served in the institutional role of president of the Chamber of the Italian Republic from 1948 to date have been women (Italian Parliament, 2017). This position of women in politics, characterized by both a numerical increase compared to the past and a numerical decrease at the top of the power hierarchy, is well represented by the current government. The XVII Legislature, which started in 2013 and is close to conclusion, is the legislature with the largest number of women in Parliament: women account for 32 percent of deputies and 30 percent of senators, but hold only 16 percent of the roles of leader, president of a commission and member of the office of the presidency. Analyzing the composition of the Italian Parliament, it is clear that the disparity between men and women in politics becomes more marked as roles become more prestigious. This condition is also confirmed at the regional level: women are present in the joint sessions in 29 percent of the councils and hold key positions (departments and offices with greater decision-making power) in 18 percent of the councils, while only 10 percent of regional presidents are women and only 2 percent of the leaders of capital-city municipalities are women (Openpolis, 2015).
Gender, power and discourse in politics: The studies on women and politics continue to focus on women's numbers in politics and on sex differences, and only with difficulty do they embrace the concept of gender (Kenny, 2007) as a category that structures and makes sense of particular social practices. However, feminist studies have developed complex theories of gender, including both patriarchal and discursive conceptions of power. The power inequality between genders rests on the concept of patriarchy, a system of social structures and practices of domination founded in the subordination of women by men (Walby, 1990). Gender relations are power relations involving formal public structures, such as politics, and private structures, such as the family. The distribution of power between genders, based on a hegemonic patriarchal culture that naturally associates the female with the home and the male with labor and political activity, has produced an asymmetry in which the historical, forced absence of women from prestigious positions, such as politics, involves a vicious circle whereby women's greater participation in domestic work has become both cause and consequence of their exclusion (Walby, 1990). Through the internalization of gendered norms produced routinely in the discourses of everyday life, this gap of power between men and women has become invisible, misrecognized and recognized as legitimate and natural (Bourdieu, 1991), contributing to the consolidation of a "hegemonic masculinity" that preserves, legitimizes and naturalizes men's power and, consequently, women's subordination (Connell, 1987, 2016; Connell and Messerschmidt, 2005). Discourse, therefore, plays a significant role in the reproduction of dominance, which is considered an exercise of social power by elites, institutions or groups that has the effect of increasing social, political, class and gender inequality (van Dijk, 1993, 2011). Social power is enacted, reproduced or legitimized by the discourse of dominant groups or institutions (van Dijk, 1996).
Methodology: This study is based on 30 biographical interviews conducted with local Italian politicians, 15 men and 15 women. We chose to involve local politicians because the share of women in local Italian institutions is lower than at the national level (ISTAT, 2017).
Who is to blame for the gender gap in politics?: Data analysis shows how the dominant gender order is performed and reinforced through the discursive practices of the politicians interviewed. The participants recount their experiences as politicians and produce discourses that clarify the norms regulating the political field. These discourses are part of a wider social context in which choices, roles and careers are gendered.
Conclusions: This paper explores the topic of the gender gap in politics through a discourse analysis of a group of Italian politicians and shows the patterns whereby a dominant gender order is constructed and reproduced. Discourse analysis discloses some interpretive repertoires used by men and women to confirm and reinforce the hegemonic gender order.
|
The role of frequent engagement in alliances in firm likelihood to patent: First wave alliances in UK bio-pharmaceuticals
|
[
"Innovation",
"Pharmaceutical industry",
"Strategic alliances",
"Patents",
"Biotechnology",
"C33",
"M10",
"O32",
"D74"
] |
Summarize the following paper into structured abstract.
1. Introduction: As the popularity of strategic alliances increases and such alliances become an integral component of business development, attention in the research community has moved towards an exploration of their role in firm performance and innovation. Formal and informal interactions with external actors have long been argued to play a fundamental role in firm innovation (Von Hippel, 1988; Frankort et al., 2012; Rice et al., 2012; Colombo et al., 2011; Demirkan, 2018). However, it is accepted that alliances carry coordination costs and risks of misappropriation, and these frequently diminish the chances of success or preclude full acquisition of the anticipated benefits (e.g. De Man and Duysters, 2005; Gkypali et al., 2017; Faems et al., 2010). A substantial body of literature finds a positive relationship between the extent of alliances and firm innovation performance, whilst other work identifies diminishing and even negative returns (Hoang and Rothaermel, 2005; Sampson, 2005; Laursen and Salter, 2006; Rothaermel and Deeds, 2006). As a result, a growing strand in the literature has examined the factors enabling firms to generate and capture value from alliances. One such factor is alliance experience accumulation: as firms develop greater experience in managing alliances, they become better at coordinating cross-organisational tasks and knowledge flows (e.g. Sampson, 2005). Another factor that can improve performance in alliances is developing formal and codified processes and routines for alliance management (e.g. the establishment of dedicated alliance functions), which are argued to capture, or to be reflective of, firm-specific alliance management capabilities (Kale et al., 2002; Kale and Singh, 2009, 2007; Heimeriks and Duysters, 2007; Sampson, 2005; Schreiner et al., 2009; Di Guardo and Harrigan, 2016; Shukla and Mital, 2018). Whilst these contributions offer useful insights, the greater part of current theorising has been constructed from cross-sectional data. Longitudinal explorations are scarce, so we lack a nuanced understanding of the type of changes that occur within firms over time with respect to enhanced innovation potential from engaging in alliances.
2. Theoretical background: alliance experience and alliance capabilities: To explain heterogeneity in firms' abilities to benefit from alliances, the alliance literature resonates strongly with knowledge- and capability-based theories of the firm (e.g. Kogut and Zander, 1992; Helfat and Peteraf, 2003). Here, we adopt a perspective informed by both the evolutionary theory of the firm (Nelson and Winter, 1982) and dynamic approaches to the resource-based view (RBV) (Helfat et al., 2007; Helfat and Peteraf, 2003). These approaches take a dynamic view of organisational development, emphasising the role of experience and knowledge accumulation in supporting improved management and coordination of organisational tasks and activities. Given their dynamic and longitudinal orientation, they are closely aligned with our own analytical approach, informing our exploration of the roles of alliance experience accumulation and frequent engagement in alliances in organisational learning and enhanced innovation.
3. Alliance experience and firm returns from alliances: Firms with greater experience can draw on a larger pool of situations about what has worked in practice when making decisions and inferences with respect to the performance of organisational practices (Levitt and March, 1988; Argote et al., 1990). Alliance experience (the cumulative number of alliances) improves firms' abilities to manage and coordinate alliances, to improve coordination of inter-organisational relationships and joint tasks, to form efficient arrangements for knowledge sharing, to deal effectively with unforeseen contingencies and to identify ways to overcome and resolve inter-partner conflict (Anand and Khanna, 2000; Sampson, 2005; Belderbos et al., 2015; Rothaermel and Deeds, 2006). Due to the link between alliance experience and organisational learning, experience is seen as a fundamental antecedent to both alliance and alliance portfolio capabilities (Wang and Rajagopalan, 2015; Kale and Singh, 2009; Shukla and Mital, 2018). The literature also suggests that firms may not be in a position to benefit from learning from experience and superior coordination of alliances when facing power asymmetries and resource dependence in alliances. Conflict is more frequent in such alliances, which affects value creation and capture, especially for the weaker partner, who is at a comparative disadvantage (Diestre and Rajagopalan, 2012). Power asymmetries are likely in the bio-pharmaceutical sector, as large pharmaceutical firms may be collaborating with small dedicated biotech firms and, due to their longer commitment to alliances and historic investments in downstream capabilities, may be at a comparative advantage in deriving value from alliances (Caner and Tyler, 2013).
4. Frequent engagement in alliances as an antecedent to alliance capability: Deciphering, coding and measuring capabilities is notoriously difficult (Godfrey and Hill, 1995). As a result, the alliance literature has, in the main, relied on identifying alliance management practices (e.g. alliance functions) as a way of documenting alliance capabilities (Kale et al., 2002; Kale and Singh, 2007, 2009). An exception is found in the work of Rothaermel and Deeds (2006). They explore an inverted U-shaped relationship between cumulative alliance experience and new product development. They argue that, as the inflection point of the inverted U-shaped curve corresponds to the level of experience beyond which firms start experiencing inefficiencies in alliance management, it can reflect the level of their alliance capability.
5. Sample and methods: 5.1 Sample
6. Estimation and results: Table I presents descriptive statistics and bivariate correlations.
7. Tests of robustness[6]: The results suggest that there are diminishing returns to cumulative alliance experience. This can signal that ageing experience contributes less to current outcomes and that recent experience may have a higher contribution (Shukla and Mital, 2018; Sampson, 2005). Following Sampson (2005), we explore the contributions of recent and past alliance experience. We develop a set of variables capturing alliance experience between one and six years prior to our year of observation. So, for example, in 2001, alliance experience of one year corresponds to the number of alliances initiated in 2000, alliance experience of two years to those initiated in 1999, and so on. None of these variables appears to be significant, with the exception of alliance experience of four years prior to the year of observation, which appears with a negative and significant sign. Therefore, the diminishing returns to cumulative alliance experience identified in our paper cannot be attributed to decreasing contributions of distant experience. Our results most likely reflect that firms cannot experience improved efficiency ad infinitum simply by forming more alliances and learning from experience how to improve alliance management and coordination.
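As one way to make the construction of these experience measures concrete, the sketch below builds a cumulative experience count and age-specific (lagged) experience counts from a firm-year panel of alliance counts. It is a minimal illustration only: the column names and figures are hypothetical and are not taken from the paper's data.

```python
import pandas as pd

# Hypothetical firm-year panel of alliance counts (illustrative values only).
panel = pd.DataFrame({
    "firm_id": [1] * 6 + [2] * 6,
    "year": list(range(1996, 2002)) * 2,
    "n_alliances": [0, 2, 1, 0, 3, 1,
                    1, 0, 2, 2, 0, 1],
}).sort_values(["firm_id", "year"])

g = panel.groupby("firm_id")["n_alliances"]

# Cumulative alliance experience accumulated before the year of observation.
panel["cum_experience"] = g.cumsum() - panel["n_alliances"]

# Age-specific experience: alliances initiated exactly k years before the
# observation year (e.g. for 2001, exp_lag2 counts alliances formed in 1999).
for k in range(1, 7):
    panel[f"exp_lag{k}"] = g.shift(k)
```

Each lagged column can then enter the estimation alongside cumulative experience, in the spirit of the decomposition described above.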
8. Discussion: The paper contributes to the literature on the role of alliances in innovation and on alliance capabilities by using a longitudinal analysis, which makes it possible to capture the impact on innovation of organisational learning in managing and coordinating alliances. Because learning is difficult to observe directly, the paper uses this longitudinal approach to trace dynamic changes within firms and to observe learning through its impact on outcome variables such as innovation. The paper adds to a slender body of work exploring antecedents to firm-level alliance capabilities and their impact on enhancing firms' abilities to innovate (for a review see Wang and Rajagopalan, 2015).
9. Implications for management theory and practice: Our research contributes to the literature on the antecedents of alliance and alliance portfolio capabilities and on the conditions that shape superior firm outcomes from alliances, such as innovation. It suggests a need to delve into the antecedents of alliance capabilities, a thin body of research, and to identify nascent factors that provide potential foundations for alliance capability development (Helfat and Peteraf, 2003), shifting attention away from the role played by the alliance management practices (such as dedicated alliance functions) that have dominated existing research. This is particularly important as firms may establish such practices during the advanced stages of the alliance capability development process (Kale and Singh, 2009), and as such they may not appropriately reflect the foundational stages of alliance capability development. Here, we echo calls to delve deeper into understanding alliance management capabilities and the antecedents of alliance portfolio capabilities (Wang and Rajagopalan, 2015), especially in the context of alliances involving a higher learning potential (Heimeriks, 2010). Moreover, recent research shows that codification of alliance learning and systematic approaches to alliance management contribute to efficient partner selection and alliance termination, but may restrict the flexibility and adaptability that are important for efficient management during the course of the alliance (Heimeriks et al., 2015; Wang and Rajagopalan, 2015).
|
Smartphones and wine consumers: a study of Gen-Y
|
[
"Wine",
"M-commerce",
"Social Networks Systems",
"Y-generatio"
] |
Summarize the following paper into structured abstract.
Introduction: "Omnichannel" has become a buzzword in retail for good reason. New technologies, such as mobile devices and social media, combined with better data, bring the long-time dream of a unified cross-channel shopping experience within reach. In practice, however, most retailers still fall short of achieving this vision, especially as it applies to Generation Y ("Gen-Y" - those people born between 1980 and 1991) and their use of smartphones. Mobile users are increasingly accessing social media using mobile devices, whether via browsers or apps. A study by Adobe (2013) among mobile users in the USA, Canada, UK, France and Germany found that most had accessed social networks using a mobile device, ranging from 94 per cent for those 18-29 years of age to 75 per cent of those 50-64 years of age. In fact, Facebook was the second most visited Web site/application that was accessed by smartphones and was the top smartphone app in the USA in August 2013 (ComScore, 2013). In countries, such as Italy and Germany, penetration rates of mobile phones exceed 100 per cent, with some consumers owning more than one mobile phone (Kaplan, 2012). Mobile phones and devices are increasingly used in conducting mobile commerce (Venkatesh et al., 2003, Ngai and Gunasekaran, 2007). Reputation is one of the reasons why customers rely on particular Web sites, apps or Web-apps, and some studies have investigated trust and reputation issues in a mobile ad hoc network environment (Lax and Sarne, 2008). Newman (2010) found that some 700,000 people view wine-related videos every month; there are over 7,000 wine tweets per day and > 300 iPhone apps for wine. Breslin (2013) estimated that 90 per cent of wine drinkers use Facebook 6.2 hours per week.
Brief presentation of Gen-Y's usage of social media embedded on smartphones: Who is a member of Gen-Y?
Confirmatory study of Gen-Y behavior with social media and m-commerce: Research method
Time and access to the Internet: Among the 190 respondents, 68 per cent connect to the Internet using their mobile; 38 per cent spend between 10 and 19 hours a week on the Internet, whereas 41 per cent spend more than 20 hours a week. Nearly all of them (95 per cent) access the Internet every day.
Purchase behavior and influence of peers' recommendations: Only 43 per cent purchase on the Internet at least once a month. They mainly use the Internet to stay in touch with friends and relatives (95 per cent), but also to look for information on a product (69 per cent). They also access discussion groups (29 per cent) and chat rooms (21 per cent).
Wine purchase and consumption: Regarding wine consumption and habits, 7.4 per cent of our respondents are members of a group dedicated to wine. Wine purchase frequency ranges from "don't buy wine" (32.6 per cent) to "buy wine several times a year" (36.8 per cent). Considering wine buyers only, they mainly buy wine in supermarkets (56.3 per cent) and hypermarkets (19.5 per cent), but also in wine shops (19.5 per cent). They prefer to consume good wines:
Wine purchase and m-commerce: The Gen-Y consumers (56/190) who frequently buy wine (several times a month or more) consider the usefulness of the information regarding a specific wine (3.2/5) as important when looking up information through their mobile. This generation also rates highly:
Discussion, limitations and future research: As novice or potential wine consumers, members of Gen-Y are becoming increasingly significant targets for wine marketers (Mueller and Charters, 2011). This paper looks at the current state of m-commerce, the consumption of smartphones and social media and the transformation of the consumer into an omnichannel shopper. It also examines some responses to this emerging way of shopping, which enriches the in-store experience with digital integrations, especially in relation to Gen-Y consumers. Armed with smartphones and tablets, wine shoppers go back and forth effortlessly between the real world (whether in hypermarkets or supermarkets, convenience stores, wine shops, at the estate/winery or during a wine fair) and the digital world (through the Internet, mobile apps, wine clubs or mail order). They use their phones while in stores to research products and compare prices, and they order online and then pick up in person. At the same time, they consult friends near and far whenever they find themselves contemplating a purchase, such as a nice bottle of wine. Every day, more of them come to expect a mobile or "omnichannel" experience.
Limitations and future research: This topic is promising because France AgriMer (2012) shows that 45 per cent of wine consumers below the age of 25 are occasional drinkers (once or twice a week), as are 50 per cent of those between 25 and 34 years of age. The Wine Market Council has also published data showing that millennials (Gen-Y) consume 24 per cent of the total volume. Finally, AC Nielsen (2011) data from March indicate that the Internet and mail order represent 10 per cent of total sales in the UK. These data show that the Internet is increasingly important and that Gen-Y members are occasional drinkers; to connect these occasional drinkers with wine, it is therefore important to develop pleasant platforms and interactive social networks. The advantage of providing good service on an m-commerce Web site translates into satisfied customers who become brand advocates: they can refer other people to the wine grower. Internet consumers talk to other consumers about a good customer service experience. For a service such as the purchase of wine from a particular wine grower who is already selling online and thinking of expanding into m-commerce, providing good customer service is a must. When customers tell other people about a bad experience, they do it on social networks to reach a large audience. This is why social networks must also be taken into account when planning an m-commerce strategy, to avoid any negative buzz and thereby maintain a good e-reputation.
|
Towards global music digital libraries: A cross-cultural comparison on the mood of Chinese music
|
[
"Digital libraries",
"Cross-cultural",
"Chinese music",
"Mood perception",
"Music digital libraries",
"Music mood"
] |
Summarize the following paper into structured abstract.
1. Introduction: Music seeking and consumption are no longer confined by the boundaries of country, region or culture today (Lee et al., 2013). Music, as a cultural object, may be perceived differently by people from different cultural backgrounds, imposing a challenge on music digital libraries (MDL) in meeting the needs of a diverse audience (Weissenberger, 2015). Consequently, an increasing number of researchers have started to investigate various cross-cultural issues in the music information retrieval (MIR) and MDL fields. As people often seek music for emotional goals (Lavranos et al., 2015), music mood[1] has increasingly become a popular access point for music information in many MDL and online music services (Hu, 2010; Hu and Downie, 2007). This trend has raised questions regarding the applicability of music mood across cultural boundaries. Probably due to its subjective and context-based nature, music mood perception is often regarded as culture-dependent (Wong et al., 2009). A number of previous studies have compared mood perceptions of music in various cultures by listeners from different cultural backgrounds (e.g. Balkwill and Thompson, 1999; Fritz et al., 2009; Hu and Lee, 2012; Lee et al., 2013; Singhi and Brown, 2014; Egermann et al., 2014). Although specific findings vary, a general trend found in these existing studies is that listeners' perceptions of music mood can be influenced by their cultural backgrounds.
2. Background and related work: 2.1 A brief history and types of Chinese music
3. Mood representation models in MIR: There are two main kinds of models used to represent mood in music psychology and MIR: categorical and dimensional. The former uses a set of discrete terms (e.g. "passionate," "cheerful") to represent the mood of a piece of music. The most classical model of this kind is Hevner's (1936) adjective circle, in which eight mood categories are placed in a circle, each with a set of terms. Dimensional models, in contrast, represent mood with continuous values in a low-dimensional space. Different models may have different dimensions, yet valence (i.e. level of pleasure) and arousal (i.e. level of energy) are among the most popular. The dimensional model used most often in MIR is Russell's (1980) model. The categorical and dimensional models have their own pros and cons. Yang and Chen (2012) summarize that categorical models are more user-friendly as they consist only of terms in natural language, whereas dimensional models are advantageous in quantifying the intensity of moods. Notwithstanding the importance of dimensional models, categorical ones are more suitable for the current study, whose purpose is to validate and compare the values of mood metadata in the context of cross-cultural MDLs.
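To make the contrast between the two families of models concrete, the sketch below shows one way mood metadata for a track could be stored both categorically (a set of discrete terms) and dimensionally (a valence-arousal point in the spirit of Russell's circumplex), together with a coarse mapping from the dimensional space back to quadrant labels. The terms, coordinates and quadrant names are illustrative assumptions, not values from the study or the MIREX clusters.

```python
from dataclasses import dataclass

@dataclass
class MoodAnnotation:
    """A track's mood stored in both representation families (illustrative)."""
    categorical: set      # discrete mood terms, e.g. {"cheerful", "passionate"}
    valence: float        # pleasure, roughly -1 (negative) to +1 (positive)
    arousal: float        # energy, roughly -1 (calm) to +1 (excited)

def quadrant(valence: float, arousal: float) -> str:
    """Map a valence-arousal point to a coarse categorical quadrant label."""
    if valence >= 0:
        return "happy/excited" if arousal >= 0 else "calm/content"
    return "angry/tense" if arousal >= 0 else "sad/depressed"

track = MoodAnnotation(categorical={"cheerful", "passionate"}, valence=0.7, arousal=0.4)
print(quadrant(track.valence, track.arousal))  # -> happy/excited
```

A categorical scheme is easy for users to browse, while the dimensional coordinates allow the intensity of a mood to be compared across tracks, which is the trade-off summarized above.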
4. Research design: 4.1 The participant groups
5. Results: 5.1 Characteristics of participants
6. Implications for cross-cultural MDL/MIR design: As advocated by Weissenberger (2015), a flexible organization system is needed for MIR/MDL to meet the needs of different musical traditions. The results of this study have important implications for designing MDL of this kind.
7. Conclusion and future work: This study compares Hong Kong and US listeners' mood perceptions of 29 Chinese music pieces, with the goal of investigating whether and how mood perceptions of the two groups of listeners differ, and how the differences can inform the design of cross-cultural MDL. Music mood was modeled with the MIREX five mood clusters and the results suggested further refinement of this model. The selected music pieces included six genres and styles of Chinese music, ranging from traditional folk music to several sub-genres of C-pop.
|
An integrative model for understanding team organizational citizenship behavior: Its antecedents and consequences for educational teams
|
[
"OCB",
"Educational teams",
"Team innovation"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Today schools operate in a dynamic environment, each struggling to gain a competitive edge (Orr and Orphanos, 2011). This environment reinforces the understanding that schools should strive to employ teachers who are willing to go the extra mile, namely to engage in organizational citizenship behavior (OCB). OCBs are behaviors and actions that go beyond the formal role and what is laid down in the teacher's job description, and contribute to the achievement of the school's objectives (Oplatka, 2006). OCB in schools is commonly taken to be an individual phenomenon, namely a personality trait or a social response to the behavior or attitude of superiors or to other motivation-based mechanisms in the workplace (Zeinabadi, 2010). This individual approach seems wanting: teachers perform or fail to perform OCBs not in a vacuum but in an organizational context, which most probably serves to encourage or discourage them (Somech and Drach-Zahavy, 2004). Only in the last two decades have some scholars studied OCB as a team- or organizational-level concept (e.g. Ehrhart, 2004). Put otherwise, this line of research probes the possibility that teacher OCB can be better understood as a team or organizational feature that thrives in a context. This development is important because the aggregate OCB level, not sporadic actions by some individuals, influences organizational effectiveness (Organ, 1988). Moreover, the several studies that have captured OCB as a collective phenomenon examined either its antecedents (e.g. Hu and Liden, 2011) or its consequences (e.g. Yen et al., 2008). Very few researchers, if any, have explored the mediating role of team OCB in relation to specific antecedents and outcomes. The present research seeks to address this deficiency by examining OCB as a team phenomenon. The first premise of the research model is that it is important to measure OCB as a collective structure. Next, the mediating role of team OCB is investigated. Specifically, the research model posits that the contextual variable of a team's justice climate, and the team's collective psychological state (psychological capital), will be positively related to team OCB. Furthermore, team OCB will be positively related to the outcome of team innovation. The model also suggests that team OCB will mediate the relation between the antecedents and the outcome.
Theoretical background and hypotheses: Teachers' OCB is defined as "[...] teachers voluntarily going out of their way to help their students, colleagues, and others as they engage in the work of teaching and learning" (DiPaola and Hoy, 2005, p. 390). OCBs are essential because formal in-role job descriptions cannot cover the entire array of behaviors needed for achieving school goals (Zeinabadi, 2010). They operate indirectly: they influence schools' social and psychological environment, enhance school effectiveness by freeing up resources for more productive purposes, help coordinate activities within the organization, and enable teachers to adapt more effectively to environmental changes (Sesen and Basim, 2012). Teachers manifest OCB, for instance, by staying after school hours to help a student with learning materials, aid a colleague with a heavy workload, volunteer for unpaid tasks, or make innovative suggestions to improve the school (Somech and Oplatka, 2014).
Method: Sample and procedure
Results: Table I shows the means, standard deviations and correlations for the study variables.
Discussion: The present study explored team OCB from a context perspective. This course is important because to date most studies on OCB in schools have focused on teacher OCB as an individual feature, without regard for its contextual nature (Somech and Oplatka, 2014). Put simply, although OCB is performed by individuals, the willingness to engage in OCB is not generated in a vacuum, and the team context most likely serves to encourage or discourage people in exerting these extra-role behaviors (Vigoda-Gadot et al., 2007). In this regard, our findings highlight the importance of team-level antecedents and outcomes for better understanding how between-team differences in OCB develop and what their consequences are. The results contribute to the educational administration literature in several respects.
|
Decision '08: event marketing or product sampling?
|
[
"Sampling methods",
"Marketing strategy",
"Product trials",
"Target markets",
"Direct marketing"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Recent trends in the evolution of marketing have delivered return on investment (ROI)-driven brand managers to an important crossroads. Should they choose event marketing or consumer sampling? Can it ever be both? If so, under what circumstances?
If sampling results are strongest when the trial opportunity is the greatest for the product category, why are so many brands risking the positive ROI of a targeted, direct-to-consumer sampling program to invest in event marketing programs?: Most marketers believe they need to do something more to win over today's young adults - the target of most brand samples. Many marketers developing promotion plans hope to make a mark on the business by creating "cool" newsworthy promotions. Some brands choose to sample at events in the hope that interest in the event will deliver additional brand loyalty. Some marketers believe that integrating all elements of the marketing plan will deliver optimal results. This leads them to believe that their sampling results will be strongest in a program that ties various promotional activities together.
Is the added expense of an event necessary?: More importantly, does it provide a better ROI? Here are some instructive case studies. Situation no. 1
Summary: Even when a targeted, direct-to-consumer sampling program has higher distribution costs, due to using a highly targeted list or having to mail the sample, it is unlikely that an event-sampling program could return a better ROI. That's because sample controls are not as good and trial numbers are likely to be lower, resulting in lower purchase numbers. It is also highly unlikely that any other benefit derived from an event could increase ROI enough to cover the substantial trial and purchase differences. What steps should marketers take to achieve the highest ROI from product sampling?
|
Strategic megabrand management: does global uncertainty affect brands? A post-9/11 US/non-US comparison of the 100 biggest brands
|
[
"Terrorism",
"Brands",
"Uncertainty management"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.
Introduction: from brand management to megabrand strategies: The objective of this research article is to shed light on the evolution of brand management into a crucial strategic tool for international business operations. On the basis of the literature available in this field, we analyze the largest 100 brands (hereafter categorized as megabrands) in terms of ranking and value modifications over the 2001 to 2005 period, a mature globalization period, with the first ranking referring to pre-09/11 findings. The sample and its analysis provide us with significant findings that open crucial questions about US/non-US brand strategy and perceptions, and about the future application of global megabrand policies. We then shed light on the causal role that global terrorism may play and tentatively propose brand strategy solutions, without excluding other causal factors or co-factors that will need further inquiry. Overall, our hypothesis is that brands serve to bring security[1]. Accordingly, if the source of that brand is less secure, then it will be less effective as a brand. This hypothesis needs to be qualified: in particular, would one not expect the short-run reaction to insecurity to be more, rather than less, brand loyalty? The findings of this study strongly indicate that this assumption can be reversed, and we suggest that the indirect impacts of global terrorism might be the reason. Further, it is important to show that negative security shifts in the US have been greater than a general increase in malaise in the global markets where the brands are sold. Again, the data indicate that movements in brand value and ranking appear to respond to more than such a malaise. Having started as simple "identification tools", brand names have become a critical part of a company's strategy. Academic research has shown that one major historic reason for brand success is the diminished risk perceived by the consumer (Roselius, 1971; Kapferer, 1991; Keller, 1998; Riezebos, 2003). McCarthy (1971) highlights the three primary roles of a brand: it identifies the product and simplifies purchase; it has a projective, symbolic and imaginary function and provides the consumer with a status; and it guarantees quality, protection and risk reduction for the consumer by pointing to its source. For these reasons, companies are willing to consider brands an important asset on their balance sheets or to invest huge amounts of capital to buy them (Laforet and Saunders, 1994). The power of brands is founded on consumers' aversion to uncertainty. For a long time, consumers made their food-buying decisions based only on a product's visual aspect, ignoring its brand name and accepting instead the grocery store owner's opinion as the selection criterion (Boyer, 2002). Later on, producers introduced clearly visible signals that identified their products, and consumers then got used to preferring the signal over the product's visual characteristics (Keller, 1998); that is, the brand became more important than the product itself (Riezebos, 2003). Even at present, perceived risk reduction is the first reason consumers have for choosing a brand, and this guides the evolution of brand management (Kapferer, 2003). When consumers perceive a risk in making a buying decision, they will deploy different strategies for reducing it. Five major risks are considered by consumers: (1) financial risk ("making a bad deal", which increases the importance of the brand compared with the unit price of the product); (2)
physical risk (being harmed by the product, especially food products); (3) technological risk (being disappointed by the product's performance, i.e. the risk of functionality); (4) psychological risk (feeling guilty or irresponsible for giving in to temptation, especially in impulsive decisions, or associating harm or risk with the brand, linked to fear or sadness); and (5) social risk (what peers will say or think about choices; the brand is therefore a sign of possession for a community, but also a sign of adherence, of patriotism or of association with, or distance from, particular social issues). The risk-reduction function directly related to the brand has been reinforced by the macro-economic context, especially after 09/11, because a fragile and complex environment is expected to increase the role the brand has to play in reassuring consumers' buying decisions. Nevertheless, we later argue that the capacity of brands to link producers and consumers has been rudely challenged. There have been drastic changes in consumption habits in some markets, such as the accelerated rise of hard discounters in Europe (with a new approach to the quality-price relationship and a weakening of the brand), of low-cost airlines, and of non-brand textiles from low-cost production. Companies have reacted to these new challenges. This new environment has notably changed the way in which big international companies conceive of their brands. Brand guarantee and brand image are shelter points for consumers: normally, the higher the risk, the more helpful the brand. Consequently, brands have learned a different way of communicating (e.g. emphasizing safety themes, as carmakers already do), to change their relationship with the environment or with the Third World (e.g. Nike reconsidering its production policy in order to improve its brand image) but also with globalization (e.g. being more respectful of local brands, as Nestle is). Brands have also started working on ethical matters (The Body Shop's cosmetics products), fair trade (Malongo coffee) or social responsibility. But one of the major facets of this adaptation of brands and firms to the new situation is the emergence and acceleration of megabrands within companies.
Megabrands?: Traditionally, choosing brand strategies is a focal point for companies, whether they are multinational groups or local companies (Schuiling and Kapferer, 2004). Supposing that a firm has different sources of competition, one of the strategic issues is whether it uses one or several brands. Strebinger (2002) states that one of the most critical problems in branding relates to the management of a mono- or multi-brand system, while Riezebos (2003) questions whether it is feasible to have just a single-brand strategy in the company, with a prime focus on one brand and then developing additional brands from it. The historical development of branding includes some deeply contradictory factors, as shown in Figure 1. This figure visualizes and conceptualizes a company's willingness and need to have numerous products able to meet different customers' demands as appropriately as possible, in order to assure its expansion and international development, that is, to counteract the risk of being a single-brand company. Likewise, there is a need to limit the number of brands because of a second risk: that of brand overexposure or overuse, including the financial risk of dispersing the investment. The first risk leads companies wishing to develop to buy or launch more brands in order to enter markets, segments or customers inaccessible with only one brand. This may be an "inflationist" process in terms of markets, as it leads to the creation of many brands. The second risk takes the same companies in the opposite direction, trying to limit the number of brands in order to maximize investment per brand, thereby making the brands stronger and covering more territory. But this process is intrinsically schizophrenic and raises the question of the strategic equilibrium of branding (Riezebos, 2003). Strategic choices may become brand choices, choices of brand organization or choices about the kind of relationship between brands that a company wants to maintain. One of the purposes of these choices is to maximize the equity of its different brands. As a way to escape from this process, many companies turn to megabranding. At its origins, the evolution of the brand universe towards megabrands comes from big corporations that discovered, in the early 1980s, that they could create value by capitalizing on the transnational concepts carried in supranational brands so as to attain maximum return on investment (Kapferer, 2000). This new strategy reduced internal brand management costs and the costs of launching new innovative products. This simple idea has allowed many companies to focus on the strongest brands, on brands with high growth potential or on highly internationalized brands, and to abandon or minimize all others.
Indeed, at the beginning, economic reasons were the main inspiration for this rationalization process: first of all, concentrating all human and economic resources on a few brands and, especially, cutting advertising costs related to the launch and maintenance of multiple brands. The megabrand concept, thus, is a core concern for most leading transnational firms because, as the competitive environment becomes more and more complex, with a high level of risks of every nature, companies focus on megabrand strategy and attempt to assure their expansion and international development. In the early 1990s many companies informed the market of their intention to reduce their brand numbers: the most extreme case was that of Unilever, which planned to reduce from 1,600 down to 400 brands in the 2000-2004 period. Anthony Simon[2], President of Unilever-BestFoods marketing, underlined that "Unilever's objective is to reduce the number of brands in order to make them stronger. Four strategies support this decision: category, segment, channel and geography". In a megabrand strategy, a brand name may be used for horizontal extensions (inside the same price layer, common for mass-consumption products) or vertical extensions (in different price layers, common for durable goods). This strategy can be very successful; a well-developed brand can provide a sustainable competitive advantage. To ensure continuous success, the operation of a megabrand strategy demands permanent innovation, strong R&D investment, a communication style that is hard to imitate and a brand image based not on the product but on associations and perceptions. Megabrand management raises the focus of marketing to a superior, strategic decision-making level (Baldinger, 1990; Trinquecoste, 1999), as it implicitly involves focusing on the whole company instead of on individual brands (Riezebos, 2003). Both Juga (1999) and Reynaud (2001) show that, by displacing competition to this superior level, competitive advantages become harder to understand (less tangible) and to imitate. The increasing recognition of brands as a source of sustainable competitive advantage stresses the importance of conceptual models of organizational brand strategies (Louro and Cunha, 2001). Therefore, our research goal is to explore the megabranding field and to evaluate its strategic dimension as a new, more complex and durable source of competitive advantage in times of international adversity and the challenges of 09/11-type terrorism.
Research methodology: We have chosen to analyze the evolution of the value of megabrands over a five-year period. The sample consists of those brands ranked in "The 100 best global brands", published annually by Interbrand corporation for Business Week magazine. Interbrand defined seven criteria (see Appendix) which evaluate brands much in the way analysts value other assets, i.e. on the basis of how much they are likely to earn in the future. To qualify for the list, each brand must have a value greater than $1 billion, derive about a third of its earnings outside its home country, and have publicly available marketing and financial data. For these reasons, Interbrand specifies that such heavyweights as Visa, Wal-Mart, Mars or CNN are eliminated from the rankings. Only brands are taken into account (and not parent companies such as Procter and Gamble), and airlines are not ranked because it is too hard to separate their brand impact on sales from factors such as routes and schedules. Despite its limits, this ranking provides a global vision of the value of the main megabrands. It has gained importance over the past years as a main reference for brand strategy. In addition, the assessment and evaluation method has not changed over the past five years. The rankings we refer to were published on the following dates: 6 August 2001, 5 August 2002, 4 August 2003, 22 July 2004 and 21 July 2005. We present these five rankings in the Appendix. The first ranking refers to the period prior to the 09/11 events. We have, at the same time, conducted in-depth research into the question of whether other factors may be responsible for the results we have found. Charts that summarize these findings are also presented in the Appendix and, while it is certainly impossible to be exhaustive, allow us to exclude any major movements, evolutions, malaises or crises that could have produced the effects found (covering empirical research into size, in trend and relative to industry; profitability, in trend and relative to others in the industry; industry stage of life cycle; leverage, i.e. how vulnerable firms are to taking risks; country of origin, to observe movements including characteristics such as access to capital, human resources, competition, and an index of insecurity; movements in the scope of megabrands (global reach, horizontal and vertical branding); or changes in the type of customers (e.g. services, package goods, durables, business); the data chosen for illustration only cover the main developments). By including such variables in the analysis, we strive to make it possible to determine that the risk hypothesis can be supported after controlling for other factors that lead to success, identifying other general factors that would lead to shifts in megabrand positioning over time. No unexpected developments of this kind were found.
Research results: The top 100 brands
Implications to brand marketing: Our initial assumption for this research was that international corporations adapted their brand marketing to globalization. We began by reviewing megabrand strategies put into effect over three decades, an option chosen by a wide range of companies to secure global, relatively easy and cost-efficient management of brands. We then raised the question of how megabrands evolved over the five years from 2000, with the objective of studying the validity of this strategy through an analysis of the value evolution of the ensemble of megabrands worldwide. The data analysis provides strong empirical findings and raises an important set of questions: the value of the top US brands worldwide declined significantly after 2001, and over the successive rankings of world megabrands, while non-US brands experienced significant expansion over the same period. This evolution is confirmed at all three levels of analysis that we developed: the total of the 100 leading brands, the total of the 20 leading brands, and the comparison between the leading ten US and non-US brands. Why is the value gap more significant in the top 20 brands than in the top ten? Are the second-tier brands of this sample more vulnerable, and if so, why? The further down we move among the top brands in the top 100, the bigger the gap between US and non-US brands becomes, to the benefit of the non-US ones. Are business-cycle trends responsible for this? Are these brands particularly symbolic in terms of nationality and risk perception since 2001 and the rise of global terrorism? Will consumers feel uncomfortable with certain brands since 9/11, and if so, what indications could allow us to understand this phenomenon? Is the dot-com bust responsible for this? With these questions in mind, further analysis provides the following indications. The following tables, one with the top 20 best evolutions and one with the bottom 20 worst evolutions of brands, refer only to brands for which data are available for all five years. The findings indicate that, among the top 20 "best evolution" international megabrands, only eight brands are US American and 12 are non-US; that the two best performers by value are non-US (Samsung and Louis Vuitton); that PepsiCo is the top US brand (interestingly, it is the leading one in the US, while in terms of brand name its competitor Coca-Cola is part of the bottom 20 brands, though still the best known); and that the second- and third-best American megabrands are Dell and Apple, while those one could consider best known, such as Microsoft and Oracle, are in the bottom 20. Does this mean that demand remained constant but a strong US image made them fall? Due to the diversity of products and sectors represented, we believe that the dot-com bubble, being highly sector-dependent, cannot be causal, or solely causal, to the megabrand evolutions that we note. Also, currency fluctuations in that time period would rather imply opposite effects. If, thus, the evolution of megabrand value over this five-year period is linked to brand nationality, in this case US or non-US origin, this would imply that corporations need to invest in megabrands emanating from different regions. If one considers that US brands may be more sensitive than non-US brands to consumers' risk perceptions arising from global terrorism, and that this terrorism could be a causal factor, because the data change after 2001, then the managerial objective is immunity to the consequences of such events.
Given the crucial significance of such a cause for strategy, we provide some basis for understanding and, potentially, for resilience.
The perception of threat from 09/11 terrorism as causal factor?: Alexander et al. (1979, p. 4) define terrorism as "the systematic threat or use of violence to attain a political goal or communicate a political message through fear, coercion, or intimidation of particular persons or the general public". We can assume that the citizen and consumer in this general public is, therefore, exposed to stress scenarios that differ from typical scenarios and that may alter his or her purchasing behavior. It is widely admitted that, with 9/11/2001, terrorism has become more global (Schneckener, 2002). 9/11-type terrorism is characterized by a proximity to western civilization, and its psychological impact is reinforced through widespread media coverage. Contemporary terrorist activities share a number of common features which are inter-related and of a recently resurrected nature: these features include the increasing link of terrorist activity to a quasi-legitimization on the basis of allegedly religious motivation, modern business-like leadership structures, asymmetric warfare, and the use of the victim mostly as part of a communication strategy. The objectives of terrorists are to convey a triple message: (1) that government is not capable of guaranteeing the security of a society or citizen, nor service or product safety; (2) that corporations, investors and travelers are safe nowhere, and that the symbols of a country, culture and society that they convey are potential targets for any type of attack; and (3) that any measure taken against terrorism is insufficient by nature. These messages have a powerful impact on many. Psychological effects (defined above as any of the extremes, from feeling guilty or irresponsible for giving in to temptation, especially in impulsive decisions, to associating harm or risk with the brand, linked to fear or sadness) instill uncertainty into the economy, and have been found elsewhere to significantly affect the economic, organizational and governmental environment (Suder, 2004). Given this, we adduce that consumer behavior and corporate strategy may be affected. For instance, just as in times of war, the consumer may adopt a "stocking/storing" behavior for particular types of food and medicines if he or she perceives a terror-based threat. Therefore, we hypothesized elsewhere that a firm's performance under uncertainty and the risk of terrorism will be a function of its ability to reduce its vulnerability to terrorist acts through risk analysis and assessment, through shortened supply lines, and through a decreased need for economic redundancy (Suder, 2006). This is even more so in the case of 9/11-type terrorism, a terrorism that has globalized and that hits the global activities of firms in addition to those at the location of a strike. In this section, we therefore focus on the question of whether the top management of megabrands should take into account a corporation's vulnerability to the terrorist threat felt by consumers. If a brand has national symbolism, like Coca-Cola, then its goods or services are exposed to the threat or acts of terrorism. Will the consumer turn away from the brand, or in fact increase his or her faith in it? Our study could be interpreted as showing a possible link, on a quantitative basis, through a comparative approach. In this case, is a megabrand strategy still a reasonable option? To be deemed reliable, enterprises must be able to keep their brands resilient in the event of a catastrophe.
The US airlines whose hijacked planes were crashed into the WTC in New York are the first illustration of the psychological impact on brands related to the threat of terrorism. The symbolic relation to the events, though entirely involuntary, had dramatic consequences for both American and United Airlines. Also, a tendency of clients to fly shorter distances, on separate flights and with non-nationally related carriers such as low-cost airlines has emerged since 9/11 (MacBain, 2003; Tourism Queensland, 2006; among others). Markets melt down or freeze with great speed in the case of threats or terrorist acts, while other markets can rise because they are considered unrelated to the threat (Suder and Czinkota, 2007). Another example is the reluctance of Londoners to use public transport after the double attacks of summer 2005; the bicycle market, however, boomed almost immediately. The terrible human costs of terrorism are clearly unacceptable to any logic or ethics. Given that terrorism has existed in various forms throughout history, people, companies and industry now need to be knowledgeable about 9/11-type threats and their impacts, and to adapt.
International terrorism and brand marketing: a conceptual framework: International terrorism adds an important determinant to the definition of a firm's brand strategy. As an uncontrollable force in its external environment, terrorism events may lead to direct (mainly physical) or indirect (for instance, consumer behavior and brand perception) disruptions. In the preliminary phase of threatened violence, or in the following phase of an attack's aftermath (for details of this classification, see Suder, 2004), consumer demand for the firm's goods and services may alter but does not always decline (e.g. the demand for security equipment and services increases); any related disruption to the value chain perceivable by the consumer, such as supply difficulties for needed inputs, resources and services, or government policies and laws enacted to deal with terrorism, also alters the conduct of brand strategy. Macroeconomic phenomena and shifts in international relations also modify behaviors. Media play an important role in the intensification of the related psychological effects. For instance, the political differences between some European states and the USA in terms of the conduct of a war against terrorism, in particular concerning the invasion of Iraq, significantly modified consumer behavior in the USA towards French and German brands (such as Roquefort cheese, Perrier water, ... and even French fries, solely on the basis of their denomination). In these different dimensions, the threat, act and aftermath of terrorism affect ways of life, perceptions, the consumption habits of millions of people all around the world, and the company-client relationship. The responsiveness of consumers to a global threat is particularly high because it is intangible, close by, and may strike anyone anywhere, in an expression of the "flatness" of the world. The incalculable uncertainty becomes a certainty that terror events happen, and society and business adapt. The only certainty is that events will always be symbolic, whether that applies to locations, victims or the relation to the "hated" society. In this society, anyone and anything can potentially be identified with the victims of attacks, whether human or object, whether a site, a product or a group. We therefore assume that this is so for brands, in their dependence on perceptions and image. For a corporation, brand strategy and the administration of price shifts, communications, distribution strategies, buyers and suppliers, logistics, import and export are directly exposed to cultural issues, image responsibilities, and the consequences of actions. For a consumer, brands have the particular capacity to link producers and consumers who trust in a specific set of quality, service and security "guarantees" linked psychologically to a particular brand. Brand marketing is symbolic and relies on confidence, quite the opposite of fear or panic. The consumer will hence turn to (or away from) brands in proportion to the strength with which the brand relates to the threat, exposing brand strategy to risks unrelated to the brands' actual performance.
A study of megabrands as risk-savers: A brand is by definition the symbol of an object or a service, as well as a model of the consumption society (Keller, 1998). One major weakness of the megabrand approach is that it exposes the company to a major risk: a single brand, a single image. Needless to say, if a problem occurs with this brand, the whole company's stability is at stake. But consumers are also citizens, and so the brand may become a broader social and economic battleground among companies with respect to consumers. For example, brands also represent an important political space where virulent political battles can be fought (Semprini, 1992). Some movements embody or oppose lifestyles symbolised by brands and their influence, sometimes in a very radical way; the consumer society thus becomes represented by companies and their brands (Klein, 2002). This contesting opposition must be taken into consideration when developing brands and their territories, in order to avoid the vulnerability and extreme exposure of a single-brand strategy. Various authors have already tackled this notion under the theme of brand capital or brand equity (Farquhar, 1989; Baldinger, 1990; Kapferer, 1991; Aaker, 1992; Keller, 1998). For Aaker (1992), brand capital is a unit consisting of the name and symbolic meaning of a brand that can add to or decrease the value of a product or service, and that delivers value to the client and to the firm. An appropriate strategy thus reinforces the value of brands, while an inappropriate strategy diminishes it. On the basis of our findings and the nexus that one may establish with 9/11, we suggest that megabrand strategy allows corporations to obtain critical size (especially vis-a-vis the distribution channels), to face the growth limits of existing brands, and to share, soften and pool certain costs (research, industrialization, marketing), although the megabrand-building process is time-dependent and based on a variety of experiences. One can hence suggest that, if here lies the causal link, post-9/11 megabrands allow for better control of risks by the company and increase the value of brands more when they are locally or regionally embedded. If megabrand strategy overexposes brands as symbols of a mode of consumption rejected by or associated with terrorism, then megabrand overexposure diminishes the value of brands, by overexposing firms to risks and brand devaluation and by increasing company vulnerability.
Conclusion: The findings of this research imply that brand strategy is highly dependent on external factors and needs to be adapted to them if competitive advantages are not to erode and shift considerably. While the causal link to 9/11 terrorism cannot be clearly established, it does appear to be one of the sensible explanations or co-factors for the dramatic evolutions that were found. These findings aim to make a contribution to the understanding of megabrand strategy in mature globalization. It appears from our research that brand nationality, and thus brand associations with the various effects of terrorism (victimization or identification), may define the behavior of consumers and have an impact on brand value and ranking. For a future that may have to cope with 9/11-type terrorism, megabrands (except perhaps for the very strongest ones) may therefore not qualify as the best option for companies that wish to reduce risk and immunize brands and performance. If this is confirmed, firms are well advised to invest in megabrands anchored in regions, through a transnational rather than a global strategy. Clearly, further research into the potential causalities is needed: whether the cause is terrorism, business cycles, currency issues, the bubble effect, all of these combined or none of them, international business scholars and practitioners are advised to study these links together, in each sector and market, so as to improve understanding and the capability to respond appropriately to the evolution of megabrands in ranking and value since 2001.
|
Future employment selection methods: evaluating social networking web sites
|
[
"Selection",
"Recruitment",
"Social networks",
"Internet"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Within the past few years, the phenomenon of social networking web sites (SNWs) on the internet has exploded into the mainstream. Further, this online information has begun to be used for purposes beyond its intended use. Owing to the vast amount of personal information on these web sites, employers have begun to tap into this information as a source of applicant data in an effort to improve hiring decisions. This study evaluates the use of SNWs in employment selection. Specifically, can trained judges consistently and accurately assess important organizational characteristics such as personality, intelligence, and performance using only a target's SNW information? In addition, the use of this information may lead to discrimination against applicants, given the wide range of available personal information, such as gender, race, age, religion, and disability status, that is otherwise illegal to use when making employment decisions.
Method: Participants and procedures
Results: The coefficient alphas for the judge ratings of personality were calculated for each of the six ratees. These six alphas were then averaged for each of the big-five traits to estimate the overall internal consistency of the scales. In order to assess H1, interrater agreement in the form of average-measures intraclass correlation coefficients (ICCs) for the judge ratings is reported in Table I. The scaled scores for the big-five personality traits and the single-item scores for IQ and performance were evaluated for interrater agreement. The 378 total ratings (63 raters × 6 ratings each) were used to calculate the ICCs. The ICC values were all adequate, ranging from 0.93 for extroversion to 0.99 for conscientiousness and performance. Since ICCs are expected to be higher with a larger number of raters, Table II also includes the number of raters for each characteristic that would be necessary to achieve a 0.50 ICC value. Although there are no firm guidelines for level of agreement, 0.50 was used in the analyses as it should provide a minimum level of acceptable agreement across judges. The Spearman-Brown prophecy formula was used to determine how many raters would be required to obtain an adequate (0.50) ICC value. Based on the 63 raters in this study, it was determined that between two (for conscientiousness and performance) and six (for emotional stability and extroversion) raters would be required to obtain a satisfactory level of interrater agreement. H2 was evaluated by conducting t-tests on score means in order to determine whether or not the means are statistically different from one another. In order to determine which means to test, the true scores (self-reported big-five personality scores, intelligence scores, and GPA) of the six rated subjects were evaluated. For each of the seven characteristics, the individual with the highest true score and the individual with the lowest true score were selected for analysis. Judge mean ratings for these subjects were then compared to determine whether or not raters are able to distinguish individuals high on a characteristic from those low on the same characteristic. This method also allows for evaluation of the direction of the relationship, such that (in addition to evaluating mean differences) the judge rating of the subject with the higher true score should be higher than the judge rating of the subject with the lower true score. Results demonstrate that the mean judge ratings for the subjects highest on the seven characteristics were statistically different from those for the subjects lowest on those characteristics. In addition, with the exception of openness to experience, the judges' mean ratings were higher for those with the highest true score, indicating the ability of judges to distinguish the traits of conscientiousness, emotional stability, agreeableness, extroversion, intelligence, and performance by evaluating SNWs. Post hoc analyses were conducted to determine the impact of intelligence and personality on judge consistency and accuracy. Prior research has demonstrated mixed findings related to the impact of rater personality traits on rating accuracy. Ambady et al. (1995) found that less sociable (extroverted) raters were more accurate, while Lippa and Dietz (2000) found that only openness indicated more accurate raters. In addition, narcissistic raters have been found to be less accurate (John and Robbins, 1994), which may be relevant to the big-five since narcissism relates strongly to neuroticism.
Finally, intelligence has also been reported to relate positively to rater accuracy (Lippa and Dietz, 2000). In the current study, the 63 judges were asked to take the same intelligence and personality tests as the SNW subjects. The analyses conducted above were then re-evaluated after splitting the judges into high and low groups on intelligence and on each of the big-five traits. Results show no difference in interrater agreement based on these characteristics. However, judges who were more intelligent and more emotionally stable were shown to be more accurate in their judgments. More specifically, when the raters were split into high and low groups based on intelligence scores (the 31 highest scores versus the 31 lowest scores), the high-intelligence group differentiated significantly more accurately between ratees high and low on conscientiousness, emotional stability, openness, and performance. For example, with all 63 raters combined, the difference between rater means for conscientiousness in Table II is 0.38 (8.03 for the high ratee score and 7.65 for the low ratee score). When assessing high- and low-intelligence raters independently, the mean difference for the 31 high-intelligence raters is 0.61, but only 0.14 for the 31 low-intelligence raters. Thus, more intelligent raters seem to be more capable of assessing this trait than less intelligent raters. Similarly, raters who are the most emotionally stable also rate more accurately on conscientiousness, emotional stability, openness, and performance. For example, the mean difference across raters for high and low ratee conscientiousness is again 0.38, but is 0.73 for the 31 raters who are the most emotionally stable and 0.03 for the 31 raters who are the least emotionally stable. These results indicate the potential need for researchers to consider intelligence and emotional stability when selecting individuals who will serve as raters of characteristics such as personality.
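The interrater-agreement computations described above (average-measures ICCs and the Spearman-Brown prophecy estimate of how many raters are needed to reach 0.50) can be illustrated with a short sketch. This is not the authors' code: the ratings matrix below is synthetic, only one trait is simulated, and the two-way random-effects ICC formulation is an assumption inferred from the description of average-measures ICCs.

```python
import numpy as np

def icc_two_way_random(ratings):
    """Two-way random-effects ICC from an (n_targets x k_raters) matrix.

    Returns ICC(2,1) (single rater) and ICC(2,k) (average of k raters).
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)          # per-target (ratee) means
    col_means = x.mean(axis=0)          # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    icc_single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    icc_average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return icc_single, icc_average

def raters_needed(icc_single, target=0.50):
    """Spearman-Brown prophecy: raters required to reach a target reliability."""
    return target * (1 - icc_single) / (icc_single * (1 - target))

# Hypothetical example: 6 ratees scored by 63 judges on one trait
rng = np.random.default_rng(0)
true_scores = rng.normal(size=(6, 1))
ratings = true_scores + rng.normal(scale=0.8, size=(6, 63))

icc1, icck = icc_two_way_random(ratings)
print(f"ICC(2,1) = {icc1:.2f}, ICC(2,63) = {icck:.2f}")
print(f"raters needed for 0.50 reliability: {np.ceil(raters_needed(icc1)):.0f}")
```

Running this on real judge-by-ratee matrices, one per characteristic, would yield the kind of values the study reports in Tables I and II.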
Discussion: Based on the large volume of personal information available on SNWs, judges' ratings of the big-five dimensions of personality, intelligence, and global performance were consistent across the 63 raters in this study, demonstrating adequate internal consistency reliability and interrater agreement. In addition, the trained raters were able to accurately distinguish between individuals who scored high and individuals who scored low on four of the big-five personality traits, intelligence, and performance, providing initial evidence that raters can accurately determine these organizationally relevant traits by viewing SNW information. As stated earlier, other-rated personality has been shown to predict job performance. Considering that other methods of other-reported personality are unlikely to be viable in an employment selection context, SNW ratings of personality may be a practical alternative. Owing to the theoretical and methodological differences between self-reported and other-rated personality, it is likely that ratings of personality via SNWs will offer incremental prediction of job performance beyond the predominant self-report approach. In addition, the differences in context between SNWs and a job interview (i.e. socially desirable responding in the job interview as well as the unique nature of information contained in SNWs) should similarly allow for unique prediction of job performance beyond what can be evaluated through personality assessment in the employment interview. This approach may be particularly valuable since these assessments take only a fraction of the time involved with other selection methods. This study is not without limitations. Although the analyses testing the consistency of SNW ratings are based on 378 judge ratings from 63 raters, the analyses testing rater accuracy were conducted by testing for significant differences between the highest and lowest scorer on the seven characteristics for only six subjects. Future research should assess accuracy over a larger sample of subjects. We hope that the results of this preliminary study will not be used by organizations to justify their use of SNWs in employment selection. Without further validation in a variety of studies, with larger samples and in a variety of organizational contexts, caution should be used when interpreting the implications of this study. This is particularly true given the potential for employer legal liability due to the vast amount of personal information available on SNWs. Information regarding gender, race, age, disabilities, and other criteria that should not be used when making hiring decisions will almost certainly, consciously or not, influence who gets hired. Even if this information does not bias the hiring decision, disparate impact issues may still exist. Future research should also examine the potential issues of adverse impact and potentially illegal information in hiring decisions that draw on personal information from SNWs. In addition, research should be conducted to compare assessments of SNWs with other employment selection methods, such as personality assessment, intelligence testing, and employment interviews. Based on the relative absence of research evidence in this newly developing area, particularly regarding the potential for adverse impact and the lack of validity evidence, we believe the most important practical implication of this paper is that organizations should use SNWs with these issues in mind.
Organizational representatives assessing SNWs should ask themselves two important questions. First, is the organization assessing (or could it be perceived as assessing) information that could lead to discrimination against a legally protected group? Second, is the specific social networking information used to help make a hiring decision valid in determining who will perform better on the job? The approach used in this paper of assessing personality traits, intelligence, and general performance begins to provide answers to these questions.
|
What factors influence firm perceptions of labour market constraints to growth in the MENA region?
|
[
"MENA region",
"Labour regulations",
"Labour skill shortages",
"Labour market constraints",
"Bivariate probit model"
] |
Summarize the following paper into structured abstract.
1. Introduction: Stringent labour market constraints are expected to pose serious obstacles to firm performance and economic growth. A wide range of literature finds that rigid labour regulations would induce lower labour force participation and higher unemployment rates (e.g. Botero et al., 2004; Besley and Burgess, 2004; Amin, 2009; Djankov and Ramalho, 2009), and would prevent labour markets from being efficient leading to losses in productivity (e.g. Kaplan, 2009). Another strand of literature inspects the problem of labour skill shortages or "skill deficits", which can be defined as the divergence between the educational attainments of workers and the skill requirements of jobs (Kiker et al., 1997). This literature regularly indicates that accentuated labour skill shortages impose significant restrictions on employment creation and economic growth (e.g. Pissarides and Veganzones-Varoudakis, 2007; Bhattacharya and Wolde, 2012), and could eventually inflict severe impacts on economic performance and labour market outcomes (e.g. Allen and van der Velden, 2001).
2. Review of related literature: 2.1. Labour regulations
3. Some considerations about data: The empirical analysis is carried out for the perceived levels of labour market constraints as reported by the respondents (e.g. senior managers, business owners) through the World Bank's Enterprise Surveys database. Pierre and Scarpetta (2004, 2006) examine the relationship between the perceived and actual stringency of labour regulations using national labour protection indices (i.e. de jure labour laws). They find that the reported perceptions are closely related to the actual levels of labour regulations' constraints. Specifically, countries with higher national indices on the stringency of labour regulations are associated with higher proportions of firms perceiving labour regulations as being significant constraints.
4. Data description and variables: We use a data set sourced from the World Bank's Enterprise Surveys database. This database is a comprehensive source of firm-level data on emerging and developing economies. It covers firms operating in the manufacturing, service, and other sectors, and it contains information on various aspects of the business environment, such as access to finance, corruption, workforce characteristics, innovation and technology, and trade. One of the many advantages of using data from these surveys is that the questions are identical for firms across all countries. The basic data set used in this paper covers 5,052 firms located in eight developing Arab countries of the MENA region: Algeria, Egypt, Jordan, Lebanon, Morocco, Oman, Syria, and Yemen.
5. Empirical specification: Consider a given firm j (j=1, ..., J) belonging to sector k (k=1, ..., K) and located in country c (c=1, ..., C). Firm perception levels of constraints related to labour regulations and those related to labour skill shortages are depicted through the latent variables (Equation 1) and (Equation 2), respectively. These latent variables are not observed. However, we observe the perceptions of firms through dichotomous responses on whether labour regulations and labour skill shortages do or do not pose major/severe obstacles on firm operations and development. Let R
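The latent-variable equations referred to here do not survive in this extract (the text breaks off at "Let R"). As a hedged reconstruction, a bivariate probit of the kind described is usually written in the following generic form; the covariate vector x and the coefficient labels are assumptions for illustration, not the paper's own notation:

```latex
\begin{aligned}
R^{*}_{jkc} &= \mathbf{x}'_{jkc}\beta_{R} + \varepsilon^{R}_{jkc},
\qquad R_{jkc} = \mathbf{1}\left[R^{*}_{jkc} > 0\right],\\
S^{*}_{jkc} &= \mathbf{x}'_{jkc}\beta_{S} + \varepsilon^{S}_{jkc},
\qquad S_{jkc} = \mathbf{1}\left[S^{*}_{jkc} > 0\right],\\
\begin{pmatrix}\varepsilon^{R}_{jkc} \\ \varepsilon^{S}_{jkc}\end{pmatrix}
&\sim \mathcal{N}\!\left(
\begin{pmatrix}0 \\ 0\end{pmatrix},
\begin{pmatrix}1 & \rho \\ \rho & 1\end{pmatrix}
\right).
\end{aligned}
```

Here R and S denote the observed binary perceptions that labour regulations and labour skill shortages, respectively, pose major or severe obstacles, and rho is the cross-equation error correlation whose significance motivates the bivariate rather than univariate probit estimator discussed next.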
6. Benchmark empirical results: Table III presents the marginal effects from the benchmark bivariate probit estimation carried out for the pooled data set covering existing firms' perceptions of labour market constraints. The Wald test rejects the null hypothesis of zero correlation between the errors in the two labour market constraints' equations and, hence, it indicates that the model should be estimated through the bivariate probit estimator rather than through the univariate probit estimator. The estimated coefficient of correlation between the errors in the two equations is positive and statistically significant at the 1 per cent level. Table III displays the unconditional marginal effects for Pr(R
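As a further illustration of the estimator behind these results, the sketch below writes out the bivariate probit log-likelihood and maximizes it on synthetic data. It is a minimal sketch under stated assumptions, not the study's estimation code: the covariates, sample size, and coefficient values are invented, rho is re-parameterized through tanh to keep it inside (-1, 1), and the published analysis additionally reports marginal effects and a Wald test on rho.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def bivariate_probit_nll(params, X, y1, y2):
    """Negative log-likelihood of a bivariate probit.

    params = [beta1 (k), beta2 (k), atanh(rho)], rho = error correlation.
    """
    k = X.shape[1]
    b1, b2 = params[:k], params[k:2 * k]
    rho = np.tanh(params[-1])            # keep rho inside (-1, 1)
    xb1, xb2 = X @ b1, X @ b2
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1      # map {0,1} -> {-1,+1}
    ll = 0.0
    for a, b, s in zip(q1 * xb1, q2 * xb2, q1 * q2 * rho):
        p = multivariate_normal.cdf([a, b], mean=[0, 0],
                                    cov=[[1.0, s], [s, 1.0]])
        ll += np.log(max(p, 1e-300))     # guard against log(0)
    return -ll

# Hypothetical firm-level data: intercept plus two covariates (e.g. size, exporter)
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(0, 2, n)])
e = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=n)
y1 = (X @ np.array([0.2, 0.5, -0.3]) + e[:, 0] > 0).astype(int)  # labour regulations perceived as obstacle
y2 = (X @ np.array([-0.1, 0.3, 0.4]) + e[:, 1] > 0).astype(int)  # skill shortages perceived as obstacle

res = minimize(bivariate_probit_nll, x0=np.zeros(2 * X.shape[1] + 1),
               args=(X, y1, y2), method="BFGS")
print("estimated rho:", np.tanh(res.x[-1]))
```

A per-observation loop over the bivariate normal CDF keeps the sketch readable; production code would typically vectorize this step or rely on a dedicated bivariate-probit routine.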
7. Empirical results by sector and by country: 7.1. Empirical results by sector
8. Conclusion: Labour market constraints are often identified as being among the main business obstacles facing firm operation and development in the MENA region. Therefore, they naturally feature among the primary items on the labour policy agenda of MENA countries. Understanding the factors influencing the perceived severity of labour market constraints is essential for designing policies aimed at improving labour market conditions and enhancing business environments. This paper examines the implications of firm characteristics, national locations, and sectoral associations for the perceptions of firms located in the MENA region concerning two primary labour market constraints: labour regulations and labour skill shortages. The empirical analysis is carried out using a firm-level data set sourced from the World Bank's Enterprise Surveys database. A bivariate probit estimator is used to account for potential correlation between the errors in the labour regulations equation and the labour skill shortages equation. The empirical results are generated through overall estimations and through comparative cross-country and cross-sector analyses.
|
Shared brands and sustainable competitive advantage in the Brazilian wine sector
|
[
"Marketing strategy",
"Qualitative research",
"Competitive strategy",
"Brands",
"Interviews",
"Geographical indications",
"Sustainable competitive advantage",
"Shared brands",
"Collective brands",
"Sector brands"
] |
Summarize the following paper into structured abstract.
1. Introduction: A brand can be seen as a strategic asset that helps a company to be more competitive. In the same way that companies invest in brands, countries can also be seen as such (Anholt, 2005; Huang and Tsai, 2013; Kotler et al., 2006). In considering the role that the image of a country can play in a buyer's behavior, constructs such as the country's brand, the country's image and the country of origin may be attributes that offer the potential for companies to achieve a sustainable competitive advantage (SCA), both in the internal and external market (Baker and Ballington, 2002; Hakala et al., 2013).
2. Literature review: 2.1 Sustainable competitive advantage
3. Methodology: This study was based on a qualitative approach, which, according to Bauer and Gaskell (2000), aims to understand in detail the beliefs, attitudes, values and motivations underlying people's behavior in specific social contexts (Malhotra, 2006). The research was exploratory, and the field study was conducted through in-depth interviews (Bardin, 2011; Cooper and Schindler, 2014; Sampieri et al., 2010).
4. Results: 4.1 C1 - valuable
5. Conclusions: The proposition that shared brands - geographical indications (GIs), collective brands and sector brands - provide SCA according to the VRIA framework can be confirmed, as they fulfil the four conditions that form the acronym VRIA: valuable, rare, imperfectly imitable/replaceable and association. The value added to the product through the information carried by shared brands facilitates the establishment of a relationship of trust between producer and consumer, and is thus a source of competitive advantage.
|