Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e., text). Using content analysis, researchers can quantify and analyze the presence, meanings, and relationships of certain words, themes, or concepts. As an example, researchers can evaluate language used within a news article to search for bias or partiality. Researchers can then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time surrounding the text.
Sources of data could be interviews, open-ended questions, field research notes, conversations, or literally any occurrence of communicative language (such as books, essays, discussions, newspaper headlines, speeches, media, and historical documents). A single study may analyze various forms of text in its analysis. To analyze the text using content analysis, the text must be coded, or broken down, into manageable categories for analysis (i.e., “codes”). Once the text is coded, the codes can then be further grouped into “code categories” to summarize the data even further.
Three different definitions of content analysis are provided below.
Definition 1: “Any technique for making inferences by systematically and objectively identifying special characteristics of messages.” (from Holsti, 1968)
Definition 2: “An interpretive and naturalistic approach. It is both observational and narrative in nature and relies less on the experimental elements normally associated with scientific research (reliability, validity, and generalizability).” (from Ethnography, Observational Research, and Narrative Inquiry, 1994-2012)
Definition 3: “A research technique for the objective, systematic and quantitative description of the manifest content of communication.” (from Berelson, 1952)
Uses of Content Analysis
Identify the intentions, focus or communication trends of an individual, group or institution
Describe attitudinal and behavioral responses to communications
Determine the psychological or emotional state of persons or groups
Reveal international differences in communication content
Reveal patterns in communication content
Pre-test and improve an intervention or survey prior to launch
Analyze focus group interviews and open-ended questions to complement quantitative data
Types of Content Analysis
There are two general types of content analysis: conceptual analysis and relational analysis. Conceptual analysis determines the existence and frequency of concepts in a text. Relational analysis develops the conceptual analysis further by examining the relationships among concepts in a text. Each type of analysis may lead to different results, conclusions, interpretations and meanings.
Typically people think of conceptual analysis when they think of content analysis. In conceptual analysis, a concept is chosen for examination, and the analysis involves quantifying and counting its presence. The main goal is to examine the occurrence of selected terms in the data. Terms may be explicit or implicit. Explicit terms are easy to identify. Coding of implicit terms is more complicated: you must decide the level of implication and base judgments on a somewhat subjective system (an issue for reliability and validity). Therefore, coding of implicit terms involves using a dictionary, contextual translation rules, or both.
To begin a conceptual content analysis, first identify the research question and choose a sample or samples for analysis. Next, the text must be coded into manageable content categories. This is basically a process of selective reduction. By reducing the text to categories, the researcher can focus on and code for specific words or patterns that inform the research question.
General steps for conducting a conceptual content analysis:
1. Decide the level of analysis: word, word sense, phrase, sentence, themes
2. Decide how many concepts to code for: develop a pre-defined or interactive set of categories or concepts. Decide either: A. to allow flexibility to add categories through the coding process, or B. to stick with the pre-defined set of categories.
Option A allows for the introduction and analysis of new and important material that could have significant implications to one’s research question.
Option B allows the researcher to stay focused and examine the data for specific concepts.
3. Decide whether to code for existence or frequency of a concept. The decision changes the coding process.
When coding for the existence of a concept, the researcher counts a concept only once, no matter how many times it appears, provided it appears at least once in the data.
When coding for the frequency of a concept, the researcher would count the number of times a concept appears in a text.
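The existence/frequency distinction above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original guide; the sample text and concept list are made up.

```python
from collections import Counter

def code_concepts(text, concepts):
    """Code a text for a set of concept words, returning both
    existence (appears at least once) and frequency (total count)."""
    # Crude tokenization: lowercase and strip trailing punctuation.
    words = (w.strip('.,!?";:') for w in text.lower().split())
    counts = Counter(words)
    frequency = {c: counts[c] for c in concepts}
    existence = {c: frequency[c] > 0 for c in concepts}
    return existence, frequency

# Hypothetical text; in a real study the concept list comes from
# the coding scheme developed in steps 1-4.
text = "The plan is dangerous. A dangerous plan harms everyone."
existence, frequency = code_concepts(text, ["dangerous", "plan", "safe"])
```

Existence coding collapses `frequency` to a yes/no judgment, which is why the two approaches can lead to different results from the same text.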
4. Decide on how you will distinguish among concepts:
Should words be coded exactly as they appear, or coded as the same when they appear in different forms? For example, “dangerous” vs. “dangerousness”. The point here is to create coding rules so that these word segments are transparently categorized in a logical fashion. The rules could make all of these word segments fall into the same category, or the rules could be formulated so that the researcher can distinguish these word segments into separate codes.
What level of implication is to be allowed? Words that imply the concept, or words that explicitly state the concept? For example, “dangerous” vs. “the person is scary” vs. “that person could cause harm to me”. These word segments may not merit separate categories, due to the implicit meaning of “dangerous”.
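One way to make such distinctions explicit is a set of translation rules mapping surface forms, explicit or implicit, onto a single code. The ruleset below is hypothetical, and the simple substring matching is a sketch (it would, for instance, double-count if both "dangerous" and "dangerousness" were listed and the longer form appeared in the text):

```python
# Hypothetical translation rules: every surface form listed for a code
# is tallied under that code. Whether "scary" or "cause harm" belongs
# with "dangerous" is a researcher decision the ruleset makes explicit.
CODING_RULES = {
    "DANGER": ["dangerousness", "scary", "cause harm"],
}

def apply_rules(text, rules):
    """Tally occurrences of each code's surface forms in the text."""
    text = text.lower()
    return {code: sum(text.count(form) for form in forms)
            for code, forms in rules.items()}

tally = apply_rules("That person is scary and could cause harm.", CODING_RULES)
```

Writing the rules down before coding begins is what keeps implicit-term judgments consistent across texts and coders.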
5. Develop rules for coding your texts. After the decisions of steps 1-4 are complete, a researcher can begin developing rules for the translation of text into codes. This will keep the coding process organized and consistent. The researcher can code for exactly what he or she wants to code. Validity of the coding process is ensured when the researcher is consistent and coherent in their codes, meaning that they follow their translation rules. In content analysis, abiding by the translation rules is equivalent to validity.
6. Decide what to do with irrelevant information: should this be ignored (e.g. common English words like “the” and “and”), or used to reexamine the coding scheme in the case that it would add to the outcome of coding?
7. Code the text: This can be done by hand or by using software. With software, researchers can input categories and have the coding done automatically, quickly, and efficiently. When coding is done by hand, a researcher can recognize errors far more easily (e.g. typos, misspellings). If using computer coding, the text should be cleaned of errors so that all available data are included. The decision between hand and computer coding is most relevant for implicit information, where category preparation is essential for accurate coding.
8. Analyze your results: Draw conclusions and generalizations where possible. Determine what to do with irrelevant, unwanted, or unused text: reexamine, ignore, or reassess the coding scheme. Interpret results carefully as conceptual content analysis can only quantify the information. Typically, general trends and patterns can be identified.
Relational analysis begins like conceptual analysis, where a concept is chosen for examination. However, the analysis involves exploring the relationships between concepts. Individual concepts are viewed as having no inherent meaning and rather the meaning is a product of the relationships among concepts.
To begin a relational content analysis, first identify a research question and choose a sample or samples for analysis. The research question must be focused so the concept types are not open to interpretation and can be summarized. Next, select the text for analysis carefully: balance having enough information for a thorough analysis against having so much that the coding process becomes too arduous to yield meaningful and worthwhile results.
There are three subcategories of relational analysis to choose from prior to going on to the general steps.
Affect extraction: an emotional evaluation of concepts explicit in a text. A challenge to this method is that emotions can vary across time, populations, and space. However, it could be effective at capturing the emotional and psychological state of the speaker or writer of the text.
Proximity analysis: an evaluation of the co-occurrence of explicit concepts in the text. Text is defined as a string of words called a “window” that is scanned for the co-occurrence of concepts. The result is the creation of a “concept matrix”, or a group of interrelated co-occurring concepts that would suggest an overall meaning.
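A minimal proximity-analysis sketch in Python follows. The window size, the concept list, and the sample sentence are all illustrative; note that concept pairs falling in several overlapping windows are counted once per window, which is one common convention but not the only one.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_matrix(words, concepts, window=5):
    """Slide a fixed-size window over the text and count how often
    pairs of concepts co-occur inside it (a simple concept matrix)."""
    matrix = defaultdict(int)
    concepts = set(concepts)
    for start in range(len(words) - window + 1):
        seen = set(words[start:start + window]) & concepts
        for pair in combinations(sorted(seen), 2):
            matrix[pair] += 1
    return dict(matrix)

words = "jobs growth means jobs and wages while inflation erodes wages".split()
matrix = cooccurrence_matrix(words, ["jobs", "wages", "inflation"], window=5)
```

The resulting matrix of co-occurring pairs is the raw material for the "overall meaning" interpretation described above, and for the cognitive-mapping visualizations discussed next.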
Cognitive mapping: a visualization technique for either affect extraction or proximity analysis. Cognitive mapping attempts to create a model of the overall meaning of the text such as a graphic map that represents the relationships between concepts.
General steps for conducting a relational content analysis:
1. Determine the type of analysis: Once the sample has been selected, the researcher needs to determine what types of relationships to examine and the level of analysis: word, word sense, phrase, sentence, themes.
2. Reduce the text to categories and code for words or patterns. A researcher can code for existence of meanings or words.
3. Explore the relationships between concepts: once the words are coded, the text can be analyzed for the following:
Strength of relationship: degree to which two or more concepts are related.
Sign of relationship: are concepts positively or negatively related to each other?
Direction of relationship: the types of relationship that categories exhibit. For example, “X implies Y” or “X occurs before Y” or “if X then Y” or if X is the primary motivator of Y.
4. Code the relationships: a difference between conceptual and relational analysis is that the statements or relationships between concepts are coded.
5. Perform statistical analyses: explore differences or look for relationships among the variables identified during coding.
6. Map out representations: such as decision mapping and mental models.
Reliability and Validity
Reliability: Because researchers are human, coding errors can never be eliminated, only minimized. Generally, 80% agreement is considered an acceptable level of reliability. Three criteria comprise the reliability of a content analysis:
Stability: the tendency for coders to consistently re-code the same data in the same way over a period of time.
Reproducibility: the tendency for a group of coders to classify category membership in the same way.
Accuracy: extent to which the classification of text corresponds to a standard or norm statistically.
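As a rough illustration of the 80% benchmark, simple percent agreement between two coders can be computed as below. This is a sketch with made-up codes and units; chance-corrected statistics such as Cohen's kappa or Krippendorff's alpha are stricter and widely preferred.

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of units that two coders assigned to the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes assigned by two coders to the same ten text units.
a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
b = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]
agreement = percent_agreement(a, b)
```

Here the coders agree on 8 of 10 units, exactly at the 80% threshold; disagreements would then prompt a review of the coding rules.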
Validity: Three criteria comprise the validity of a content analysis:
Closeness of categories: this can be achieved by utilizing multiple classifiers to arrive at an agreed upon definition of each specific category. Using multiple classifiers, a concept category that may be an explicit variable can be broadened to include synonyms or implicit variables.
Conclusions: What level of implication is allowable? Do conclusions correctly follow the data? Are results explainable by other phenomena? This becomes especially problematic when using computer software for analysis and distinguishing between synonyms. For example, the word “mine,” variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. Software can obtain an accurate count of that word’s occurrence and frequency, but not be able to produce an accurate accounting of the meaning inherent in each particular usage. This problem could throw off one’s results and make any conclusion invalid.
Generalizability of the results to a theory: dependent on the clear definitions of concept categories, how they are determined and how reliable they are at measuring the idea one is seeking to measure. Generalizability parallels reliability as much of it depends on the three criteria for reliability.
Advantages of Content Analysis
Directly examines communication using text
Allows for both qualitative and quantitative analysis
Provides valuable historical and cultural insights over time
Allows a closeness to data
Coded form of the text can be statistically analyzed
Unobtrusive means of analyzing interactions
Provides insight into complex models of human thought and language use
When done well, is considered a relatively “exact” research method
Is a readily understood and inexpensive research method
A more powerful tool when combined with other research methods such as interviews, observation, and use of archival records. It is very useful for analyzing historical material, especially for documenting trends over time.
Disadvantages of Content Analysis
Can be extremely time consuming
Is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation
Is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study
Is inherently reductive, particularly when dealing with complex texts
Tends too often to simply consist of word counts
Often disregards the context that produced the text, as well as the state of things after the text is produced
Can be difficult to automate or computerize
Textbooks & Chapters
Berelson, Bernard. Content Analysis in Communication Research. New York: Free Press, 1952.
Busha, Charles H. and Stephen P. Harter. Research Methods in Librarianship: Techniques and Interpretation. New York: Academic Press, 1980.
de Sola Pool, Ithiel. Trends in Content Analysis. Urbana: University of Illinois Press, 1959.
Krippendorff, Klaus. Content Analysis: An Introduction to its Methodology. Beverly Hills: Sage Publications, 1980.
Fielding, NG & Lee, RM. Using Computers in Qualitative Research. SAGE Publications, 1991. (Refer to Chapter by Seidel, J. ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’.)
Hsieh HF & Shannon SE. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research. 15(9): 1277-1288.
Elo S, Kaarianinen M, Kanste O, Polkki R, Utriainen K, & Kyngas H. (2014). Qualitative Content Analysis: A focus on trustworthiness. Sage Open. 4:1-10.
Abroms LC, Padmanabhan N, Thaweethai L, & Phillips T. (2011). iPhone Apps for Smoking Cessation: A content analysis. American Journal of Preventive Medicine. 40(3):279-285.
Ullstrom S, Sachs MA, Hansson J, Ovretveit J, & Brommels M. (2014). Suffering in Silence: a qualitative study of second victims of adverse events. British Medical Journal, Quality & Safety Issue. 23:325-331.
Owen P. (2012). Portrayals of Schizophrenia by Entertainment Media: A Content Analysis of Contemporary Movies. Psychiatric Services. 63:655-659.
Choosing whether to conduct a content analysis by hand or by using computer software can be difficult. Refer to ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’ listed above in “Textbooks and Chapters” for a discussion of the issue.
QSR NVivo: http://www.qsrinternational.com/products.aspx
R- RQDA package: http://rqda.r-forge.r-project.org/
Rolly Constable, Marla Cowell, Sarita Zornek Crawford, David Golden, Jake Hartvigsen, Kathryn Morgan, Anne Mudgett, Kris Parrish, Laura Thomas, Erika Yolanda Thompson, Rosie Turner, and Mike Palmquist. (1994-2012). Ethnography, Observational Research, and Narrative Inquiry. Writing@CSU . Colorado State University. Available at: https://writing.colostate.edu/guides/guide.cfm?guideid=63 .
Michael Palmquist's introduction to content analysis is the main resource on content analysis on the Web. It is comprehensive yet succinct, and includes examples and an annotated bibliography. The information contained in the narrative above draws heavily from, and summarizes, this excellent resource, streamlined here for doctoral students and junior researchers in epidemiology.
At Columbia University Mailman School of Public Health, more detailed training is available through the Department of Sociomedical Sciences- P8785 Qualitative Research Methods.
Content Analysis | Guide, Methods & Examples
Published on July 18, 2019 by Amy Luo. Revised on June 22, 2023.
Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:
- Books, newspapers and magazines
- Speeches and interviews
- Web content and social media posts
- Photographs and films
Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding). In both types, you categorize or “code” words, themes, and concepts within the texts and then analyze the results.
What Is Content Analysis Used For?
Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyze.
Content analysis can be used to quantify the occurrence of certain words, phrases, subjects or concepts in a set of historical or contemporary texts.
Quantitative content analysis example
To research the importance of employment issues in political campaigns, you could analyze campaign speeches for the frequency of terms such as unemployment, jobs, and work, and use statistical analysis to find differences over time or between candidates.
In addition, content analysis can be used to make qualitative inferences by analyzing the meaning and semantic relationship of words and concepts.
Qualitative content analysis example
To gain a more qualitative understanding of employment issues in political campaigns, you could locate the word unemployment in speeches, identify what other words or phrases appear next to it (such as economy, inequality, or laziness), and analyze the meanings of these relationships to better understand the intentions and targets of different campaigns.
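Listing the words that appear near a target term is straightforward to sketch in Python. The speech fragment and the context span are invented for illustration:

```python
def collocates(words, target, span=2):
    """Collect the words appearing within `span` positions of each
    occurrence of `target` -- a simple collocation listing."""
    found = []
    for i, w in enumerate(words):
        if w == target:
            lo, hi = max(0, i - span), i + span + 1
            found.extend(x for x in words[lo:hi] if x != target)
    return found

speech = ("rising unemployment hurts the economy "
          "and unemployment breeds inequality").split()
near = collocates(speech, "unemployment", span=2)
```

Inspecting which collocates recur across speeches (economy vs. laziness, say) is what turns the raw counts into a qualitative reading of a campaign's framing.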
Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:
- Finding correlations and patterns in how concepts are communicated
- Understanding the intentions of an individual, group or institution
- Identifying propaganda and bias in communication
- Revealing differences in communication in different contexts
- Analyzing the consequences of communication content, such as the flow of information or audience responses
Advantages of Content Analysis
- Unobtrusive data collection
You can analyze communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.
- Transparent and replicable
When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability .
- Highly flexible
You can conduct content analysis at any time, in any location, and at low cost – all you need is access to the appropriate sources.
Disadvantages of Content Analysis
- Reductive
Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.
- Subjective
Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions, leading to various types of research bias and cognitive bias.
- Time intensive
Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.
How to Conduct Content Analysis
If you want to use content analysis in your research, you need to start with a clear, direct research question.
Example research question for content analysis
Is there a difference in how the US media represents younger politicians compared to older ones in terms of trustworthiness?
Next, you follow these five steps.
1. Select the content you will analyze
Based on your research question, choose the texts that you will analyze. You need to decide:
- The medium (e.g. newspapers, speeches or websites) and genre (e.g. opinion pieces, political campaign speeches, or marketing copy)
- The inclusion and exclusion criteria (e.g. newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
- The parameters in terms of date range, location, etc.
If only a small number of texts meet your criteria, you might analyze all of them. If there is a large volume of texts, you can select a sample.
2. Define the units and categories of analysis
Next, you need to determine the level at which you will analyze your chosen texts. This means defining:
- The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
- The set of categories that you will use for coding. Categories can be objective characteristics (e.g. aged 30-40 , lawyer , parent ) or more conceptual (e.g. trustworthy , corrupt , conservative , family oriented ).
Example of defining units and categories
Your units of analysis are the politicians who appear in each article and the words and phrases that are used to describe them. Based on your research question, you categorize based on age and the concept of trustworthiness. To get more detailed data, you also code for other categories, such as the political party and marital status of each politician mentioned.
3. Develop a set of rules for coding
Coding involves organizing the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.
Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.
Example of coding rules
In considering the category “younger politician,” you decide which titles will be coded with this category (senator, governor, counselor, mayor). With “trustworthy,” you decide which specific words or phrases related to trustworthiness (e.g. honest and reliable) will be coded in this category.
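Coding rules like these can be written down as data rather than prose, which makes them easy to share between coders and to apply mechanically. A minimal sketch, mirroring the hypothetical example above (titles, trust words, and the sample sentence are all illustrative):

```python
# Hypothetical coding rules: which titles count as "younger politician"
# and which words count as "trustworthy" are researcher decisions,
# fixed here before any coding begins.
CATEGORY_RULES = {
    "younger_politician": {"senator", "governor", "counselor", "mayor"},
    "trustworthy": {"honest", "reliable", "credible"},
}

def code_sentence(sentence, rules):
    """Code one sentence: True for each category whose terms appear."""
    words = {w.strip(".,").lower() for w in sentence.split()}
    return {category: bool(words & terms)
            for category, terms in rules.items()}

codes = code_sentence("The mayor gave an honest account.", CATEGORY_RULES)
```

Because the rules live in one place, updating them (say, adding a title) changes the coding everywhere at once, which supports the reproducibility criterion discussed earlier in this document.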
4. Code the text according to the rules
You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, Atlas.ti, and Diction, which can help speed up the process of counting and categorizing words and phrases.
Example of coding
Following your coding rules, you examine each newspaper article in your sample. You record the characteristics of each politician mentioned, along with all words and phrases related to trustworthiness that are used to describe them.
5. Analyze the results and draw conclusions
Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context and audience of the texts.
Example of analysis
Let’s say the results reveal that words and phrases related to trustworthiness appeared in the same sentence as an older politician more frequently than they did in the same sentence as a younger politician. From these results, you conclude that national newspapers present older politicians as more trustworthy than younger politicians, and infer that this might have an effect on readers’ perceptions of younger people in politics.
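The comparison behind that conclusion reduces to the rate at which trust-related terms co-occur with each group. A minimal sketch, with hypothetical trust terms and made-up coded sentences (a real study would also test whether the difference is statistically significant):

```python
def trust_rate(sentences, trust_terms):
    """Fraction of sentences containing at least one trust-related term."""
    hits = sum(any(t in s.lower() for t in trust_terms) for s in sentences)
    return hits / len(sentences)

trust_terms = ["honest", "reliable", "trustworthy"]
# Hypothetical sentences already sorted by which group they mention.
older = ["an honest statesman", "a reliable hand", "veteran lawmaker speaks"]
younger = ["the young mayor spoke", "an honest newcomer", "a rising star"]

older_rate = trust_rate(older, trust_terms)
younger_rate = trust_rate(younger, trust_terms)
```

With samples this small the gap means nothing; with a properly sized sample, the same two numbers are what a chi-square or similar test would compare.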
Using Content Analysis
This guide provides an introduction to content analysis, a research methodology that examines words or phrases within a wide range of texts.
- Introduction to Content Analysis : Read about the history and uses of content analysis.
- Conceptual Analysis : Read an overview of conceptual analysis and its associated methodology.
- Relational Analysis : Read an overview of relational analysis and its associated methodology.
- Commentary : Read about issues of reliability and validity with regard to content analysis as well as the advantages and disadvantages of using content analysis as a research methodology.
- Examples : View examples of real and hypothetical studies that use content analysis.
- Annotated Bibliography : Complete list of resources used in this guide and beyond.
An Introduction to Content Analysis
Content analysis is a research tool used to determine the presence of certain words or concepts within texts or sets of texts. Researchers quantify and analyze the presence, meanings and relationships of such words and concepts, then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time of which these are a part. Texts can be defined broadly as books, book chapters, essays, interviews, discussions, newspaper headlines and articles, historical documents, speeches, conversations, advertising, theater, informal conversation, or really any occurrence of communicative language. Texts in a single study may also represent a variety of different types of occurrences, such as Palmquist's 1990 study of two composition classes, in which he analyzed student and teacher interviews, writing journals, classroom discussions and lectures, and out-of-class interaction sheets. To conduct a content analysis on any such text, the text is coded, or broken down, into manageable categories on a variety of levels--word, word sense, phrase, sentence, or theme--and then examined using one of content analysis' basic methods: conceptual analysis or relational analysis.
A Brief History of Content Analysis
Historically, content analysis was a time consuming process. Analysis was done manually, or slow mainframe computers were used to analyze punch cards containing data punched in by human coders. Single studies could employ thousands of these cards. Human error and time constraints made this method impractical for large texts. However, despite its impracticality, content analysis was already an often utilized research method by the 1940s. Although initially limited to studies that examined texts for the frequency of the occurrence of identified terms (word counts), by the mid-1950s researchers were already starting to consider the need for more sophisticated methods of analysis, focusing on concepts rather than simply words, and on semantic relationships rather than just presence (de Sola Pool 1959). While both traditions still continue today, content analysis now is also utilized to explore mental models, and their linguistic, affective, cognitive, social, cultural and historical significance.
Uses of Content Analysis
Perhaps due to the fact that it can be applied to examine any piece of writing or occurrence of recorded communication, content analysis is currently used in a dizzying array of fields, ranging from marketing and media studies, to literature and rhetoric, ethnography and cultural studies, gender and age issues, sociology and political science, psychology and cognitive science, and many other fields of inquiry. Additionally, content analysis reflects a close relationship with socio- and psycholinguistics, and is playing an integral role in the development of artificial intelligence. The following list (adapted from Berelson, 1952) offers more possibilities for the uses of content analysis:
- Reveal international differences in communication content
- Detect the existence of propaganda
- Identify the intentions, focus or communication trends of an individual, group or institution
- Describe attitudinal and behavioral responses to communications
- Determine psychological or emotional state of persons or groups
Types of Content Analysis
In this guide, we discuss two general categories of content analysis: conceptual analysis and relational analysis. Conceptual analysis can be thought of as establishing the existence and frequency of concepts most often represented by words or phrases in a text. For instance, say you have a hunch that your favorite poet often writes about hunger. With conceptual analysis you can determine how many times words such as hunger, hungry, famished, or starving appear in a volume of poems. In contrast, relational analysis goes one step further by examining the relationships among concepts in a text. Returning to the hunger example, with relational analysis, you could identify what other words or phrases hunger or famished appear next to and then determine what different meanings emerge as a result of these groupings.
Traditionally, content analysis has most often been thought of in terms of conceptual analysis. In conceptual analysis, a concept is chosen for examination, and the analysis involves quantifying and tallying its presence. Also known as thematic analysis [although this term is somewhat problematic, given its varied definitions in current literature--see Palmquist, Carley, & Dale (1997) vis-a-vis Smith (1992)], the focus here is on looking at the occurrence of selected terms within a text or texts, although the terms may be implicit as well as explicit. While explicit terms obviously are easy to identify, coding for implicit terms and deciding their level of implication is complicated by the need to base judgments on a somewhat subjective system. To attempt to limit the subjectivity, then (as well as to limit problems of reliability and validity ), coding such implicit terms usually involves the use of either a specialized dictionary or contextual translation rules. And sometimes, both tools are used--a trend reflected in recent versions of the Harvard and Lasswell dictionaries.
Methods of Conceptual Analysis
Conceptual analysis begins with identifying research questions and choosing a sample or samples. Once chosen, the text must be coded into manageable content categories. The process of coding is basically one of selective reduction. By reducing the text to categories consisting of a word, a set of words, or phrases, the researcher can focus on, and code for, specific words or patterns that are indicative of the research question.
An example of a conceptual analysis would be to examine several Clinton speeches on health care, made during the 1992 presidential campaign, and code them for the existence of certain words. In looking at these speeches, the research question might involve examining the number of positive words used to describe Clinton's proposed plan, and the number of negative words used to describe the current status of health care in America. The researcher would be interested only in quantifying these words, not in examining how they are related, which is a function of relational analysis. In conceptual analysis, the researcher simply wants to examine presence with respect to his/her research question, i.e. is there a stronger presence of positive or negative words used with respect to proposed or current health care plans, respectively.
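A tally of this kind can be sketched in a few lines of code. The word lists and sample sentence below are invented for illustration; a real coding scheme would be built from the research question, not assumed in advance.

```python
import re
from collections import Counter

# Hypothetical coding scheme: these word lists are illustrative,
# not drawn from the actual speeches.
POSITIVE = {"inexpensive", "affordable", "secure"}  # describing the proposed plan
NEGATIVE = {"costly", "broken", "failing"}          # describing the current system

def code_speech(text):
    """Tally occurrences of the positive and negative concept words."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return {
        "positive": sum(counts[w] for w in POSITIVE),
        "negative": sum(counts[w] for w in NEGATIVE),
    }

speech = ("Our plan is inexpensive and affordable; "
          "the current system is costly and broken.")
print(code_speech(speech))  # {'positive': 2, 'negative': 2}
```

The researcher here only quantifies presence; nothing in the output says how the positive and negative words relate to one another.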
Once the research question has been established, the researcher must make his/her coding choices with respect to the eight category coding steps indicated by Carley (1992).
Steps for Conducting Conceptual Analysis
The following discussion of steps that can be followed to code a text or set of texts during conceptual analysis uses campaign speeches made by Bill Clinton during the 1992 presidential campaign as an example. Each step is discussed in turn below:
- Decide the level of analysis.
First, the researcher must decide upon the level of analysis. With the health care speeches, to continue the example, the researcher must decide whether to code for a single word, such as "inexpensive," or for sets of words or phrases, such as "coverage for everyone."
- Decide how many concepts to code for.
The researcher must now decide how many different concepts to code for. This involves developing a pre-defined or interactive set of concepts and categories. The researcher must decide whether or not to code for every single positive or negative word that appears, or only certain ones that the researcher determines are most relevant to health care. Then, with this pre-defined number set, the researcher has to determine how much flexibility he/she allows him/herself when coding. The question of whether the researcher codes only from this pre-defined set, or allows him/herself to add relevant categories not included in the set as he/she finds them in the text, must be answered. Determining a certain number and set of concepts allows a researcher to examine a text for very specific things, keeping him/her on task. But introducing a level of coding flexibility allows new, important material to be incorporated into the coding process that could have a significant bearing on one's results.
- Decide whether to code for existence or frequency of a concept.
After a certain number and set of concepts are chosen for coding, the researcher must answer a key question: is he/she going to code for existence or frequency? This is important, because it changes the coding process. When coding for existence, "inexpensive" would only be counted once, no matter how many times it appeared. This would be a very basic coding process and would give the researcher a very limited perspective of the text. However, the number of times "inexpensive" appears in a text might be more indicative of importance. Knowing that "inexpensive" appeared 50 times, for example, compared to 15 appearances of "coverage for everyone," might lead a researcher to interpret that Clinton is trying to sell his health care plan based more on economic benefits, not comprehensive coverage. Knowing that "inexpensive" appeared, but not that it appeared 50 times, would not allow the researcher to make this interpretation, regardless of whether it is valid or not.
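The difference between the two coding choices can be made concrete with a small sketch; the concept word and sample text here are hypothetical:

```python
import re

def code_concept(text, concept, mode="frequency"):
    """Code a single-word concept for existence (0 or 1) or frequency (raw count)."""
    count = len(re.findall(r"\b" + re.escape(concept) + r"\b", text.lower()))
    return min(count, 1) if mode == "existence" else count

text = "An inexpensive plan. Inexpensive care means inexpensive coverage."
print(code_concept(text, "inexpensive", mode="existence"))  # 1
print(code_concept(text, "inexpensive", mode="frequency"))  # 3
```

Existence coding collapses three occurrences down to one, discarding exactly the information a frequency-based interpretation would rely on.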
- Decide on how you will distinguish among concepts.
The researcher must next decide on the level of generalization, i.e. whether concepts are to be coded exactly as they appear, or if they can be recorded as the same even when they appear in different forms. For example, "expensive" might also appear as "expensiveness." The researcher needs to determine if the two words mean radically different things to him/her, or if they are similar enough that they can be coded as being the same thing, i.e. "expensive words." In line with this is the need to determine the level of implication one is going to allow. This entails more than subtle differences in tense or spelling, as with "expensive" and "expensiveness." Determining the level of implication would allow the researcher to code not only for the word "expensive," but also for words that imply "expensive." This could perhaps include technical words, jargon, or political euphemism, such as "economically challenging," that the researcher decides does not merit a separate category, but is better represented under the category "expensive," due to its implicit meaning of "expensive."
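One minimal way to implement this kind of generalization is a translation table that collapses variant and implied forms into a single concept category. The table below is purely illustrative of the idea:

```python
# Illustrative translation table: maps surface forms, including the implied
# euphemism "economically challenging," onto a single concept category.
TRANSLATION = {
    "expensiveness": "expensive",
    "costly": "expensive",
    "economically challenging": "expensive",
}

def translate(term):
    """Collapse variant and implicit forms into one concept category."""
    term = term.lower()
    return TRANSLATION.get(term, term)

print(translate("Expensiveness"))             # expensive
print(translate("economically challenging"))  # expensive
print(translate("coverage"))                  # coverage
```

The table itself encodes the researcher's judgment calls; a different researcher might legitimately keep "economically challenging" as its own category.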
- Develop rules for coding your texts.
After taking the generalization of concepts into consideration, a researcher will want to create translation rules that will allow him/her to streamline and organize the coding process so that he/she is coding for exactly what he/she wants to code for. Developing a set of rules helps the researcher ensure that he/she is coding things consistently throughout the text, in the same way every time. If a researcher coded "economically challenging" as a separate category from "expensive" in one paragraph, then coded it under the umbrella of "expensive" when it occurred in the next paragraph, his/her data would be invalid. The interpretations drawn from that data will subsequently be invalid as well. Translation rules protect against this and give the coding process a crucial level of consistency and coherence.
- Decide what to do with "irrelevant" information.
The next choice a researcher must make involves irrelevant information. The researcher must decide whether irrelevant information should be ignored (as Weber, 1990, suggests), or used to reexamine and/or alter the coding scheme. In the case of this example, words like "and" and "the," as they appear by themselves, would be ignored. They add nothing to the quantification of words like "inexpensive" and "expensive" and can be disregarded without impacting the outcome of the coding.
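Discarding irrelevant words before tallying can be sketched as a simple stopword filter; the stopword list below is a hypothetical, minimal one:

```python
import re
from collections import Counter

STOPWORDS = {"and", "the", "a", "of", "to", "is"}  # an illustrative, minimal list

def count_relevant(text):
    """Count word frequencies after discarding irrelevant function words."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

counts = count_relevant("The plan is inexpensive and the coverage is expensive.")
print(counts["inexpensive"], counts["the"])  # 1 0
```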
- Code the texts.
Once these choices about irrelevant information are made, the next step is to code the text. This is done either by hand, i.e. reading through the text and manually writing down concept occurrences, or through the use of various computer programs. Coding with a computer is one of contemporary conceptual analysis' greatest assets. By inputting one's categories, content analysis programs can easily automate the coding process and examine huge amounts of data, and a wider range of texts, quickly and efficiently. But automation is very dependent on the researcher's preparation and category construction. When coding is done manually, a researcher can recognize errors far more easily. A computer is only a tool and can only code based on the information it is given. This problem is most apparent when coding for implicit information, where category preparation is essential for accurate coding.
- Analyze your results.
Once the coding is done, the researcher examines the data and attempts to draw whatever conclusions and generalizations are possible. Of course, before these can be drawn, the researcher must decide what to do with the information in the text that is not coded. One's options include either deleting or skipping over unwanted material, or viewing all information as relevant and important and using it to reexamine, reassess and perhaps even alter one's coding scheme. Furthermore, given that the conceptual analyst is dealing only with quantitative data, the levels of interpretation and generalizability are very limited. The researcher can only extrapolate as far as the data will allow. But it is possible to see trends, for example, that are indicative of much larger ideas. Using the example from step three, if the concept "inexpensive" appears 50 times, compared to 15 appearances of "coverage for everyone," then the researcher can pretty safely extrapolate that there does appear to be a greater emphasis on the economics of the health care plan, as opposed to its universal coverage for all Americans. It must be kept in mind that conceptual analysis, while extremely useful and effective for providing this type of information when done right, is limited by its focus and the quantitative nature of its examination. To more fully explore the relationships that exist between these concepts, one must turn to relational analysis.
Relational analysis, like conceptual analysis, begins with the act of identifying concepts present in a given text or set of texts. However, relational analysis seeks to go beyond presence by exploring the relationships between the concepts identified. Relational analysis has also been termed semantic analysis (Palmquist, Carley, & Dale, 1997). In other words, the focus of relational analysis is to look for semantic, or meaningful, relationships. Individual concepts, in and of themselves, are viewed as having no inherent meaning. Rather, meaning is a product of the relationships among concepts in a text. Carley (1992) asserts that concepts are "ideational kernels;" these kernels can be thought of as symbols which acquire meaning through their connections to other symbols.
Theoretical Influences on Relational Analysis
The kind of analysis that researchers employ will vary significantly according to their theoretical approach. Key theoretical approaches that inform content analysis include linguistics and cognitive science.
Linguistic approaches to content analysis focus analysis of texts on the level of a linguistic unit, typically single clause units. One example of this type of research is Gottschalk (1975), who developed an automated procedure which analyzes each clause in a text and assigns it a numerical score based on several emotional/psychological scales. Another technique is to code a text grammatically into clauses and parts of speech to establish a matrix representation (Carley, 1990).
Approaches that derive from cognitive science include the creation of decision maps and mental models. Decision maps attempt to represent the relationship(s) between ideas, beliefs, attitudes, and information available to an author when making a decision within a text. These relationships can be represented as logical, inferential, causal, sequential, and mathematical relationships. Typically, two of these links are compared in a single study, and are analyzed as networks. For example, Heise (1987) used logical and sequential links to examine symbolic interaction. This methodology is thought of as a more generalized cognitive mapping technique, rather than the more specific mental models approach.
Mental models are groups or networks of interrelated concepts that are thought to reflect conscious or subconscious perceptions of reality. According to cognitive scientists, internal mental structures are created as people draw inferences and gather information about the world. Mental models are a more specific approach to mapping because, beyond extraction and comparison, they can be numerically and graphically analyzed. Such models rely heavily on the use of computers to help analyze and construct mapping representations. Typically, studies based on this approach follow five general steps:
- Identifying concepts
- Defining relationship types
- Coding the text on the basis of 1 and 2
- Coding the statements
- Graphically displaying and numerically analyzing the resulting maps
To create the model, a researcher converts a text into a map of concepts and relations; the map is then analyzed on the level of concepts and statements, where a statement consists of two concepts and their relationship. Carley (1990) asserts that this makes possible the comparison of a wide variety of maps, representing multiple sources, implicit and explicit information, as well as socially shared cognitions.
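Under these assumptions, a map can be stored as a list of statements, each pairing two concepts with a relationship, and then inspected at either the concept level or the statement level. The statements below are invented for illustration:

```python
# A minimal map representation: each statement is (concept, relation, concept).
# The statements themselves are hypothetical examples.
statements = [
    ("scientists", "engage-in", "research"),
    ("research", "leads-to", "discoveries"),
    ("collaboration", "leads-to", "discoveries"),
]

def concepts(stmts):
    """Concept-level view of the map: the set of nodes."""
    return {c for s, _, o in stmts for c in (s, o)}

def relations(stmts, relation):
    """Statement-level view: all concept pairs linked by a given relation type."""
    return [(s, o) for s, r, o in stmts if r == relation]

print(sorted(concepts(statements)))
print(relations(statements, "leads-to"))
```

Because two maps reduce to comparable sets of concepts and statements, maps built from different sources can be compared directly, as Carley suggests.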
Relational Analysis: Overview of Methods
As with other sorts of inquiry, initial choices with regard to what is being studied and/or coded for often determine the possibilities of that particular study. For relational analysis, it is important to first decide which concept type(s) will be explored in the analysis. Studies have been conducted with as few as one and as many as 500 concept categories. Obviously, too many categories may obscure your results and too few can lead to unreliable and potentially invalid conclusions. Therefore, it is important to allow the context and necessities of your research to guide your coding procedures.
The steps to relational analysis that we consider in this guide suggest some of the possible avenues available to a researcher doing content analysis. We provide an example to make the process easier to grasp. However, the choices made within the context of the example are only a few of many possibilities. The diversity of techniques available suggests that there is quite a bit of enthusiasm for this mode of research. Once a procedure is rigorously tested, it can be applied and compared across populations over time. The process of relational analysis has achieved a high degree of computer automation but still is, like most forms of research, time consuming. Perhaps the strongest claim that can be made is that it maintains a high degree of statistical rigor without losing the richness of detail apparent in even more qualitative methods.
Three Subcategories of Relational Analysis
Affect extraction: This approach provides an emotional evaluation of concepts explicit in a text. It is problematic because emotion may vary across time and populations. Nevertheless, when extended it can be a potent means of exploring the emotional/psychological state of the speaker and/or writer. Gottschalk (1995) provides an example of this type of analysis. By assigning concepts identified a numeric value on corresponding emotional/psychological scales that can then be statistically examined, Gottschalk claims that the emotional/psychological state of the speaker or writer can be ascertained via their verbal behavior.
Proximity analysis: This approach, on the other hand, is concerned with the co-occurrence of explicit concepts in the text. In this procedure, the text is defined as a string of words. A given length of words, called a window, is determined. The window is then scanned across a text to check for the co-occurrence of concepts. The result is the creation of a concept matrix. In other words, a matrix, or a group of interrelated, co-occurring concepts, might suggest a certain overall meaning. The technique is problematic because the window records only explicit concepts and treats meaning as proximal co-occurrence. Other techniques such as clustering, grouping, and scaling are also useful in proximity analysis.
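A minimal sketch of window scanning, assuming a fixed window length and a hypothetical set of tracked concepts, might count the number of windows in which two concepts co-occur:

```python
import re
from collections import defaultdict
from itertools import combinations

def cooccurrence(text, concepts, window=5):
    """Slide a window of `window` words across the text and count, for each
    pair of tracked concepts, the windows in which both appear."""
    words = re.findall(r"[a-z]+", text.lower())
    matrix = defaultdict(int)
    for i in range(max(len(words) - window + 1, 1)):
        present = set(words[i:i + window]) & set(concepts)
        for pair in combinations(sorted(present), 2):
            matrix[pair] += 1
    return dict(matrix)

text = "hunger gnawed while the famished poet wrote of hunger again"
print(cooccurrence(text, {"hunger", "famished", "poet"}, window=4))
```

Overlapping windows inflate counts for tightly clustered words, which is one concrete sense in which the technique "treats meaning as proximal co-occurrence."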
Cognitive mapping: This approach is one that allows for further analysis of the results from the two previous approaches. It attempts to take the above processes one step further by representing these relationships visually for comparison. Whereas affective and proximal analysis function primarily within the preserved order of the text, cognitive mapping attempts to create a model of the overall meaning of the text. This can be represented as a graphic map that represents the relationships between concepts.
In this manner, cognitive mapping lends itself to the comparison of semantic connections across texts. This is known as map analysis which allows for comparisons to explore "how meanings and definitions shift across people and time" (Palmquist, Carley, & Dale, 1997). Maps can depict a variety of different mental models (such as that of the text, the writer/speaker, or the social group/period), according to the focus of the researcher. This variety is indicative of the theoretical assumptions that support mapping: mental models are representations of interrelated concepts that reflect conscious or subconscious perceptions of reality; language is the key to understanding these models; and these models can be represented as networks (Carley, 1990). Given these assumptions, it's not surprising to see how closely this technique reflects the cognitive concerns of socio-and psycholinguistics, and lends itself to the development of artificial intelligence models.
Steps for Conducting Relational Analysis
The following discussion presents the steps (or, perhaps more accurately, strategies) that can be followed to code a text or set of texts during relational analysis. These explanations are accompanied by examples of relational analysis possibilities for statements made by Bill Clinton during the 1998 hearings.
- Identify the Question.
The question is important because it indicates where you are headed and why. Without a focused question, the concept types and options open to interpretation are limitless and therefore the analysis difficult to complete. Possibilities for the Hairy Hearings of 1998 might be:
What did Bill Clinton say in the speech? OR What concrete information did he present to the public?
- Choose a sample or samples for analysis.
Once the question has been identified, the researcher must select sections of text/speech from the hearings in which Bill Clinton may have not told the entire truth or is obviously holding back information. For relational content analysis, the primary consideration is how much information to preserve for analysis. One must be careful not to limit the results by doing so, but the researcher must also take special care not to take on so much that the coding process becomes too heavy and extensive to supply worthwhile results.
- Determine the type of analysis.
Once the sample has been chosen for analysis, it is necessary to determine what type or types of relationships you would like to examine. There are different subcategories of relational analysis that can be used to examine the relationships in texts.
In this example, we will use proximity analysis because it is concerned with the co-occurrence of explicit concepts in the text. In this instance, we are not particularly interested in affect extraction because we are trying to get to the hard facts of what exactly was said rather than determining the emotional considerations of speaker and receivers surrounding the speech which may be unrecoverable.
Once the subcategory of analysis is chosen, the selected text must be reviewed to determine the level of analysis. The researcher must decide whether to code for a single word, such as "perhaps," or for sets of words or phrases like "I may have forgotten."
- Reduce the text to categories and code for words or patterns.
At the simplest level, a researcher can code merely for existence. This is not to say that simplicity of procedure leads to simplistic results. Many studies have successfully employed this strategy. For example, Palmquist (1990) did not attempt to establish the relationships among concept terms in the classrooms he studied; his study did, however, look at the change in the presence of concepts over the course of the semester, comparing a map analysis from the beginning of the semester to one constructed at the end. On the other hand, the requirements of one's specific research question may necessitate deeper levels of coding to preserve greater detail for analysis.
In relation to our extended example, the researcher might code for how often Bill Clinton used words that were ambiguous, held double meanings, or left an opening for change or "re-evaluation." The researcher might also choose to code for what words he used that have such an ambiguous nature in relation to the importance of the information directly related to those words.
- Explore the relationships between concepts (Strength, Sign & Direction).
Once words are coded, the text can be analyzed for the relationships among the concepts set forth. There are three concepts which play a central role in exploring the relations among concepts in content analysis.
- Strength of Relationship: Refers to the degree to which two or more concepts are related. These relationships are easiest to analyze, compare, and graph when all relationships between concepts are considered to be equal. However, assigning strength to relationships retains a greater degree of the detail found in the original text. Identifying strength of a relationship is key when determining whether or not words like unless, perhaps, or maybe are related to a particular section of text, phrase, or idea.
- Sign of a Relationship: Refers to whether or not the concepts are positively or negatively related. To illustrate, the concept "bear" is negatively related to the concept "stock market" in the same sense as the concept "bull" is positively related. Thus "it's a bear market" could be coded to show a negative relationship between "bear" and "market". Another approach to coding for sign entails the creation of separate categories for binary oppositions. The above example emphasizes "bull" as the negation of "bear," but could be coded as being two separate categories, one positive and one negative. There has been little research to determine the benefits and liabilities of these differing strategies. Use of sign coding for relationships in regard to the hearings may be to find out whether or not the words under observation or in question were used adversely or in favor of the concepts (this is tricky, but important to establishing meaning).
- Direction of the Relationship: Refers to the type of relationship categories exhibit. Coding for this sort of information can be useful in establishing, for example, the impact of new information in a decision making process. Various types of directional relationships include, "X implies Y," "X occurs before Y" and "if X then Y," or quite simply the decision whether concept X is the "prime mover" of Y or vice versa. In the case of the 1998 hearings, the researcher might note that, "maybe implies doubt," "perhaps occurs before statements of clarification," and "if possibly exists, then there is room for Clinton to change his stance." In some cases, concepts can be said to be bi-directional, or having equal influence. This is equivalent to ignoring directionality. Both approaches are useful, but differ in focus. Coding all categories as bi-directional is most useful for exploratory studies where pre-coding may influence results, and is also most easily automated, or computer coded.
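One way to hold all three properties together is to code each statement as a record carrying strength, sign, and direction. The coded statements below are illustrative, and the numeric scales are one possible coding choice, not a fixed standard:

```python
from dataclasses import dataclass

@dataclass
class Statement:
    """One coded relationship: two concepts plus strength, sign, and direction.
    The scales here are illustrative coding choices."""
    source: str
    target: str
    strength: float   # 0.0 (weak) .. 1.0 (strong)
    sign: int         # +1 positive, -1 negative
    directed: bool    # False = bi-directional (direction ignored)

coded = [
    Statement("maybe", "doubt", strength=0.8, sign=+1, directed=True),
    Statement("bear", "market", strength=1.0, sign=-1, directed=False),
]

negative = [s for s in coded if s.sign < 0]
print([(s.source, s.target) for s in negative])  # [('bear', 'market')]
```

Treating all relationships as equal-strength and bi-directional amounts to setting `strength` to a constant and `directed` to False everywhere, which is what makes that scheme the easiest to automate.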
- Code the relationships.
One of the main differences between conceptual analysis and relational analysis is that the statements or relationships between concepts are coded. At this point, to continue our extended example, it is important to take special care with assigning value to the relationships in an effort to determine whether the ambiguous words in Bill Clinton's speech are just fillers, or hold information about the statements he is making.
- Perform Statistical Analyses.
This step involves conducting statistical analyses of the data you've coded during your relational analysis. This may involve exploring for differences or looking for relationships among the variables you've identified in your study.
- Map out the Representations.
In addition to statistical analysis, relational analysis often leads to viewing the representations of the concepts and their associations in a text (or across texts) in a graphical -- or map -- form. Relational analysis is also informed by a variety of different theoretical approaches: linguistic content analysis, decision mapping, and mental models.
The authors of this guide have created the following commentaries on content analysis.
Issues of Reliability & Validity
The issues of reliability and validity are concurrent with those addressed in other research methods. The reliability of a content analysis study refers to its stability, or the tendency for coders to consistently re-code the same data in the same way over a period of time; reproducibility, or the tendency for a group of coders to classify category membership in the same way; and accuracy, or the extent to which the classification of a text corresponds to a standard or norm statistically. Gottschalk (1995) points out that the issue of reliability may be further complicated by the inescapably human nature of researchers. For this reason, he suggests that coding errors can only be minimized, and not eliminated (he shoots for 80% as an acceptable margin for reliability).
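Reproducibility can be checked with a simple percent-agreement calculation across two coders. The codes below are hypothetical, and note that chance-corrected measures (e.g. Cohen's kappa) are more robust than raw agreement:

```python
def percent_agreement(coder_a, coder_b):
    """Reproducibility check: share of units two coders classified identically."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes assigned to ten text units by two independent coders.
a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
b = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]
agreement = percent_agreement(a, b)
print(agreement)  # 0.8 -- at Gottschalk's suggested 80% threshold
```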
On the other hand, the validity of a content analysis study refers to the correspondence of the categories to the conclusions, and the generalizability of results to a theory.
The validity of categories in implicit concept analysis, in particular, is achieved by utilizing multiple classifiers to arrive at an agreed upon definition of the category. For example, a content analysis study might measure the occurrence of the concept category "communist" in presidential inaugural speeches. Using multiple classifiers, the concept category can be broadened to include synonyms such as "red," "Soviet threat," "pinkos," "godless infidels" and "Marxist sympathizers." "Communist" is held to be the explicit variable, while "red," etc. are the implicit variables.
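Counting such a category amounts to summing matches over the whole synonym set, single words and phrases alike. A sketch, using the category above (the sample sentence is invented):

```python
import re

# "communist" is the explicit variable; the others are implicit variables
# agreed upon by multiple classifiers, per the example above.
CATEGORY = {"communist", "red", "soviet threat", "pinkos", "marxist sympathizers"}

def category_count(text, terms):
    """Count occurrences of any term (word or phrase) in the concept category."""
    text = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(t) + r"\b", text)) for t in terms)

speech = "We face a Soviet threat; the Communist menace is real, and the red tide rises."
print(category_count(speech, CATEGORY))  # 3
```

Note that "red" here matches regardless of sense, which previews the synonym/homonym problem discussed next.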
The overarching problem of concept analysis research is the challenge-able nature of conclusions reached by its inferential procedures. The question lies in what level of implication is allowable, i.e. do the conclusions follow from the data or are they explainable due to some other phenomenon? For occurrence-specific studies, for example, can the second occurrence of a word carry equal weight as the ninety-ninth? Reasonable conclusions can be drawn from substantive amounts of quantitative data, but the question of proof may still remain unanswered.
This problem is again best illustrated when one uses computer programs to conduct word counts. The problem of distinguishing between synonyms and homonyms can completely throw off one's results, invalidating any conclusions one infers from the results. The word "mine," for example, variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. One may obtain an accurate count of that word's occurrence and frequency, but not have an accurate accounting of the meaning inherent in each particular usage. For example, one may find 50 occurrences of the word "mine." But, if one is only looking specifically for "mine" as an explosive device, and 17 of the occurrences are actually personal pronouns, the resulting count of 50 is inaccurate, and any conclusions drawn from it would be invalid.
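The problem is easy to reproduce: a naive count finds every surface occurrence of "mine" regardless of sense. The sample text is invented for illustration:

```python
import re

def raw_count(text, word):
    """A naive frequency count: it cannot tell word senses apart."""
    return len(re.findall(r"\b" + re.escape(word) + r"\b", text.lower()))

text = ("The mine exploded near the copper mine. "
        "That detonator was mine, not yours.")
print(raw_count(text, "mine"))  # 3 occurrences, but only one is an explosive device
```

Disambiguating the three senses requires context-sensitive coding rules (or manual review) that a bare word count does not provide.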
The generalizability of one's conclusions, then, is very dependent on how one determines concept categories, as well as on how reliable those categories are. It is imperative that one defines categories that accurately measure the idea and/or items one is seeking to measure. Akin to this is the construction of rules. Developing rules that allow one, and others, to categorize and code the same data in the same way over a period of time, referred to as stability, is essential to the success of a conceptual analysis. Reproducibility, not only of specific categories, but of general methods applied to establishing all sets of categories, makes a study, and its subsequent conclusions and results, more sound. A study which does this, i.e. in which the classification of a text corresponds to a standard or norm, is said to have accuracy.
Advantages of Content Analysis
Content analysis offers several advantages to researchers who consider using it. In particular, content analysis:
- looks directly at communication via texts or transcripts, and hence gets at the central aspect of social interaction
- can allow for both quantitative and qualitative operations
- can provide valuable historical/cultural insights over time through analysis of texts
- allows a closeness to the text, which can alternate between specific categories and relationships, while also statistically analyzing the coded form of the text
- can be used to interpret texts for purposes such as the development of expert systems (since knowledge and rules can both be coded in terms of explicit statements about the relationships among concepts)
- is an unobtrusive means of analyzing interactions
- provides insight into complex models of human thought and language use
Disadvantages of Content Analysis
Content analysis suffers from several disadvantages, both theoretical and procedural. In particular, content analysis:
- can be extremely time consuming
- is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation
- is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study
- is inherently reductive, particularly when dealing with complex texts
- tends too often to simply consist of word counts
- often disregards the context that produced the text, as well as the state of things after the text is produced
- can be difficult to automate or computerize
The Palmquist, Carley and Dale study, a summary of "Applications of Computer-Aided Text Analysis: Analyzing Literary and Non-Literary Texts" (1997), describes two studies that were conducted using both conceptual and relational analysis. The Problematic Text for Content Analysis shows the differences in results obtained by a conceptual and a relational approach to a study.
Related Information: Example of a Problematic Text for Content Analysis
In this example, both students observed a scientist and were asked to write about the experience.
Student A: I found that scientists engage in research in order to make discoveries and generate new ideas. Such research by scientists is hard work and often involves collaboration with other scientists which leads to discoveries which make the scientists famous. Such collaboration may be informal, such as when they share new ideas over lunch, or formal, such as when they are co-authors of a paper.
Student B: It was hard work to research famous scientists engaged in collaboration and I made many informal discoveries. My research showed that scientists engaged in collaboration with other scientists are co-authors of at least one paper containing their new ideas. Some scientists make formal discoveries and have new ideas.
Content analysis coding for explicit concepts may not reveal any significant differences. For example, the existence of "I, scientist, research, hard work, collaboration, discoveries, new ideas, etc..." are explicit in both texts, occur the same number of times, and have the same emphasis. Relational analysis or cognitive mapping, however, reveals that while all concepts in the texts are shared, only five statements are common to both. Analyzing these statements reveals that Student A reports on what "I" found out about "scientists," and elaborated the notion of "scientists" doing "research." Student B focuses on what "I's" research was and sees scientists as "making discoveries" without emphasis on research.
Related Information: The Palmquist, Carley and Dale Study
Consider these two questions: How has the depiction of robots changed over more than a century's worth of writing? And, do students and writing instructors share the same terms for describing the writing process? Although these questions seem totally unrelated, they do share a commonality: in the Palmquist, Carley & Dale study, their answers rely on computer-aided text analysis to demonstrate how different texts can be analyzed.
One half of the study explored the depiction of robots in 27 science fiction texts written between 1818 and 1988. After the texts were divided into three historically defined groups, readers looked for how the depiction of robots had changed over time. To do this, researchers had to create concept lists and relationship types, create maps using computer software (see Fig. 1), modify those maps, and then analyze them. The final product of the analysis revealed that over time authors were less likely to depict robots as metallic humanoids.
The second half of the study used student journals and interviews, teacher interviews, textbooks, and classroom observations as the non-literary texts from which concepts and words were taken. The purpose of the study was to determine whether, over time, teachers and students would begin to share a similar vocabulary about the writing process. Again, researchers used computer software to assist in the process. This time, computers helped researchers generate a concept list based on frequently occurring words and phrases from all texts. Maps were also created and analyzed in this study (see Fig. 2).
Resources On How To Conduct Content Analysis
Beard, J., & Yaprak, A. (1989). Language implications for advertising in international markets: A model for message content and message execution. A paper presented at the 8th International Conference on Language Communication for World Business and the Professions. Ann Arbor, MI.
This report discusses the development and testing of a content analysis model for assessing advertising themes and messages, aimed primarily at U.S. markets, that seeks to overcome barriers posed by the cultural environment of international markets. Texts were categorized under three headings: rational, emotional, and moral. The goal was to teach students to appreciate differences in language and culture.
Berelson, B. (1971). Content analysis in communication research. New York: Hafner Publishing Company.
While this book provides an extensive outline of the uses of content analysis, it is far more concerned with conveying a critical approach to current literature on the subject. In this respect, it assumes a bit of prior knowledge, but is still accessible through the use of concrete examples.
Budd, R. W., Thorp, R. K., & Donohew, L. (1967). Content analysis of communications. New York: Macmillan Company.
Although published in 1967, the decision of the authors to focus on recent trends in content analysis keeps their insights relevant even to modern audiences. The book focuses on specific uses and methods of content analysis with an emphasis on its potential for researching human behavior. It is also geared toward the beginning researcher and breaks down the process of designing a content analysis study into 6 steps that are outlined in successive chapters. A useful annotated bibliography is included.
Carley, K. (1992). Coding choices for textual analysis: A comparison of content analysis and map analysis. Unpublished Working Paper.
Comparison of the coding choices necessary to conceptual analysis and relational analysis, especially focusing on cognitive maps. Discusses concept coding rules needed for sufficient reliability and validity in a Content Analysis study. In addition, several pitfalls common to texts are discussed.
Carley, K. (1990). Content analysis. In R.E. Asher (Ed.), The Encyclopedia of Language and Linguistics. Edinburgh: Pergamon Press.
A quick yet detailed overview of the different methodological kinds of Content Analysis. Carley breaks her paper into five sections: Conceptual Analysis, Procedural Analysis, Relational Analysis, Emotional Analysis, and Discussion. Also included is an excellent and comprehensive Content Analysis reference list.
Carley, K. (1989). Computer analysis of qualitative data. Pittsburgh, PA: Carnegie Mellon University.
Presents graphic, illustrated representations of computer based approaches to content analysis.
Carley, K. (1992). MECA. Pittsburgh, PA: Carnegie Mellon University.
A resource guide explaining the fifteen routines that compose the Map Extraction Comparison and Analysis (MECA) software program. Lists the source file, input and output files, and the purpose of each routine.
Carney, T. F. (1972). Content analysis: A technique for systematic inference from communications. Winnipeg, Canada: University of Manitoba Press.
This book introduces and explains in detail the concept and practice of content analysis. Carney defines it; traces its history; discusses how content analysis works and its strengths and weaknesses; and explains through examples and illustrations how one goes about doing a content analysis.
de Sola Pool, I. (1959). Trends in content analysis. Urbana, IL: University of Illinois Press.
The 1959 collection of papers begins by differentiating quantitative and qualitative approaches to content analysis, and then details facets of its uses in a wide variety of disciplines: from linguistics and folklore to biography and history. Includes a discussion on the selection of relevant methods and representational models.
Duncan, D. F. (1989). Content analysis in health education research: An introduction to purposes and methods. Health Education, 20 (7).
This article proposes using content analysis as a research technique in health education. A review of literature relating to applications of this technique and a procedure for content analysis are presented.
Gottschalk, L. A. (1995). Content analysis of verbal behavior: New findings and clinical applications. Hillside, NJ: Lawrence Erlbaum Associates, Inc.
This book primarily focuses on the Gottschalk-Gleser method of content analysis, and its application as a method of measuring psychological dimensions of children and adults via the content and form analysis of their verbal behavior, using the grammatical clause as the basic unit of communication for carrying semantic messages generated by speakers or writers.
Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage Publications.
This is one of the most widely quoted resources in current studies of Content Analysis. Recommended as another good, basic resource, as Krippendorff presents the major issues of Content Analysis in much the same way as Weber (1975).
Moeller, L. G. (1963). An introduction to content analysis--including annotated bibliography. Iowa City: University of Iowa Press.
A good reference for basic content analysis. Discusses the options of sampling, categories, direction, measurement, and the problems of reliability and validity in setting up a content analysis. Perhaps better as a historical text due to its age.
Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. New York: Cambridge University Press.
Billed by its authors as "the first book to be devoted primarily to content analysis systems for assessment of the characteristics of individuals, groups, or historical periods from their verbal materials." The text includes manuals for using various systems, theory, and research regarding the background of systems, as well as practice materials, making the book both a reference and a handbook.
Solomon, M. (1993). Content analysis: a potent tool in the searcher's arsenal. Database, 16 (2), 62-67.
Online databases can be used to analyze data, as well as to simply retrieve it. Online-media-source content analysis represents a potent but little-used tool for the business searcher. Content analysis benchmarks useful to advertisers include prominence, offspin, sponsor affiliation, verbatims, word play, positioning and notational visibility.
Weber, R. P. (1990). Basic content analysis, second edition. Newbury Park, CA: Sage Publications.
Good introduction to Content Analysis. The first chapter presents a quick overview of Content Analysis. The second chapter discusses content classification and interpretation, including sections on reliability, validity, and the creation of coding schemes and categories. Chapter three discusses techniques of Content Analysis, using a number of tables and graphs to illustrate the techniques. Chapter four examines issues in Content Analysis, such as measurement, indication, representation and interpretation.
Examples of Content Analysis
Adams, W., & Shriebman, F. (1978). Television network news: Issues in content research. Washington, DC: George Washington University Press.
A fairly comprehensive application of content analysis to the field of television news reporting. The book's tripartite division discusses current trends and problems in news criticism from a content analysis perspective, presents four different content analysis studies of news media, and makes recommendations for future research in the area. Worth a look by anyone interested in mass communication research.
Auter, P. J., & Moore, R. L. (1993). Buying from a friend: a content analysis of two teleshopping programs. Journalism Quarterly, 70 (2), 425-437.
A preliminary study was conducted to content-analyze random samples of two teleshopping programs, using a measure of content interactivity and a locus of control message index.
Barker, S. P. (???) Fame: A content analysis study of the American film biography. Ohio State University. Thesis.
Barker examined thirty Oscar-nominated films dating from 1929 to 1979, using O. J. Harvey's Belief System and Kohlberg's Moral Stages to determine whether cinema heroes were positive role models for fame and success or morally ambiguous celebrities. Content analysis was successful in determining several trends relative to the frequency and portrayal of women in film, the generally high ethical character of the protagonists, and the dogmatic, closed-minded nature of film antagonists.
Bernstein, J. M. & Lacy, S. (1992). Contextual coverage of government by local television news. Journalism Quarterly, 69 (2), 329-341.
This content analysis of 14 local television news operations in five markets looks at how local TV news shows contribute to the marketplace of ideas. Performance was measured as the allocation of stories to types of coverage that provide the context about events and issues confronting the public.
Blaikie, A. (1993). Images of age: a reflexive process. Applied Ergonomics, 24 (1), 51-58.
Content analysis of magazines provides a sharp instrument for reflecting the change in stereotypes of aging over past decades.
Craig, R. S. (1992). The effect of day part on gender portrayals in television commercials: a content analysis. Sex Roles: A Journal of Research, 26 (5-6), 197-213.
Gender portrayals in 2,209 network television commercials were content analyzed. To compare differences between three day parts, the sample was chosen from three time periods: daytime, evening prime time, and weekend afternoon sportscasts. The results indicate large and consistent differences in the way men and women are portrayed in these three day parts, with almost all comparisons reaching significance at the .05 level. Although ads in all day parts tended to portray men in stereotypical roles of authority and dominance, those on weekends tended to emphasize escape from home and family. The findings of earlier studies which did not consider day part differences may now have to be reevaluated.
Dillon, D. R. et al. (1992). Article content and authorship trends in The Reading Teacher, 1948-1991. The Reading Teacher, 45 (5), 362-368.
The authors explore changes in the focus of the journal over time.
Eberhardt, EA. (1991). The rhetorical analysis of three journal articles: The study of form, content, and ideology. Ft. Collins, CO: Colorado State University.
Eberhardt uses content analysis in this thesis to analyze three journal articles that reported on President Ronald Reagan's address responding to the Tower Commission report concerning the Iran-Contra Affair. The reports concentrated on three rhetorical elements: idea generation or content; linguistic style or choice of language; and the potential societal effect of both. Eberhardt analyzes these elements along with the particular ideological orientation espoused by each magazine.
Ellis, B. G. & Dick, S. J. (1996). 'Who was 'Shadow'? The computer knows: applying grammar-program statistics in content analyses to solve mysteries about authorship. Journalism & Mass Communication Quarterly, 73 (4), 947-963.
This study's objective was to employ the statistics-documentation portion of a word-processing program's grammar-check feature as a final, definitive, and objective tool for content analyses - used in tandem with qualitative analyses - to determine authorship. Investigators concluded there was significant evidence from both modalities to support their theory that Henry Watterson, long-time editor of the Louisville Courier-Journal, probably was the South's famed Civil War correspondent "Shadow" and to rule out another prime suspect, John H. Linebaugh of the Memphis Daily Appeal. Until now, this Civil War mystery has never been conclusively solved, puzzling historians specializing in Confederate journalism.
Gottschalk, L. A., Stein, M. K., & Shapiro, D. H. (1997). The application of computerized content analysis in a psychiatric outpatient clinic. Journal of Clinical Psychology, 53 (5), 427-442.
Twenty-five new psychiatric outpatients were clinically evaluated and were administered a brief psychological screening battery which included measurements of symptoms, personality, and cognitive function. Included in this assessment procedure were the Gottschalk-Gleser Content Analysis Scales on which scores were derived from five minute speech samples by means of an artificial intelligence-based computer program. The use of this computerized content analysis procedure for initial, rapid diagnostic neuropsychiatric appraisal is supported by this research.
Graham, J. L., Kamins, M. A., & Oetomo, D. S. (1993). Content analysis of German and Japanese advertising in print media from Indonesia, Spain, and the United States. Journal of Advertising, 22 (2), 5-16.
The authors analyze informational and emotional content in print advertisements in order to consider how home-country culture influences firms' marketing strategies and tactics in foreign markets. Research results provided evidence contrary to the original hypothesis that home-country culture would influence ads in each of the target countries.
Herzog, A. (1973). The B.S. Factor: The theory and technique of faking it in America . New York: Simon and Schuster.
Herzog takes a look at the rhetoric of American culture using content analysis to point out discrepancies between intention and reality in American society. The study reveals, albeit in a comedic tone, how double talk and "not quite lies" are pervasive in our culture.
Horton, N. S. (1986). Young adult literature and censorship: A content analysis of seventy-eight young adult books. Denton, TX: North Texas State University.
The purpose of Horton's content analysis was to examine a representative sample of seventy-eight current young adult books to determine the extent to which they contain material objectionable to would-be censors. Seventy-eight books were identified that fit the criteria of popularity and literary quality. Each book was analyzed for, and tallied for occurrence of, six categories: profanity, sex, violence, parent conflict, drugs, and condoned bad behavior.
Isaacs, J. S. (1984). A verbal content analysis of the early memories of psychiatric patients. Berkeley: California School of Professional Psychology.
Isaacs did a content analysis investigation on the relationship between words and phrases used in early memories and clinical diagnosis. His hypothesis was that in conveying their early memories schizophrenic patients tend to use an identifiable set of words and phrases more frequently than do nonpatients and that schizophrenic patients use these words and phrases more frequently than do patients with major affective disorders.
Jean Lee, S. K. & Hwee Hoon, T. (1993). Rhetorical vision of men and women managers in Singapore. Human Relations, 46 (4), 527-542.
A comparison of the media portrayal of male and female managers' rhetorical vision in Singapore. Content analysis of the newspaper articles used to make this comparison also reveals the inherent conflicts that women managers face. Purposive and multi-stage sampling of articles is used.
Kaur-Kasior, S. (1987). The treatment of culture in greeting cards: A content analysis. Bowling Green, OH: Bowling Green State University.
Using six historical periods dating from 1870 to 1987, this content analysis study attempted to determine what structural/cultural aspects of American society were reflected in greeting cards. The study determined that the size of cards increased over time, included more pages, and had animals and flowers as their most dominant symbols. In addition, white was the most common color used. Due to habituation and specialization, says the author, greeting cards have become institutionalized in American culture.
Koza, J. E. (1992). The missing males and other gender-related issues in music education: A critical analysis of evidence from the Music Supervisor's Journal, 1914-1924. Paper presented at the annual meeting of the American Educational Research Association. San Francisco.
The goal of this study was to identify all educational issues that would today be explicitly gender related and to analyze the explanations past music educators gave for the existence of gender-related problems. A content analysis of every gender-related reference was undertaken, finding that the current preoccupation with males in music education has a long history and that little has changed since the early part of this century.
Laccinole, M. D. (1982). Aging and married couples: A language content analysis of a conversational and expository speech task. Eugene, OR: University of Oregon.
Using content analysis, this paper investigated the relationship of age to the use of grammatical categories, and described differences in the usage of these categories in conversational and expository speech tasks performed by fifty married couples. The subjects Laccinole used in his analysis were Caucasian, English-speaking, middle class, ranged in age from 20 to 83 years, were in good health, and had no history of communication disorders.
Laffal, J. (1995). A concept analysis of Jonathan Swift's 'A Tale of a Tub' and 'Gulliver's Travels.' Computers and Humanities, 29 (5), 339-362.
In this study, comparisons of concept profiles of "Tub," "Gulliver," and Swift's own contemporary texts, as well as a composite text of 18th century writers, reveal that "Gulliver" is conceptually different from "Tub." The study also discovers that the concepts and words of these texts suggest two strands in Swift's thinking.
Lewis, S. M. (1991). Regulation from a deregulatory FCC: Avoiding discursive dissonance. Masters Thesis, Fort Collins, CO: Colorado State University.
This thesis uses content analysis to examine inconsistent statements made by the Federal Communications Commission (FCC) in its policy documents during the 1980s. Lewis analyzes positions set forth by the FCC in its policy statements and catalogues different strategies that can be used by speakers to be or to appear consistent, as well as strategies to avoid inconsistent speech or discursive dissonance.
Norton, T. L. (1987). The changing image of childhood: A content analysis of Caldecott Award books. Los Angeles: University of South Carolina.
Content analysis was conducted on 48 Caldecott Medal recipient books dating from 1938 to 1985 to determine whether they reflect the idea that the social perception of childhood has altered since the early 1960s. The results revealed an increasing "loss of childhood innocence," as well as a general sentimentality for childhood pervasive in the texts. Norton suggests further study of children's literature to confirm the validity of these findings.
O'Dell, J. W. & Weideman, D. (1993). Computer content analysis of the Schreber case. Journal of Clinical Psychology, 49 (1), 120-125.
An example of the application of content analysis as a means of recreating a mental model of the psychology of an individual.
Pratt, C. A. & Pratt, C. B. (1995). Comparative content analysis of food and nutrition advertisements in Ebony, Essence, and Ladies' Home Journal. Journal of Nutrition Education, 27 (1), 11-18.
This study used content analysis to measure the frequencies and forms of food, beverage, and nutrition advertisements and their associated health-promotional messages in three U.S. consumer magazines during two 3-year periods: 1980-1982 and 1990-1992. The study showed statistically significant differences among the three magazines in both frequencies and types of major promotional messages in the advertisements. Differences between the advertisements in Ebony and Essence, the readerships of which were primarily African-American, and those found in Ladies Home Journal were noted, as were changes across the two time periods. An interesting tie-in to ethnographic research studies?
Riffe, D., Lacy, S., & Drager, M. W. (1996). Sample size in content analysis of weekly news magazines. Journalism & Mass Communication Quarterly,73 (3), 635-645.
This study explores a variety of approaches to deciding sample size in analyzing magazine content. Having tested random samples of size six, eight, ten, twelve, fourteen, and sixteen issues, the authors show that a monthly stratified sample of twelve issues is the most efficient method for inferring to a year's issues.
Roberts, S. K. (1987). A content analysis of how male and female protagonists in Newbery Medal and Honor books overcome conflict: Incorporating a locus of control framework. Fayetteville, AR: University of Arkansas.
The purpose of this content analysis was to analyze Newbery Medal and Honor books in order to determine how male and female protagonists were assigned behavioral traits in overcoming conflict as it relates to an internal or external locus of control schema. Roberts used all, instead of just a sample, of the fictional Newbery Medal and Honor books which met his study's criteria. A total of 120 male and female protagonists were categorized, from Newbery books dating from 1922 to 1986.
Schneider, J. (1993). Square One TV content analysis: Final report. New York: Children's Television Workshop.
This report summarizes the mathematical and pedagogical content of the 230 programs in the Square One TV library after five seasons of production, relating that content to the goals of the series which were to make mathematics more accessible, meaningful, and interesting to the children viewers.
Smith, T. E., Sells, S. P., and Clevenger, T. Ethnographic content analysis of couple and therapist perceptions in a reflecting team setting. The Journal of Marital and Family Therapy, 20 (3), 267-286.
An ethnographic content analysis was used to examine couple and therapist perspectives about the use and value of reflecting team practice. Postsession ethnographic interviews from both couples and therapists were examined for the frequency of themes in seven categories that emerged from a previous ethnographic study of reflecting teams. Ethnographic content analysis is briefly contrasted with conventional modes of quantitative content analysis to illustrate its usefulness and rationale for discovering emergent patterns, themes, emphases, and process using both inductive and deductive methods of inquiry.
Stahl, N. A. (1987). Developing college vocabulary: A content analysis of instructional materials. Reading, Research and Instruction, 26 (3).
This study investigates the extent to which the content of 55 college vocabulary texts is consistent with current research and theory on vocabulary instruction. It recommends less reliance on memorization and more emphasis on deep understanding and independent vocabulary development.
Swetz, F. (1992). Fifteenth and sixteenth century arithmetic texts: What can we learn from them? Science and Education, 1 (4).
Surveys the format and content of 15th- and 16th-century arithmetic textbooks, discussing the types of problems that were most popular in these early texts, and briefly analyzes problem contents. Notes the residual educational influence of this era's arithmetical and instructional practices.
Walsh, K., et al. (1996). Management in the public sector: a content analysis of journals. Public Administration 74 (2), 315-325.
The popularity and implementation of managerial ideas from 1980 to 1992 are examined through the content of five journals focusing on local government, health, education, and social services. Contents were analyzed according to commercialism, user involvement, performance evaluation, staffing, strategy, and involvement with other organizations. Overall, local government articles showed the greatest concern with commercialism, while health and social care articles were most concerned with user involvement.
For Further Reading
Abernethy, A. M., & Franke, G. R. (1996). The information content of advertising: A meta-analysis. Journal of Advertising, 25 (2), 1-18.
Carley, K., & Palmquist, M. (1992). Extracting, representing and analyzing mental models. Social Forces, 70 (3), 601-636.
Fan, D. (1988). Predictions of public opinion from the mass media: Computer content analysis and mathematical modeling. New York, NY: Greenwood Press.
Franzosi, R. (1990). Computer-assisted coding of textual data: An application to semantic grammars. Sociological Methods and Research, 19 (2), 225-257.
McTavish, D. G., & Pirro, E. (1990). Contextual content analysis. Quality and Quantity, 24, 245-265.
Palmquist, M. E. (1990). The lexicon of the classroom: Language and learning in writing classrooms. Doctoral dissertation, Carnegie Mellon University, Pittsburgh, PA.
Palmquist, M. E., Carley, K. M., & Dale, T. A. (1997). Two applications of automated text analysis: Analyzing literary and non-literary texts. In C. Roberts (Ed.), Text Analysis for the Social Sciences: Methods for Drawing Statistical Inferences from Texts and Transcripts. Hillsdale, NJ: Lawrence Erlbaum Associates.
Roberts, C.W. (1989). Other than counting words: A linguistic approach to content analysis. Social Forces, 68 , 147-177.
Issues in Content Analysis
Jolliffe, L. (1993). Yes! More content analysis! Newspaper Research Journal, 14 (3-4), 93-97.
The author responds to an editorial essay by Barbara Luebke which criticizes excessive use of content analysis in newspaper content studies. The author points out the positive applications of content analysis when it is theory-based and utilized as a means of suggesting how or why the content exists, or what its effects on public attitudes or behaviors may be.
Kang, N., Kara, A., Laskey, H. A., & Seaton, F. B. (1993). A SAS MACRO for calculating intercoder agreement in content analysis. Journal of Advertising, 22 (2), 17-28.
A key issue in content analysis is the level of agreement across the judgments which classify the objects or stimuli of interest. A review of articles published in the Journal of Advertising indicates that many authors are not fully utilizing recommended measures of intercoder agreement and thus may not be adequately establishing the reliability of their research. This paper presents a SAS MACRO which facilitates the computation of frequently recommended indices of intercoder agreement in content analysis.
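The SAS MACRO itself is not reproduced here. As a hedged sketch of the kind of computation such a tool performs, the Python below computes two frequently recommended intercoder agreement indices, simple percent agreement and Cohen's kappa, for two coders classifying the same items; the coders' category judgments are hypothetical.

```python
# Hedged sketch (Python, not the SAS MACRO): percent agreement and
# Cohen's kappa for two coders' nominal category judgments.
# The example judgments below are hypothetical.
from collections import Counter

def percent_agreement(coder1, coder2):
    """Proportion of items on which the two coders assigned the same category."""
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement: (observed - expected) / (1 - expected)."""
    n = len(coder1)
    p_o = percent_agreement(coder1, coder2)
    c1, c2 = Counter(coder1), Counter(coder2)
    # Expected chance agreement: sum over categories of the product of
    # each coder's marginal proportions.
    p_e = sum((c1[cat] / n) * (c2[cat] / n)
              for cat in set(coder1) | set(coder2))
    return (p_o - p_e) / (1 - p_e)

coder1 = ["rational", "emotional", "moral", "rational", "rational",
          "emotional", "moral", "rational", "emotional", "rational"]
coder2 = ["rational", "emotional", "moral", "rational", "emotional",
          "emotional", "moral", "rational", "rational", "rational"]

print(round(percent_agreement(coder1, coder2), 2))
print(round(cohens_kappa(coder1, coder2), 2))
```

For these hypothetical judgments, kappa comes out noticeably lower than raw percent agreement, which is exactly the point the article raises: reporting only raw agreement can overstate reliability.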
Lacy, S., & Riffe, D. (1996). Sampling error and selecting intercoder reliability samples for nominal content categories. Journalism & Mass Communication Quarterly, 73 (4), 693-704.
This study views intercoder reliability as a sampling problem. It develops a formula for generating sample sizes needed to have valid reliability estimates. It also suggests steps for reporting reliability. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of items is representative of the pattern that would occur if all content items were coded by all coders.
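The authors' exact formula is developed in the article itself. As a generic illustration of the underlying idea, the sketch below uses the standard finite-population-corrected sample size for estimating an agreement proportion; the values chosen for N, p, e, and z are hypothetical, not drawn from the study.

```python
# Generic illustration (not the authors' exact formula): finite-population-
# corrected sample size for estimating an agreement proportion p within
# margin e at confidence level z. All parameter values are hypothetical.
import math

def reliability_sample_size(N, p=0.85, e=0.05, z=1.96):
    """N: total content units in the study; p: assumed agreement level;
    e: tolerated margin of error; z: critical value for the confidence level."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / N)              # finite-population correction
    return math.ceil(n)

# e.g., a study coding 1,000 items, assuming 85% true agreement,
# wanting the sample estimate within 5 points at 95% confidence:
print(reliability_sample_size(N=1000))
```

Note how the correction matters: for a small population of coded items the required reliability sample is a substantial fraction of the whole, which is why the article treats reliability estimation explicitly as a sampling problem.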
Riffe, D., Aust, C. F., & Lacy, S. R. (1993). The effectiveness of random, consecutive day and constructed week sampling in newspaper content analysis. Journalism Quarterly, 70 (1), 133-139.
This study compares 20 sets of samples at each of four different sizes, drawn using simple random, constructed week, and consecutive day sampling of newspaper content. Comparisons of sample efficiency, based on the percentage of sample means in each set of 20 falling within one or two standard errors of the population mean, show the superiority of constructed week sampling.
Thomas, S. (1994). Artifactual study in the analysis of culture: A defense of content analysis in a postmodern age. Communication Research, 21 (6), 683-697.
Although both modern and postmodern scholars have criticized the method of content analysis with allegations of reductionism and other epistemological limitations, it is argued here that these criticisms are ill founded. In building an argument for the validity of content analysis, the general value of artifact or text study is first considered.
Zollars, C. (1994). The perils of periodical indexes: Some problems in constructing samples for content analysis and culture indicators research. Communication Research, 21 (6), 698-714.
The author examines problems in using periodical indexes to construct research samples for content analysis and culture indicator research. Historical and idiosyncratic changes in index subject category headings and subheadings make article headings potentially misleading indicators. Index subject categories are not necessarily invalid as a result; nevertheless, the author discusses the need to test for category longevity, coherence, and consistency over time, and suggests the use of oversampling, cross-references, and other techniques as a means of correcting or compensating for hidden inaccuracies in classification, and as a means of constructing purposive samples for analytic comparisons.
Busch, Carol, Paul S. De Maret, Teresa Flynn, Rachel Kellum, Sheri Le, Brad Meyers, Matt Saunders, Robert White, and Mike Palmquist. (2005). Content Analysis. Writing@CSU. Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=61
Research article | Open access | Published: 17 September 2018
A content analysis of popular media reporting regarding increases in minimum ages of legal access for tobacco
Jocelyn Huey & Dorie E. Apollonio (ORCID: orcid.org/0000-0003-4694-0826)
BMC Public Health, volume 18, Article number: 1129 (2018)
In the late 20th century, US localities began increasing the minimum age of legal access (MLA) for tobacco from 18 to 21 years by enacting “Tobacco 21” ordinances. Although these policies have a strong evidence base and broad popular support, popular media coverage of tobacco control laws has not always been accurate. This study sought to determine if contemporaneous popular media reporting accurately reflected the scientific findings regarding increased tobacco MLAs.
We searched LexisNexis for popular media reports that (1) addressed proposed or enacted Tobacco 21 ordinances and were (2) published in English, (3) drawn from a US news source, and (4) written after January 2004. We conducted a content analysis for quality based on a validated measure of accuracy of reporting, the Index of Scientific Quality (ISQ), which allows assessment of articles by assigning scores ranging from 1 (lowest) to 5 (highest).
Searches yielded 378 articles; after screening for relevance and duplicates, 98 were included in the review. All studies identified through the keyword searches addressed Tobacco 21 policies. The average global score identifying the scientific quality of the articles was 2.98 of 5. Over three-quarters of the popular media articles addressing Tobacco 21 laws were written after a systematic review of these policies was released by the Institute of Medicine and approximately 4 in 10 cited findings from that review.
Popular media reports on Tobacco 21 laws demonstrated average overall quality and relied on both anecdotal and scientific evidence, in contrast to previous studies, which found that popular media reports on tobacco issues demonstrated low overall quality and relied primarily on anecdotal evidence. The systematic review of increased MLAs for tobacco written by the Institute of Medicine diffused quickly into popular reporting, suggesting that this type of evidence might improve research translation.
Peer Review reports
Tobacco use is the leading preventable cause of death in the United States, and the negative health consequences of tobacco use have been well established for decades [ 1 ]. In 2015, approximately 4.7 million middle and high school students in the US were current tobacco smokers [ 2 ]. Nine out of ten smokers begin smoking before age 18, and smoking behavior among young adults is predictive of smoking in later years [ 1 , 3 , 4 ]. Despite evidence of tobacco industry marketing toward youth and young adults, [ 5 ] policies to reduce access to tobacco for this group have been limited in scope [ 2 ].
In the late 20th century, localities in the United States instituted renewed efforts to increase the MLA for tobacco from 18 to 21 years, generally referred to as “Tobacco 21” laws [ 6 ]. These efforts resulted in a nearly 50% decrease in cigarette smoking rates among high-school students (13% to 7%) and a comparable decrease in store purchases of cigarettes (18% to 13%) [ 7 ]. In 2015 the Institute of Medicine projected that increasing the MLA for tobacco to 21 years would reduce adult smoking by 12% and prevent 223,000 premature deaths [ 4 ]. Tobacco 21 policies are popular: 70% of adults support raising the MLA for tobacco to 21 years, including a majority of adults in all demographic and smoking status categories [ 8 ]. However, despite clinical evidence and popular support, as of 2016 only two states (California and Hawaii) had increased their MLA for tobacco to 21 years; as of 2017, an additional three states (Maine, New Jersey, and Oregon) had done so [ 6 ].
Existing studies of research translation detail the process from the generation of research to its use by policymakers [ 9 , 10 ]. These studies have identified the importance of systematic reviews in translating evidence into policy, [ 11 , 12 , 13 , 14 ] and note that dissemination strategies that involve contact with policymakers are critical [ 15 , 16 , 17 , 18 ] because most policymakers are not trained to interpret scientific research or rewarded for doing so [ 19 , 20 ]. The nature of reporting affects public opinion, influences individual behavior, and plays a central role in the process of public health policy formation [ 21 , 22 ]. Although partisanship, ideology, and maintaining consistent voting records all factor into policymakers’ decisions, the extent of public support for proposed policies offers critical information in making decisions about whether to enact such changes [ 23 ].
Media misunderstanding of research findings is common [ 24 ]. Multiple studies report that journalists translate research evidence poorly, particularly during novel events [ 25 , 26 , 27 ]. Past studies suggest that the accuracy of research translation by journalists covering tobacco issues has been inconsistent [ 28 , 29 ]. The limited research on the scientific accuracy of popular reporting on tobacco has led to calls for additional research in this area [ 30 , 31 , 32 , 33 ].
To address this gap, we sought to assess the accuracy of popular media reporting on Tobacco 21 laws. The coverage of proposed increases in MLAs for tobacco offers particular insight in understanding research translation because it addresses two issues anticipated to affect the accuracy of popular media reports: novelty and systematic reviews. Tobacco 21 policies became salient over a limited time period; the issue first gained attention in the 21st century after the passage of a Tobacco 21 ordinance in Needham, Massachusetts in 2005 [ 7 ]. In March 2015, the US Institute of Medicine (IOM) published a systematic review of the effects of increasing MLAs for tobacco [ 4 ]. Following the publication of this report Hawaii passed a Tobacco 21 law in June 2015, and California passed a similar law in June 2016 [ 6 ]. Consistent with existing research, we hypothesized that (1) popular reporting on Tobacco 21 laws would rely heavily on anecdotal evidence; and (2) the publication of the IOM report would lead to higher quality popular media reports.
We conducted a content analysis of popular news articles that addressed increased MLAs for tobacco. We focused on articles in the public domain that were most likely to be easily found by individuals who were inexperienced with traditional academic research methods. To identify these reports, one of the authors (JH) searched the LexisNexis database for newspaper and magazine articles with the assistance of a university librarian. The search was conducted in May 2016 and relied on relevant keywords: “smoking” AND “tobacco” AND (“smoking age” OR “legal age” OR “minimum age” OR ((“teen age” OR “adolescent”) AND “tobacco control”) OR “tobacco 21”). We included articles from newspapers and newswires that were (1) published in English, (2) drawn from a US news source, and (3) written after January 2004. This start date was chosen because it was one year prior to the first local US implementation of a Tobacco 21 policy in the 21st century. We excluded duplicate articles and articles that assessed smoking cessation and other clean air policies.
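For readers reconstructing this kind of screening outside LexisNexis, the boolean query above can be approximated with a case-insensitive keyword filter. The sketch below is illustrative only: the function name and sample headlines are ours, and plain substring matching does not reproduce LexisNexis's actual query semantics.

```python
def matches_query(text: str) -> bool:
    """Approximate the study's LexisNexis boolean query with
    case-insensitive substring checks (illustrative only)."""
    t = text.lower()
    has = lambda phrase: phrase in t
    return (
        has("smoking")
        and has("tobacco")
        and (
            has("smoking age")
            or has("legal age")
            or has("minimum age")
            or ((has("teen age") or has("adolescent")) and has("tobacco control"))
            or has("tobacco 21")
        )
    )

# Invented example headlines
print(matches_query("City raises smoking age: tobacco 21 ordinance passes"))  # True
print(matches_query("A story about coffee prices"))  # False
```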
The following information was extracted from each article or website by one reviewer (JH):
Title of the article
Publication type (e.g., newspaper, magazine article, wire service stories)
We relied on a validated instrument created by Oxman et al., [ 34 ] the Index of Scientific Quality (ISQ), to assess the quality of popular media reports. The ISQ uses a five-point scale, with 1 corresponding to the lowest level of quality and 5 corresponding to the highest level of quality. A score of 4 or 5 indicates clear references to evidence, while a score of 2 or 3 represents partly or definitely unclear references to evidence. An ISQ score of 1 is assigned to criteria where the evidence base is potentially misleading. We modified the ISQ coding instrument to reflect outcomes relevant to Tobacco 21 laws. The applicability measure restricted the topic specifically to MLAs for tobacco; the validity measure relied both on specific terms (e.g., “prestigious” used as a marker for quality) and mention of systematic reviews; the magnitude measure included measures of health outcomes related to tobacco; the consequences measure considered health outcomes specific to tobacco such as smoking rates and costs. Details regarding the coding of each content area are provided in the Appendix. In addition, the instrument was expanded so that both coders made a judgment regarding whether the article, taken overall, claimed that increasing the tobacco sales age to 21 was effective, ineffective, or took no position. We used the IOM report as a gold standard for assessing reporting of relevant research.
The instrument covered the following content areas:
Applicability: Describes whether or not the author clearly refers to the affected population (21 and under)
Opinions versus Facts: Describes whether or not facts are clearly distinguished from opinions
Validity: Describes whether or not the assessment of the credibility (validity) of the evidence is clear and well-founded (not misleading)
Magnitude: Describes whether or not the strength or magnitude of the findings (effects on smoking rate, health, or costs) that are the main focus of the article are clearly reported
Precision: Describes whether or not the author provides a clear and well-founded (not misleading) assessment of the precision of any estimates that are reported or of the probability that any of the reported findings might be due to chance
Consistency: Describes whether or not the consistency of the evidence (between studies) is considered and whether the assessment is well-founded (not misleading)
Consequences: Describes whether or not all of the important consequences (youth and adult smoking rates, deaths from tobacco use, health care costs, sales and government revenue) of concern relative to the central topic of the report are identified
Global: Describes the overall scientific quality of the report
The analysis of article quality relied on the mean quality scores in each category identified by the ISQ, with subgroup analyses conducted for articles published before and after the release of the IOM report. In coding for content, both authors reviewed each article using the instrument, working independently. Cohen’s κ was run to assess interrater reliability for each ISQ quality criterion. κ was interpreted according to Altman’s (1991) guidelines, in which a κ score of 0.00–0.20 indicates poor agreement, 0.21–0.40 indicates fair agreement, 0.41–0.60 indicates moderate agreement, 0.61–0.80 indicates good agreement, and 0.81–1.00 indicates very good agreement [ 35 ]. Agreement was good for applicability and consistency; moderate for consequences and global; fair for opinions versus facts, validity, and precision; and poor for magnitude. Coding discrepancies in all categories were discussed and resolved by consensus.
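As a sketch of the interrater-reliability step described above, the following computes Cohen's κ for two coders' categorical ratings on one criterion and maps the result to Altman's interpretation bands. The ratings are invented for illustration; they are not the study's data, and a published analysis would typically use a statistics package.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded identically
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

def altman_band(kappa):
    """Altman's (1991) verbal interpretation of kappa."""
    if kappa <= 0.20:
        return "poor"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "good"
    return "very good"

# Invented 1-5 ISQ ratings from two coders on ten articles
coder_a = [5, 4, 4, 3, 2, 5, 1, 3, 4, 2]
coder_b = [5, 4, 3, 3, 2, 5, 2, 3, 4, 2]
k = cohens_kappa(coder_a, coder_b)
print(round(k, 2), altman_band(k))  # 0.74 good
```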
The initial database searches identified 378 popular media articles. One of the authors (JH) screened these articles for relevance. Eighty-five articles were identified as duplicates based on title, word count, and preview of the first three lines, and were excluded from the analysis. An additional 162 articles were removed from analysis because they did not meet the inclusion criteria based on title and preview of the first three lines. The remaining 134 articles were eligible for full-text review by both authors. After reading these articles in full, an additional 36 were identified to be either duplicates or to not meet the inclusion criteria by consensus of both authors, leaving 98 articles included in the final analysis. The screening process is outlined in Fig. 1 .
Flow of included articles regarding Tobacco 21 laws
Publication dates ranged from 2006 to 2016. Eighty percent of the articles did not take a position on Tobacco 21 laws; of the remaining articles, 16% supported the policies and 4% opposed them, as shown in Table 1 . The majority of articles (82%) were published after 2015, with only 18 articles published before 2015. An increase in reporting on Tobacco 21 was correlated with the introduction of Hawaii Senate Bill 1030 in January 2015, which first proposed to raise the state’s MLA for tobacco to 21 years.
Quality scores by content area
Table 2 provides the mean values and SDs for each of the ISQ quality criteria.
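The per-criterion summaries of the kind reported in Table 2 are simple means and standard deviations of 1–5 scores across articles. A minimal sketch with invented scores (not the study's data):

```python
from statistics import mean, stdev

# Invented 1-5 ISQ scores for two criteria across a handful of articles
scores = {
    "applicability": [5, 5, 5, 5],
    "precision": [1, 1, 1, 2],
}

for criterion, vals in scores.items():
    print(f"{criterion}: mean={mean(vals):.2f}, SD={stdev(vals):.2f}")
```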
Each article clearly stated that it considered Tobacco 21 policies so applicability received an average score of 5, the highest ranking for all criteria. Although the search strategy was designed to identify articles addressing Tobacco 21 laws, previous research has found that popular media articles do not always accurately reference policies that are putatively being reported or assessed.
Opinion v. facts
Distinction of opinion v. facts averaged 3.92, the second highest ranking for all criteria, indicating that articles were more evidence-based than opinion-based. Of the total 98 articles, 35 (36%) received a 5, indicating all factual claims were quoted or cited, and two received a 1, which meant opinions were offered as facts without qualification.
Validity represented the journalist’s assessment of the quality of evidence used in the article. A score of 1 indicates that research was misrepresented, 2 that research was not referenced, 3 that studies were presented without discussion of their quality, 4 that the article made unqualified claims, and 5 that there was some discussion about why a study was “good” such as a reference to the weight of evidence. The articles scored an average of 2.83 for validity, a score representing average quality.
The magnitude of findings, which referred to the extent to which claims about effects were anchored with data, averaged 3.84, the third highest quality ranking across criteria, suggesting that the articles made both general and specific claims about the potential effects of a Tobacco 21 policy. A score of 1 indicated that effects either were not mentioned or were misrepresented, 2 that effects were implied but not explicitly mentioned, 3 that effects were discussed in general terms, 4 that exact figures assessing outcomes were mixed in with general claims, and 5 that the article relied on exact percentages or estimates of the numbers of lives saved.
Assessment of the precision of results due to study design scored an average of 1.08, the lowest ranking for all criteria; a score of 1 indicated there was no indication of whether results were due to chance, 3 that there was some effort to link study design to credibility, and 5 that there was an explanation of study design.
Consistency of evidence between studies, referencing the number of studies discussed and the accuracy with which they represented the state of contemporaneous research, scored 2.79 for all articles, suggesting average quality. Articles that did not cite a specific study or that used a potentially misleading source of data were assigned a score of 1, while discussions of one, two, or three or more studies were scored 2, 3, and 4 respectively. Articles that referred to a systematic review, such as the IOM report, were scored 5.
We tallied the number of consequences relevant to Tobacco 21 that were mentioned in articles, specifically potential effects on smoking rates, deaths from tobacco use, health care costs, and sales of tobacco and/or tax receipts. On average, 2.26 consequences were listed, with 12 articles listing 4 or more potential effects of the policy and 19 articles listing either one potential effect or none.
The average global score identifying the scientific quality of the articles was 2.98 of a potential 5, representing average quality. Misleading articles were scored as 1, those that treated evidence equally with opinion scored 2, those that included some opinion but gave more weight to evidence scored 3, those that presented claims that were evidence focused but not explained scored 4, and articles in which major claims were supported by evidence and explained scored 5.
Before and after the IOM report
The 2015 report by the Institute of Medicine found that increasing the MLA for tobacco products would prevent or delay use of such products by adolescents, improve population health, and reduce tobacco-related deaths. Table 2 also provides a comparison of the mean values and SDs prior to and after the IOM report. Our review found that 76 (78%) articles were written after the report was released; of these, 43% cited findings from the IOM’s report. After the release of the IOM report the quality scores for consistency, which represented the number of studies discussed and their representation of current research, improved from 1.95 to 3.04; this difference was statistically significant ( p < 0.001). Scores also increased for magnitude and consequences but the differences were not statistically significant. Scores decreased for opinions v. facts, validity, and precision; these differences were also not statistically significant.
Nature of arguments
Proponents and opponents of Tobacco 21 policies included in the articles used different types of arguments, with proponents focused on outcomes and opponents focused on ideological claims, as shown in Table 3 .
Supporters of Tobacco 21 policies primarily referenced scientific studies that focused on the prevalence of smoking and the health consequences of increasing the MLA. Among the five major consequences analyzed, the impact on youth smoking rates (80% of articles) and deaths due to smoking (51%) were the most frequently mentioned effects. Articles also referred to effects on adult smoking rates (43%), health care costs (35%), and revenue (16%); however, these issues were discussed less frequently. Approximately 7% of articles cited statistics that demonstrated strong public support for the policies, particularly among current or former smokers. Supportive claims were typically made by public health professionals or legislators speaking on tobacco-related issues, rather than the general public.
Consistent with past arguments against stronger tobacco control policies, the concerns expressed by opponents of Tobacco 21 policies primarily focused on individuals’ rights to make their own decisions rather than on research findings regarding the effects of the policies or on tobacco industry marketing to youth. About 38% of articles claimed that Tobacco 21 laws would impede individual decision making; opponents argued that increasing the MLA was tantamount to creating a “nanny state” that interfered with the decisions of young adults. These claims often relied on extended analogy; 33% of articles stated that if people were old enough to vote and enlist in the military, they were old enough to smoke. In 15% of articles, opponents of the policies speculated that despite research showing that Tobacco 21 policies had resulted in reduced tobacco use, young people might circumvent the law by purchasing tobacco in neighboring jurisdictions with lower MLAs or obtain tobacco from family and friends. In 15% of the articles, critics of the policies attempted to shift focus from the potential of an increased MLA to save lives and reduce health care costs by making counterclaims that such policies would have a negative financial impact on small businesses and government by reducing tobacco sales and tobacco tax revenue.
This study provides the first assessment of popular media coverage addressing laws that increased tobacco MLAs to 21 years. Consistent with previous studies, we hypothesized that popular reporting would demonstrate low overall quality and rely on anecdotal evidence. Instead, we found that media reports on this topic were of average quality and relied on both anecdotal and scientific evidence. Our content analysis found that applicability, opinion v. facts, and magnitude were the highest scoring categories, indicating that articles were focused on Tobacco 21 policies and mostly reported facts and figures assessing their effects. The views of public health advocates were better represented than those of the tobacco industry. However, when reporting on claims made by opponents of the policies, articles disproportionately relied on their anecdotes and speculation, rather than research findings. We found that measures of precision were consistently weak, suggesting that the concept of statistical significance and the role of chance remain difficult to communicate through popular media reports. This finding is consistent with previous research; one study suggested that reporters preferentially cover medical research with weaker methodology [ 27 ].
We also hypothesized that publication of the IOM report would lead to higher quality popular media reports. The scores for consistency showed a statistically significant increase in quality, suggesting that journalists recognize the value of systematic reviews over individual studies. These findings also appeared in claims made by policy advocates; in contrast, policy opponents lacked comparable evidence and relied on ideological or anecdotal claims. Our results are consistent with previous research that attempted to train consumer advocates to better understand and communicate research. The majority of advocates believed that systematic reviews were more reliable than individual studies after training, even though less than half stated that they were comfortable with analyzing research methods and designs [ 36 ]. The differences in scores for other measured categories of scientific quality were not statistically significant.
This research has limitations. We may not have identified all published articles in LexisNexis, given that our inclusion criteria limited our selection to articles published from January 2004 to March 2016. Data collection stopped shortly after the passage of the California law, making it possible that later articles were missed. Coverage may increase again if additional states propose and enact Tobacco 21 laws. In addition, we focused on written media, and did not assess reporting in television, radio, or social media. Finally, our findings with respect to Tobacco 21 laws may not be generalizable to other aspects of tobacco control.
Our findings provide new evidence about translation of clinical research into community settings, and help fill a gap in understanding the accuracy of media reports on tobacco issues. Consistent with the continued concern about the quality of popular media reporting on scientific research, we found that reporting on Tobacco 21 policies was of average quality and inconsistently cited data from scientific studies. Our results also show that while a systematic review addressing this topic diffused relatively quickly into popular reporting, it was not always referenced. Nonetheless these findings suggest that systematic reviews appear to improve popular media reporting with respect to communicating the overall state of research evidence. Development of policy-relevant systematic reviews may be a useful strategy to help reduce tobacco-related disease by communicating information about research evidence to policymakers and the public.
Institute of Medicine
Index of Scientific Quality
Minimum age of legal access
Centers for Disease Control and Prevention. The health consequences of smoking—50 years of progress: a report of the Surgeon General. Atlanta: US Department of Health and Human Services; 2014.
Singh T. Tobacco use among middle and high school students—United States, 2011–2015. MMWR Morb Mortal Wkly Rep. 2016;65:361-367.
Apollonio DE, Glantz SA. Minimum ages of legal access for tobacco in the United States from 1863 to 2015. Am J Public Health. 2016;106(7):1200.
Institute of Medicine. In: Bonnie RJ, Stratton K, Kwan LY, editors. Public Health Implications of Raising the Minimum Age of Legal Access to Tobacco Products. Washington: National Academies Press (US); 2015.
Ling PM, Glantz SA. Why and how the tobacco industry sells cigarettes to young adults: evidence from industry documents. Am J Public Health. 2002;92(6):908.
Tobacco Twenty-One [ https://tobacco21.org/ ]. Accessed 5 Oct 2016.
Kessel Schneider S, Buka SL, Dash K, Winickoff JP, O'Donnell L. Community reductions in youth smoking after raising the minimum tobacco sales age to 21. Tob Control. 2016;25(3):355.
King BA, Jama AO, Marynak KL, Promoff GR. Attitudes toward raising the minimum age of Sale for tobacco among U.S. adults. Am J Prev Med. 2015;49(4):583–8.
Lavis JN, Posada FB, Haines A, Osei E. Use of research to inform public policymaking. Lancet. 2004;364(9445):1615.
Lavis JN, Oxman AD, Lewin S, Fretheim A. SUPPORT Tools for evidence-informed health Policymaking (STP). Health Res Policy Syst. 2009;7 Suppl 1:I1.
Cartwright N, Hardie J. Evidence-based policy: a practical guide to doing it better. Oxford: Oxford University Press; 2012.
Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290(12):1624–32.
Hunink MGM, Weinstein MC, Wittenberg E, Drummond MF, Pliskin JS, Wong JB, Glasziou PP. Decision making in health and medicine: integrating evidence and values. Cambridge: Cambridge University Press; 2014.
Mays N, Pope C, Popay J. Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. J Health Serv Res Policy. 2005;10(Suppl 1):6–20.
Jacobs JA, Jones E, Gabella BA, Spring B, Brownson RC. Tools for implementing an evidence-based approach in public health practice. Prev Chronic Dis. 2012;9:E116.
Brownson RC, Fielding JE, Maylahn CM. Evidence-based decision making to improve public health practice. Front Public Health Serv Syst Res. 2013;2(2):2.
Lavis JN, Ross SE, Hurley JE. Examining the role of health services research in public policymaking. Milbank Q. 2002;80(1):125.
Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q. 2003;81(2):221.
Elliott H, Popay J. How are policy makers using evidence? Models of research utilisation and local NHS policy making. J Epidemiol Community Health. 2000;54(6):461–8.
Innvær S, Vist G, Trommald M, Oxman A. Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7(4):239.
Lima JC, Siegel M. The tobacco settlement: an analysis of newspaper coverage of a national policy debate, 1997-98. Tob Control. 1999;8(3):247–53.
Happer C, Philo G. The role of the media in the construction of public belief and social change. J Soc Pol Psychol. 2013;1(1):321.
Jamieson AM. The messenger as policy maker: thinking about the press and policy networks in the Washington community. Democratization. 1996;3(1):114.
Schwitzer G. How the media left the evidence out in the cold. BMJ. 2003;326(7403):1403.
Lee ST, Basnyat I. From press release to news: mapping the framing of the 2009 H1N1 a influenza pandemic. Health Commun. 2013;28(2):119–32.
Hooker C, King C, Leask J. Journalists' views about reporting avian influenza and a potential pandemic: a qualitative study. Influenza Other Respir Viruses. 2012;6(3):224–9.
Selvaraj S, Borkar DS, Prasad V. Media coverage of medical journals: do the best articles make the news? PLoS One. 2014;9(1):e85355.
Grilli R, Ramsay C, Minozzi S. Mass media interventions: effects on health services utilization. Cochrane Database Syst Rev. 2001;4:CD000389.
Winsten JA. Science and the media: the boundaries of truth. Health Aff (Project Hope). 1985;4(1):5.
Long M, Slater MD, Lysengen L. US news media coverage of tobacco control issues. Tob Control. 2006;15(5):367–72.
Nelson DE, Pederson LL, Mowery P, Bailey S, Sevilimedu V, London J, Babb S, Pechacek T. Trends in US newspaper and television coverage of tobacco. Tob Control. 2015;24(1):94–9.
Krauth D, Apollonio D. Accuracy of popular media reporting on tobacco cessation therapy in substance abuse and mental health populations. BMJ Open. 2015;5(3):e007169.
Eckler P, Rodgers S, Everett K. Characteristics of community newspaper coverage of tobacco control and its relationship to the passage of tobacco ordinances. J Community Health. 2016;41(5):953–61.
Oxman AD, Guyatt GH, Cook DJ, Jaeschke R, Heddle N, Keller J. An index of scientific quality for health reports in the lay press. J Clin Epidemiol. 1993;46(9):987.
Altman D. Practical statistics for medical research. New York: Chapman & Hall/CRC Press; 1990.
Apollonio DE, Bero LA. Challenges to generating evidence-informed policy and the role of systematic reviews and (perceived) conflicts of interest. J Commun Healthcare. 2016;9(2):135.
The authors acknowledge Evans Whitaker for assistance with the literature search, and Nancy Hessol for reading the manuscript and suggesting revisions.
This work was supported by NIH CA140236, California TRDRP 26IR-0014, and the UCSF Research Allocation Program. The funders played no role in the conduct of the research or preparation of the manuscript.
Availability of data and materials
The data supporting the conclusions of this article are publicly available and cited in the references section.
Authors and Affiliations
Department of Clinical Pharmacy, University of California, 3333 California Street, Suite 420, San Francisco, CA, 94143-0613, USA
Jocelyn Huey & Dorie E. Apollonio
Both authors conceived and designed the paper, interpreted the results, reviewed and revised the manuscript in preparation for publication, and read and approved the final manuscript. JH conducted the literature search and completed the first draft of the manuscript.
Correspondence to Dorie E. Apollonio .
Ethics approval and consent to participate.
Not applicable to this research design.
Consent for publication
Competing interests

The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article.
Huey, J., Apollonio, D.E. A content analysis of popular media reporting regarding increases in minimum ages of legal access for tobacco. BMC Public Health 18 , 1129 (2018). https://doi.org/10.1186/s12889-018-6020-6
Received : 24 January 2018
Accepted : 06 September 2018
Published : 17 September 2018
DOI : https://doi.org/10.1186/s12889-018-6020-6
- Popular Media Reports
- Tobacco Issues
- Translation Research
- Tobacco Industry Marketing
- Adult Smoking Rates
BMC Public Health
Standardisierte Inhaltsanalyse in der Kommunikationswissenschaft – Standardized Content Analysis in Communication Research, pp 67–76
Content Analysis in the Research on Reporting Styles
- Miriam Klein
- Open Access
- First Online: 25 September 2022
The content analysis of reporting styles enables a rough characterization of the journalistic content with regard to the news format (news stories, commentaries, feature journalism, interviews) as well as an evaluation of the journalistic style in terms of content and language. This latter question of how content is presented encompasses many research traditions and refers, for example, to the objectivity norm, horse race coverage, storytelling or news softening. The present chapter provides a brief overview of news formats and content-related or stylistic journalistic reporting styles and discusses possible further research questions and designs as well as the contribution of automated content analysis in this field.
- Reporting Styles
- Horse Race Coverage
- News Softening
Journalistic reporting styles refer above all to how journalistic content is presented. On the one hand, reporting styles are the subject of practical textbooks which contain instructions for future journalists on how to apply certain reporting styles in practice (for Germany, e.g., Mast 2018 ; Ruß-Mohl 2016 ). On the other hand, reporting styles are the subject of scientific studies—the present chapter is based on this area.
From the perspective of democratic theory, the motivation to explore reporting styles is strongly normative. Since the media have the task of providing citizens with information, any deviation from factual or neutral reporting is usually seen as a potential threat to well-informed citizens and thus to democracy. On this interpretation, trends such as news softening or horse race/game framing coverage, driven by economic constraints and an increasing orientation towards the audience (van Aelst et al. 2017; see also Haim 2019), are increasingly being observed. Furthermore, it is feared that the growing importance of social media for journalism (Newman 2020) will reinforce these trends (e.g., Lischka 2021; Steiner 2016; Welbers and Opgenhaffen 2019), resulting in an even stronger audience orientation and adaptation to the so-called “social media logic” (van Dijck and Poell 2013). More generally, digital environments create completely new technological conditions, which also affect reporting styles. This includes, inter alia, more visualization, but also more live reporting (Huxford 2007) and the integration of more multimedia and interactive elements (Haim 2019).
2 Main Constructs
“Reporting styles” as a research object is multifaceted and can be assigned to different research areas—for example, research on tabloidization or research on objectivity in journalism. The most important fields of research are outlined in more detail in this chapter. When discussing research on reporting styles, a distinction is first made between (1) formal reporting styles and (2) content-related or stylistic reporting styles (i.e., the use of certain linguistic means or the highlighting of certain aspects).
2.1 Formal Reporting Styles
The identification of the formal reporting style, that is, the form of news presentation or journalistic genre, is one of the more common categories within codebooks for the study of news journalism. Separating news stories and commentaries is useful, for example, for research on the norm of separating news and opinion (Schönbach 1977). Several reporting styles can be distinguished:
News story. A news story is the “standard format” (Weischenberg and Birkner 2008, p. 3277) in news journalism. The news story focuses only on the most important facts and presents them in order of importance. This form is called the “inverted pyramid” (Pöttker 2005, p. 51; Weischenberg and Birkner 2008, p. 3278) and has its origins in nineteenth-century American journalism (Pöttker 2005, p. 52). This way of news writing makes it easy to shorten the story from the end (e.g., when taking over agency material).
Commentary. A commentary is a “genre of journalism that provides interpretations and opinions on current events, rather than factual reporting” (Djerf-Pierre 2008, p. 566). While news stories primarily have an informative function, commentaries play an important role in the formation of public opinion (Djerf-Pierre 2008, p. 567). Apart from the classic commentary, the journalist's opinion can also be found in similar news formats such as editorials or columns (Mast 2018). However, the occurrence of commentary and the separation of news and opinion vary between countries (Djerf-Pierre 2008, p. 567): while US journalists apply a more neutral style, journalists in southern Europe follow a more advocacy-oriented tradition and blend commentary more freely with factual news coverage.
Feature journalism. Feature journalism is a journalistic format that differs from the classic news format in that it does not use the inverted pyramid but is structured chronologically (Steensen 2009). Furthermore, it often contains subjective descriptions and reflections and, as it often portrays people, is rather personal and emotional (Steensen 2009). Feature journalism is therefore a mixture of opinion-based and objective news formats.
Interview. The interview is a journalistic format consisting of questions (interviewer, journalist) and answers (interviewee, most often public figures such as politicians or experts) (Clayman 2008). The first printed interview appeared in 1859 in the New York Tribune (Schudson 1995, pp. 73–74). While interviews were initially criticized as “artificial” and “intrusive” (Clayman 2008; Schudson 1994), they have become very important today. With the help of increasingly direct and aggressive questions, journalists also manage to prevent interviewees from using the format for mere self-presentation (Clayman 2008; Schudson 1994).
Codebooks on news journalism generally distinguish between these four news formats (e.g., Kösters 2020; see also Magin 2006 with some additional news formats). Other codebooks distinguish between factual and opinionated news formats (e.g., Seethaler 2015). This distinction has become more difficult, however, as factual and opinionated formats have increasingly blurred (Schäfer-Hock 2018)—not only in private television and tabloid media, but also in the quality press, where background reporting and commentary often merge (Ruß-Mohl 2016, p. 68; see also the trend towards more “interpretive journalism”, Salgado and Strömbäck 2012).
2.2 Content-Related or Stylistic Reporting Styles
Apart from the distinction between journalistic news formats, there are several reporting styles that concern specific ways in which news is covered. This chapter takes up some of the more common concepts and briefly outlines them.
Objectivity. Objectivity is a journalistic goal (Cunningham 2003) which is both difficult to achieve and difficult to measure (Neuberger 2017; Ruß-Mohl 2016). However, it is an important aspect of journalistic professionalism and comprises criteria such as neutrality and the negation of journalistic subjectivity, the fair representation of opposing opinions, but also truth-telling, the provision of all relevant information, and transparency (Bentele 1988; Donsbach and Klett 1993; Hackett 2008; Ruß-Mohl 2016). Many studies in this field of research, often in the context of news performance, analyze how neutral/impartial or subjective/partisan journalists cover specific issues, thereby referring to the concept of impartiality (Schönhagen 1998).
Horse race coverage/game framing. Horse race coverage or game framing refers to political reporting that uses sports metaphors and treats politics as a race (of ideas, of candidates) rather than focusing on factual content (Brettschneider 2008, p. 2137). This reporting style is a specific feature of election campaign reporting: journalists focus on candidates instead of topics as well as on polls, often over-interpreting the smallest changes in popularity ratings and trying to forecast the results (Brettschneider 2008; Patterson 2005). While the horse race reporting style is often criticized for trivializing election campaigns, it is also argued that it helps to increase public interest (Broh 1980, p. 515). Studies show that horse race reporting is particularly common in coverage of American election campaigns (Brettschneider 1996; Farnsworth and Lichter 2003).
Storytelling. Storytelling describes a reporting style in which news is enriched with narrative elements to make it more interesting, meaningful and attractive for the audience (Boesman and Costera Meijer 2018, p. 997; Früh 2014, p. 93). While some journalists consider storytelling the “opposite of good journalism”, others see it as a “toolkit” to “present the facts in a good way” (Boesman and Costera Meijer 2018, pp. 1001–1002). Storytelling is also an important factor in online journalism (e.g., long-form journalism with many multimedia elements) (Jacobson et al. 2016; Meadows 2003).
News softening. News softening, or tabloidization, describes the adoption of tabloid standards, particularly by elite or so-called “quality” media (Esser 1999; Lefkowitz 2018; Magin 2019). It is often seen as a result of increased competitive and economic pressure and the struggle for public attention (Magin 2019; Skovsgaard 2014). However, news softening is rather a conglomerate of several concepts (for an overview see Otto et al. 2017; Reinemann et al. 2012) and therefore describes several reporting styles that are typical of tabloid media. Reinemann et al. (2012) summarize them in three sub-dimensions. The first sub-dimension refers to the topic of a news item: soft (entertainment, crime, etc.) vs. hard (mainly politics) news. The other two sub-dimensions refer to specific reporting styles. The focus dimension denotes the accentuation of certain aspects within an article, for example the focus on the individual (soft news) vs. on societal relevance, or the difference between episodic framing (soft news; focus on the event itself) and thematic framing (focus on the thematic context). The style dimension is concerned with verbal and (audio)visual stylistic elements. The concrete indicators for news softening in this sub-dimension vary across studies; however, emotionalization plays a particularly important role in most of them (Reinemann et al. 2012). This includes the reporting on or visual presentation of emotions (e.g., showing crying or laughing people), but also affective wording (see also sentiment analysis, next chapter). The latter is achieved through certain linguistic elements, such as emotionalizing metaphors, a choppy sentence structure, dramatizing or exaggerating adjectives, etc. Another important feature is the appearance of the journalists' points of view (personal reporting; Reinemann et al. 2012; cf. the objectivity norm described above).
Apart from this, some studies also treat colloquial or loose language as a characteristic of softened news (e.g., Leidenberger 2015; Steiner 2016), or use narrative presentation (e.g., Donsbach and Büttner 2005) or an emphasis on conflict (e.g., Donsbach and Büttner 2005; Leidenberger 2015) as indicators of softened news. Since the debate originally concerned the adoption of tabloid news standards, analyses traditionally focus on newspapers (Esser 1999; Lefkowitz 2018; Magin 2019). However, research has extended to other types of media such as television (e.g., Donsbach and Büttner 2005; Grabe et al. 2001; Vettehen et al. 2008), online media outlets (Gran 2015; Karlsson 2016), social media (Lischka and Werning 2017; Steiner 2016), and cross-media settings (Reinemann et al. 2016).
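To make the structure of such an instrument concrete, the three sub-dimensions of Reinemann et al. (2012) and the additional indicators above could be translated into a codebook fragment along the following lines. This is only an illustrative sketch: the variable names and value sets are assumptions, not taken from any published instrument.

```python
# Hypothetical codebook fragment for news softening, loosely following the
# topic / focus / style sub-dimensions of Reinemann et al. (2012).
# Variable names and value sets are illustrative assumptions.
CODEBOOK = {
    "topic": {"values": ["hard", "soft"],
              "note": "mainly politics = hard; entertainment, crime etc. = soft"},
    "focus_individual": {"values": ["individual", "societal"],
                         "note": "focus on individuals vs. societal relevance"},
    "focus_framing": {"values": ["episodic", "thematic"],
                      "note": "event itself vs. thematic context"},
    "style_emotion": {"values": [0, 1],
                      "note": "affective wording / depicted emotions present"},
    "style_personal": {"values": [0, 1],
                       "note": "journalist's own point of view appears"},
}

def valid_coding(coding, codebook=CODEBOOK):
    """Check that a coded article uses only defined variables and values."""
    return all(k in codebook and v in codebook[k]["values"]
               for k, v in coding.items())

# A coder's (partial) decisions for one article.
ok = valid_coding({"topic": "soft", "focus_framing": "episodic", "style_emotion": 1})
```

Keeping the value sets explicit like this makes it straightforward to validate manual codings before they are reused, for instance as training data for automated methods.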
3 New Research Designs and Combination of Methods
So far, the concepts and indicators outlined above have usually been measured using manual content analysis (e.g., Donsbach and Büttner 2005; Magin 2019; Seethaler 2015). However, as in other research areas, first steps towards automated analysis are being taken. Boumans and Trilling (2016) give an overview of different approaches, ranging from strongly inductive (unsupervised machine learning) to strongly deductive (dictionary-based methods) orientations. For each approach, they present examples from journalism research, including studies on reporting styles.
One of these examples is sentiment analysis. This analytical approach belongs to the field of computational linguistics and “aims at identifying and classifying subjective language” (van Atteveldt et al. 2008, p. 78). While most research in this field is based on a fixed list of words (dictionary-based approach) (Boumans and Trilling 2016; van Atteveldt et al. 2008), machine-learning approaches additionally help to analyze the context in which specific words appear (van Atteveldt et al. 2008). With regard to a similar research question, Welbers and Opgenhaffen (2019) also choose a computer-based approach: they use a lexicon of subjective adjectives and a lexicon of emoticons to investigate to what extent subjective language appears in status messages, headlines and leads of journalistic Facebook posts. Correspondingly, in her study on the tabloidization of German and Austrian elite newspapers, Magin (2019) examines the occurrence of emotional terms. She bases her study on the Berlin Affective Word List Reloaded (Võ et al. 2009), whose terms had previously been rated by 200 people for valence and arousal strength. However, computer-based analyses of reporting styles are not limited to the identification of individual subjective terms. Based on machine learning, such analyses can identify more complex structures and thus, for example, investigate framing (e.g., Burscher et al. 2014).
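The core logic of a dictionary-based measure of affective wording can be sketched in a few lines. The word list below is a toy placeholder, not the BAWL-R or any published lexicon, and real studies would of course use validated lexicons and more careful preprocessing.

```python
# Minimal sketch of a dictionary-based measure of affective wording.
# The mini-lexicon is an illustrative assumption, not a published word list.
import re

# Hypothetical affective terms with valence scores in [-1, 1].
AFFECT_LEXICON = {
    "disaster": -0.9, "chaos": -0.8, "fear": -0.7,
    "triumph": 0.8, "hope": 0.6, "victory": 0.7,
}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-zäöüß]+", text.lower())

def affective_score(text):
    """Return the share of affective tokens and their mean valence."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0, 0.0
    hits = [AFFECT_LEXICON[t] for t in tokens if t in AFFECT_LEXICON]
    share = len(hits) / len(tokens)
    mean_valence = sum(hits) / len(hits) if hits else 0.0
    return share, mean_valence

share, valence = affective_score("Chaos and fear after the vote, but hope remains.")
```

The share of affective tokens could serve as an emotionalization indicator per article, while the mean valence corresponds to the positive/negative dimension of classic sentiment analysis.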
Research on reporting styles also benefits from combining findings from content analyses with other methods. Mixed-methods designs can, for example, help to identify journalists' motives or strategies for using specific reporting styles, or to determine the effects on the audience more precisely. For example, Glogger (2019) uses an online survey to examine the extent to which journalists' role expectations affect the use of the soft news style. In another study, Lischka (2021) analyzes reporting styles used within journalistic news posts on Facebook, based on qualitative interviews and a quantitative survey with social media editors from Finland and Switzerland. Furthermore, Grabe et al. (2000) use an experiment to investigate the effect of tabloid reporting style on, for example, recipients' memory.
4 Research Desiderata
Due to increasing commercialization and competitive pressure, audience orientation seems more important than ever (Haim 2019). For this reason, some authors fear that news media are increasingly focusing on how news is presented (reporting styles) instead of the content of news, which could be harmful to democracy (Blumler and Gurevitch 1995; Sparks 2000). Social media in particular is criticized for changing journalistic practices. However, it is still unclear to what extent news media adapt to the social media logic (van Dijck and Poell 2013) and neglect professional standards for the sake of attention-oriented reporting styles. With regard to news softening, initial studies (e.g., Lischka 2021; Steiner 2016; Welbers and Opgenhaffen 2019) indicate that there is no complete departure from professional standards. However, future research should pay more attention to how news media adapt to communicative developments such as the increased importance of social media for news consumption. In addition, more studies should investigate the extent to which criticism of certain reporting styles is justified and what positive effects these reporting styles can have (e.g., Bernhard 2012; Frey 2014).
Furthermore, outlining the different concepts and indicators of reporting styles in this chapter has shown that journalistic reporting styles are very complex and thus not easy to measure. If researchers want to apply automated methods (e.g., machine learning) to enlarge their data sets, they may first need a sufficient amount of (manually coded) training data (see, e.g., Burscher et al. 2014 on the importance of the amount of training material for classifier performance). For this reason, it is important not only that researchers share their data (see Dienlin et al. 2021 for the call for open science), but also that they use the same instruments (see, e.g., Reinemann et al. 2012, p. 225 on the problem of “conceptual fuzziness” with respect to news softening) so that their data can be used by other researchers.
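The role of manually coded material as training data can be illustrated with a deliberately tiny supervised classifier. Everything here is a toy assumption: the four "coded" articles, the hard/soft labels, and the bag-of-words Naive Bayes model stand in for the thousands of coded items and more sophisticated classifiers (e.g., those compared by Burscher et al. 2014) that real studies require.

```python
# Toy sketch: manually coded articles as training data for a supervised
# hard/soft news classifier (add-one smoothed multinomial Naive Bayes).
# Training examples and labels are illustrative assumptions.
import math
from collections import Counter

TRAIN = [
    ("parliament passes budget law", "hard"),
    ("minister announces tax reform", "hard"),
    ("celebrity wedding shocks fans", "soft"),
    ("royal baby photos delight readers", "soft"),
]

def train_nb(examples):
    """Collect per-class word counts and class frequencies (priors)."""
    counts, priors = {}, Counter()
    for text, label in examples:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.split())
    return counts, priors

def classify(text, counts, priors):
    """Return the class with the highest smoothed log probability."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(priors.values())
    best, best_lp = None, -math.inf
    for label, prior in priors.items():
        total = sum(counts[label].values())
        lp = math.log(prior / n_docs)
        for w in text.split():
            # Add-one smoothing keeps unseen words from zeroing the score.
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, priors = train_nb(TRAIN)
label = classify("parliament debates budget", counts, priors)
```

The sketch also makes the data-sharing point tangible: the classifier is only as good as the coded examples it is trained on, so shared data coded with comparable instruments directly enlarges the usable training material.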
Relevant Variables in DOCA—Database of Variables for Content Analysis
Formal reporting style: https://doi.org/10.34778/2r
Soft news/tabloidization: https://doi.org/10.34778/2t
Bentele, G. (1988). Wie objektiv können Journalisten sein? [How objective can journalists be?]. In L. Erbring, S. Ruß-Mohl, B. Seewald, & B. Sösemann (Eds.), Medien ohne Moral: Variationen über Journalismus und Ethik (pp. 196–225). Berlin: Argon.
Bernhard, U. (2012). Infotainment in der Zeitung: Der Einfluss unterhaltungsorientierter Gestaltungsmittel auf die Wahrnehmung und Verarbeitung politischer Informationen [Infotainment in the newspaper. The influence of entertainment-oriented style elements on the perception and processing of political information]. Baden-Baden: Nomos.
Blumler, J. G., & Gurevitch, M. (1995). The crisis of public communication . London, New York: Routledge.
Boesman, J., & Costera Meijer, I. (2018). Nothing but the facts? Journalism Practice , 12 (8), 997–1007.
Boumans, J. W., & Trilling, D. (2016). Taking stock of the toolkit: An overview of relevant automated content analysis approaches and techniques for digital journalism scholars. Digital Journalism , 4 (1), 8–23.
Brettschneider, F. (1996). Wahlumfragen und Medien – Eine empirische Untersuchung der Presseberichterstattung über Meinungsumfragen vor den Bundestagswahlen 1980 bis 1994 [Election polls and the media – An empirical study of press coverage of opinion polls prior to the 1980 to 1994 federal elections]. Politische Vierteljahresschrift , 37 (3), 475–493.
Brettschneider, F. (2008). Horse race coverage. In W. Donsbach (Ed.), The International Encyclopedia of Communication (pp. 2137–2139). Chichester: Wiley.
Broh, C. A. (1980). Horse-race journalism: Reporting the polls in the 1976 presidential election. The Public Opinion Quarterly , 44 (4), 514–529.
Burscher, B., Odijk, D., Vliegenthart, R., de Rijke, M., & de Vreese, C. H. (2014). Teaching the computer to code frames in news: Comparing two supervised machine learning approaches to frame analysis. Communication Methods and Measures , 8 (3), 190–206.
Clayman, S. E. (2008). Interview as journalistic form. In W. Donsbach (Ed.), The International Encyclopedia of Communication (pp. 2509–2513). Chichester: Wiley.
Cunningham, B. (2003). Re-thinking objectivity: In a world of spin, our awkward embrace of an ideal can make us passive recipients of the news. Columbia Journalism Review . https://archives.cjr.org/feature/rethinking_objectivity.php
Dienlin, T., Johannes, N., Bowman, N. D., Masur, P. K., Engesser, S., Kümpel, A. S., … de Vreese, C. (2021). An agenda for open science in communication. Journal of Communication, 71 (1), 1–26.
Djerf-Pierre, M. (2008). Commentary. In W. Donsbach (Ed.), The International Encyclopedia of Communication (pp. 566–568). Chichester: Wiley.
Donsbach, W., & Büttner, K. (2005). Boulevardisierungstrend in deutschen Fernsehnachrichten: Darstellungsmerkmale der Politikberichterstattung vor den Bundestagswahlen 1983, 1990 und 1998 [Trends of tabloidization in German TV news: How the news broadcast presented politics before the general elections in 1983, 1990, and 1998]. Publizistik , 50 (1), 21–38.
Donsbach, W., & Klett, B. (1993). Subjective objectivity: How journalists in four countries define a key term of their profession. Gazette , 51 (1), 53–83.
Esser, F. (1999). `Tabloidization’ of news: A comparative analysis of Anglo-American and German press journalism. European Journal of Communication , 14 (3), 291–324.
Farnsworth, S. J., & Lichter, S. R. (2003). The nightly news nightmare: Media coverage of U.S. presidential elections, 1988–2008 . Lanham et al.: Rowman & Littlefield.
Frey, F. (2014). Wirkung des Narrativen: Ein systematischer Forschungsüberblick zu Effekten narrativer Kommunikationsformen [Effect of the narrative: A systematic research overview of the effects of narrative forms of communication]. In W. Früh & F. Frey (Eds.), Narration und Storytelling: Theorie und empirische Befunde. (pp. 120–192). Köln: Herbert von Halem.
Früh, W. (2014). Narration und Storytelling [Narration and storytelling]. In W. Früh & F. Frey (Eds.), Narration und Storytelling: Theorie und empirische Befunde (pp. 63–119). Köln: Herbert von Halem.
Glogger, I. (2019). Soft spot for soft news? Influences of journalistic role conceptions on hard and soft news coverage. Journalism Studies , 20 (16), 2293–2311.
Grabe, M. E., Zhou, S., & Barnett, B. (2001). Explicating sensationalism in television news: Content and the bells and whistles of form. Journal of Broadcasting & Electronic Media , 45 (4), 635–655.
Grabe, M. E., Zhou, S., Lang, A., & Bolls, P. D. (2000). Packaging television news: The effects of tabloid on information processing and evaluative responses. Journal of Broadcasting & Electronic Media , 44 (4), 581–598.
Gran, C. S. (2015). Tabloidisation of the Norwegian news media: A quantitative analysis of print and online newspaper platforms (Doctoral dissertation, University of London). Retrieved from https://www.lse.ac.uk/media-and-communications/assets/documents/research/msc-dissertations/2014/Celine-Storstad-Gran-LSE-MediaSeries-AF.pdf .
Hackett, R. A. (2008). Objectivity in reporting. In W. Donsbach (Ed.), The International Encyclopedia of Communication (pp. 3345–3350). Chichester: Wiley.
Haim, M. (2019). Die Orientierung von Online-Journalismus an seinen Publika: Anforderung, Antizipation, Anspruch [The orientation of online journalism towards its audiences: Demands, anticipation, expectations]. Wiesbaden: Springer.
Huxford, J. (2007). The proximity paradox: Live reporting, virtual proximity and the concept of place in the news. Journalism: Theory, Practice & Criticism , 8 (6), 657–674.
Jacobson, S., Marino, J., & Gutsche, R. E. (2016). The digital animation of literary journalism. Journalism: Theory, Practice & Criticism , 17 (4), 527–546.
Karlsson, M. B. (2016). Goodbye politics, hello lifestyle: Changing news topics in tabloid, quality and local newspaper websites in the U.K. and Sweden from 2002 to 2012. Observatorio , 10 (4), 150–165.
Kösters, R. (2020). Medien als Mittler im Konflikt? Der Streit um die Migration im Spiegel der Berichterstattung [Media as intermediaries in conflicts? The debate on migration in media coverage]. (Doctoral dissertation, Heinrich-Heine-University Düsseldorf). Retrieved from https://d-nb.info/1203369883/34
Lefkowitz, J. (2018). “Tabloidization” or dual-convergence: Quoted speech in tabloid and “quality” British newspapers 1970–2010. Journalism Studies , 19 (3), 353–375.
Leidenberger, J. (2015). Boulevardisierung von Fernsehnachrichten: Eine Inhaltsanalyse deutscher und französischer Hauptnachrichtensendungen [Tabloidization of TV news: A content analysis comparing German and French main newscasts]. Wiesbaden: Springer.
Lischka, J. A. (2021). Logics in social media news making: How social media editors marry the Facebook logic with journalistic standards. Journalism, 22 (2), 430–447.
Lischka, J. A., & Werning, M. (2017). Wie Facebook den Regionaljournalismus verändert: Publikums- und Algorithmusorientierung bei der Facebook-Themenselektion von Regionalzeitungen [How Facebook is changing regional journalism: Audience and algorithm orientation in the Facebook topic selection of regional newspapers]. kommunikation@gesellschaft , 18 .
Magin, M. (2006). Qualitätszeitungen in Deutschland und Österreich: Ein Vergleich. Codebuch [Quality newspapers in Germany and Austria: A comparison. Codebook].
Magin, M. (2019). Attention, please! Structural influences on tabloidization of campaign coverage in German and Austrian elite newspapers (1949–2009). Journalism , 20 (12), 1704–1724.
Mast, C. (Ed.). (2018). ABC des Journalismus: Ein Handbuch [ABC of journalism: A manual] (13th ed.). Köln: Herbert von Halem.
Meadows, D. (2003). Digital storytelling: Research-based practice in new media. Visual Communication , 2 (2), 189–193.
Neuberger, C. (2017). Journalistische Objektivität. Vorschlag für einen pragmatischen Theorierahmen [Journalistic objectivity: Proposal for a pragmatic theoretical framework]. M&K Medien & Kommunikationswissenschaft , 65 (2), 406–431.
Newman, N. (2020). Journalism, media, and technology trends and predictions 2020 . Oxford: Reuters Institute for the Study of Journalism.
Otto, L., Glogger, I., & Boukes, M. (2017). The softening of journalistic political communication: A comprehensive framework model of sensationalism, soft news, infotainment, and tabloidization. Communication Theory , 27 (2), 136–155.
Patterson, T. E. (2005). Of polls, mountains: U.S. journalists and their use of election surveys. Public Opinion Quarterly , 69 (5), 716–724.
Pöttker, H. (2005). The news pyramid and its origin from the American Journalism in the 19th Century: A professional approach and an empirical inquiry. In S. Høyer & H. Pöttker (Eds.), Diffusion of the news paradigm 1850–2000 (pp. 51–64). Göteborg: Nordicom.
Reinemann, C., Stanyer, J., & Scherr, S. (2016). Hard and soft news. In C. H. de Vreese, F. Esser, & N. Hopman (Eds.), Comparing political journalism (pp. 131–149). London, New York: Routledge.
Reinemann, C., Stanyer, J., Scherr, S., & Legnante, G. (2012). Hard and soft news: A review of concepts, operationalizations and key findings. Journalism , 13 (2), 221–239.
Ruß-Mohl, S. (2016). Journalismus: Das Lehr- und Handbuch [Journalism: The textbook and manual] (3rd ed.). Frankfurt/Main: Frankfurter Allgemeine.
Salgado, S., & Strömbäck, J. (2012). Interpretive journalism: A review of concepts, operationalizations and key findings. Journalism: Theory, Practice & Criticism , 13 (2), 144–161.
Schäfer-Hock, C. (2018). Journalistische Darstellungsformen im Wandel: Eine Untersuchung deutscher Tageszeitungen von 1992 bis 2012 [Changing forms of journalistic reporting: A study of German daily newspapers from 1992 to 2012]. Wiesbaden: Springer.
Schönbach, K. (1977). Trennung von Nachricht und Meinung: Empirische Untersuchung eines journalistischen Qualitätskriteriums [Separation of news and opinion: Empirical investigation of a journalistic quality criterion]. Freiburg et al.: Alber.
Schönhagen, P. (1998). Unparteilichkeit im Journalismus: Tradition einer Qualitätsnorm [Impartiality in journalism: Tradition of a quality standard]. Tübingen: Niemeyer.
Schudson, M. (1994). Question authority: A history of the news interview in American Journalism, 1860s–1930s. Media, Culture & Society , 16 (4), 565–587.
Schudson, M. (1995). The power of news . Cambridge et al.: Harvard University Press.
Seethaler, J. (2015). Qualität des tagesaktuellen Informationsangebots in den österreichischen Medien: Eine crossmediale Untersuchung [News media quality in Austria: A crossmedia analysis]. Rundfunk und Telekom Regulierungs-GmbH. Retrieved from https://www.rtr.at/medien/aktuelles/publikationen/Publikationen/SchriftenreiheNr12015.de.html .
Skovsgaard, M. (2014). A tabloid mind? Professional values and organizational pressures as explanations of tabloid journalism. Media, Culture & Society , 36 (2), 200–218.
Sparks, C. (2000). Introduction: The panic over tabloid news. In C. Sparks & J. Tulloch (Eds.), Tabloid Tales: Global debates over media standards. (pp. 1–40). Lanham et al.: Rowman & Littlefield.
Steensen, S. (2009). Online feature journalism: A clash of discourses. Journalism Practice , 3 (1), 13–29.
Steiner, M. (2016). Boulevardisierung goes Facebook? Ein inhaltsanalytischer Vergleich politischer Nachrichten von tagesschau, heute, RTL Aktuell und Sat.1 Nachrichten im Fernsehen und auf Facebook [Tabloidization goes Facebook? A content-analytical comparison of political news from tagesschau, heute, RTL Aktuell and Sat.1 news on television and Facebook]. In L. Leißner, H. Bause, & L. Hagemeyer (Eds.), Politische Kommunikation – neue Phänomene, neue Perspektiven, neue Methoden (pp. 27–46). Berlin: Frank & Timme.
van Aelst, P., Strömbäck, J., Aalberg, T., Esser, F., de Vreese, C., Matthes, J., … Stanyer, J. (2017). Political communication in a high-choice media environment: A challenge for democracy? Annals of the International Communication Association , 41 (1), 3–27.
van Atteveldt, W., Kleinnijenhuis, J., Ruigrok, N., & Schlobach, S. (2008). Good news or bad news? Conducting sentiment analysis on Dutch text to distinguish between positive and negative relations. Journal of Information Technology & Politics , 5 (1), 73–94.
van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1 (1), 2–14
Vettehen, P. H., Nuijten, K., & Peeters, A. (2008). Explaining effects of sensationalism on liking of television news stories: The role of emotional arousal. Communication Research , 35 (3), 319–338.
Võ, M. L.-H., Conrad, M., Kuchinke L., Urton, K., Hofmann, M. J., & Jacobs, A. M. (2009) The Berlin Affective Word List Reloaded (BAWL-R). Behavior Research Methods 41 (2), 534–538.
Weischenberg, S., & Birkner, T. (2008). News story. In W. Donsbach (Ed.), The International Encyclopedia of Communication (pp. 3277–3281). Chichester: Wiley.
Welbers, K., & Opgenhaffen, M. (2019). Presenting news on social media: Media logic in the communication style of newspapers on Facebook. Digital Journalism , 7 (1), 45–62.
Authors and Affiliations
Institut für Publizistik, Johannes Gutenberg-Universität Mainz, Mainz, Germany
Correspondence to Miriam Klein.
Editors and Affiliations
Fachhochschule Graubünden, Chur, Switzerland
IKMZ - Institut für Kommunikationswissenschaft und Medienforschung, Universität Zürich, Zürich, Switzerland
Sabrina Heike Kessler
Zürcher Hochschule für angewandte Wissenschaft (ZHAW), Zürich, Switzerland
Rights and permissions
Open Access This chapter is published under the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/deed.de ), which permits use, duplication, adaptation, distribution and reproduction in any medium and format, provided you give appropriate credit to the original author(s) and the source, include a link to the Creative Commons license, and indicate if changes were made.
The images and other third-party material contained in this chapter are also covered by the above Creative Commons license, unless the figure caption indicates otherwise. If the material in question is not covered by the above Creative Commons license and the use in question is not permitted by statutory provisions, the consent of the respective rights holder must be obtained for the re-uses of the material listed above.
Reprints and Permissions
© 2023 The Author(s)
About this chapter
Cite this chapter.
Klein, M. (2023). Content Analysis in the Research on Reporting Styles. In: Oehmer-Pedrazzi, F., Kessler, S.H., Humprecht, E., Sommer, K., Castro, L. (eds) Standardisierte Inhaltsanalyse in der Kommunikationswissenschaft – Standardized Content Analysis in Communication Research. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-36179-2_6
DOI: https://doi.org/10.1007/978-3-658-36179-2_6
Published: 25 September 2022
Publisher Name: Springer VS, Wiesbaden
Print ISBN: 978-3-658-36178-5
Online ISBN: 978-3-658-36179-2
eBook Packages: Social Science and Law (German Language)