Category: Quantitative Research

  • Guide SPSS How to: Calculate the dependent t-test

    Here’s a guide for first-year students on how to calculate the dependent t-test in SPSS:

    Step-by-Step Guide for Dependent t-test in SPSS

    1. Prepare Your Data

    • Ensure your data is in the correct format: two columns, one for each condition (e.g., before and after)
    • Each row should represent a single participant

    2. Open SPSS and Enter Data

    • Open SPSS and switch to the “Variable View”
    • Define your variables (e.g., “Before” and “After”)
    • Switch to “Data View” and enter your data

    3. Run the Test

    • Click on “Analyze” in the top menu
    • Select “Compare Means” > “Paired-Samples t Test”.
    • In the dialog box, move your two variables (e.g., Before and After) to the “Paired Variables” box
    • Click “OK” to run the test
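
    If you prefer working from a syntax window, the same test is a single command. A minimal sketch, using the “Before” and “After” variables from the example above:

    * Paired-samples (dependent) t-test comparing the two conditions.
    T-TEST PAIRS=Before WITH After (PAIRED).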

    4. Interpret the Results

    • Look at the “Paired Samples Statistics” table for descriptive statistics
    • Check the “Paired Samples Test” table:
      • Find the t-value, degrees of freedom (df), and significance (p-value)
      • If p < 0.05, there’s a significant difference between the two conditions

    5. Report the Results

    • State whether there was a significant difference.
    • Report the t-value, degrees of freedom, and p-value.
    • Include means for both conditions.

    Tips:

    • Always check your data for accuracy before running the test.
    • Ensure your sample size is adequate for reliable results.
    • Consider the assumptions of the dependent t-test, such as normal distribution of differences between pairs.

    Remember, practice with sample datasets will help you become more comfortable with this process.

  • Guide SPSS How to: Calculate the independent t-test

    Step-by-Step Guide

    1. Open your SPSS data file.
    2. Click on “Analyze” in the top menu, then select “Compare Means” > “Independent-Samples T Test”
    3. In the dialog box that appears:
    • Move your dependent variable (continuous) into the “Test Variable(s)” box.
    • Move your independent variable (categorical with two groups) into the “Grouping Variable” box
    4. Click on the “Define Groups” button next to the Grouping Variable box
    5. In the new window, enter the values that represent your two groups (e.g., 0 for “No” and 1 for “Yes”)[1].
    6. Click “Continue” and then “OK” to run the test
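
    The equivalent syntax, assuming an illustrative grouping variable named Group (coded 0 and 1, as in the example above) and a dependent variable named Score:

    * Independent-samples t-test comparing Score across the two groups.
    T-TEST GROUPS=Group(0 1)
      /VARIABLES=Score.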

    Interpreting the Results

    1. Check Levene’s Test for Equality of Variances:
    • If p > 0.05, use the “Equal variances assumed” row.
    • If p ≤ 0.05, use the “Equal variances not assumed” row
    2. Look at the “Sig. (2-tailed)” column:
    • If p ≤ 0.05, there is a significant difference between the groups.
    • If p > 0.05, there is no significant difference
    3. If significant, compare the means in the “Group Statistics” table to see which group has the higher score

    Tips

    • Ensure your data meets the assumptions for an independent t-test, including normal distribution and independence of observations
    • Consider calculating an effect size such as Cohen’s d; older versions of SPSS do not provide one automatically

  • Guide SPSS How to: Calculate Chi Square

    1. Open your data file in SPSS.
    2. Click on “Analyze” in the top menu, then select “Descriptive Statistics” > “Crosstabs”
    3. In the Crosstabs dialog box:
    • Move one categorical variable into the “Row(s)” box.
    • Move the other categorical variable into the “Column(s)” box.
    4. Click on the “Statistics” button and check the box for “Chi-square”
    5. Click on the “Cells” button and ensure “Observed” is checked under “Counts”
    6. Click “Continue” and then “OK” to run the analysis.
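
    The same analysis in syntax, assuming two illustrative categorical variables named Gender and Platform. Adding EXPECTED under /CELLS also prints the expected counts, which you will need for the assumption check discussed below:

    * Chi-square test of association between Gender and Platform.
    CROSSTABS
      /TABLES=Gender BY Platform
      /STATISTICS=CHISQ
      /CELLS=COUNT EXPECTED.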

    Interpreting the Results

    1. Look for the “Chi-Square Tests” table in the output
    2. Find the “Pearson Chi-Square” row and check the significance value (p-value) in the “Asymptotic Significance (2-sided)” column
    3. If the p-value is less than your chosen significance level (typically 0.05), you can reject the null hypothesis and conclude there is a significant association between the variables

    Main Weakness of Chi-square Test

    The main weakness of the Chi-square test is its sensitivity to sample size[3]. Specifically:

    1. Assumption violation: The test assumes that the expected frequency is 5 or more in at least 80% of the cells, and that no cell has an expected frequency of less than 1
    2. Sample size issues:
    • With small sample sizes, the test may not be valid as it’s more likely to violate the above assumption.
    • With very large sample sizes, even small, practically insignificant differences can appear statistically significant.

    To address this weakness, always check the “Expected Count” in your output to ensure the assumption is met. If not, consider combining categories or using alternative tests for small samples, such as Fisher’s Exact Test for 2×2 tables

  • Guide SPSS How to: Correlation

    Calculating Correlation in SPSS

    Step 1: Prepare Your Data

    • Enter your data into SPSS, with each variable in a separate column.
    • Ensure your variables are measured on an interval or ratio scale for Pearson’s r, or ordinal scale for Spearman’s rho

    Step 2: Access the Correlation Analysis Tool

    1. Click on “Analyze” in the top menu.
    2. Select “Correlate” from the dropdown menu.
    3. Choose “Bivariate” from the submenu

    Step 3: Select Variables

    • In the new window, move your variables of interest into the “Variables” box.
    • You can select multiple variables to create a correlation matrix

    Step 4: Choose Correlation Coefficient

    • For Pearson’s r: Ensure “Pearson” is checked (it’s usually the default).
    • For Spearman’s rho: Check the “Spearman” box

    Step 5: Additional Options

    • Under “Test of Significance,” select “Two-tailed” unless you have a specific directional hypothesis.
    • Check “Flag significant correlations” to highlight significant results

    Step 6: Run the Analysis

    • Click “OK” to generate the correlation output
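
    In syntax, each coefficient is a single command. A sketch with illustrative variable names VarA and VarB (the NOSIG keyword corresponds to the “Flag significant correlations” option):

    * Pearson’s r with two-tailed significance; asterisks flag significant results.
    CORRELATIONS /VARIABLES=VarA VarB /PRINT=TWOTAIL NOSIG.
    * Spearman’s rho for ordinal data.
    NONPAR CORR /VARIABLES=VarA VarB /PRINT=SPEARMAN TWOTAIL NOSIG.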

    Interpreting the Results

    Correlation Coefficient

    • The value ranges from -1 to +1.
    • Positive values indicate a positive relationship, negative values indicate an inverse relationship[1].
    • Strength of correlation:
      • 0.00 to 0.29: Weak
      • 0.30 to 0.49: Moderate
      • 0.50 to 1.00: Strong

    Statistical Significance

    • Look for p-values less than 0.05 (or your chosen significance level) to determine if the correlation is statistically significant.

    Sample Size

    • The output will also show the sample size (n) for each correlation.

    Remember, correlation does not imply causation. Always interpret your results in the context of your research question and theoretical framework.

    To interpret the results of a Pearson correlation in SPSS, focus on these key elements:

    1. Correlation Coefficient (r): This value ranges from -1 to +1 and indicates the strength and direction of the relationship between variables
    • Positive values indicate a positive relationship, negative values indicate an inverse relationship.
    • Strength interpretation:
      • 0.00 to 0.29: Weak correlation
      • 0.30 to 0.49: Moderate correlation
      • 0.50 to 1.00: Strong correlation
    2. Statistical Significance: Look at the “Sig. (2-tailed)” value
    • If this value is less than your chosen significance level (typically 0.05), the correlation is statistically significant.
    • Significant correlations are often flagged with asterisks in the output.
    3. Sample Size (n): This indicates the number of cases used in the analysis

    Example Interpretation

    Let’s say you have a correlation coefficient of 0.228 with a significance value of 0.060:

    1. The correlation coefficient (0.228) indicates a weak positive relationship between the variables.
    2. The significance value (0.060) is greater than 0.05, meaning the correlation is not statistically significant
    3. This suggests that while a small positive correlation was observed in the sample, there’s not enough evidence to conclude that this relationship exists in the population

  • Guide SPSS how to: Measures of Central Tendency and Measures of Dispersion

    Here’s a guide for first-year students on calculating measures of central tendency and dispersion in SPSS:

    Calculating Measures of Central Tendency

    1. Open your dataset in SPSS.
    2. Click on “Analyze” in the top menu, then select “Descriptive Statistics” > “Frequencies”
    3. In the new window, move the variables you want to analyze into the “Variable(s)” box
    4. Click on the “Statistics” button
    5. In the “Frequencies: Statistics” window, check the boxes for:
    • Mean
    • Median
    • Mode
    6. Click “Continue” and then “OK” to run the analysis

    Calculating Measures of Dispersion

    1. Follow steps 1-4 from above.
    2. In the “Frequencies: Statistics” window, also check the boxes for:
    • Standard deviation
    • Range
    • Minimum
    • Maximum
    3. For interquartile range, check the box for “Quartiles”
    4. Click “Continue” and then “OK” to run the analysis.
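
    All of these statistics can also be requested with a single FREQUENCIES command. A sketch for an illustrative variable named Score (/FORMAT=NOTABLE suppresses the frequency table itself):

    * Central tendency and dispersion for Score.
    * /NTILES=4 prints the quartiles needed for the interquartile range.
    FREQUENCIES VARIABLES=Score
      /FORMAT=NOTABLE
      /STATISTICS=MEAN MEDIAN MODE STDDEV RANGE MINIMUM MAXIMUM
      /NTILES=4.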

    Interpreting the Results

    • Mean: The average of all values
    • Median: The middle value when data is ordered
    • Mode: The most frequently occurring value
    • Range: The difference between the highest and lowest values
    • Standard Deviation: Measures the spread of data from the mean
    • Interquartile Range: The range of the middle 50% of the data.

    Choosing the Appropriate Measure

    • For nominal data: Use mode only.
    • For ordinal data: Use median and mode.
    • For interval/ratio data: Use mean, median, and mode.

    Remember, if your distribution is skewed, the median may be more appropriate than the mean for interval/ratio data.

  • Anova and Manova

    Exploring ANOVA and MANOVA Techniques in Marketing and Media Studies

    Analysis of Variance (ANOVA) and Multivariate Analysis of Variance (MANOVA) are powerful statistical tools that can provide valuable insights for marketing and media studies. Let’s explore these techniques with relevant examples for college students in these fields.

    Repeated Measures ANOVA

    Repeated Measures ANOVA is used when the same participants are measured multiple times under different conditions. This technique is particularly useful in marketing and media studies for assessing changes in consumer behavior or media consumption over time or across different scenarios.

    Example for Marketing Students:
    Imagine a study evaluating the effectiveness of different advertising formats (TV, social media, print) on brand recall. Participants are exposed to all three formats over time, and their brand recall is measured after each exposure. The repeated measures ANOVA would help determine if there are significant differences in brand recall across these advertising formats.

    The general formula for repeated measures ANOVA is:

    $$F = \frac{MS_{between}}{MS_{within}}$$

    Where:

    • $$MS_{between}$$ is the mean square between treatments
    • $$MS_{within}$$ is the mean square within treatments (in a repeated measures design, this is the error term left after between-subject variability has been removed)
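
    In SPSS, the advertising-format study above would be set up as a within-subjects GLM. A sketch, assuming each participant’s recall scores are stored in three hypothetical columns named Recall_TV, Recall_Social, and Recall_Print:

    * Repeated measures ANOVA with one within-subjects factor (Format, 3 levels).
    GLM Recall_TV Recall_Social Recall_Print
      /WSFACTOR=Format 3 Polynomial
      /WSDESIGN=Format
      /PRINT=DESCRIPTIVE.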

    MANOVA

    MANOVA extends ANOVA by allowing the analysis of multiple dependent variables simultaneously. This is particularly valuable in marketing and media studies, where researchers often want to examine the impact of independent variables on multiple outcome measures.

    Example for Media Studies:
    Consider a study investigating the effects of different types of news coverage (positive, neutral, negative) on viewers’ emotional responses and information retention. The dependent variables could be:

    1. Emotional response (measured on a scale)
    2. Information retention (measured by a quiz score)
    3. Likelihood to share the news (measured on a scale)

    MANOVA would allow researchers to analyze how the type of news coverage affects all these outcomes simultaneously.

    The most commonly used test statistic in MANOVA is Pillai’s trace, which can be represented as:

    $$V = \sum_{i=1}^s \frac{\lambda_i}{1 + \lambda_i}$$

    Where:

    • $$V$$ is Pillai’s trace
    • $$\lambda_i$$ are the eigenvalues of the matrix product of the between-group sum of squares and cross-products matrix and the inverse of the within-group sum of squares and cross-products matrix
    • $$s$$ is the number of eigenvalues
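
    In SPSS, the news-coverage example maps onto a one-way MANOVA via GLM. A sketch, assuming hypothetical dependent variables Emotion, Retention, and Share and a three-level factor Coverage:

    * One-way MANOVA: three outcome measures by coverage type.
    * /PRINT=ETASQ requests partial eta squared as an effect size.
    GLM Emotion Retention Share BY Coverage
      /PRINT=DESCRIPTIVE ETASQ
      /DESIGN=Coverage.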

    Discriminant Function Analysis and MANOVA

    After conducting a MANOVA, discriminant function analysis can help identify which aspects of the dependent variables contribute most to group differences.

    Marketing Example:
    In a study of consumer preferences for different product attributes (price, quality, brand reputation), discriminant function analysis could reveal which combination of these attributes best distinguishes between different consumer segments.

    Reporting MANOVA Results

    When reporting MANOVA results, include:

    1. The specific multivariate test used (e.g., Pillai’s trace)
    2. F-statistic, degrees of freedom, and p-value
    3. Interpretation in the context of your research question

    Example: “A one-way MANOVA revealed a significant multivariate main effect for news coverage type, Pillai’s trace = 0.38, F(6, 194) = 7.62, p < .001, partial η² = .19.”

    Conclusion

    ANOVA and MANOVA techniques offer powerful tools for marketing and media studies students to analyze complex datasets involving multiple variables. By understanding these methods, students can design more sophisticated studies and draw more nuanced conclusions about consumer behavior, media effects, and market trends[1][2][3][4][5].

    Citations:
    [1] https://fastercapital.com/content/MANOVA-and-MANCOVA–Marketing-Mastery–Unleashing-the-Potential-of-MANOVA-and-MANCOVA.html
    [2] https://fastercapital.com/content/MANOVA-and-MANCOVA–MANOVA-and-MANCOVA–A-Strategic-Approach-for-Marketing-Research.html
    [3] https://www.proquest.com/docview/1815499254
    [4] https://business.adobe.com/blog/basics/multivariate-analysis-examples
    [5] https://www.worldsupporter.org/en/summary/when-and-how-use-manova-and-mancova-chapter-7-exclusive-86003
    [6] https://www.linkedin.com/advice/0/how-can-you-use-manova-analyze-impact-advertising-35cbf
    [7] https://methods.sagepub.com/video/an-introduction-to-manova-and-mancova-for-marketing-research
    [8] https://www.researchgate.net/publication/2507074_MANOVAMAP_Graphical_Representation_of_MANOVA_in_Marketing_Research

  • Data Analysis (Section D)

    Ever wondered how researchers make sense of all the information they collect? Section D of Matthews and Ross’ book is your treasure map to the hidden gems in data analysis. Let’s embark on this adventure together!

    Why Analyze Data?

    Imagine you’re a detective solving a mystery. You’ve gathered all the clues (that’s your data), but now what? Data analysis is your magnifying glass, helping you piece together the puzzle and answer your burning research questions.

    Pro Tip: Plan Your Analysis Strategy Early!

    Before you start collecting data, decide how you’ll analyze it. It’s like choosing your weapon before entering a video game battle – your data collection method will determine which analysis techniques you can use.

    Types of Data: A Trilogy

    1. Structured Data: The neat freak of the data world. Think multiple-choice questionnaires – easy to categorize and analyze.
    2. Unstructured Data: The free spirit. This could be interviews or open-ended responses – more challenging but often rich in insights.
    3. Semi-structured Data: The best of both worlds. A mix of structured and unstructured elements.

    Crunching Numbers: Statistical Analysis

    For all you number lovers out there, statistical analysis is your playground. Learn to summarize data, spot patterns, and explore relationships between different factors. It’s like being a data detective!

    Thematic Analysis: Finding the Hidden Threads

    This is where you become a storyteller, weaving together themes and patterns from qualitative data. Pro tip: Keep a research diary to track your “Eureka!” moments.

    Beyond the Basics: Other Cool Techniques

    • Narrative Analysis: Decoding the stories people tell
    • Discourse Analysis: Understanding how language shapes reality
    • Content Analysis: Counting words to uncover meaning
    • Grounded Theory: Building theories from the ground up

    Tech to the Rescue: Computers in Data Analysis

    Say goodbye to manual number crunching! Learn about software like SPSS and NVivo that can make your analysis life much easier.

    The Grand Finale: Drawing Conclusions

    This is where you answer the ultimate question: “So what?” What does all this analysis mean, and why should anyone care?

    Remember, data analysis isn’t just about crunching numbers or coding text. It’s about uncovering insights that can change the world. So, are you ready to become a data analysis superhero? Let’s get started!

  • Chi Square

    Chi-square is a statistical test widely used in media research to analyze relationships between categorical variables. This essay will explain the concept, its formula, and provide an example, while also discussing significance and significance levels.

    Understanding Chi-Square

    Chi-square (χ²) is a non-parametric test that examines whether there is a significant association between two categorical variables. It compares observed frequencies with expected frequencies to determine if the differences are due to chance or a real relationship.

    The Chi-Square Formula

    The formula for calculating the chi-square statistic is:

    $$\chi^2 = \sum \frac{(O - E)^2}{E}$$

    Where:

    • $$\chi^2$$ is the chi-square statistic
    • $$O$$ is the observed frequency
    • $$E$$ is the expected frequency
    • $$\sum$$ represents the sum over all cells

    Example in Media Research

    Let’s consider a study examining the relationship between gender and preferred social media platform among college students.

    Observed frequencies:

    Platform     Male   Female
    Instagram     40      60
    Twitter       30      20
    TikTok        30      70

    To calculate $$\chi^2$$ for this example, we first determine the expected frequency for each cell using

    $$E = \frac{\text{row total} \times \text{column total}}{N}$$

    and then apply the chi-square formula above:

    Expected Frequencies

    Total respondents: 250
    Instagram: 100, Twitter: 50, TikTok: 100
    Males: 100, Females: 150

    Platform     Male   Female
    Instagram     40      60
    Twitter       20      30
    TikTok        40      60

    Chi-Square Calculation

    $$\chi^2 = \frac{(40 - 40)^2}{40} + \frac{(60 - 60)^2}{60} + \frac{(30 - 20)^2}{20} + \frac{(20 - 30)^2}{30} + \frac{(30 - 40)^2}{40} + \frac{(70 - 60)^2}{60}$$

    $$\chi^2 = 0 + 0 + 5 + 3.33 + 2.5 + 1.67$$

    $$\chi^2 = 12.5$$

    Degrees of Freedom

    df = (number of rows - 1) × (number of columns - 1) = (3 - 1) × (2 - 1) = 2

    Significance

    For df = 2 and α = 0.05, the critical value is 5.991[1].

    Since our calculated χ² (12.5) is greater than the critical value (5.991), we reject the null hypothesis.

    The result is statistically significant at the 0.05 level. This indicates that there is a significant relationship between gender and preferred social media platform among college students in this sample.
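
    You can reproduce this worked example in SPSS by entering the table as weighted counts. A sketch (variable names and numeric codings are illustrative):

    * Enter the observed table as one row per cell, weighted by its count.
    DATA LIST FREE / Gender Platform Count.
    BEGIN DATA
    1 1 40  1 2 30  1 3 30
    2 1 60  2 2 20  2 3 70
    END DATA.
    VALUE LABELS Gender 1 'Male' 2 'Female'
      /Platform 1 'Instagram' 2 'Twitter' 3 'TikTok'.
    WEIGHT BY Count.
    * Chi-square test; the EXPECTED cell counts let you verify the table above.
    CROSSTABS
      /TABLES=Platform BY Gender
      /STATISTICS=CHISQ
      /CELLS=COUNT EXPECTED.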

    Significance and Significance Level

    The calculated χ² value is compared to a critical value from the chi-square distribution table. This comparison helps determine if the relationship between variables is statistically significant.

    The significance level (α) is typically set at 0.05, meaning there’s a 5% chance of rejecting the null hypothesis when it’s actually true. If the calculated χ² exceeds the critical value at the chosen significance level, we reject the null hypothesis and conclude there’s a significant relationship between the variables[1][2].

    Interpreting Results

    A significant result suggests that the differences in observed frequencies are not due to chance, indicating a real relationship between gender and social media platform preference in our example. This information can be valuable for media strategists in targeting specific demographics[3][4].

    In conclusion, chi-square is a powerful tool for media researchers to analyze categorical data, providing insights into relationships between variables that can inform decision-making in various media contexts.

    Citations:
    [1] https://datatab.net/tutorial/chi-square-distribution
    [2] https://www.statisticssolutions.com/free-resources/directory-of-statistical-analyses/chi-square/
    [3] https://www.scribbr.com/statistics/chi-square-test-of-independence/
    [4] https://www.investopedia.com/terms/c/chi-square-statistic.asp
    [5] https://en.wikipedia.org/wiki/Chi_squared_test
    [6] https://statisticsbyjim.com/hypothesis-testing/chi-square-test-independence-example/
    [7] https://passel2.unl.edu/view/lesson/9beaa382bf7e/8
    [8] https://www.bmj.com/about-bmj/resources-readers/publications/statistics-square-one/8-chi-squared-tests

  • Correlation Spearman and Pearson

    Correlation is a fundamental concept in statistics that measures the strength and direction of the relationship between two variables. For first-year media students, understanding correlation is crucial for analyzing data trends and making informed decisions. This essay will explore two common correlation coefficients: Pearson’s r and Spearman’s rho.

    Pearson’s Correlation Coefficient (r)

    Pearson’s r is used to measure the linear relationship between two continuous variables. It ranges from -1 to +1, where:

    • +1 indicates a perfect positive linear relationship
    • 0 indicates no linear relationship
    • -1 indicates a perfect negative linear relationship

    The formula for Pearson’s r is:

    $$r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2 \sum_{i=1}^{n} (y_i - \bar{y})^2}}$$

    Where:

    • $$x_i$$ and $$y_i$$ are individual values
    • $$\bar{x}$$ and $$\bar{y}$$ are the means of x and y

    Example: A media researcher wants to investigate the relationship between the number of social media posts and engagement rates. They collect data from 50 social media campaigns and calculate Pearson’s r to be 0.75. This indicates a strong positive linear relationship between the number of posts and engagement rates.

    Spearman’s Rank Correlation Coefficient (ρ)

    Spearman’s rho is used when data is ordinal or does not meet the assumptions for Pearson’s r. It measures the strength and direction of the monotonic relationship between two variables.

    The formula for Spearman’s rho is:

    $$\rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}$$

    Where:

    • $$d_i$$ is the difference between the ranks of corresponding values
    • n is the number of pairs of values

    Example: A researcher wants to study the relationship between a TV show’s IMDB rating and its viewership ranking. They use Spearman’s rho because the data is ordinal. A calculated ρ of 0.85 would indicate a strong positive monotonic relationship between IMDB ratings and viewership rankings.
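
    Both example analyses take a single command in SPSS. A sketch with hypothetical variable names:

    * Pearson’s r: number of posts vs. engagement rate across campaigns.
    CORRELATIONS /VARIABLES=Posts Engagement /PRINT=TWOTAIL NOSIG.
    * Spearman’s rho: IMDB rating vs. viewership ranking (ordinal data).
    NONPAR CORR /VARIABLES=Rating ViewRank /PRINT=SPEARMAN TWOTAIL NOSIG.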

    Significance and Significance Level

    When interpreting correlation coefficients, it’s crucial to consider their statistical significance[1]. The significance of a correlation tells us whether the observed relationship is likely to exist in the population or if it could have occurred by chance in our sample.

    To test for significance, we typically use a hypothesis test:

    • Null Hypothesis (H0): ρ = 0 (no correlation in the population)
    • Alternative Hypothesis (Ha): ρ ≠ 0 (correlation exists in the population)

    The significance level (α) is the threshold we use to make our decision. Commonly, α = 0.05 is used[3]. If the p-value of our test is less than α, we reject the null hypothesis and conclude that the correlation is statistically significant[4].

    For example, if we calculate a Pearson’s r of 0.75 with a p-value of 0.001, we would conclude that there is a statistically significant strong positive correlation between our variables, as 0.001 < 0.05.

    Understanding correlation and its significance is essential for media students to interpret research findings, analyze trends, and make data-driven decisions in their future careers.

    The Pearson correlation coefficient (r) is a measure of the strength and direction of the linear relationship between two continuous variables. Here’s how to interpret the results:

    Strength of Correlation

    The absolute value of r indicates the strength of the relationship:

    • 0.00 – 0.19: Very weak correlation
    • 0.20 – 0.39: Weak correlation
    • 0.40 – 0.59: Moderate correlation
    • 0.60 – 0.79: Strong correlation
    • 0.80 – 1.00: Very strong correlation

    Direction of Correlation

    The sign of r indicates the direction of the relationship:

    • Positive r: As one variable increases, the other tends to increase
    • Negative r: As one variable increases, the other tends to decrease

    Interpretation Examples

    • r = 0.85: Very strong positive correlation
    • r = -0.62: Strong negative correlation
    • r = 0.15: Very weak positive correlation
    • r = 0: No linear correlation

    Coefficient of Determination

    The square of r (r²) represents the proportion of variance in one variable that can be explained by the other variable[2].

    Statistical Significance

    To determine if the correlation is statistically significant:

    1. Set a significance level (α), typically 0.05
    2. Calculate the p-value
    3. If p-value < α, the correlation is statistically significant

    A statistically significant correlation suggests that the relationship observed in the sample likely exists in the population[4].

    Remember that correlation does not imply causation, and Pearson’s r only measures linear relationships. Always visualize your data with a scatterplot to check for non-linear patterns[3].

    Citations:
    [1] https://statistics.laerd.com/statistical-guides/pearson-correlation-coefficient-statistical-guide.php
    [2] https://sites.education.miami.edu/statsu/2020/09/22/how-to-interpret-correlation-coefficient-r/
    [3] https://statisticsbyjim.com/basics/correlations/
    [4] https://towardsdatascience.com/eveything-you-need-to-know-about-interpreting-correlations-2c485841c0b8?gi=5c69d367a0fc
    [5] https://datatab.net/tutorial/pearson-correlation
    [6] https://stats.oarc.ucla.edu/spss/output/correlation/



  • Audience Transportation in Film

    Audience transportation is a concept in film that describes the extent to which viewers are transported into the narrative world of a movie, creating a sense of immersion and emotional involvement. Studies have shown that audience transportation is achieved through a combination of factors, including setting, character development, sound, music, and plot structure.

    Setting plays a critical role in audience transportation, as it provides a context for the story and creates a sense of place. According to a study by Gromer and colleagues (2015), the use of setting can create a feeling of being transported into a different world, with the audience feeling more involved in the story. The study found that the more immersive the setting, the greater the level of transportation experienced by the audience.

    Character development is also important in creating audience transportation, as it allows viewers to connect emotionally with the characters in the film. A study by Sest and colleagues (2013) found that viewers who became more involved with the characters in a film reported a higher level of transportation. The study also found that the more complex the characters, the more involved the viewer became in the story.

    Sound and music are other important factors in audience transportation. According to a study by Adolphs and colleagues (2018), the use of sound can create an emotional response in the viewer, while music can be used to create a sense of mood and atmosphere. The study found that the use of sound and music can significantly impact the level of transportation experienced by the audience.

    Finally, the plot and narrative structure of a film can also contribute to audience transportation. A study by Green and Brock (2000) found that the more complex the plot of a film, the greater the level of transportation experienced by the audience. The study also found that non-linear plot structures, such as those used in films like “Memento,” can create a greater level of immersion for the audience.

    In conclusion, audience transportation is a critical aspect of the cinematic experience that is achieved through a combination of factors, including setting, character development, sound, music, and plot structure. When these elements are used effectively, they can create a sense of immersion and emotional involvement in the viewer, leaving a lasting impact on their memory and overall enjoyment of the film.

    References:

    Adolphs, S., et al. (2018). Sounds engaging: How music and sound design in movies enhance audience transportation into narrative worlds. Journal of Media Psychology, 30(2), 63-74.

    Green, M.C., & Brock, T.C. (2000). The role of transportation in the persuasiveness of public narratives. Journal of Personality and Social Psychology, 79(5), 701-721.

    Gromer, D., et al. (2015). Transportation into a narrative world: A multi-method approach. Journal of Media Psychology, 27(2), 64-73.

    Sest, S., et al. (2013). The effects of characters’ identification, desire, and morality on narrative transportation and perceived involvement in a story. Psychology of Aesthetics, Creativity, and the Arts, 7(3), 228-237.