Star Wars Fandom Survey, Part 3: Sexism and Political Attitudes

Welcome from Part 1, where I talked mainly about methods, and Part 2, where I discussed the three major types of Star Wars fans. In this part, I will focus on sexism and political attitudes. As always, email sw.survey.2019@gmail.com with questions about analyses, methods, results, and so on.


It is not inherently sexist to dislike Disney’s Star Wars films; there have been many thoughtful, intelligent criticisms of these movies. That said, sexism played a major role in the backlash against them, particularly The Last Jedi. A vocal minority has published a great many articles and videos condemning The Last Jedi as feminist, politically correct (“PC”) propaganda. Hate-filled tweets about female characters drove actor Kelly Marie Tran (who plays Rose Tico in the sequel trilogy) from social media; she has since responded to this online abuse in a New York Times op-ed. Daisy Ridley, who plays Rey, responded to criticism of Rey’s competence (the “Mary Sue” critique) by calling the term sexist. The Force Awakens director J.J. Abrams and Mark Hamill (who plays Luke Skywalker) both spoke out against the sexist rhetoric used to criticize the Disney movies.

I am not here to litigate the gender politics in the Star Wars movies. My focus in this part is on the empirical, psychological relationship between favorability toward Star Wars films and sexist attitudes. I also look at the closely related concepts of “political correctness” and political identification. I focus on the survey’s questions of hostile sexism, benevolent sexism, PC beliefs, and political identification. I briefly discussed these in Part 1.

These concepts are all related to one another. If we see a relationship between how conservative someone is and their attitudes toward The Last Jedi, how do we know that one of the other variables isn’t responsible for that relationship? To address this, I ran regressions with all of these variables entered as simultaneous predictors of fan-cluster membership, movie favorability, and character favorability. When discussing these models, I focus only on variables that were significant predictors (p < .01).
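To make this concrete, here is a minimal sketch of what one such simultaneous-predictor model could look like. Since I am not sharing the raw data, the file name and column names below are hypothetical stand-ins:

```python
# Sketch only: the file and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# All four attitudes entered at once, so each coefficient reflects that
# predictor's unique contribution after accounting for the others.
model = smf.ols(
    "tlj_favorability ~ hostile_sexism + benevolent_sexism"
    " + pc_beliefs + conservatism",
    data=df,
).fit()
print(model.summary())  # check which predictors remain significant at p < .01
```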

I’ll start by looking at how fan clusters (from Part 2) differ on these items. Then I’ll turn to the relationships between these items and favorability toward Star Wars films and characters.


Fan Clusters

In Part 2, I found three major types of Star Wars fans: Prequel Skeptics, who love the saga but rate the prequels lower than the rest; Saga Lovers, who rate everything highly; and TLJ Disowners, who rate only The Last Jedi very negatively.


Sexism

I measured hostile sexism with two statements: “Most women interpret innocent remarks or acts as being sexist,” and “Feminists are making unreasonable demands of men.” I averaged these together to get a general picture of hostile sexism. Benevolent sexism was measured with: “Women should be cherished and protected by men,” and “Many women have a quality of purity that few men possess.” Again, I averaged these together for a general picture of benevolent sexism.
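In code, these scale scores are just item averages. A minimal sketch, with hypothetical file and column names:

```python
# Sketch only: the file and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Average the two items on each subscale into one score per respondent.
df["hostile_sexism"] = df[["hs_remarks", "hs_feminists"]].mean(axis=1)
df["benevolent_sexism"] = df[["bs_cherished", "bs_purity"]].mean(axis=1)
```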

In the plots below, each dot represents a response. The black circle is the group’s mean, and the horizontal lines above and below it mark the 95% confidence interval, which represents a plausible range of values for the true mean.
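A standard way to compute such an interval is mean ± t × standard error; a minimal sketch:

```python
# A standard t-based 95% confidence interval for a mean; this is one common
# approach, not necessarily the exact method behind the plots below.
import numpy as np
from scipy import stats

def mean_ci(x, level=0.95):
    x = np.asarray(x)
    margin = stats.sem(x) * stats.t.ppf((1 + level) / 2, df=len(x) - 1)
    return x.mean() - margin, x.mean() + margin

print(mean_ci([7, 8, 9, 10, 8]))  # toy data, for illustration only
```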

 
[Figure: sexism-clust-1.png (sexism scores by fan cluster)]

TLJ Disowners reported more sexism than Saga Lovers, who in turn reported more sexism than Prequel Skeptics. Comparing both means and medians, all pairwise differences were statistically significant (ps < .01). Those disowning The Last Jedi tended to score higher on sexism, though, as you can see, not everyone who hates The Last Jedi is sexist. This provides empirical evidence that sexism plays a role in attitudes toward The Last Jedi.


PC Beliefs and Conservatism

To measure “PC” beliefs, I asked respondents how much they agreed with the statement: “Needing to be ‘politically correct’ creates an atmosphere in which the free exchange of ideas is impossible.” Participants also rated themselves on a scale from very liberal to very conservative.

 
[Figure: pc-con-1.png (anti-PC beliefs and conservatism by fan cluster)]

The same pattern of results appears here as above: TLJ Disowners reported more negative attitudes toward political correctness and more conservatism than Saga Lovers, who reported more of both than Prequel Skeptics. All comparisons were statistically significant (ps < .01). The biggest differences were between TLJ Disowners and the other two clusters: TLJ Disowners are more likely to believe political correctness is a negative force in society, and they are less politically liberal. Once again, we see a lot of variance within these groups, showing that the clusters are not in lockstep with these political beliefs.


Trilogy Correlations

I asked respondents how much they liked each Star Wars episode on a ten-point scale. Movies in the same trilogy tended to correlate highly with one another, so I averaged attitudes toward movies of the same trilogy together for these analyses. In this and the next section, I look only at the sexism and PC questions because political identification was no longer a significant predictor after taking these attitudes into account. That is, the relationship between conservatism and movie favorability could have been due to sexism and PC beliefs.
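The trilogy averaging itself is simple; a minimal sketch, again with hypothetical file and column names:

```python
# Sketch only: the file and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Collapse episode ratings into one score per trilogy (episodes I-VIII;
# The Rise of Skywalker had not been released when the survey ran).
df["prequels"] = df[["tpm", "aotc", "rots"]].mean(axis=1)
df["originals"] = df[["anh", "esb", "rotj"]].mean(axis=1)
df["sequels"] = df[["tfa", "tlj"]].mean(axis=1)
```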

The plots below show sexism and PC scores on the x-axis and favorability toward each trilogy on the y-axis. Each point is someone’s response, and I drew a line through the points showing the relationship between the attitude and favorability.
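Plots in this style can be made with seaborn’s regplot, which overlays a fitted line on a scatterplot. This sketch approximates the design rather than reproducing my exact plotting code:

```python
# Sketch only: approximates the plot style described above; the file and
# column names are hypothetical stand-ins.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
sns.regplot(data=df, x="hostile_sexism", y="sequels",
            scatter_kws={"alpha": 0.2})  # faint points, fitted line on top
plt.show()
```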

Each graph shows the same pattern. There were small, positive relationships between each attitude and favorability toward the original and prequel trilogies: the more sexism and anti-PC sentiment someone reported, the more favorably they rated those movies. We see the opposite relationship for the sequels, especially for hostile sexism and PC beliefs: the more sexism and anti-PC sentiment someone reported, the less likely they were to like the sequels.

 
[Figure: host-trils-1.png (hostile sexism vs. trilogy favorability)]
[Figure: benev-trils-1.png (benevolent sexism vs. trilogy favorability)]
[Figure: pc-trils-1.png (anti-PC beliefs vs. trilogy favorability)]

There has been a lot of cultural discussion about Star Wars and sexism. These data show the empirical relationship: Sexism is correlated with disliking the sequel films, as is thinking political correctness harms society. Again, however, I will point to the variability around these lines; although this relationship exists, not everyone who dislikes the sequels is a sexist or bemoans PC culture.


Character Correlations

Three sequel-trilogy characters have faced the brunt of sexist criticism: Vice Admiral Amilyn Holdo, Rey, and Rose Tico. We see the same relationship across all three characters and all three attitudes (hostile sexism, benevolent sexism, and PC beliefs): the more sexism someone reports, the less they like Holdo, Rey, and Rose, and the more someone dislikes political correctness, the less favorably they feel toward these characters.

 
[Figure: wom-sexism-1.png (sexism and PC beliefs vs. favorability toward Holdo, Rey, and Rose)]

This aligns with what we know from psychological theories of sexism and political correctness. Each of these women defies traditional gender stereotypes. Both types of sexism are rooted in traditional gender stereotypes, and violations of those stereotypes anger people who believe in them. Those who believe political correctness harms the free exchange of ideas might see stereotype-defying women not as free artistic expression but as a decision made solely to appease political correctness.

These results also replicate work that a colleague and I did after the release of The Force Awakens. We found that hostile sexism was a positive predictor of the typical “Mary Sue” complaints about Rey, i.e., that she was “too competent” throughout The Force Awakens.


Conclusion

These data shed light on the ongoing conversation about sexism, politics, and The Last Jedi. Looking at responses from over five thousand Star Wars fans, it is clear that sexism and dislike of political correctness are positively related to disliking Disney’s sequels, though the relationship is not one-to-one: some people defy this trend, disliking the movies while holding progressive attitudes about women.

These data support the abundant anecdotal evidence (tweets, comments, articles) that sexism plays a major role in the backlash to Disney’s sequels. While some criticism of the movies is in good faith, these data suggest some of the backlash to The Last Jedi is likely not. Given their social and political attitudes, some people might have been predisposed to hate it, regardless of the film’s quality, because its main female characters demonstrate skill, bravery, and leadership.

Star Wars Fandom Survey, Part 2: The Three Major Types of Star Wars Fans

Welcome from Part 1, where I talked mainly about methods. This post focuses on the three major types of Star Wars fans found in the survey. As always, email sw.survey.2019@gmail.com with questions about analyses, methods, results, and so on.


Star Wars fans are diverse, with diverse interests. To get a simpler picture of the fandom, we can group people into a small number of fan types. I call these “clusters.” In the survey, respondents rated their favorability toward each Star Wars installment, and I ran several standard clustering algorithms on these data. Three distinct types of Star Wars fan emerged: “Prequel Skeptics,” “Saga Lovers,” and “TLJ Disowners.”

A lot is going on there, so here’s how to read the graph:

  • There are twenty-four panels in this graph. Each shows the distribution of favorability scores for one combination of cluster and movie.

  • Each row is a different cluster, and each column is a different episode of Star Wars.

  • In each panel, the x-axis shows the favorability score for the movie, and the y-axis shows what percentage of that cluster gave the movie that score.

  • I included the mean (Mn.) and median (Mdn.) response on each panel.

For example, we see that about 60% of people in the Saga Lovers cluster rated Empire Strikes Back with a perfect 10; meanwhile, over 40% of TLJ Disowners rated The Last Jedi a 1, the worst possible score.

I came up with the name for each cluster by looking at the distribution for each film:

  • Prequel Skeptics. These fans love the original and sequel trilogies, with median ratings for each film being 8 or above, and they feel less warmly toward the prequel trilogy. However, they do not rate the prequels as negatively as the TLJ Disowners rate The Last Jedi, so I chose to call these fans only “skeptics” of the prequels.

  • Saga Lovers. These fans love everything, giving every movie a median score of at least 7. And while the prequels earn the lowest ratings in this cluster, Saga Lovers are still generally favorable toward them.

  • TLJ Disowners. These fans love the original trilogy (like the other clusters), but they feel middling toward the prequels and they are torn over The Force Awakens. This cluster’s defining characteristic, however, is just how poorly they rate The Last Jedi. The most popular response is 1—and the median is only 2. This is why I gave them the stronger word “disowners,” as opposed to the “skeptics” above.

These were the three dominant clusters, according to the algorithm. But some readers might be surprised that there is no cluster of fans who love only the original trilogy. I tried forcing a fourth cluster (even though the optimal number was three), but, to my surprise, all this did was find a cluster we could call the “Super Saga Lovers,” whose scores for the prequel movies were even higher than those of the Saga Lovers above.

Of course, this does not mean that people who like only the original trilogy and hate everything else aren’t out there. (I’m positive they exist, and you might very well be one of them.) We just don’t see this cluster among the thousands who took this survey. This is not a representative sample of every single person who has seen Star Wars, but it is a large snapshot of the fandom in 2019.
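For readers curious about the mechanics, the sketch below shows the general shape of this analysis using k-means (one common choice; I won’t reproduce my exact code) with silhouette scores to compare cluster counts. The file and column names are hypothetical stand-ins:

```python
# Sketch only: k-means stands in for "several standard clustering
# algorithms"; the file and column names are hypothetical stand-ins.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
ratings = df[["tpm", "aotc", "rots", "anh", "esb", "rotj", "tfa", "tlj"]]

# Compare candidate cluster counts; a higher silhouette score means
# tighter, better-separated clusters.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(ratings)
    print(k, round(silhouette_score(ratings, labels), 3))
```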

So how do these clusters differ? For the rest of this post, I’ll compare the three across demographic and fandom variables. And for later posts, I’ll focus on personality and political variables.

Gender

The biggest difference in cluster membership is among men: 86% of TLJ Disowners are men, 58% of Saga Lovers are men, and 49% of Prequel Skeptics are men (ps < .006). The exact opposite pattern is found for women: 48% of Prequel Skeptics are women, 40% of Saga Lovers are women, and 13% of TLJ Disowners are women (ps < .009). Unfortunately, I did not collect a large enough sample of nonbinary and transgender fans to find statistically significant differences here, but this group showed the same pattern as women: 3% of Prequel Skeptics, 2% of Saga Lovers, 1% of TLJ Disowners.
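Pairwise comparisons like these can be run as two-proportion z-tests; a minimal sketch, with hypothetical file, column, and label names (not necessarily the exact test I used):

```python
# Sketch only: the file, column names, and labels are hypothetical stand-ins.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# e.g., the proportion of men among TLJ Disowners vs. among Saga Lovers
a = df.loc[df["cluster"] == "TLJ Disowners", "gender"].eq("man")
b = df.loc[df["cluster"] == "Saga Lovers", "gender"].eq("man")
stat, p = proportions_ztest([a.sum(), b.sum()], [len(a), len(b)])
print(p)
```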

These gender differences relate to sexism in the backlash to The Last Jedi and Disney’s Star Wars projects more broadly. I will tackle this issue in detail in Part 3.

Race

The racial makeup of each cluster was about the same; no significant differences were found within racial groups.

Age

The average ages of the clusters were all statistically different from one another (ps < .008), but the differences were small: Prequel Skeptics’ mean age was 34.9, Saga Lovers’ was 33.9, and TLJ Disowners’ was 36.1. A density plot shows the entire distribution of ages within each cluster; the higher the line, the more people of that age are in the cluster. What jumps out most to me is the spike among TLJ Disowners for people in their forties; this range is better represented among TLJ Disowners than in the other clusters. Overall, though, the age differences between clusters are small; the clusters are more similar in age than they are different.

Critics and Other Fans

I asked participants two related questions:

  • “How much do you care about what professional critics think of Star Wars movies?”

  • “How much do you care about what other Star Wars fans think of Star Wars movies?”

To start, we’ll compare the “critics” question across clusters:

To me, the comparisons that tell the main story are the “not at all” columns, where each cluster differed greatly (ps < .001): 28% of Prequel Skeptics did not care at all about what critics think, compared with 38% of Saga Lovers and 52% of TLJ Disowners. This makes sense, as The Last Jedi received generally positive reviews from professional critics, who praised Rian Johnson’s flouting of expectations.

Prequel Skeptics care the most about what critics think. All of the “somewhat” comparisons were significant (ps < .007): 30% of Prequel Skeptics reported “somewhat” caring about critics’ opinions, while 24% of Saga Lovers and 16% of TLJ Disowners did. This aligns with expectations, too, given that Prequel Skeptics agree most with critics’ middling takes on those films.

It should be noted, however, that no cluster exceeded 36% in caring “somewhat” or a “great deal” about what critics thought of Star Wars films. These respondents might pay some attention to critics, but professional reviews are not a primary concern.

But how much did respondents care about the opinions of their fellow Star Wars fans?

There was one statistically significant difference between the Prequel Skeptics and Saga Lovers (respectively, 5% and 8% reported they cared a “great deal,” p = .007).

But the primary takeaway is how different TLJ Disowners are from the rest of the fandom. For every response option, the differences between TLJ Disowners and the other clusters were statistically significant (ps < .004).

Comparing each response option across clusters suggests the same thing: TLJ Disowners care more about what other fans think than the other two clusters do. Fifty-eight percent of TLJ Disowners reported caring about what other fans think “somewhat” or a “great deal,” whereas neither of the other clusters surpassed 40%. My guess is that TLJ Disowners feel the Star Wars franchise slipping away from them as Disney announces plans for more movies and TV shows. For instance, numerous websites and forums are dedicated to negative feelings about The Last Jedi, and I think it’s probable that these sites seek to persuade other fans of this view.

George Lucas and TROS Excitement

I asked participants how excited they were for The Rise of Skywalker on a scale from 1 (not at all) to 10 (very much so). On the following page of the survey, I asked the same question again, but first noted that “J.J. Abrams, the director of Episode IX, consulted with George Lucas while writing the story and script.” Knowing George Lucas was involved lowered average excitement (p < .0001): the mean was 8.16 on the first question and dropped to 7.95 after respondents were told of Abrams consulting Lucas. A test comparing medians yielded the same result, a drop from 10 to 9 (p < .0001). Fans are still jazzed, but they might be a little reserved about George Lucas being consulted.
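Because every respondent answered both questions, paired tests are the natural analyses. A minimal sketch with hypothetical column names (the specific tests here are illustrative, not necessarily the ones I ran):

```python
# Sketch only: illustrative paired tests on hypothetical column names.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
before = df["excitement_pre"]   # before the George Lucas note
after = df["excitement_post"]   # after the George Lucas note

print(stats.ttest_rel(before, after))  # paired t-test on the means
print(stats.wilcoxon(before, after))   # signed-rank test for the medians
```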

Then I divided people into three groups: those who reported a lowered excitement on the second question, those who reported raised excitement, and those who reported the same for both questions.

Many Saga Lovers rated their excitement equally across both questions because 61% of this cluster maxed out at an excitement level of 10 both times. On the other hand, 16% of TLJ Disowners reported 1 on both questions. Curiously, 8% of this group rated 10 on both questions, showing that a small number of these folks are still holding out hope. About 33% of Prequel Skeptics reported 10 both times.

The biggest differences are that TLJ Disowners were more likely to report increased excitement after the Lucas note (38%) than Prequel Skeptics or Saga Lovers (both at about 7%, ps < .0001). Meanwhile, Prequel Skeptics were far likelier to report decreased excitement (42%) than Saga Lovers and TLJ Disowners (20% and 7%, respectively). The Saga Lovers seem to represent the overall consensus discussed above: they’re still hyped, but Lucas gives them a little pause.

Star Wars Fandom Survey, Part 1: Methods, Demographics, Validity Checks

Purpose

I put together this survey to better understand the attitudes of the Star Wars fandom. I greatly enjoyed the two sequel-trilogy movies so far (The Force Awakens, The Last Jedi), and I’m surprised by how divisive the films have been, so I wanted to survey fans of the movies to understand this controversy. After forty years, Star Wars is a mature cultural product, and different aspects of it appeal (or do not appeal) to different people. I designed this survey to look at the diversity of the fandom’s attitudes at this moment in 2019 while we all wait for the release of The Rise of Skywalker in December.

This is Part 1 of the survey results. This is the long, boring part where I discuss the necessary details of how I did the survey, who took it, and how we know we can trust the survey’s results. Parts 2 and beyond will investigate more substantive questions.

In the survey, I promised anonymity to the respondents, so this is a project where I will not be sharing the raw data. I won’t touch on every statistical detail so that the report is more reader-friendly (but I will touch on some things, like statistical significance). If you have any questions about the methods, results, analyses, or anything else, please feel free to e-mail sw.survey.2019@gmail.com.


Sampling Method

I recruited fans by “snowball” sampling, which is a type of convenience sampling. While much academic research uses this technique, it is less than ideal for obtaining a representative sample. But a representative sample wasn’t my intention here; instead, I wanted a cross-section of the Star Wars fandom so I could understand how different types of fans responded to the films. I cold-emailed numerous fan websites, podcasts, authors, and so on, asking them to take and share the survey. After the first thousand responses came in, I worried I wasn’t getting enough women (the count was under fifty at that point), so I simultaneously ran the study on Amazon’s Mechanical Turk, asking 300 Star Wars fans to respond and using TurkPrime.com to recruit only women. This is why Claudia Gray’s tweet sharing my survey said I was looking for women in particular. In all, the survey collected 5,330 responses over the course of four days.

Using this survey method has important limitations. Since the sample isn’t representative, I cannot speak to how accurately estimates like “75% of people liked the movie” generalize to the Star Wars fandom as a whole. Snowball sampling also creates unobserved dependencies in the data (e.g., two friends taking the survey are likely to have similar answers), which can cause trouble when estimating uncertainty. Because of this, I will be conservative in how I interpret uncertainty, and I’ll assume there is probably more uncertainty than what the math tells me.

What this method does allow us to do, though, is examine the underlying structure of, and relationships between, the attitudes of a very specific (but heterogeneous) group of people. I can look at which psychological and political attitudes correlate with liking films and characters. I can get an idea of how the fandom forms clusters based on which movies they like. I can see how age, race, and gender relate to these analyses. All of this appears in subsequent parts of the survey results.

Thus, this is a survey of the Star Wars fandom. If you’re looking for information that is representative of the general public (or at least of all Americans who have seen a Star Wars movie), then Morning Consult and YouGov provide fabulous polling on favorite movies, characters, and demographic trends. The Motion Picture Association of America (MPAA) publishes yearly reports that include representative demographics of those who go to top-earning box office movies. When these works are relevant to my findings, I will discuss them alongside the results. I do not use this information to weight this survey, as the general audience is not my target population. My target population is the more committed Star Wars fandom, for which I do not have information to weight on.


Survey Questions

I studied the psychology of prejudice and politics in graduate school, and I chose survey questions based on my personal research interests as well as what I thought would be useful predictors of attitudes toward Star Wars films and characters.

  • Referral. I asked participants how they found the survey.

  • Movie Favorability. I asked respondents how they felt about each of the main Star Wars movies on a scale from 1 (very negatively) to 10 (very positively). I didn’t include the spinoff movies primarily to make the survey shorter. In most of these questionnaires, I chose only a few items. This is because I wanted the survey to be short enough so that people—and not just the most dedicated people—would actually take it.

  • Character Favorability. On the same 1-to-10 scale, I asked participants how they felt about various major characters, which I limited to the movies I asked about. This is why beloved characters like Ahsoka Tano were not included. (Apologies to the many who wrote me wishing she were in this survey.)

  • Fandom Information. To better understand the type of fan responding, I asked people to rate themselves on a fan-rating scale from 1 (casual) to 10 (fanatic). I included a checklist of Star Wars activities: cosplay, reading the novels, attending meetups, and so on. Lastly, I asked how much respondents cared about what (a) other fans and (b) professional critics thought about the films.

  • Episode IX Excitement. I first asked respondents how excited they are for The Rise of Skywalker on a scale from 1 (not excited at all) to 10 (very much excited). On the next page, I remarked that George Lucas was consulted during the scriptwriting of Episode IX, and I asked the same question again.

  • Big 5. I asked participants to rate themselves on dimensions like how extroverted and anxious they saw themselves to be. This measures what psychologists call the “Big 5” personality traits, and it is one of the most reliable ways to measure personality. I used a ten-item scale to measure these personality traits (Gosling, Rentfrow, & Swann Jr., 2003).

  • Nostalgia. For many, Star Wars is intimately tied to nostalgia, so I asked people how nostalgic they were for various aspects of their past (friends, family, music, etc.). I used a shortened version of Batcho’s (1995) nostalgia scale to do this.

  • Ambivalent Sexism Inventory. These questions caused the most reaction. Social psychologists Glick and Fiske (1996) published a scale measuring two related types of sexism. The first is called “hostile sexism,” which captures the more traditional idea of what we think of as being “sexism”—i.e., that women are inferior to men, that gender relations are naturally antagonistic, etc. The second is called “benevolent sexism,” which captures a sneakier and seemingly positive form of sexism. For example, these are beliefs that women are pure and that men need to protect them. While these beliefs seem positive, researchers have shown in the two decades since Glick and Fiske’s original work that these beliefs can have negative effects for women. For brevity, I chose two items from each of the subscales. I measured sexism because part of the negative reaction to the sequel trilogy includes sexist rhetoric. The goal here is to study an empirical relationship between different types of sexism and attitudes toward Star Wars films and characters.

  • Political Correctness. A related criticism is that Disney has been preoccupied with “political correctness” in the sequel trilogy. This is a vague concept, but it can help predict attitudes. I asked how much respondents thought that “PC” culture was interfering with a free exchange of ideas. This item was taken from Lalonde, Doan, & Patterson (2000).

  • Tradition. The sequel trilogy has broken with some traditional aspects of Star Wars. George Lucas is no longer making the movies, and the young, optimistic hero of the original trilogy—Luke Skywalker—was depicted in the sequel trilogy as an old man exiling himself for mistakes he’d made. I selected four items that measure how much respondents prefer the status quo and tradition from McClosky (1958).

  • Empathy. Two items from the “fantasy” subscale of an empathy questionnaire published by Davis (1980) were included to assess how much respondents might empathize with characters from the films.

  • Movie Importances. Different people want different experiences from movies. Star Wars fans are a diverse group that often wants the movies to make conflicting narrative or stylistic choices; a single movie cannot please the entire fandom. I wrote a scale assessing how important various experiences might be when watching a movie, such as having fun or being emotionally moved, to get an idea of what Star Wars fans want from movies as well as how these wants relate to favorability toward episodes and characters.

  • Demographics. Lastly, I asked standard demographic questions about age (year born), gender, education, race, and political affiliation. Aside from political affiliation (which was a seven-point scale), I left all of these as open-response items (i.e., people could freely write whatever they wanted). I did this to (a) give people the freedom to identify how they identify, and (b) help weed out troll respondents, as trolls will generally write something that gives them away as antagonistic. I also asked participants how old they were when they first saw a Star Wars movie.

I asked participants at the end of the survey if they had any other thoughts they’d like to share, or if they’d like to provide their email so that they can be sent the results.


Handling Trolls

I ran this survey by myself, as a hobby, with no organizational or professional affiliations. Not wanting to spend much of my own money on a side project, I opted for snowball sampling instead of using an expensive panel. I knew that if this survey were to be spread widely, then there would be risk of troll responding (i.e., antagonistic respondents trying to influence and harm the validity of the survey’s results).

Removing trolls was a two-step process. First, I read through and manually coded all responses to the open-ended gender, education, race, and referral questions, and I read every thought shared at the end of the survey. I flagged responses that felt troll-ish to me, and any case with at least one flag was removed. Some examples of what felt troll-ish to me:

  • Race: “whitey,” “white like chalk,” “Lando” (person also identified gender as “Lando”), “Jawa” (also identified gender as “R2D2” and education as “X-Wing Fighter”)

  • Education: “your mom goes to college,” “your mom taught me everything I need to know,” “Uzbekistani National diploma in Kazakhaphobia”

  • Gender: “Pan generic bender fluid tomato, part chair,” “droid”

  • Referral: “By some SJW bitch,” “your mom,” “Zocdoc”

  • Shared thoughts: “gay,” “I had to fart twice during this quiz”

If I was unsure about flagging a response, I looked for other giveaways of troll responding, such as a claimed birth year of 1969, to verify that the respondent was trolling.

I want to make it clear that people were not removed merely for saying offensive things in the “shared thoughts” section; many people did, and they appear in this survey sample. I removed people only if I thought their responses indicated they were not answering the items in a faithful manner.

Second, I ran a few clustering algorithms on people’s responses. People responding genuinely tend to respond in similar ways, so clustering algorithms were able to find outliers and small clusters of unusual answers (e.g., always responding with the same number, or responding in opposite ways to questions that measure the same thing). These cases were hand-checked, and most were removed. Anyone claiming a birthday before 1930 was removed, as was one person listing their birth year as 2019.
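As a rough illustration of this second step, a density-based method such as DBSCAN flags points in sparse regions as outliers. This is a sketch of the idea, not my exact procedure:

```python
# Sketch only: DBSCAN stands in for whatever algorithms were actually used;
# the file name is a hypothetical stand-in.
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
items = StandardScaler().fit_transform(df.select_dtypes("number"))

# DBSCAN assigns the label -1 to points in low-density regions; these are
# the unusual response patterns to hand-check before removal.
labels = DBSCAN(eps=2.5, min_samples=10).fit_predict(items)
suspects = df[labels == -1]
print(len(suspects), "responses flagged for hand-checking")
```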


Final Sample

The sample started with 5,330 respondents. After cleaning the data, the total sample was 5,137.

Referrals

This table shows where most referrals came from. It includes any source that accounted for over 1% of the sample. The rest were collapsed into “Other, or too vague,” where “too vague” means the respondent said “internet” or “online.”

Source % of Sample
Twitter, unspecified handle 27
JediTempleArchives.com 17
Reddit, unspecified subreddit 10
Claudia Gray 9
Didn’t specify 7
Other or too vague 6
MTurk 6
Direct referral 6
TheForce.net 5
Bryan Young 3
Facebook, unspecified page 2
StarWarsCantina subreddit 2


I hope that this represents a decent cross-section of the fandom. I think fans of Claudia Gray’s novels, collectors that frequent JediTempleArchives.com, and Redditors on /r/StarWarsCantina represent decent varieties of the fandom. If you like this survey project, I highly recommend supporting JediTempleArchives.com, reading Claudia Gray’s books, listening to TheForce.net’s podcasts, and checking out Bryan Young’s projects, such as the Full of Sith podcast. I could not have collected so much data without their help.


Demographics

Age

The average birth year was 1984, and the median was 1985. Half of the sample was born between 1976 and 1993, which means that 25% of the sample was born before 1976, while 25% was born after 1993. A wide range of fandom generations were captured here, from people who were adults when A New Hope was released to those who were babies when The Phantom Menace was released.


Gender

36% of the sample identified as a woman, 60% as a man, and 2% as non-binary or transgender. 1% did not respond to the question.

How does this compare against probabilistic samples designed to be representative? The 2015 MPAA Theatrical Market Statistics report estimates that 58% of people who saw The Force Awakens in its first two weeks were men and 42% were women. The 2016 report estimates 59% men and 41% women for Rogue One, and the 2017 report has The Last Jedi at 60% men, 40% women.

It is important to note, however, that these estimates only consider viewers in the United States. The current survey did get some international respondents, though I failed to ask in what country people resided, so it is unclear precisely how many respondents were located outside the United States.

It is unclear if the lower percentage of women in the current sample is due to (a) the nature of the sampling method, or (b) the fandom’s most dedicated group containing more men.

Men in the current fandom sample, however, do self-report as more “fanatical” than respondents identifying as women, non-binary, or transgender. When I show results, I collapse non-binary and transgender respondents solely because only 10 people identified as transgender.

Race

Race % of Sample
Asian or Pacific Islander 4
Black or African-American 2
Latinx 7
Multiracial 3
White 78
Didn’t respond 6


The sample was less racially diverse than the data reported in the aforementioned MPAA reports. The racial compositions they estimated for theatergoers in the first two weeks after each release were:

Film    AAPI    Black    Latinx    White    Other   
TFA 7% 12% 15% 61% 5%
RO 8% 11% 15% 62% 4%
TLJ 9% 11% 18% 57% 4%


Like gender above, this difference between the current sample and representative ones could be due to (a) the nature of the sampling method, or (b) the fandom’s most dedicated group containing more White people. But unlike gender above, non-White respondents say they are just as “fanatical” as White fans. I collapsed across all non-White respondents to produce a bigger sample size for this estimate.

Education

Education % of Sample
Advanced degree (e.g., MA, PhD, MD) 20
College degree (e.g., BA, BS) 51
Less than college degree 27
Didn’t respond 3


Political Attitudes

1% did not answer this question.


Self-Reported Fandom

1% did not answer this question.


Fan Activities

Star Wars Activity % of Sample
Follow news 90
Have collectibles 79
Watch cartoons 77
Read novels 73
Play video games 69
Read comics 58
Listen to podcasts 42
Play board games 31
Attend conventions 27
Make art 21
Cosplay 19
Write fan fiction 18
Go to meetups 13


The self-reported fandom and activity questions further support that dedicated fans were sampled.


Scale Validation and Sanity Checks

Even after removing troll respondents, I wanted to ensure that these data make sense. This section contains “obvious” and/or established findings; I wanted to replicate them here so that we can trust the data are telling us what they should. To get technical for a second: I performed confirmatory factor analyses on the scales, and I compared correlation matrices of MTurk- and non-MTurk-recruited individuals. When I use these scales in later parts, I will include these details in technical appendices. But for now, know that the correlations made sense. I highlight a few here.
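The MTurk comparison, for instance, can be as simple as computing each group’s correlation matrix and checking how far apart the two are; a minimal sketch with hypothetical names:

```python
# Sketch only: the file and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
items = ["hostile_sexism", "benevolent_sexism", "pc_beliefs", "conservatism"]

mturk = df.loc[df["referral"] == "MTurk", items].corr()
other = df.loc[df["referral"] != "MTurk", items].corr()
print((mturk - other).abs().max())  # largest discrepancy for each item
```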

We know that people who like a Star Wars episode probably also like a movie from that same trilogy. What I show below is a correlation plot of how much people liked each Star Wars movie. In all of the following parts of the survey results, I will refer to the Star Wars episodes by the abbreviations of their names: The Phantom Menace is TPM, A New Hope is ANH, etc.

Larger, bluer circles mean the correlations are more positive; larger, redder circles mean the correlations are more negative. Small, faint circles mean that the correlations are closer to zero, which means that attitudes toward the movies are unrelated to one another. A big “X” over the box means that the correlation was not significant at p < .01.
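For those curious, a plot of this kind can be built from the movie-rating correlation matrix. The sketch below uses a plain heatmap in place of the sized-circle design, with hypothetical file and column names:

```python
# Sketch only: a heatmap approximation of the correlation plot described
# above; the file and column names are hypothetical stand-ins.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
corr = df[["tpm", "aotc", "rots", "anh", "esb", "rotj", "tfa", "tlj"]].corr()

fig, ax = plt.subplots()
im = ax.matshow(corr.values, cmap="RdBu", vmin=-1, vmax=1)  # blue = positive
ax.set_xticks(range(len(corr)))
ax.set_xticklabels(corr.columns)
ax.set_yticks(range(len(corr)))
ax.set_yticklabels(corr.columns)
fig.colorbar(im)
plt.show()
```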

We can see what we expect: movies within the same trilogy tend to correlate with one another. Interestingly, Return of the Jedi correlates more strongly with the prequels than the other two original-trilogy movies do. We also see small, negative correlations between attitudes toward The Last Jedi and the original trilogy.

The correlation between the “PC” question and self-reported conservatism was r = 0.63, which is what we would expect from previous research by both academics (Lalonde, Doan, & Patterson, 2000) and the Pew Research Center.

We see that conservatism is correlated with hostile (r = 0.66) and benevolent (r = 0.39) sexism, and women score lower on both than men, which is consistent with previous research (Christopher & Mull, 2006; Glick & Fiske, 1996).

These patterns incorporate some of the most sensitive items in the survey, and the relationships we observe are well established in the literature. This leads me to believe that, despite the unusual snowball sampling methodology, the results I will present are valid.


Attitudes Toward Movies

Lastly, we can look at how the overall sample feels toward each of the movies. “Mn.” represents the mean score, “Mdn.” the median, and “Var.” the variance. The variance tells us how much responses varied; The Last Jedi had by far the highest, which shows how strongly opinions on the film differ. In future parts, I will look into these divided opinions toward The Last Jedi as well as the other films.