Surveys and validation

Obligatory disclaimer: My views do not necessarily reflect those of the AVEN survey committee. I speak only for myself.

When the AVEN Community Census, a volunteer-run survey of asexual communities, approached the data analysis phase, someone on the committee said, “Who will help me eat the bread?” That phrase stuck with me as a particularly apt description of survey creation, albeit completely backwards.

The question, “Who will help me eat the bread?” comes from the folk tale “The Little Red Hen”. In the story, the hen asks for help with each step of making the bread, but no one volunteers. Finally, she asks who will help her eat the bread, and suddenly everyone is feeling helpful!

As the committee member put it, writing the questions and disseminating the survey is the hard work to make bread. Analyzing the survey is eating the bread. Everyone wants to know the results!

But having actually worked on the survey analysis, my impression is that writing the questions is eating the bread, while data analysis is just plain work. Even in my own case, I’m enthusiastic about doing analysis, but it’s a lot of work and I can’t seem to find the time. (In contrast, finding time for blogging is much easier.) And this pattern is not unique to me. You can see from the history of asexual community surveys that the hardest part is getting the analysis moving.

Writing questions also garners a lot of community interest, judging by the abundance of complaints about questions and the scarcity of complaints about the lack of analysis. More specifically, people are interested in what’s in the questions, because questions are a form of validation. People really want to see their particular identity, their particular experience, and their particular views reflected in the survey.

And this is all wrong, in my opinion. What makes a good survey question is independent of validating identities. For example, when choosing what romantic orientations appear on the survey, the primary consideration is to capture the most common combinations of identities, basically so we can minimize the labor of interpreting write-in responses, and create a succinct summary of results. To assign validity according to whether the romantic orientation appears in the survey is to assign validity based on popularity.

There are other concerns too, such as whether an experience can be easily quantified, whether it tells us something we already knew, and whether it’s the sort of thing we want to measure on an annual basis, or just for one year. Plus there are a bunch of random personal factors and plain stupid oversights, so I’m not saying it’s wrong for people to complain.

Complaints about survey questions evoked many mixed feelings in me. Some critiques of the survey were valid, and I don’t wish to discourage people from giving them. And even if there’s nothing to be changed, people’s feelings are still valid and can be expressed freely. But clearly a lot of people have an emotional need for validation from authority, any kind of authority at all. I don’t want to be your benevolent authority, I want you to be free.

About Siggy

Siggy is a physics grad student in the U.S. He is gay gray-A, and makes amateur attempts at asexual activism. His interests include godlessness, scientific skepticism, and math. While not working or blogging, he plays video and board games with his boyfriend, and folds colored squares.

6 Responses to Surveys and validation

  1. Kasey Weird says:

    I do, to some extent, judge the validity of a survey’s questions (and thus its resulting data) based on whether it is even possible for me (or others) to answer the questions honestly. I’m not totally clear from your post here whether the AVEN survey had write-in options (which would completely eliminate the criticism I’ve laid out here), but if it didn’t, then having limited options available would actually invalidate the survey itself, at least as a representation of the general ace community. It would then only be a representation of the specific subset of people who fit the categories included. Or am I missing something?

    • Sennkestra says:

      Depending on the question, many had “other” or write-in options where we knew not all categories were listed – the goal is just to minimize the number of write-in or “other” answers. The vast majority of questions could also simply be skipped and not answered. (You can see the actual survey text at asexualcensus.wordpress.com.)

    • Siggy says:

      Having limited options does not necessarily invalidate a survey question. If 1% of respondents are not included in the options, then the survey is still accurate to within 1%. I mean, that kind of error certainly isn’t good, but it’s not the end of the world. Letting people leave the question blank or answer with “other” allows us to mitigate and assess the damage, but it still leaves this 1% that we know nearly nothing about.

      But you see, considering the problems with analysis is completely different from considering whether those 1% have valid experiences. We have different priorities, which are only incidentally aligned with the priorities of validating people’s experiences.
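
      A minimal sketch of the coverage bound described above, with entirely hypothetical category names and tallies: if a fraction f of respondents fall outside the listed options, any listed category’s true share lies within f of its observed share.

```python
# Minimal sketch of the coverage bound described above, using
# entirely hypothetical tallies for one survey question.

responses = {
    "heteroromantic": 120,
    "homoromantic":   310,
    "biromantic":     240,
    "aromantic":      320,
    "other/blank":     10,  # respondents not covered by the listed options
}

total = sum(responses.values())        # 1000 in this made-up example
f = responses["other/blank"] / total   # uncovered fraction, here 1%

for category, count in responses.items():
    if category == "other/blank":
        continue
    observed = count / total
    # Each uncovered respondent might or might not belong to this category,
    # so the true share lies somewhere in [observed, observed + f].
    print(f"{category}: observed {observed:.1%}, "
          f"true share in [{observed:.1%}, {observed + f:.1%}]")
```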

  2. Sennkestra says:

    I totally agree with you about the division of labor problems with the census – writing questions and posting links is great fun, but the actual analysis is slow, takes work, and attracts far fewer people who will actually do it. (And I can speak from experience as one of those people who was totally gung ho about analysis but, as you can see, I’ve barely done anything after that first week or two of enthusiasm after the survey finished…so I am totally a part of the problem.)

    I think the problem with analysis is that everyone wants to *see* the analysis, but no one actually wants to *do* it – because that’s hard and slow (and perhaps more importantly, requires some amount of technical skill and training before you can get into the fun stuff). That’s the other big problem – writing questions takes little to no expertise (though writing good questions might be another matter) so everyone can do it if they have a little enthusiasm. Analysis takes skills and training, though, so enthusiasm isn’t enough.

    It also only takes a few days to write a set of questions, so it’s easy to get that done during people’s initial bursts of enthusiasm. But good analysis can take months, so it’s much easier for people to get distracted by life and other exciting new projects and end up never actually getting anything done (totally guilty here again).

    Community-based, volunteer-run surveys also have the problem of little to no supervision or deadlines for motivation – there are no consequences for putting things off for another week, or another month, unlike when your job depends on producing research or you have an advisor breathing down your neck.

  3. Sennkestra says:

    Generally speaking I feel like there are two sorts of requested changes: “bad data” concerns and “feel-good” concerns (maybe not the most polite terms, but that’s how I label them in my head).

    “Bad Data” concerns are cases where something about the question will lead to bad results – for example, if the first question in your survey is “are you male or female” with no other option and no ability to skip, you’ll lose a large portion of ace respondents and that will skew the data.

    “Feel Good” concerns are cases where something about the question makes people upset or dissatisfied in some way, but that don’t necessarily lead to bad data. For example, there was some criticism when we asked about sex assigned at birth, from people who were morally against acknowledging sex at birth. Asking the question, however, did not skew the data but actually provided better data than if we had omitted the question.

    There can, of course, be overlap between the two types of concerns – if lots of people get so mad about a question that they all stop taking the survey, a feel good concern can become a bad data concern. And concerns that lead to bad data can often make people feel upset too.

    There are also good reasons for and against considering each kind of concern. For example, “feel good” questions – like write-in-only questions whose data is too messy to analyze – may be useless for analysis, but if they keep people satisfied enough to keep taking your surveys, they can still be useful. And sometimes there are questions that may make some people upset but that still provide really important data on the rest of the respondents (and making questions skippable helps make this a better compromise).

    On the other hand, including options that are only there to make people “feel good” can sometimes have serious detrimental effects on the data that make it basically useless (the original AAW surveys had serious problems with this, as they were designed, IMO, more as inclusivity/publicity tools than as serious data collection attempts). And sometimes you want to balance the desire for more data against the desire to maintain certain levels of respect and politeness and ethics.
