Augmented ethnography: processing qualitative data from massive conversations

I am working on a project called Edgeryders, a massively collaborative exercise to reassess and redesign public policy towards youth. The idea is to get participants to share their experiences on policy-relevant topics, like how we make a living or how we participate in public life. As the project picks up speed it does what crowdsourcing exercises do: it throws up a torrent of experiential data. In my book and elsewhere I have claimed that respectful conversations converge: a consensus is reached, and we can just move on. I still think that is true, and fairly obvious to the participants. The problem is how to convey it in a verifiable form to external observers – European governments and the European Commission in the case in question. Demanding that they go through even a small part of the raw data is simply not realistic. So what do we do?

My best guess is ethnography. Ethnographic methods are particularly well suited to this kind of investigation, because they are designed to embed the point of view of the people being studied. For the same reason, I would argue that they are well suited to Internet ethics: we are not the lab rats, we are the lab itself, just as we are not the users, but the protagonists of our online meeting places. Modern ethnography employs software like Atlas.ti or its open source counterpart, Weft QDA, to annotate interview transcripts.
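To make the idea of annotation a little more concrete, here is a minimal sketch in Python. It is not Atlas.ti's or Weft QDA's actual workflow, and the excerpts and codes are invented for illustration; it simply shows what qualitative "coding" amounts to: attaching analyst-chosen tags to excerpts of a conversation, then summarising how often each code appears, so that an external observer can check the summary against the quoted passages.

from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass
class Annotation:
    excerpt: str        # the quoted passage from the conversation
    codes: List[str]    # ethnographic codes the analyst attaches to it

# Hypothetical excerpts standing in for posts in a large online conversation.
annotations = [
    Annotation("I freelance and do odd jobs to get by.",
               ["precarious work"]),
    Annotation("We started a co-op with friends from the hackerspace.",
               ["self-organisation", "precarious work"]),
    Annotation("Voting feels pointless; we organise online instead.",
               ["participation", "self-organisation"]),
]

# Count how often each code is applied: a crude but verifiable summary
# of what the conversation is converging on.
code_counts = Counter(code for a in annotations for code in a.codes)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")

The point of such a summary is not the counting itself, but that every tally traces back to specific excerpts, so the claim of convergence can be audited without reading the whole corpus.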