(Notes from qualitative data analysis workshop 12.03.16)
In Graham’s session on qualitative data analysis we discussed the value of preliminary or ‘primary’ analysis of the data, beginning early in the process of data gathering (i.e. during/after the first interview), with the following benefits:
- You can check you’re getting what you want
- You can check that you are being consistent in your approach
- You can see if additional or different forms of data would be helpful – e.g. would a photo or video be useful? A diagram?
- You can start to gauge how long analysis is going to take.
- You can see what can be discarded/not collected.
Primary analysis can take place during interview transcription, while compiling fieldnotes or assembling documents. Notes might include significant/surprising things and key points, contradictions and inconsistencies (between people, places, expectations), common/emergent themes and relationships with existing literature – comparisons and contrasts with other findings, etc.; am I finding similar ideas or radically different ones? If the latter, is it because I’m doing it wrong, or because I’m making a fresh contribution? It’s a good idea to start summarising data visually even in the primary stages, e.g. on charts, figures, tables, maps, spider diagrams, etc.
Given that the primary method in my current thesis plan will be interviewing, I started thinking about respondent validation and when that should happen. I wondered if I should share my analysis with the people I interview as well as the transcript. I guess I won’t know until I start the analysis, but I worry I’ll misinterpret what people tell me. Graham assured us that while our interpretations should be ‘honest to the data’, the connections we make will and should be our own, and we should just validate at the transcript stage. I’m still not sure though; I like the idea of a more dialogic approach, an ongoing conversation – and I think it would be appropriate to what I’m investigating (NB David Scott writes about the relative merits of democratic and autocratic approaches to respondent validation in Chapter 3 of Understanding Educational Research).
We’ve been advised that if we do a pilot and it is close to what we end up doing, it’s fine to include that data with the rest. I seem to recall that isn’t the case for our feasibility studies though (which I should probably get on with, right?) – they only have M-level ethics clearance and I think they are only meant to inform the development of the research proposal. I am thinking I probably need to do another feasibility study given the changes in my project; one that involves at least one interview.
Once all the data is in, secondary analysis will identify categories and themes. The aim is often to generate mutually exclusive categories – e.g. ‘truths’, ‘feelings’ and ‘experiences’ – see the ‘constant comparison’ approach. Different category formulations may need to be tried out, and exclusive categories may not always be possible, for various reasons (we may be questioned in our viva about our choice of codes).
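I found it easier to grasp the shape of constant comparison by sketching its bare logic as a loop. This is only a toy sketch in Python: the fits() check is a crude word-overlap stand-in for the researcher’s judgement, and the excerpts are invented; the point is just the compare-then-file-or-create rhythm of the process.

```python
# Toy sketch of the constant-comparison loop. fits() is a crude word-overlap
# stand-in for the researcher's judgement and the excerpts are invented;
# this only illustrates the compare-then-file-or-create shape of the process.

def fits(excerpt, category_members, threshold=2):
    """Crude placeholder: does this excerpt share enough words with the
    excerpts already filed under a category?"""
    words = set(excerpt.lower().split())
    return any(len(words & set(m.lower().split())) >= threshold
               for m in category_members)

def constant_comparison(excerpts):
    categories = {}  # provisional category name -> excerpts filed under it
    for excerpt in excerpts:
        for members in categories.values():
            if fits(excerpt, members):   # compare with what's already there...
                members.append(excerpt)  # ...and file it if it fits
                break
        else:
            # nothing fits: propose a new provisional category
            categories[f"category_{len(categories) + 1}"] = [excerpt]
    return categories

sample = [
    "I felt the fees were never really explained to us",
    "the fees were explained badly and I felt short-changed",
    "my tutor made the whole department feel like home",
]
for name, members in constant_comparison(sample).items():
    print(name, members)
```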
Responses may be multilayered – some respondents might answer in a single layer, while others tell you additional things in response to the same question, to explain or clarify their answer (N.B. Ethnographers may dispute the need to impose an order on the data at all, preferring to experience a more holistic picture).
Obviously we don’t have to code up all the data we collect; just that which is considered relevant and important (codes are labels or phrases given to a particular aspect or theme in the text). The thought of using NVivo to code data doesn’t excite me – I’d like to have a wall of post-its, string and other artefacts, like a crime room.
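Just to make ‘codes are labels given to the text’ concrete for myself, a coded transcript really boils down to something like this – the excerpts and code names here are entirely made up, and the tally at the end is the kind of overview NVivo (or a well-arranged wall of post-its) would give at a glance:

```python
from collections import Counter

# Hypothetical coded excerpts: each bit of transcript carries the labels
# ('codes') attached to it. The quotes and code names are invented.
coded_excerpts = [
    ("I only ever come onto campus for the library",
     ["physical space", "study habits"]),
    ("university, for me, is really the people I argue with",
     ["relationships", "conceptions of the university"]),
    ("the fees made me think twice about what I was actually buying",
     ["money/fees", "consumer framing"]),
]

# Tally how often each code appears across the coded excerpts.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)
for code, n in code_counts.most_common():
    print(f"{n} x {code}")
```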
Analysis tips:
- Read, underline sections, make marginal notes.
- Re-read to identify patterns. Highlight quotes that are important/illustrative.
- Look at repetitions and relationships between data (may mean new codes have to be created, existing codes may need to be merged or removed).
- What is important about the links between data/codes…? Are codes related/paired? E.g. when people talk about money/fees, what else are they talking about? (See the co-occurrence sketch after this list.)
- Note how well the material fits the coded themes.
- Review the results, looking for overlap & redundancy.
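On that money/fees question: here is a rough sketch of how I might count which codes turn up alongside a given code across responses. The codes and responses are invented; the point is just the pairing-up idea.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coding of whole responses: each respondent's answer carries
# the set of codes attached to it. All of these codes are made up.
coded_responses = [
    {"money/fees", "value for money", "anxiety"},
    {"money/fees", "family expectations"},
    {"belonging", "physical space"},
    {"money/fees", "value for money", "consumer framing"},
]

# Count how often each pair of codes is attached to the same response.
pair_counts = Counter()
for codes in coded_responses:
    pair_counts.update(frozenset(pair) for pair in combinations(sorted(codes), 2))

# So: when people talk about money/fees, what else are they talking about?
for pair, n in pair_counts.most_common():
    if "money/fees" in pair:
        other = next(code for code in pair if code != "money/fees")
        print(f"money/fees co-occurs with '{other}' in {n} response(s)")
```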
A couple of examples:
Descriptive typologies – for example in Alison Shreeve’s thesis (‘Dropping in’, etc.). Assemble the best examples and describe the common features that characterise each group. This might be relevant for my thesis – ‘ways of conceiving the university’? It might depend on what the exact purpose of the study is, and I’m not entirely sure about that yet.
Grounded theory – often uses two rounds of sampling. First, a homogeneous sample is analysed – a group of people with a similar experience. Then a theory is developed and proposed, and a second, heterogeneous sample – where individuals have had a range of experiences – is used to confirm or disconfirm tenets of the theory.
So – if I wanted to generate a theory about the influence of PG Certs on conceptions of the university, I might first do a homogeneous sample of PG Cert graduates, and then a wider sample from across the university teaching population? Hmm. I’m not sure I’ve got this right… it sounds like an upside-down kind of experimental methodology. I need to look it up in one of my lovely research methods books.
We did an interesting activity near the end of the session where we coded an interview transcript (on teens and drug use) and then saw how the researcher had coded up the responses. This exposed differences in coding approaches. I felt the researcher had made certain assumptions in interpreting the responses that I didn’t share (e.g. ‘adult negative stance’). They also focused on themes I didn’t find interesting (cf. Davis 1971), and hadn’t highlighted the thing I *did* find interesting.
This was a useful session that gave me a modicum of enthusiasm for actually collecting some data (boy do I need some enthusiasm), and helped me to develop my thesis proposal a little further. My proposal is now leaning towards an ethnographic study of our conceptions and imaginations of the university and where they come from. I am thinking about a series of conversational interviews that take place beyond the physical space of the university (e.g. while walking the dog in the park). This will give me the opportunity to draw on all the literature I find the most interesting – the literature on conversation (Zeldin), empathy (Krznaric), ideal speech situations and undistorted communication (Habermas), language and textuality (Usher)… etc etc. And if by the end of this whole thing I decide that academia is not for me, at least I will have got better at talking to people, right?