I confess… I wasn’t in the mood for learning anything on the first morning of the EdD. I was hungry after a long run, and an insubstantial, sickly-sweet breakfast had left me feeling on the verge of tears and ready to hit someone. It prompted me to think about my own students and what random trials and tribulations might be stealing their focus away from learning…
…after a couple of hours I managed to calm down a bit and focus on the topic at hand, which was “what constitutes ‘research data’?”. I cover a similar topic with my own students on the Teaching Development Project unit, but rather than telling them the answer I hand out a couple of examples and get them to highlight everything they think counts as ‘data’. Yes, it’s supposedly more pedagogically sound than listing things on a PowerPoint, but it is quite paper-intensive, and finding a couple of really good examples can be difficult (I try to avoid using mediocre examples in my teaching activities unless the point of the activity is to find/correct the flaws). Graham’s slides were pretty efficient. Interestingly, apologising for text-heavy PowerPoint was a running theme of the day. Now, we’re doctoral students – we can handle a few bullet points – but if people genuinely feel a different approach would be more effective, then what’s stopping them, eh? 😉
I really liked these four questions about planning for data collection – they’re more concise, and go a bit further, than the prompts I give my TDP students:
- What do I need to know & why?
- What’s the best way to find this out?
- What will the data look like?
- When I have the data what will I do with it?
I was curious at this point about the contexts in which one would include an explicit justification of one’s data collection methods, as it’s not something you commonly find in research papers. I realised that I don’t give this aspect of research planning much weight on the TDP unit, primarily because it’s a small-scale, intervention-based action research project, and once students have justified their method in terms of the actual teaching intervention they don’t have much space left to rationalise their choices about collecting and using evaluative data. However, this year I introduced a light-touch Ethics Approval process into the project schedule, which was really useful for getting swift remedial feedback to those who were about to hit their students with the wrong kind of questions. I think I can build on this to ensure that everyone gives due consideration to these aspects of their project design at an early stage.
[For future reference… when I come to do my own thesis, I am supposed to sow the seeds to rationalise the methods used in the intro chapter where I set out who I am, why I’m doing this research and from what stance, and then refer back to that in the methodology section.]
Other useful bits and pieces from this session included a slide on the nine steps of research design, and a simple table comparing quantitative and qualitative research. Either column would make a good checklist for proposal-writing.
This session also helped me to appreciate that I actually *did* learn something from the Research Methods in Education unit on the MA (the one where I just picked the easiest-sounding essay title, like the strategic learner I was back then). I think writing a long essay on validity, reliability and triangulation might have enhanced my understanding of which questions will get you the information you need… or that might have just come from years of carefully constructing student feedback questionnaires. Who knows. It probably didn’t do any harm 😉