Finally got back to my Master's dissertation this week, and began by reading a thought-provoking paper by Lydia Plowman and Christine Stephen of the Interplay project, based at the University of Stirling. I had the privilege of meeting Lydia a few years ago at the CAL'05 conference, where some colleagues and I had also presented our emerging work from a research project based at the Graduate School of Education in Bristol as a symposium. Our methodology then, as mine does now, involved the analysis of video data collected in my classroom, and since this medium was also used by Interplay I was very interested to hear what Professor Plowman had to say. In the case of my current project, the data was collected by colleagues during the InterActive Project. It involved video capture during eight Year 4 numeracy hours, where the students used Excel in guided investigations to support exploration of the functions of graphs and charts. As my tutor can verify (and I have to say she has been incredibly patient with my perfectionist attitude), I have spent considerable time stumbling around the methodological process: not the analysis per se, but the sheer volume of data collected and how I am actually going to write it up. There is also an at times alarming facet to this, in that every time I come back to the data I find something new, and with this a new set of questions emerges. This is an unfortunate by-product of a rambling mind rather than a lack of focus in my questions.
We have used the idea of "iterative cycles of review" as a way of facilitating "ways of seeing." At first hand, video seems to offer a cornucopia, but it is easy to assume that what we see is what we get, and that everyone will read the data and see the same things that we do. This is frequently not the case. Let's take the example of a visit to the cinema with friends. On leaving we may all have watched the same movie, yet from the discussions that emerge you could be mistaken for believing differently, as the group identify different parts of the film to talk about. Taken out of context, to an outsider listening in, these apparently disparate episodes would seem disjointed, even alien, and have little meaning. There may be common high-impact moments, but more often than not we highlight very specific individual moments of interest, parts that stood out and had personal impact and meaning for us, and which are very different to those of our friends. Even the same episode might be described from different perspectives: the visual effects, the protagonist's reaction, and so on. As the discussion evolves we might eventually come to focus on specific moments, since the outing was a social event and we seek to create a common experience from the whole we have shared. Our approach to "iterative cycles of review" involved a similar idea: a number of "viewers" worked together collaboratively to identify common themes within the data collected and, using our shared experience over time, set out to negotiate common meanings through discussions mediated by the data we had engaged with, attempting to tell a shared story based on the common experiences presented in the evidence we had evaluated. Within the analytical process for the symposium above, we also used the Studiocode software environment to tag episodes from the video which corresponded to instances of events and actions within categories we had agreed upon and identified in the data.
This raised some interesting questions for me around perceptions of what happens when children learn in teacher-designed, ICT-mediated situations, the role of the teacher, and what learning looks like during ICT-supported tasks; hence my return to the original InterActive Project data I am using for this project.
In the Plowman and Stephen paper, the authors have used Plasq's Comic Life to present screen-captured video data alongside captions and examples of dialogue between adults and students in the preschool/foundation stage playroom, exemplifying particular forms of what they call "guided interaction." They use comparisons between ICT and non-ICT based activities as frames to support "ways of seeing," showing how the interactions observed within more traditional activities might develop, evolve, and compare with interactions mediated by ICTs as learning tools. The comic strip format sets out to share what has been learnt from their research; as well as forming part of the frame used in data analysis, it is intended to support the dissemination of findings to preschool staff, helping them to identify examples of guided interaction within their own use of technology.
I am very much an amateur Mac user and "emergent researcher." My Mac has largely been employed for multimedia work, video editing and the like, but I was so inspired by what I read that I felt I ought to try out the software, which may help me present the data I want to discuss in my project submission. It also has the potential to extend my current assessment for learning practices and evidence collection activities as I prepare my portfolio for NAACE Mark assessment. In previous assignments for my degree I have presented data from video as "snippets from Learning Conversations" or as "Narratives of Learning," using text supported by images captured from video, or as linear presentations using PowerPoint to support observation, analysis and presentation of visual evidence. These digital PowerPoint learning stories (Narratives of Learning) have proven to be useful tools in presenting classroom learning as it unfolds through linked events emerging from the class as a community of practice. I have, however, become increasingly aware through engagement with video of my students at work of the dangers of focussing solely on "the moment," and of relying on dialogue and conversation alone as evidence when observing students using ICTs. I have also begun to think about what I have come to see as "meaningful inactivity": times when students seem not to be involved physically or verbally with a task at the computer, but when asked about it are able to discuss what they and their partner have been doing or what they have learned from the experience. This type of action has drawn into focus some practical everyday issues, like those annoying moments when a student who has apparently not engaged with a class task is then able to relay what has been happening around them.
There are also occasions when single-word utterances between students seem enough to change the direction of a computer-based (or non-computer-based) task. However, when these utterances are viewed within the original context of the video, gestures and visual feedback from the screen become part of a more complex picture than this apparently simplified student discourse would suggest. Language may be a key support in the movement of a task, but the other tools used also seem to mediate and facilitate changes in direction through feedback, with the gesture and motion of the participants communicating what they would like to happen. The richness of the visual and verbal context presented in the video of the activity system which evolves around, in this case, the computer provides a much broader context for the activity, and much of this can be lost as elements are removed during analysis. And herein lies my conundrum: how to present this richness. So far I have used a range of methods, including transcription and an emerging framework for multimodal analysis of the data, but this is heavily dependent on detailed description, relying on the written word to provide context for the activities and actions of those involved. Maybe a multimodal approach to presenting the data will help refine the analytical and presentation process. It will be interesting to see whether the comic strip format will enable me to engage with the data and tell this particular set of learning stories, while enabling me to visualise and formulate more clearly what the evidence seems to be telling me about what learning looks like as action in ICT-mediated situations.