Bundling some things I posted elsewhere, exploring


Feedback on a text via screencasting? is the first example illustrating the Video For All project, part of the European Commission’s Lifelong Learning Programme. It explains how screen capture software can be used to provide feedback on a text via screencasting.

The embedded illustrating video is indeed pleasant …
… if you can see and hear. However, if you are blind, you won’t know what the commenter is talking about when he uses deictic words (like “here”), and if you are deaf, you won’t know what he says, because the video is not captioned: i.e. this way of providing feedback is not consistent with the EC’s promotion of accessibility, let alone with educational common sense.

Moreover, even a hearing and sighted student who wants to integrate the audio feedback into his/her text will have to keep switching between the video file and the original text file, and that’s time-consuming, especially for someone who can’t use their hands and has to rely on voice commands.

What a blind person would perceive

Yeah, I just want to provide you with a little bit of feedback on your essay, and the first thing I want to point out is, if we come down to here, I was noticing that you were talking about – I think you’re not describing very clearly, exactly what screen capture does, and I think it’s very important that we explain that if you highlight anything, if you write anything, if you move your cursor on the screen, on the student’s written work, then that comes out in the essay: because I don’t think you’re making that point very clearly, here. So I think you might need to re-think that point. Another little point I just want to bring up is here: you used the word “live”. Now, I think “live” is not really the right word, I think that you need to explain it’s not a live recording, because obviously, the student is playing it back after the teacher has made the recording, but what it is more is a conference, it’s very similar to conferencing in a way that it kind of feels like the teacher is sitting next to you, going through your work and explaining some of the mistakes. So, maybe I think, here it’s the wrong choice of word and you might need to think about that as well. I think, one of the things here is you’ve pointed out different screen capture tools, and it might be a good idea that when you point out these tools, to talk about which tools are free and which tools are paid, because JING is a free tool, but Camtasia certainly will cost teachers a hundred and — I think it’s about 170, 180 dollars to buy. So I think it would be useful to provide that information in your essay.

What a deaf person would perceive

Abstract part of an essay entitled “A revolutionary way to provide feedback”. The cursor moves around randomly. The text scrolls down, then stops. The cursor highlights “Every move you make on the screen, every web page you open, picture you view etc is simply recorded as a video” in blue. The highlight color is changed to yellow. The text scrolls down further, then stops. The word “live” is highlighted first in blue, then in yellow. The text scrolls down further, then stops. The words JING, Screenr, Camstudio and Camtasia are highlighted first in blue, then in yellow.

How can these barriers be removed?

Captioning and audio description

It may seem paradoxical at first to illustrate a project called Video For All with a use of video that excludes deaf and blind people, and is anyway unwieldy to apply to the textual object of the video, particularly for people with motor disabilities.

True, the video result could be made accessible to all by integrating an audio description of the visual elements, and captions for the audio comment.

However, while captioning videos can be done very easily with freely available desktop and online tools (I’ve done it here with the online tool for the video of the example), integrating an audio description is trickier: see the “Now, on to Audio Descriptions” part in Greg Kraus’ HTML 5 Video, Text Tracks, and Audio Descriptions Made Easy (or at least easier) (NC State University IT Accessibility Blog, June 14, 2011). It’s not only a matter of technical difficulty and cost, but also of the time needed to select which visual elements need descriptions, and to produce audio descriptions that will fit within the audio of the video.
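For readers who want to see what the text-track approach from Kraus’ article looks like in practice, here is a minimal sketch of an HTML5 video with both a captions track and a text-based descriptions track (which screen readers can voice). All file names here are placeholders, not the actual files from the example:

```html
<!-- Minimal sketch, not the actual published page:
     one captions track for deaf viewers,
     one descriptions track that assistive technology can read aloud. -->
<video controls>
  <source src="feedback.mp4" type="video/mp4">
  <track kind="captions" src="feedback-captions.vtt"
         srclang="en" label="English captions">
  <track kind="descriptions" src="feedback-descriptions.vtt"
         srclang="en" label="Text descriptions of visuals">
</video>
```

Both tracks are plain WebVTT files, so the same captioning workflow can produce either kind.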

Granted, the need for a separate audio description could be avoided by already describing the relevant visual elements in the comment itself. Yet even so, applying what is said in the video to the text would remain awkward.

Feedback both via video AND in the text

This is the neatest and easiest solution.

Once you have captioned a feedback video where the audio is self-standing – and you really MUST do that lest you exclude people – the captions yield a transcript that can be copied from.
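Getting that transcript out of the caption file can even be automated. Here is a minimal sketch in Python that strips the WebVTT header, cue numbers, and timestamps from a caption file’s text, leaving only the spoken words (the sample cues are invented for illustration):

```python
# Minimal sketch: turn WebVTT caption text into a plain transcript.
import re

def vtt_to_transcript(vtt_text: str) -> str:
    """Drop WEBVTT header, cue numbers, and timing lines; keep spoken text."""
    kept = []
    for line in vtt_text.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT":
            continue
        if "-->" in line:      # timing line, e.g. 00:00:01.000 --> 00:00:04.000
            continue
        if line.isdigit():     # optional cue identifier
            continue
        # remove inline tags such as <i> or <v Speaker>
        kept.append(re.sub(r"<[^>]+>", "", line))
    return " ".join(kept)

sample = """WEBVTT

1
00:00:00.000 --> 00:00:03.000
Yeah, I just want to provide you

2
00:00:03.000 --> 00:00:06.000
with a little bit of feedback on your essay.
"""
print(vtt_to_transcript(sample))
# → Yeah, I just want to provide you with a little bit of feedback on your essay.
```

From there, the transcript can be pasted next to the written file, or split into individual comments.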

You can then go back to the written file, remove the cosmetic highlights you made during the video, and use the “Comments” tool of your text editing software/application – as I have done in this Google doc, where I first transcribed the text that appears in the video example discussed here. I actually added a first comment that says: “for a video feedback, see ” 😉

Connected Course – so glad for it

I found out about the Connected Course via Paul-Olivier Dehaye’s own introduction post for it. And I discovered his blog via his @podehaye twitter account – which I’m following because when I read about his having hidden his Coursera Teaching goes massive: New skills required course, I thought it was such a brilliantly economical way to incite the students to apply what he had been advocating in his videos for that course.

I signed up for the Connected Course because I’ve been interested in the connective learning approach since Stephen Downes led a discussion about a document he entitled “Learning Networks and Connective Knowledge” in the Instructional Technology mailing list in 2006, and also because I enjoyed and learned a lot from the connective workshop/MOOC Laboratorio di Tecnologie Internet per la Scuola #ltis13 led by Andreas Formiconi for Italian University Line last year.

Something fun happened right at the end of #ltis13 in June 2013: one of the participants, Fabrizio Bartoli, had reblogged a post by Vance Stevens in his learning2gether blog, and Vance Stevens then invited some of us to present #ltis13 in English there: see Fabrizio Bartoli, Lucia Bartolotti organize a discussion of the cMOOC ltis13.

But perhaps the most important thing about #ltis13 is that it continued after its administrative end, morphing into a permanent workshop, #loptis, with Andreas Formiconi’s blog as its hub. So in a way, my signing up for this Connected Course is part of my participation in the #loptis workshop – and vice-versa 😀

As to who I am: a former teacher of French and English as foreign languages, presently a pro translator and an accessibility advocate. Oh, and given my misleading first name: I’m female.