Bloglillon

Bundling some things I posted elsewhere, exploring



Accessibility: let's pretend we were deaf, blind #edmu14

In the post Cose che succedono in rete, se si lavora bene… (questioni di accessibilità) – #edmu14, Andreas Formiconi kindly mentioned some ideas for somewhat alternative activities with subtitling tools that I had sent him.

And just as I was wondering what to propose more concretely, he published another post, not for #edmu14 but for medical students: Un Medico all'Inferno: Medicina e dolore nell'Inferno dantesco, in which he embedded the video of the same name from the University of Florence libraries:

But he did it with a very important addition: he transcribed directly in the post the information written in the video about the date and place of the event announced in it, making that information accessible to blind people through speech synthesis software.

True, this information (except the time) is also written out in the video's YouTube description; but that's the point: when a video is embedded elsewhere, the YouTube description is missing, so it is important to repeat or summarize it on the new page.

But we could try to make the whole video fully accessible – that is, like children playing, let's pretend we were deaf, that we were blind, to understand how it is perceived by others.

Let's pretend we were deaf

We can indeed read the texts in the video and enjoy Doré's illustrations for Dante's Inferno – but we completely miss "O Fortuna" from Orff's Carmina Burana, which is heard over them at the beginning and at the end: and if that piece was chosen, there must be a reason. Then again, maybe you are not deaf from birth, you know that piece well, and you could hear it in your head, knowing it is there.

Then, from 0:12 to 1:51, for the recitation of the passages from the Inferno, we can at a pinch turn on the automatic captions; but if you are deaf, it takes a serious dose of irreverent humor to appreciate their distortions: more likely, you would get angry.

Let's pretend we were blind

We hear Orff's music and the recitation of the Inferno, but we miss the written information, and we don't know that there are Doré illustrations – which perhaps we remember, having seen them before going blind.

Proposal

Let's try to make all the content of the video usable by everyone. To this end, I used its URL (web address) to open the page http://www.amara.org/en/videos/2hvKFS5lsko2/info/un-medico-allinferno-medicina-e-dolore-nellinferno-dantesco/ . There, I have already started 3 subtitle tracks, each meant to do a different thing:

  1. Metadata: Geo, which is one of Amara's "nonexistent" languages, where I uploaded YouTube's automatic captions as they are, and where we will be able to correct them: rather than retranscribing everything, one can use the digital version of the Inferno at http://it.wikisource.org/wiki/Divina_Commedia/Inferno .
  2. Latin, for subtitling the parts with "O Fortuna": there too, rather than transcribing the sung Latin, which is not easy, one can use the text at http://www.tylatin.org/extras/cb1.html
  3. Metadata: Audio Description: in the near future, browsers will be able to recognize audio description tracks and synthesize them as speech, pausing the video for the necessary time, but we are not there yet (a minimal markup sketch of such a track follows this list). So for now it is mainly a matter of producing a continuous textual description containing the written information and the descriptions of Doré's illustrations.
    Of course, one could also pirate the video and add a genuinely audio version of this description to it, but it would clash with Orff's music.
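By the way, here is roughly what such an audio description track should eventually look like in HTML5 markup – a minimal sketch with invented file names, since the real embed code here is YouTube's:

<video controls src="un-medico-allinferno.mp4">
  <!-- invented file name; kind="descriptions" is the HTML5 track kind meant for audio description -->
  <track kind="descriptions" src="inferno-descrizione.it.vtt" srclang="it" label="Descrizione audio">
</video>

A browser (or screen reader) supporting this track kind would speak the cues at the right moments, pausing the video as needed; for now, that behavior still requires extra scripting.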

The track that is still missing is the Italian one: we will make it once the other things are done, translating "O Fortuna" from the Latin track and simply copying over the corrected version of the Dante captions from the "Metadata: Geo" track.

Then, at the end, we will be able to download the transcripts generated from all the subtitle tracks (each downloadable as TXT from its track) and turn them into a single text document, which can then be illustrated.

Anyone interested?



Coursera’s Global Translator community: Transifex subtitle translation interface

NB: this post was made to illustrate a reply to Transifex Support about the video player in the subtitle translation interface for the Global Translator Community. Transifex Support had sent me a link to their Translating Subtitles in Transifex resource, which explains how the video player in the subtitle editor is supposed to work.

It would be great if that were how it works in Coursera’s GTC projects, but it doesn’t.

From November 1 to November 18, 2014, in GTC projects, the video player in the translation interface appeared like this (1):

screenshot of a Transifex Italian translation page for GTC subtitles

i.e. not playing anything: just a black rectangle with the original subtitle overlaid.

And since Nov. 19, 2014, the player has entirely disappeared.

[update Nov. 27, 2014: screenshot of the same https://www.transifex.com/projects/p/coursera-android/translate/#it/43/21811835 translation page, without any player:

screenshot of the page indicated above, taken on November 27, 2014

/update (1)]

Now the Translating Subtitles in Transifex resource also says:

If you want to see the default editor view without the video, click the gear icon and uncheck the box next to “enable video editor.”

So I checked whether I had done that inadvertently, but I hadn't: the box is checked.

Could it be that the issue lies with the first part, i.e. that the maintainers of Coursera’s GTC projects made an error in adding the video links to create players?

(1) click on the picture to enlarge it



Amara: the transcript generated from the subtitles – long live text!

(work in progress) I initially made this post because I wanted to insert the screenshot below:

screenshot of the Amara player with transcript

in a comment to Andreas Formiconi's post Un podcast per iniziare Editing Multimediale – #edmu14. Now, in comments an image has to be inserted via a piece of HTML code, in which you must indicate its web address (URL). So I uploaded the image here, to my blog, to obtain that URL. Then, since I wasn't sure that an image merely uploaded to the blog's "gallery" of uploaded files would be visible to others, I also published it in this post.

Another advantage: even though I know the HTML code for inserting an image almost by heart, and I do know where to go and fish it out, I find it easier to insert the image the way I want it in a post with the "rich" editor, then switch to "text" mode (on WordPress; on Blogger: HTML mode) and copy the code.

And there is one feature I always want when I insert an image: the alternative description, which gets read aloud, or transcribed on the braille display, by the software used by blind people, who cannot see the image. WordPress prompts us to add that description even in rich-text mode; unfortunately, Blogger does not.

Worse still, some teaching platforms give you no such prompt. Perhaps the most serious case is Coursera.org, where they have simplified the insertion of images so that it can be done "in a single click": hence a proliferation of illustrations that exclude blind people.

Of course, you can always switch back to code mode to add that description. For the image above, it appears in the code as: alt="cattura di schermo del player Amara con trascrizione", and it is easy to remember how to do the same with all images. But people are more likely to add it if the rich editor prompts them to.
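For reference, here is the complete image code as it appears in text mode – a minimal sketch, with an invented file URL (the alt value is the real one just quoted):

<img src="https://bloglillon.files.wordpress.com/2014/11/amara-player-trascrizione.png" alt="cattura di schermo del player Amara con trascrizione" />

Since the alt attribute travels with the image wherever this code is pasted, it is the right place for the description – not just the surrounding prose.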

***

In the present case, however, this alt="cattura di schermo del player Amara con trascrizione" description is actually not much help to someone who cannot see the screenshot. Not even together with the sentence "now, there in Amara, we can use the interactive transcript generated from the subtitles (which can be turned on with the striped button in the player) to navigate it more easily" that I put before the screenshot in my comment to Andreas' post: blind people don't see that striped button.

So if a screenshot serves to show non-blind people how to do something with a digital application, one should figure out, and explain in the alternative description, how that application is presented by the assistive screen-reading software blind people use. And mastering that software is by no means easy, especially if you are not driven to it by necessity.

However, as far as the web applications one wants to explain are concerned, there is a Firefox add-on that roughly emulates what that assistive software would read: Fangs – the screen reader emulator. So I used it on the page http://www.amara.org/en/videos/bneLYCd5i79z/info/tutorial-sottotitolazione-video-in-amara/ , of which I had taken the screenshot.

Well, as for the player's buttons, Fangs says nothing, even though they are labeled (the one for turning on the transcript: "toggle transcript viewer") and the labels appear when you hover over the buttons with the mouse. But how is a blind person supposed to find where to hover with the mouse?

Then again, Fangs is an approximate emulator, so a real screen-reading application might be able to describe those buttons. Every now and then I ask a blind acquaintance to try. In this case, however, even Fangs actually reads the transcript without it needing to be turned on.

Nevertheless, I should reword that alternative description. Instead of alt="cattura di schermo del player Amara con trascrizione", I should put alt="cattura di schermo del player Amara con trascrizione; con un lettore di schermo, essa inizia da 'Qui diamo le istruzioni per comporre il testo'." [screenshot of the Amara player with transcript; with a screen reader, it starts from "Here we give the instructions for composing the text"] …

***

If I work for accessibility in general, it is because not excluding anyone is simply normal. But in the case of the accessibility of digital things, I also like the fact that the means of achieving it are textual: whether you are transcribing the audio of a video to subtitle it for deaf people, as in the activity proposed by Andreas Formiconi in Un podcast per iniziare Editing Multimediale – #edmu14, or indeed creating alternative descriptions of visual information that truly allow blind people to do the same things as everyone else.

And once a piece of auditory or visual information has been reformulated as text, it becomes much easier – for everyone – to study it, and also to get a machine translation of it, or have it translated normally, into other languages.

Updates, November 18, 2014

The collaborative subtitling at http://www.amara.org/en/videos/bneLYCd5i79z/info/tutorial-sottotitolazione-video-in-amara/ is going really strong. I was a bit worried because the Amara tool's interface had changed since the last subtitling activity done in one of Andreas' courses, in 2013. But it seems that those who took part in it then are having no problems: it is true that back then they had coped brilliantly when the work was hit by the Abominable Bug of Uploads and Restorations (see the posts tagged bug), which fortunately has been eradicated by the new interface.

***

A paradox: this morning, in the (private) LinkedIn group "Response To Intervention & Universal Design For Learning Central", someone was promoting a paid service from SuccessEd that lets teachers simplify the filling-in of the paperwork related to Section 504, the US law provision on accessibility in education. But that SE 504 page is illustrated by a video

that is, in a style midway between the cut-out, moved-around little drawings of Lee LeFever's Commoncraft "in plain English" tutorials and the real-time drawings of the RSA (see RSA Animate – Changing Education Paradigms) … but with all the info in texts filmed in the video, as in Michael Wesch's notorious Web 2.0 … The Machine is Us/ing Us – yet with an irritating, repetitive jingle, and without the transcript that Wesch linked to in his description.

Apart from the fact that representing accessibility management as a machine with a funnel, into which puzzle pieces are fed before coming out on a conveyor belt, is not exactly ideal: why promote a thing that is supposed to facilitate this management in a video of which blind people will only hear that boring jingle?

So I put the video at http://www.amara.org/en/videos/v10s0Upm1RqF/info/successed-se-504/ and then used the Metadata: Audio Description track to transcribe the info. The result is overwhelmingly banal:

These are the faces of 504 coordinators who are writing their plans by hand, the old-fashioned way.

(a bunch of people with fed-up faces)

The paperwork is insurmountable and growing…

Wish you could access your district’s 504 data online any time you needed it?

Now you can!

Introducing… SE 504 – A partnership between SuccessEd & CESD Section experts Jose Martin & Dave Richards

Benefit #1: No more piles of paperwork that lead to compliance oversights.

Benefit #2: Generate 504 reports in Adobe, Excel & more!

Benefit #3: Save time & trees by reducing effort & waste.

How does it work?

Enter data and let SE 504 auto-populate your forms.

Work smarter!

(A contraption where puzzle pieces are fed into a funnel and exit on a conveyor belt)

Same language as CESD forms + OCR-compliance = easy transition for staff

That makes for a lot of happy teachers, counselors. And remember those 504 coordinators’ faces?

SuccessEd (the logo hits the fed-up faces with a magic wand) (the coordinators now smile and raise their thumbs)

Ask us how to sync your district’s SIS data to SE 504 with our “connex” product by SuccessEd

504 forms are just a click away… (Web page titled "Student Forms") (Page titled "Section 504 Initial Evaluation & Periodic Re-Evaluation")

SE 504 forms are user-friendly. (Page: Edit Student Information – Amberly Farmacka: ID YF1201169)
Quick access to student demographics (Page: Reports)
SE 504 reports save you time!

Dave Richards > < Jose Martin

JM: You’ve asked for our forms online

DR: So here they are!

Built by educators for educators.

We feel your pain!

For details & Demos just visit our website: SuccessEd http://www.successed.net

But that's exactly the point: taking accessibility seriously also helps you realize what is wrong with the content of the message…



Coursera’s Global Translator Community: puzzling Translator Agreement(s)

Second update, Nov. 6, 2014

Access to http://translate-coursera.org/lander/terms.html, mentioned in the description of the D version of the Translator Agreement, has now been blocked, as well as all the other pages of the http://translate-coursera.org/lander subdirectory.

Update, Nov. 6, 2014

Yesterday, I found out that there is a fourth version of the Translator Agreement. I'm updating this post in italics accordingly.

Three (actually four) Translator Agreements

There have been three versions of the Translator Agreement in the 6 months of the GTC:

(For easier comparison, I had put the first three versions side by side in a table, which can be downloaded from https://pdf.yt/d/9VA7ZY8lU1PYUYnb. Should there be differences between C and D, I'll make a new table.)

While B only corrected some obvious mistakes (a typo, a wrong contact address), C introduced several important changes. Here I'll concentrate on those made to the part defining the ownership of volunteers' translations.

Ownership of volunteer’s translations and “work for hire”

C adds a new sentence at the beginning of this part:

Coursera’s licensors place strict obligations on Coursera to protect their intellectual property rights in the licensed content included as part of Coursera courses.

This addition leads to a rewording of the following sentence. Where A and B formerly read:

As between Coursera and you, Coursera owns all right, title, and interest to:
1) the copyright or other intellectual property or proprietary right to the translations and translated works (collectively “Translations”), and
2) the Coursera name and logo.

C now reads:

As a result, Coursera must require its translators to agree under this Agreement, that Coursera owns all right, title, and interest to: [rest of the sentence unchanged]

So what was presented as Coursera’s decision in A and B is now presented in C as something imposed by its licensors, i.e. by the universities and institutions who actually provide the courses.

C also significantly changes the part defining volunteers' translations as "work for hire". A and B read:

YOU EXPRESSLY AGREE THAT ANY TRANSLATION SERVICES YOU PROVIDE WILL BE DEEMED A “WORK FOR HIRE,” UNDER SECTION 101 OF THE U.S. COPYRIGHT ACT, IN EXCHANGE FOR GOOD AND VALUABLE CONSIDERATION, THE SUFFICIENCY OF WHICH IS ACKNOWLEDGED.

But the corresponding part in C says:

To the extent that you are creating translations and translated works (collectively the “Translations”) at the request and for the benefit of Coursera, you agree that the Translations will be works made for hire to the extent permitted by applicable law, and Coursera will retain all copyright, patent, trade secret, trademark and any other intellectual property or proprietary rights (“Intellectual Property Rights”) in the Translations

So C replaces the reference to Section 101 of the U.S. Copyright Act with the vaguer "to the extent permitted by applicable law". In fact, Section 101 of the U.S. Copyright Act actually says:

A “work made for hire” is—
(1) a work prepared by an employee within the scope of his or her employment; or
(2) a work specially ordered or commissioned for use as a contribution to a collective work,(…) if the parties expressly agree in a written instrument signed by them that the work shall be considered a work made for hire.

Definition (1) does not apply to GTC volunteers, as section "8. Relation" says: "Nothing in this Agreement creates or will be deemed to create an employer-employee, partnership, joint venture, or agency relationship between Coursera and you." And as to definition (2), Coursera never provided volunteers with a written instrument that each of them should agree to and sign. Therefore the reference to that section of the U.S. Copyright Act did not make sense.

However, after that, the Agreement says:

If any of the Translations do not qualify as works made for hire, you hereby assign to Coursera all right, title and interest and all Intellectual Property Rights in the Translations, and if requested by Coursera will deliver a written assignment any other documents necessary to establish Coursera’s Intellectual Property Rights.

(from the C version, which summarizes the same concept, expressed in a heavier and more long-winded way, in A and B)

In other words, whether or not their translations qualify as work for hire, volunteers must cede all rights to their translations, including their moral rights, to Coursera.

No more “good and valuable consideration”

And then, C scraps "IN EXCHANGE FOR GOOD AND VALUABLE CONSIDERATION, THE SUFFICIENCY OF WHICH IS ACKNOWLEDGED."

This is a soberly realistic deletion because, as explained in the course shell / collaboration platform that Coursera has chosen for the GTC, no recognition at all is granted to volunteers who translate fewer than 2500 words "of consistent high-quality".

For those who meet this requirement, there is a mention in the public Meet Our Translators page. That mention is not likely to impress a potential employer. Anyway, that page only lists 1011* volunteers [figure unchanged as of Nov. 2], against the "14064* persons contributing" indicated by Transifex, the online collaborative translation tool chosen by Coursera for the GTC – a figure that, moreover, does not include the 12183* Russian translators, who use a tool made by ABBYY LS instead (see http://coursera.abbyy-ls.com/En ).

Hence, that "Meet Our Translators" page lists fewer than 4% of the volunteer translators. Even by a very conservative estimate of an average of only 100 words translated by each of the 24033 unmentioned volunteers, Coursera is getting over 2,400,000 words translated for free, without offering any kind of recognition to their translators.

Then, for volunteers who translate 15,000 words, there is a Statement of Accomplishment, and above 75,000 words, a Statement of Accomplishment with Distinction. However, these are the same Statements of Accomplishment that Coursera is presently scrapping for course participants, and which Daphne Koller merrily debunked as mere personal mementos – explaining how easily they can be faked – in a hangout for the GTC on May 15, 2014 (from 18:37 in the archived YouTube version).

Moreover, Coursera uses a script that is meant to assign these forms of recognition by processing the statistical data provided by Transifex. Hence the Russian volunteers, who don't use Transifex, are not covered; and even for the Transifex-using volunteers, this script often doesn't work properly.

Therefore, since October 23, 2014, Coursera has been asking volunteers who are entitled to a form of recognition but didn't get it, or who translate into Russian, to fill in a recognition request … in a Google Drive form – despite acknowledging, in its public Web page describing the GTC, that Google Drive forms can't be accessed in China and in other countries.

So all in all, it does indeed make sense to stop pretending that Coursera offers translators “good and valuable consideration” for their work.

Self-contradiction

All three versions of the Translator Agreement say, under “10. Miscellaneous”:

Except where expressly provided otherwise, the Agreement may only be amended in writing signed by both you and Coursera.

Yet Coursera has not submitted to GTC volunteers any written amendment for them to sign before changing the Translator Agreement from version A to B, and then to C.


* As of Nov. 4, 2014. However, the 4% proportion of volunteers getting any form of Coursera recognition has remained fairly stable so far.



Coursera’s Global Translator Community: background

This is meant to be the first of a series of posts about the Global Translator Community (GTC), which Coursera launched on April 27, 2014: see their Introducing Coursera’s New Global Translator Community blog post.

The GTC is not Coursera’s first attempt at having the subtitles of course videos “crowdtranslated” by unpaid volunteers. So here is some information about its previous attempts, as I experienced them.

Coursera’s Amara team (August 2012 – February 2013)

I first joined a Coursera course because I was puzzled by the issues volunteer subtitle translators were reporting in the Amara Help forum, mainly about the unusable original subtitles automatically produced by speech recognition.

Actually, I first tried to join Coursera's Amara team, but got no reply to my application. From the outside, it looked like an attempt to copy the settings of TED's Amara team, with a workflow of translating-reviewing-approval tasks, but without any of the resources and feedback opportunities TED offers volunteers, and with a big hurdle: the unusable original subtitles that got automatically added (see Ambrose Li's Things to watch out for if you want to work on Coursera's subtitles).

Then at the end of 2012, Coursera staff removed this workflow: anyone with an Amara account could now edit and translate the subtitles of their team’s videos. When they also stopped adding the unusable automatic original subtitles, volunteers were finally able to work normally (see the Amara autocaptions for Coursera videos topic in the Amara – Deaf & Hard of Hearing Discussion List for more details).

However, Coursera staff deleted the Amara team at the end of February 2013, telling volunteers they might copy-paste the .srt subtitle files they produced into subpages of the private Coursera students' wiki at https://share.coursera.org/wiki/ (accessible only when logged in with a Coursera ID), adding that Coursera techies might or might not use them in the courses. Some volunteers complied; some found that a bit too daft, continued using Amara, and linked to the Amara subtitle pages instead.

Coursera staff also announced that they were working on a single tool for translating the site’s interface and the subtitles.

Coursera’s Global Translation Partnership (May 2013 – end unclear)

Then there was no news until May 14, 2013, when they wrote the Coursera Partnering with Top Global Organizations Supporting Translation Around the World blog post: instead of developing their own tool, Coursera was going to use Transifex for both things, and involve only said Top Global Organizations, and only for translations into "many of the most popular language markets reflected by Coursera students: Russian, Portuguese, Turkish, Japanese, Ukrainian, Kazakh, and Arabic. Each Coursera Global Translation Partner will begin by translating 3-5 select courses, with the majority of translated courses being available by September 2013."

By September 2013, that goal was very far from being reached, possibly because Transifex is a great tool for translating interfaces but not for translating subtitles: though it can cope with subtitle files, which are just text files with a funny extension, it offers no video player where you can check your work against the video. Therefore, as I had pointed out in a May 31, 2013 comment to the mentioned blog post, providing translators with human-made, accurately revised original subtitles would be essential if those subtitles were to be translated with Transifex.
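To see why Transifex can ingest them at all: an .srt file is nothing but numbered cues with time codes and text, separated by blank lines – here is an invented two-cue sample:

1
00:00:01,000 --> 00:00:04,200
Welcome to this course.

2
00:00:04,500 --> 00:00:08,000
This week, we will talk about copyright.

Translating those text lines is easy for a string-oriented tool; checking that a translation fits what is happening on screen, without a player, is not.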

So maybe Coursera didn't do that, and/or maybe the Top Global Organizations had entered that partnership without realizing how much work such inappropriate conditions would entail. Just a hypothesis, as Coursera hasn't made public any post-mortem about that partnership – nor about the Amara team initiative, for that matter – and never officially announced its end.

The GTC compared to these previous initiatives

On the one hand, the GTC is a return to the Amara team's crowdtranslation by any volunteers. However, it also has "partners" for some languages, though except for the Lemann Foundation (Brazil), which was already a partner in the Global Translation Partnership, the other partners have changed.

As to the original subtitles meant to be translated by volunteers, Coursera has changed paid provider since the Amara team: it now uses one that crowdsources subtitling to humans, as Coursera staff explained in the May 15, 2014 Global Translator Community Hangout with Daphne Koller (from ca. 50:00 – see also the captioned version with transcript), but without saying who this new partner is, let alone how much the crowdsourced subtitlers get paid, if anything.

As with the Global Translation Partnership, the GTC's tool for translating subtitles is Transifex.

Another very important convergence between the Global Translation Partnership and the GTC is the Translator Agreement that volunteers must accept when they apply to join.

That will be for another post, because this agreement is worth a post of its own, and also because presently its page, which is linked to from "terms and conditions" in Coursera's public Web page describing the GTC, only yields an XML Access Denied message:

<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>DC7BD639FE7B7801</RequestId>
  <HostId>Q8soFADQDwa2H+pNlwRFyvBHAAlaIFaRzZsbVbzrj/amDhg62HJko0ajmdpifg8s</HostId>
</Error>

Update November 4, 2014: though I notified both Coursera’s support and the GTC’s admins of this issue on Oct. 28, the link on “terms and conditions” in Coursera’s public Web page describing the GTC continues to produce this AccessDenied message.

What happened is that Coursera assigned a new URL to a new version of the Translator Agreement. They did put the correct short URL – http://goo.gl/W5WY0J redirecting to https://d396qusza40orc.cloudfront.net/translations/updated_Coursera_translator_TOS.pdf – on the 4th page of the Google Drive form for subscribing to the GTC.

However, they kept the old, obsolete one in the "terms and conditions" link.



Feedback on a text via screencasting?

http://videoforall.eu/example_01_feedback_essay/ is the first example illustrating the Video For All project, part of the European Commission’s Lifelong Learning program. It explains how screen capture software can be used to provide feedback on a text via screencasting.

The embedded illustrating video is indeed pleasant …
… if you can see and hear. However, if you are blind, you won't know what the commenter is talking about when he uses deictic words (like "here"), and if you are deaf, you won't know what he says, because the video is not captioned: i.e. this way of providing feedback is not consistent with the EC's promotion of accessibility, let alone with educational common sense.

Moreover, when even a hearing and seeing student wants to integrate the audio feedback into his/her text, s/he'll have to keep switching between the video file and the original text file, and that's time-consuming, especially if you can't use your hands and have to rely on voice commands.

What a blind person would perceive

Yeah, I just want to provide you with a little bit of feedback on your essay, and the first thing I want to point out is, if we come down to here, I was noticing that you were talking about – I think you're not describing very clearly, exactly what screen capture does, and I think it's very important that we explain that if you highlight anything, if you write anything, if you move your cursor on the screen, on the student's written work, then that comes out in the essay: because I don't think you're making that point very clearly, here. So I think you might need to re-think that point. Another little point I just want to bring up is here: you used the word "live". Now, I think "live" is not really the right word, I think that you need to explain it's not a live recording, because obviously, the student is playing it back after the teacher has made the recording, but what it is more is a conference, it's very similar to conferencing in a way that it kind of feels like the teacher is sitting next to you, going through your work and explaining some of the mistakes. So, maybe I think, here it's the wrong choice of word and you might need to think about that as well. I think, one of the things here is you've pointed out different screen capture tools, and it might be a good idea that when you point out these tools, to talk about which tools are free and which tools are paid, because JING is a free tool, but Camtasia certainly will cost teachers a hundred and – I think it's about 170, 180 dollars to buy. So I think it would be useful to provide that information in your essay.

What a deaf person would perceive

An excerpt of an essay entitled "A revolutionary way to provide feedback". The cursor moves around randomly. The text scrolls down, then stops. The cursor highlights "Every move you make on the screen, every web page you open, picture you view etc is simply recorded as a video" in blue. The highlight color gets changed to yellow. The text scrolls down further, then stops. The word "live" is highlighted first in blue, then in yellow. The text scrolls down further, then stops. The words JING, Screenr, CamStudio and Camtasia are highlighted first in blue, then in yellow.

How can these barriers be removed?

Captioning and audio description

It may seem paradoxical at first to illustrate a project called Video For All with a use of video that excludes deaf and blind people, and is anyway unwieldy to apply to the textual object of the video, particularly for people with motor disabilities.

True, the video result could be made accessible to all by integrating an audio description of the visual elements, and captions for the audio comment.

However, while captioning videos can be done very easily with freely available desktop and online tools (I've done it here with the Amara.org online tool for the video of the example), integrating an audio description is trickier: see the "Now, on to Audio Descriptions" part in Greg Kraus' HTML 5 Video, Text Tracks, and Audio Descriptions Made Easy (or at least easier) (NC State University IT Accessibility Blog, June 14, 2011). It's not only a matter of tech difficulty and cost, but also of the time needed to select which visual elements need descriptions, and to produce audio descriptions that will fit within the audio of the video.
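On the markup side, at least, captions really are the easy half; it is the description track's content that is hard to produce. A minimal HTML5 sketch, with invented file names:

<video controls src="feedback-essay.mp4">
  <track kind="captions" src="feedback-essay.en.vtt" srclang="en" label="English">
  <!-- the time-consuming part: writing descriptions that fit the pauses of the original audio -->
  <track kind="descriptions" src="feedback-essay-ad.en.vtt" srclang="en" label="Audio description">
</video>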

Granted, the need for a separate audio description could be avoided by already describing the relevant visual elements in the comment itself. Yet even so, applying what is said in the video to the text would remain awkward.

Feedback both via video AND in the text

This is the neatest and easiest solution.

Once you have captioned a feedback video where the audio is self-standing – and you really MUST do that lest you exclude people – the captions yield a transcript you can copy from.

Thus, you can then go back to the written file, remove the cosmetic highlights you made during the video, and use the "Comments" tool of your text editing software/application – as I have done in this Google doc, where I first transcribed the text that appears in the video example discussed here. I actually added a first comment that says: "for a video feedback, see http://videoforall.eu/example_01_feedback_essay/ " 😉



Connected Course – so glad for it

I found out about the Connected Course taking place at connectedcourses.net via Paul-Olivier Dehaye's own introduction post for it. And I discovered his blog via his @podehaye twitter account – which I'm following because when I read about his having hidden his Coursera Teaching goes massive: New skills required course, I thought it was such a brilliantly economical way to incite the students to apply what he had been advocating in his videos for that course.

I signed up for the Connected Course because I've been interested in the connective learning approach since Stephen Downes led a discussion about a document he entitled "Learning Networks and Connective Knowledge" in the Instructional Technology mailing list in 2006, and also because I enjoyed and learned a lot from the connective workshop/MOOC Laboratorio di Tecnologie Internet per la Scuola #ltis13, led by Andreas Formiconi for Italian University Line last year.

Something fun happened right at the end of #ltis13 in June 2013: one of the participants, Fabrizio Bartoli, had reblogged a post of Vance Stevens in his learning2gether blog, and Vance Stevens then invited some of us to present #ltis13 in English there: see Fabrizio Bartoli, Lucia Bartolotti organize a discussion of the cMOOC ltis13.

But perhaps the most important thing about #ltis13 is that it continued after its administrative end, morphing into a permanent workshop, #loptis, with Andreas Formiconi's blog as its hub. So in a way, my signing up for this Connected Course is part of my participation in the #loptis workshop – and vice versa 😀

As to who I am: a former teacher of French and English as foreign languages, presently a pro translator and an accessibility advocate. Oh, and given my misleading first name: I’m female.
