I had a lot of people send me the link to the news release about two young inventors who have supposedly discovered a way to translate ASL into spoken English. The HuffPo article about it is linked here.
Confession: I rolled my eyes when I saw it.
I just couldn’t believe that something like that could actually do what it claims. I mean, how on earth could it ever capture the facial expressions that are so necessary to signed language? How? At best, it seemed to me to be a sort of tangible version of YouTube captions, taking overly exaggerated gestures and maybe sometimes getting it right.
But did I say that to anyone who sent me the link?
Why no. No, I didn’t.
I am pushing myself to stay as positive as possible about things, and trying to be all, “yay!” instead of “nay.”
So I didn’t say anything besides the fact that the invention looks cool and I hope it will work. Both are true: I do think the invention looks cool and I hope it will work. I just don’t personally think it will.
Some new articles by people far more expert than I have emerged and are worth reading. This is one: Ten reasons why sign-to-speech is not going to be practical any time soon. It’s really fantastic, and kind of gives me the balls to talk a bit about something that bothers me.
I don’t actually think that implementing some type of rudimentary, less-than-perfect technology as part of disability access is all that helpful. I think you should get it right first, or keep testing until you do.
This is the problem, as I see it: when something is invented or created as a temporary access solution even though it is far from perfect, too often the real solution is placed on the back burner. It becomes, “yeah, well, it’s better than nothing,” so the permanent ramp never gets built, correct captions are never developed, appropriate class supports are never implemented. Nothing illustrates this better to me than YouTube captions.
YouTube captions take speech and auto-caption it. Have you ever gone there, turned the sound off completely (if you are hearing) and relied solely on the auto-captions to guide you through what is happening?
If you have, then you know it’s a headache. It’s confusing. It’s often gibberish. It’s real work on my part to fully discern content.
But let me tell you! When I ask for captions for videos, I am told more often than not that the “YouTube captions are there!” People don’t bother to caption their videos because they are relying on those crappy YouTube auto-generated ones, which are supposedly better than nothing. I personally think they are worse than nothing, because they make people try less and put the full burden of figuring out content squarely on the person who needs the captions.
Most of the time, when I see that my only course of action is to use the auto-captions, I quit. I won’t even go there anymore; I’m just too sick and tired of trying to figure out a bunch of content that makes no sense.
Given that, it’s not better than nothing for me; it is nothing. And it’s a nothing without recourse – I can’t knock politely on the video creator’s door and ask for captions because they simply say, “but YouTube captions are there!”
This idea of creating gloves to translate ASL might be a great one. But I sincerely hope they don’t come remotely close to marketing it unless and until they actually have it down. Sending yet another thing out into the disability community that doesn’t actually do the job doesn’t make our lives easier; it makes them harder. Because not only will we still have to figure out how to access content, but we’ll also have to battle the notion that the job is already done by the imperfectly designed “better than nothing.”
// end rant.
Meriah Nichols is a counselor. Solo mom to 3 (one with Down syndrome, one on the spectrum). Deaf, and neurodiverse herself, she’s a gardening nerd who loves cats, Star Trek, and takes her coffee hot and black.