Saturday 24 August 2019

On Cool Tech, Disability, and a Cyberpunk Future

coldalbion:

So, this post is a repost of something I already replied to on a reblog, because I felt like it needed saying. For anyone who comes across this randomly? I’m a severely disabled wheelchair using cripple who is also a partial-foot amputee. I’ve been crippled from birth, coming up on nearly forty years now.

Every time someone posts a new thing on exciting technology from a TED talk or something, which shows disabled or impaired people doing funky or just plain normal things? I wonder about the so-called Cool Tech in question, and someone, somewhere, will inevitably make a comment about a Cyberpunk future.

So, for your edification, I present the criteria for what constitutes Cool Tech that my crippled arse uses. I spell these out because the majority of people don’t think about such things:

It’s cool tech IF you can afford it. IF you can repair it cheaply, or yourself. IF, when it breaks or goes wrong, it doesn’t hurt you or become an albatross around your neck. IF it doesn’t rely on certain prerequisites which may not be available in an emergency.

It’s cool tech IF it’s not proprietary and has no planned obsolescence. IF you can do what you like to it. IF parts are easy to source. IF it serves a genuine need/desire of disabled people rather than just attempting to normalise us or erase our identities. Or use us as inspiration-porn.

It’s cool tech IF there is more than one source (i.e. the technology becomes generic) so that if one supplier shuts down, you can still get it.

Without the above, it’s a potential nightmare - the kind of nightmare that the Cyberpunk genre was meant to warn against.



from Technoccult https://technoccult.tumblr.com/post/187240250466
via http://technoccult.tumblr.com/rss/

Tuesday 20 August 2019

Audio, Transcripts, and Slides from "Any Sufficiently Advanced Neglect is Indistinguishable from Malice"

afutureworththinkingabout:

Below are the slides, audio, and transcripts for my talk ‘“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019. (Cite as: Williams, Damien P. ‘“Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this soon, which I can provide as a preprint draft if you ask, and which can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, i dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]

All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations, and which are c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but the processes for how, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized to think of as the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or even understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:


The Audio: …
[Direct Link to Mp3]

And the Transcript is here below the cut:


Read the rest of Audio, Transcripts, and Slides from “Any Sufficiently Advanced Neglect is Indistinguishable from Malice” at A Future Worth Thinking About



from Technoccult https://technoccult.tumblr.com/post/187147555216
via http://technoccult.tumblr.com/rss/