How does a computer discriminate?
Similar to last month, this is a podcast episode where an incredible author is interviewed about their book, and it's a great listen. This one is an interview with Safiya Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism. The difference here is that the book was written in 2018, yet it feels incredibly relevant as we head toward the end of a year that has been all about AI.
As folks working in the technology industry, at a company producing software for AI, we felt this was a particularly pertinent topic for us to be aware of. It fits hand in hand with Microsoft's push for responsible AI practices.
What's nice about this interview and the book is that they don't shy away from the dark side of what AI and algorithms can accidentally reinforce and amplify if not designed responsibly. We'd love for all of us in the technology industry, or in any industry going forward, to take this to heart and help make sure the future doesn't turn into a Black Mirror episode (a hard-to-watch but must-watch TV series for technologists, in my opinion).
Suggested discussion questions (if you need something to start the conversation):
Did the examples given in the podcast episode surprise you? Why or why not?
Are there areas you think your workplace should be cautious about when it comes to AI and the future?
It's a tricky topic, but does this episode make you see more or less need for AI regulation?