A quick overview of what I'm starting to think about for my dissertation: how 'AI Safety' is being defined, and why that definition is interesting.
Meta stole a ton of books to train their AI models. Thinking a bit about the politics of such a maneuver.
A definition of 'Open' AI was offered by the Open Source Initiative, and it upset some folks. Some STS ruminations on definitions, and why they matter.
Examining how AI-powered targeting systems obscure human values behind algorithmic objectivity, and what this means for accountability in automated warfare.
Thinking about how we relate to AI-generated art, and why we might not want to treat it as just another tool in the artist's toolbox.
We are excited about the capabilities of AI, yet we are knowingly leaving malicious code on the internet. That code is likely being scooped up by adversarial actors, so what do we do?