Maddy Muscari
I am a 20-year career professional in software development, with a background in computer science, electrical engineering, and machine learning from the days before deep learning. I worked on early iterations of recommender systems and quickly realized that content recommendation is a race to the bottom for the most salacious clickbait. From there I moved on to B2B SaaS products and have worked as a platform engineer in a number of organizations since. In recent years I have moved back into AI and now work on generative AI tools for code generation and analysis. On the side, I am building distributed AI systems for ethically moderating social media.
Session
Remember Gamergate? The phrase "It's about ethics in AI alignment" should be a joke, but it isn't. AI alignment is shaping who gets to speak, what gets remembered, and how truth is defined.
Application developers, not AI researchers, are the ones implementing these systems, which means AI governance isn't just a policy problem; it's a codebase and product problem.
This talk deconstructs alignment through a lens of control:
- How authoritarian data controls such as constitutional classifiers and red-team RLHF training are degrading reasoning models.
- The slippery slope of allowing these kinds of classifiers to find their way into human communications.
- Why AI alignment is not just about safety: it is about deciding who gets to define epistemic reality.
- How engineers can identify, challenge, and counteract AI-mediated censorship in the tools they build.
If alignment becomes invisible, it becomes unchallengeable. If practitioners don't resist now, AI won't just shape knowledge; it will shape what can even be thought.
Key Takeaways:
✔ Understand how alignment architectures are being adapted for human speech control.
✔ Recognize when AI alignment becomes silent censorship.
✔ Resist epistemic capture by designing systems that preserve interpretability and agency.