13 Comments
Daniel

GenAI will always be part of our lives now. What's needed are clear norms on transparency, ethics, and where human input matters most.

Nada

I agree with your comment regarding GenAI, and I am particularly interested in ethics in human professions like health and in institutions like health institutions. To whom are they ascribed?

Michael Spencer

I like this digital anthropology you are doing. There are times when the growth strategies of a platform intersect with AI and plagiarism, where viral growth is facilitated by commercial incentives of actors that may not be what they seem.

If you were a growing platform, what lengths would you go to in order to attract a younger female population? We are about to witness many different kinds of deepfake actors, including state actors, corporate actors, and off-platform creators hacking these systems where content moderation is not prioritized.

Natalia Cote-Munoz

For sure! Definitely something worth looking into. And thanks for reading :)

Techintrospect

This is a MUCH needed elevation of the AI dialogue on Substack.

Natalia Cote-Munoz

Thanks! Glad you appreciate it.

Rachel Maron

Excellent follow-up! Not just for where we are in the discourse, but for where we need to go. Part 1 exposed the moral panic and misplaced scrutiny of individual GenAI use. But Part 2 does something just as urgent: it dissects the architecture of that panic, the biases, fallacies, and emotional spillover that have turned a nuanced debate about tools and authorship into a trench war about identity, legitimacy, and craft.

The Substack AI report didn't tell us that AI is replacing creativity; it showed that AI use is fragmented, contingent, and deeply contextual. But the response it provoked reveals how tightly creativity is woven into an individual's identity, personal worth, professional survival, and cultural standing. For writers, AI isn't just a tool. It's an existential threat to the very act of being a writer.

That’s why I appreciated your framing of the "cyborg" vs. "centaur" models of use. These metaphors matter. They give us a language for differentiating between passive automation and active augmentation, between ceding control and directing it. And that difference isn't semantic; it's structural.

Which brings us to the most powerful throughline of your piece: structure vs. individual responsibility. The comment wars reflect real anxieties, but they misfire when they target fellow writers rather than the systems that make this terrain so treacherous. Shaming someone for using a tool to overcome disability isn’t a moral defense; it’s misplaced aggression. And it undermines the very transparency and trust that ethical AI usage depends on.

As I noted in my own essay on the AI survey, authorship is increasingly a signal, not a guarantee. We're entering a world where simulated voice is everywhere, and the actual origin is invisible. That makes disclosure critical, but it also makes punishment for honesty devastatingly counterproductive.

You’re right: we need frameworks. But more than that, we need coalitions and alliances that cut across ability, craft, tool preference, and business model. Because AI’s biggest harms aren’t coming from a single user’s choices; they’re being baked into models, monetization structures, and labor displacement at scale.

This series is doing the hard, necessary work of disentangling judgment from justice, disclosure from deception, and personal use from systemic abuse. I hope others read it not as a verdict, but as an invitation to rethink what authorship can mean in an age where machines can mimic us, but only we can be accountable for the meaning behind our words.

Natalia Cote-Munoz

Thanks so much for reading, and for your thoughtful comment! Can’t wait to read your piece.

Ryan Sears, PharmD

Thanks for digging into this important issue. I think a good first step is for individual authors to disclose their own policies. I’ve included an example of my own here: https://open.substack.com/pub/phillysaipharmacist/p/ai-use-policy?r=5wvhih&utm_medium=ios

Natalia Cote-Munoz

100%

Maureen harrington

Fantastic

Natalia Cote-Munoz

Thank you, Maureen! That means a lot.

Maureen harrington

As with your first essay, this is comprehensive and measured. I will have to reread it to absorb it all. Much to think about, Natalia. Again, brava!
