Natalia, have you seen this yet? It's an insights report on how Substackers are using AI, with 2,000 participants.
Really interesting in that it documents the ND use cases! They emerged organically in the data, which is very telling.
https://open.substack.com/pub/on/p/the-substack-ai-report?r=5oy8bz&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
I've been meaning to look at it more closely, thanks for flagging! Lots of interesting stuff there to digest.
Unfortunately, a trend I've noticed lately is that Substack is enabling bullying. This has a lot to do with the social media dynamics created by Notes.
Not to say that it was a utopia before Notes, but sometimes it's not good to have your content exposed to everyone, especially if that everyone is unable to accept different points of view and decides that bullying is a righteous act.
In a social media ecosystem dominated by US voices, and especially one with a culture as divisive and argumentative as this one, it's especially "dangerous" to share minority opinions that differ from the majority. Not only is US social media argumentative and obsessed with tribes and labels, it has a puritanical streak susceptible to moral panics.
I have come to the conclusion that if you post anything counter-cultural to the accepted narrative of this platform (on any subject), be ready to be bullied or preached at.
It is frustrating for non-US folks to be dragged into American culture wars, but that's the reality of it.
A compassionate and thorough look at collaboration. It should be discussed by all of us who research, write and edit. Brava, Natalia!
Thanks for reading and commenting!
What your essay does so well is hold space for the full complexity of authorship: the spectrum of collaboration, the long tail of assistive technologies, and the impossibility of drawing clean ethical lines when authorship has never been pure. The AI panic is real, but it too often collapses this nuance into reactionary judgment. What we’re witnessing isn’t just fear of bad writing. It’s a cultural identity crisis about what it means to be the origin of something.
As I wrote in my own piece ( https://www.trustable.blog/p/the-human-premium-substack-ai-and ) responding to the Substack AI Report, we’re entering an era where the appearance of authorship is easy to simulate, and where human voice becomes a premium signal in a sea of synthetically smooth prose. But instead of using that as a reason to deepen our understanding of the process, we’re slapping on purity tests. What we actually need are clarity, disclosure norms, and systemic safeguards against the REAL threats: unconsented data training, exploitative labor shifts, and opacity at scale.
Elizabeth Tai’s case is a heartbreaking proof point. Her transparency was punished because we’ve confused authorship disclosure with authorship deception. And as you rightly point out, this kind of moral panic doesn’t protect writers. It isolates the very ones we should be learning from, those who are using tools ethically, transparently, and as an extension of their voice.
The real conversation isn’t “Is AI writing bad?” It’s “How do we define voice, labor, and legitimacy in a creative economy built on increasingly invisible collaboration?” And if we can’t hold that complexity, we’re not defending art, we’re just replacing gatekeepers with purity mobs.
Looking forward to Part 2. I think this series is going to be foundational for how we talk about trust, authorship, and responsibility in this new creative terrain.
Thanks so much for the thoughtful comment! Agree 100%. Part 2 is out here: https://artificialinquiry.substack.com/p/what-is-authorship-in-the-age-of-0c1
Most of what you are seeing is virtue signaling.
You brought up a really good point with ghostwriting. Is it ethical to put your name on someone else's work, namely that of the ghostwriter? I've never thought about using a ghostwriter, mainly because if I paid one, I'd want to make them a co-author. I have this thing about giving credit where it is due.
I've tested AI and found it lacking. It's prone to hallucinating, it lies, and I have to take the output with a grain of salt. But if I ask it to analyze something, it will analyze that thing and tell me what it can.
Would I use an AI to write a story? No, but I'm not about to call other people names for using it, especially if they use it like Elizabeth did, in an attempt to break a writer's block.
I don't think we'll be able to escape AI intruding into everything. It's already infiltrated Microsoft Word, ProWritingAid, Grammarly, and Scrivener.
You can refuse to use it, which is fine. But would you have refused to read a paper written by Stephen Hawking, who used a primitive AI to speak and write? And if he had used a modern AI, would you have spoken out against him using one?
Excellent points!
Thanks for writing this and being brave, honestly. Our discussion, and the one in my newsletter, made me aware of the writing community's rather idealistic and puritanical view of the writing craft and process. As a long-time writer in the corporate world and media, I had no illusions; my works were often the result of many hands, even fiction. AI was just another partner, though not a human one. And ghostwriters! I've always wondered about ghostwriters; AI is just another one, though I don't think it's a good one lol.
Some voices, not just on Substack but on social media, told me I should stop poking the bear by talking about this. They are mostly confused about why I have this need to share a topic that is offensive to the writing community and will get me cancelled. I told them that I was so happy there was a chance I could overcome a long-time inability to finish stories that I wanted to share the happiness with the world so others could discover it for themselves. It's sad that this puritanical streak in US English-language social media is trying to suppress voices that don't align with the majority.
Sadly, I think people like us can only discuss this issue in safer spaces, and advertising our work on Substack Notes will invite hostility. And yet, if not for me stumbling on Michael's Note on Substack, I may never have read your newsletter. Alas, a double-edged sword.
That said, I thought you might find it interesting (and comforting) to know that major names in the indie publishing world are using AI to write their fiction, some more aggressively than I do. This podcast from Joanna Penn will definitely interest you: https://www.thecreativepenn.com/2023/08/11/how-ai-tools-are-useful-for-writers-with-disabilities-and-health-issues-with-s-j-pajonas/
Thanks for sharing! Ultimately, I don't mind AI policies that bar or limit the use of GenAI in writing, and I empathize with people's general distaste for it. But writing is used in so many contexts by so many different people -- you can't have blanket judgments on what should or should not be used without looking at the context.
Blanket judgements are what social media is good at hahaha
And thanks for sharing the article!
The podcast was really validating. I went, man, that's me! Then I got a tad envious that theirs was a temporary condition while mine was, well, just there lol. But! I love how they validated a lot of the things I did unconsciously, and I do love how they say fiction writing has gotten so fun thanks to AI, and I have to agree. I feel as if my love for fiction is being revived.
https://open.substack.com/pub/hamtechautomation/p/a-battle-tested-sredevops-engineers?r=64j4y5&utm_campaign=post&utm_medium=web
I’ve rarely read a more grounded, battle-hardened account of AI in the DevOps trenches!
The midnight triage, the AI that actually helps, the cultural shift from "heroes with pagers" to "orchestrators of intelligent systems." It’s all here, and it’s not hype. It’s tactical, human, and slightly hilarious (as it needs to be).
What stood out most wasn’t the tooling (though Datadog, Claude, Bedrock, and Copilot are doing serious heavy lifting), it was the quiet shift in role identity. We’re not replacing engineers. We’re replacing firefighting with foresight (what a thought!). And if we get it right, we’re not just keeping systems online, we’re rebuilding the definition of operational trust.
DevOps is no longer just digital plumbing. It’s becoming the neural network of enterprise stability.
Hell of a write-up, Josh. Bookmarking this for every AI-skeptical SRE I meet.
Thank you so much!