What is Authorship in the Age of GenAI? Part 2
Dissecting the Substack AI Report's Comment War: Legitimate Concerns, Biases and Fallacies, and the Need for Thoughtful Debate and Cooperation
This is the second in a series exploring authorship and AI.
In part one, I explore the complex landscape surrounding authorship and tool usage.
In part two, I break apart the main uses of AI by Substack writers and the main arguments for/against this use.
In part three, I propose a dynamic framework to judge the ethics of AI writing assistance. With the help of Claude, I've also made a cheat sheet/handout of this post for reference.
In part four, I provide AI writing assistance case studies that I have personally used and can vouch for.

While I originally envisioned a two-part authorship and AI series, it will likely be longer. The same day I posted my first article on authorship and AI, Substack published its own AI report, which showcases the findings of a survey examining Substack publishers' AI usage and attitudes. The findings were interesting, but what was most remarkable were the reactions to the report. Both the findings and the reactions illustrate well the points I made in my first article, and both are worth dissecting.
I. Highlights From the Substack AI Report
Before diving into the reactions, it's important to clarify what the report did and did not say, and to put it all in context.
Nearly Half of Substack Uses AI
Of the 2,000 publishers surveyed, 45.4% use AI, 52.6% do not, and 2% are unsure (I don't understand how this could be; can anyone illuminate it for me?). However, this covers any sort of AI usage. This doesn't mean that nearly half of Substack is prompting ChatGPT to churn out essays from a single prompt that they then copy-paste and mindlessly publish. The top two uses were research (two-thirds of AI users) and ideation/brainstorming (slightly over half of AI users). Writing Assistance came third, with a bit under half of responding AI users using it that way.
The tricky part is that what is meant by “Writing Assistance” is unclear, and it can fall under a variety of areas—some where the human author is contributing significantly, and some in which they are not. “Writing Assistance,” without more details than that, can include any of the variety of writing supports that I detailed last week, including:
translating
copy-editing and grammar
basic developmental editing / Socratic feedback
line editing and rewriting
developing substantive arguments/plots
ghostwriting
Depending on the industry, the stated goal of the article, and transparency, the appropriateness of each of these uses can vary. That said, Substack did seek to draw a line between assistance and generation:
When AI-using publishers were asked to estimate how much of their published content in the past three months had been AI-generated, “including AI-generated sentences, paragraphs, images, and videos,” most reported that they hadn’t used generative AI at all, or used it sparingly—though there is a small cohort that reported using it in 100% of their posts.
The vast majority use GenAI to generate text very sparingly. Eyeballing the chart, roughly 20% of AI users have over half of their content generated by AI, and about 4% use it in 100% of their posts. That would mean roughly 9% of total respondents (about 180 people) use AI to generate more than half their content, and roughly 2% (about 40 people) use AI to generate all of it.
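For transparency, here is the back-of-the-envelope arithmetic behind those estimates. The shares are my eyeballed readings of the report's chart and the respondent total is rounded, so treat the outputs as rough orders of magnitude rather than exact counts:

```python
# Rough estimate of how many respondents generate most or all of their
# content with AI. All inputs are rounded or eyeballed from the report.
total_respondents = 2000      # approximate survey size
ai_user_share = 0.454         # 45.4% of respondents report using AI

ai_users = total_respondents * ai_user_share  # ~908 people

# Eyeballed shares of AI users (not of all respondents) from the chart
over_half_share = 0.20        # ~20% of AI users: >50% of content AI-generated
fully_generated_share = 0.04  # ~4% of AI users: 100% of content AI-generated

over_half = ai_users * over_half_share              # ~182 people
fully_generated = ai_users * fully_generated_share  # ~36 people

print(f"AI users: ~{ai_users:.0f}")
print(f">50% AI-generated: ~{over_half:.0f} "
      f"({over_half / total_respondents:.1%} of all respondents)")
print(f"100% AI-generated: ~{fully_generated:.0f} "
      f"({fully_generated / total_respondents:.1%} of all respondents)")
```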
Demographics and Industries of AI Users
A slight majority of AI users were over 45 years old, and over half of them were men. Men were also less likely than women to express concerns about AI usage (47% of men expressed concerns, compared to 67% of women).
These demographics track with the Substack categories that tend to use AI the most — Technology, Business, and Finance tend to be male-dominated fields. They are also fields where writing is seen less as a craft and more as a vehicle to communicate effectively, and where AI usage is rampant and highly encouraged. It's no wonder that they show more AI usage.
In my view, this is also an illustration that AI is not necessarily destroying the humanities. These industries are not necessarily known or valued for their literary aptitudes or creativity. And there's nothing wrong with that—they bring different value to the table and function under different logics, with different work cultures.
In contrast, and unsurprisingly, the categories where authorship, creativity, and craft are more highly valued and norms are stricter (e.g., literature) show less GenAI usage. This suggests that the Substacks churning out slop to replace literature are probably less common than people assume.
What's more, AI users, who tend to skew towards industries that are less threatened by AI, were more optimistic about AI's impact on their work over the next five years.
If anything, this graph shows the bubbles different people live in, depending on their industry, attitudes towards technology, how useful GenAI is for their needs, and more. This makes discussion, nuance, and empathy across users all the more important. And that's what I will look into in the next section — dissecting the different reactions and arguments for and against AI usage on Substack.
II. Dissecting the Battle in the Comments
The report unleashed a firestorm in the comments that was representative of the frustrations, arguments, and attitudes often seen in critiques of AI usage on Substack. Unsurprisingly, the vast majority of reactions were negative, although there was a small but vocal minority that was positive about AI usage, including those who use it as assistive technology. I'll break apart some of the most common arguments for and against AI, as well as their validity.
Top Arguments and Issues Discussed
Intellectual Property: "IP Theft" vs. "Fair Use"
The most visceral reactions centered on intellectual property violations. The frustration and sense of powerlessness are likely compounded by the inadequacy of technical and legal protections. This landscape is dangerously unsettled. Early court decisions favor the fair use argument used by AI developers, but cases with clear market harm (like The New York Times v. OpenAI, where the model reproduces paywalled content) present much stronger claims for plaintiffs. The immense financial stakes are driving AI companies toward licensing agreements as risk management, potentially creating a tiered system where large entities negotiate lucrative deals while smaller creators lack leverage.
Corporate Control: Billionaire Dominance vs. Democratic Innovation
Multiple commenters expressed concern about "giving your IP for free to train AI owned by billionaires." One person described AI as a "long con" where AI companies create dependency to raise prices later. This is not unfounded: the development of frontier AI models is overwhelmingly dominated by for-profit motives and corporate giants. Only these entities possess the necessary capital, data, and computational infrastructure. The symbiotic relationship between AI labs and cloud providers creates a closed loop that concentrates power. While open-source alternatives like Mistral and DeepSeek provide a counterbalance, the primary trajectory is set by the strategic and commercial interests of powerful corporations. The concern isn't that a profit motive exists, but that it is so overwhelmingly dominant, potentially sidelining safety, equity, and transparency.
Human Creativity: "Creative Amplifier" vs. "Creative Atrophy"
There were several debates on whether AI atrophies or promotes creativity. The reality is that it can do both depending on how it's used, which I discussed in my article breaking down what the infamous "MIT Brainrot Study" actually said. Passive, uncritical use (prompting AI and accepting output as finished) doesn't use your brain in the same way, and if that muscle isn't exercised, it could atrophy: the "cyborg" model, where humans become dependent. Active, intentional use (treating AI as a brainstorming partner or technical assistant after one has thought first) can enhance creativity: the "centaur" model, where humans provide strategic direction. The key differentiator is the user's ability to guide, question, and iterate upon AI output.
Environmental Impact: Sustainability Crisis vs. Innovation Solution
There is no doubt that the data centers used to train and power GenAI damage the environment. In addition to consuming large amounts of energy, they overheat, requiring huge volumes of water for cooling, and sometimes poison water supplies. While this is an area tech companies are investing in (it's not in their interest to run data centers that overheat and are inefficient, either; it's costly, albeit nothing compared to the costs borne by people living near the data centers), it's still nowhere near enough to make up for the environmental cost.
That said, it's important to understand the drivers behind this energy consumption, because it is not mainly individual usage of GenAI. The most energy-intensive uses of AI are training, developing new models, and enterprise-wide deployments. Because of the intense competition in the AI space, companies are incentivized to develop and release new models as rapidly as possible, which further strains energy usage. If you think about it, while the differences between one model and the next can be impressive, they're rarely necessary: for most AI users, newer models are not the game-changers they're portrayed to be. It's similar to Apple releasing a new iPhone every year that is only marginally better than the last. If AI development were slowed down, it would be more sustainable and still meet most users' needs.
That's not to say that individuals shouldn't be thoughtful about their AI usage, or their carbon footprint more generally. But individuals are probably not the most effective target for addressing AI's energy consumption, and AI is perhaps not the biggest fish to fry when seeking to reduce one's individual carbon footprint. For instance, reducing red meat consumption, using public transportation, and powering your home with solar are potentially more impactful individual-level changes, depending on your consumption levels. Rather than focusing solely on stopping or reducing AI usage, it would be more effective to advocate for it as one of many individual-level changes that encourage people to live more sustainably.
Despite the prominence of these legitimate arguments, many of the arguments put forward had problems that reveal more about how deeply personal this topic is than about AI itself.

Cognitive Biases and Fallacies in the AI Debate
There was a lot of misinformation, misunderstanding, and cognitive bias in the comments. This sometimes made me wonder whether people had read the article, comprehended it fully, or fully understood what AI usage means and what effects it has. The attitudes within the comments section were similar: a lot of bile and not a lot of willingness to understand different perspectives.
To some extent, this is understandable: debates about AI usage are emotional. For both anti-AI and pro-AI people, AI usage either threatens or reinforces their self-conception and identity. For the former, AI usage can represent the death of their craft, economic insecurity, and a big, scary unknown. For the latter, if AI has allowed you to express or explore parts of yourself that had never been explored before, then attacks on AI usage can feel like attacks on your ability to grow or be yourself. In any case, the emotional reaction, while legitimate and oftentimes built on well-founded reasoning, clouded the judgment of many commenters, leading to reactivity rather than discussion or debate.
In order to reduce the existing and potential harms of AI, we need to be able to have rigorous debates. Below are some of the most common biases/fallacies I noticed, not to shame people who made those arguments, but to raise awareness and push towards more substantive and nuanced debates. Note that many of these are anti-AI arguments because the vast majority of the comments were anti-AI, so there were more examples for me to pull from. This doesn’t mean that people who are anti-AI are more prone to these biases or fallacies, and I’m also prone to them–we are all human, after all.
Confirmation Bias, or when you interpret/cherry-pick information in a way that confirms your beliefs
People often conflated AI usage with generating full posts with AI. It seems like many people read the report as "45% of Substack posts are AI-generated" instead of "45% of Substackers use AI," which led to reactions against a claim the report never made.
Some people, in both the anti-AI and pro-AI crowds, used the AI usage data to argue that those who use AI make more money. However, the data in the report didn't support this: AI usage was equal across all revenue levels.
Naive Realism, or when you believe your worldview is objectively correct and that anyone who disagrees is uninformed, irrational, or unintelligent; and Bulverism, a form of ad hominem where you don't refute an argument but simply assume it's wrong and then explain why the person holds the wrong belief by pointing to their motives or identity.
Anti-AI people calling AI users lazy, stupid, irresponsible, slop creators, bringing the downfall of XYZ, selling something/having economic interests, etc.
Pro-AI people saying that anybody not using AI will lose out/be losers in the future, and that AI critics necessarily do not understand AI/are ignorant/are against progress
False Cause Fallacy, or the incorrect assumption of a cause-and-effect relationship; and Intentional Fallacy, or the error of judging an action, statement, or creation based on the assumed intention of its creator rather than on available facts.
Many people interpreted Substack's report as an endorsement of AI usage, which is odd since the report didn't seem to be promoting anything. Some even accused Substack of using their articles to train its own AI, for which there is no evidence. Rather than assuming evil AI-driven corporate interests, the report could be an indicator that Substack is listening and wants to take the temperature of the platform to give context to the broader debates surrounding AI usage.
Genetic Fallacy, or judging something as good or bad based solely on where or whom it comes from (its genesis/origin).
In the case of AI, this means dismissing the entire field and its potential applications because of the unethical methods behind some of its most prominent models. First, not all AI is commercial LLMs like ChatGPT. Second, some people, sensitive to the issues surrounding commercial LLMs, train or run their own LLMs locally. This is not very common yet, but it may become increasingly so; in other words, you don't know what tool is being used, and it may be a more ethical one. That said, right now it's safe to assume most people are using commercial LLMs. Third, a lot of foundational technologies, such as the internet, have pretty sketchy and questionably ethical origins. That doesn't erase the ethical concerns surrounding them, but it also doesn't mean that their use is purely unethical.
Throwing the baby out with the bathwater
Some people said AI should never be used because hallucinations and other issues show that it is completely unreliable, completely evil, and so on. However, this overlooks the fact that risks related to AI can be mitigated (e.g., you need to fact-check sources and not take what AI gives you at face value) and that, ultimately, the person who puts their name on the article is held accountable for any errors in the work they publish. This argument also overlooks that the context in which AI is used matters: using unchecked AI for litigation is different from publishing a random Substack piece with a hallucinated footnote. Both are bad, and the latter makes you look like an idiot, but the former is clearly more concerning.
Anecdotal Fallacy, or when a personal story is used to generalize, and False Equivalencies
Many of the more pro-AI users described using AI as assistive technology for their disabilities. However, many commenters responded with some version of "Well, I have [insert disability, usually ADHD because nobody takes it seriously and it's widely misunderstood], and I don't need to use AI. You must be lazy/terrible/etc." I find this type of argument and logic particularly problematic.
First, disabilities are not monolithic experiences. The same disability can present very differently in different people, and people with the same disability may live under different conditions, circumstances, and environments that can impact their disability in various ways. These may include:
Specific symptoms you may/may not have. For some people with ADHD, for instance, writing is a non-issue. For others, it’s almost impossible for them to sit down and write.
Co-morbidities: Many people with both ADHD and dyslexia, for instance, especially struggle with writing.
Gender, race, socio-economic status, environment, and support: How you are treated by society, and how much material and emotional support and how many resources you have, impact the severity and presentation of disabilities. For example, someone with a good doctor or therapist who has identified the treatments and tactics that work for them is better off than someone with bad doctors, little support, and little access to information on how to manage their symptoms.
Life stressors: Sometimes life is hard. You may be miserable at work, a family member may die, you may live in a country that's becoming a fascist dictatorship... and that can sometimes worsen certain symptoms.
Bodily changes: Changes in our bodies can also impact the symptoms we present. For instance, women have been grossly underrepresented in ADHD studies (and are grossly under-diagnosed, which is why most women get diagnosed as adults). This means that ADHD medications, such as stimulants, don't take into account how women's bodies react to them. During the luteal phase (the week before one's period), stimulants tend not to work. This means that for roughly a quarter of the year, many women with ADHD who use stimulants are not being adequately supported.
In other words, you don’t know what others are going through or what their circumstances are, even if you have the same disability.
Second, this type of argument reeks of respectability politics and internalized ableism. You are not better or worse, stronger or weaker, for needing (or not needing), or even just benefiting from, certain types of assistance. This logic also problematically echoes the societal message many people with disabilities often get: that being disabled is a personal failure, requiring assistance reveals a lack of willpower, or that disabilities are something to “overcome” through sheer effort. Worse, putting someone down to showcase that “you are not like other disabled people” is the kind of pick-me energy that keeps disabled people oppressed, misunderstood, and undersupported.
Third, this "hustle culture" logic blurs the line between productive struggle (the challenge necessary for learning and growth) and systemic friction (the exhausting, constant effort required to function in a system not built for you). Disabled people are, more often than not, anything but lazy, especially given how taxing their lives can be. Not all struggle is character-building, and shaming people into potential burnout, illness, and not seeking the help they need does more harm than good. This shaming makes people with disabilities feel less safe disclosing their conditions and asking for help, pushing them further into marginalization.
In short, there is a good debate to be had about identifying the contours of when GenAI is an appropriate assistive technology and about incentivizing productive struggle, but shaming strangers because they manage their disability differently from you contributes to neither.
III. Conclusion: Systemic Problems vs. Individual Responsibility
Despite the name-calling and shaming in the comments, aimed at people who use AI and at people who don't, most of the issues raised are not created by individuals. They are structural issues. Rather than sowing division at the individual level, the best way to tackle them is to build alliances across different groups to advocate for increased fairness in AI development and usage, push for precision and transparency in AI usage, and create guidelines, frameworks, and guardrails for safe, ethical, and responsible use. That includes the transparency that many commenters called for. More on all this in the next parts of this series.
GenAI will always be part of our lives now. What's needed are clear norms on transparency, ethics, and where human input matters most.