What is Authorship in the Age of GenAI? Part 1
"AI-vestigating," purity tests, and bullying of individuals won't address the structural harms of AI. They can make them worse.
This is the first in a series exploring authorship and AI.
In part one, I explore the complex landscape surrounding authorship and tool usage.
In part two, I break apart the main uses of AI by Substack writers and the main arguments for/against this use.
In part three, I propose a dynamic framework to judge the ethics of AI writing assistance. With the help of Claude, I've also made a cheat sheet/handout of this post for reference.
In part four, I provide AI writing assistance case studies that I have personally used and can vouch for.

I. Bullying People for GenAI Usage Won't Address AI's Actual Harms
I was honored and curious when Elizabeth mentioned that her recent experimentation with AI for creative writing was inspired by my piece on neurodivergence and AI. I was also horrified by what happened next.
Elizabeth wrote an honest, thoughtful piece about using AI to help complete a personal fiction project that had been stuck in her head for months due to ADHD bandwidth issues. She wasn't selling this work. She wasn't even planning to publish it. This was literally a hobby project for her own enjoyment—a story she wanted to read but couldn't find the executive function to finish writing. Hell, she even said she didn't like the output and would rewrite it anyway.
The backlash was swift and cruel. "I've never unsubscribed so fast," wrote one commenter on an article that wasn't even written with AI, merely one describing the process. Others accused her of "disability washing" her AI use, as if her neurodivergence were somehow a convenient excuse rather than a legitimate factor affecting her creative process.
This example is evocative of a broader trend I've noticed on Substack. On the whole, I've loved my past few months here. The community is thoughtful, I've gotten to read more interesting pieces than in many established outlets, and in general, people are friendly and constructive. But mounting “AI-vestigating” and “AI-shaming,” rather than cleaning up the slop, is creating a witch hunt that has the potential to poison one of the few online platforms that aren't complete garbage (knock on wood).
The use of GenAI for writing is, rightfully, a contentious topic. It raises questions about ethics, about cognitive deficits, and about the extent to which AI is a tool versus an author in its own right. More practically, it also fuels real material anxieties about people being able to continue their livelihoods in already precarious and woefully undervalued fields (at least in terms of material value within the economic systems most of us live in), such as translation, copy-editing, or teaching.
But the (over-)reaction we see on Substack reveals why we need better frameworks for evaluating AI assistance in creative work. Elizabeth's transparency should have sparked productive discussions about disclosure norms and accommodation principles. Instead, it triggered moral panic that made transparency more dangerous.
What's more, these attitudes ignore the long history of complexity in authorship, which often involves more collaboration than people like to admit. They also echo debates that have recurred with each wave of technological innovation in creative fields. I explored this previously in a post on how GenAI mirrors (and doesn't) collage as an art form.
I have a lot of thoughts on this issue, so this will be a series where I aim to separate legitimate structural concerns about AI development from individual use cases, examine how AI assistance relates to existing collaborative practices we already accept, and establish principles for moving forward. Because ultimately, the question isn't whether AI use is inherently good or evil—it's when, how, and with what disclosure it can be legitimate.

II. When Honesty Becomes Heresy
The writing community's worst fear about GenAI usage is the following: pieces written entirely by AI with little to no human input, littering the internet and an already competitive publishing industry with slop, turning people into thoughtless sheep, and destroying the craft and the livelihoods of those who've worked tirelessly to perfect it, all after exploiting their work without consent or compensation. Based on the sampling of hyperlinks in this paragraph alone, you can tell that this fear is not unfounded.
These views are especially present on Substack, where many people make a living through their writing skills and are grappling with these issues as they impact their medium in real time (and often on the platform itself): AI slop littering their feeds, essays that don't match a student's voice and make you doubt their origins, and increasing pressure from broader society and workplaces to use GenAI, if not be replaced by it.
These structural problems deserve our energy and activism. Direct that energy toward policymakers, labor organizers, and companies whose practices you find objectionable.
But shaming individuals for personal use of AI tools—especially when that use involves accessibility needs or experimental creative projects that harm no one—doesn't address these structural problems. It just makes transparency more dangerous.
Elizabeth's situation illustrates this perfectly. Her process was hardly the "AI replacing human creativity" nightmare scenario that dominates these discussions. She had the complete story arc in her head. She'd already written rough drafts and dialogue. She used AI to help polish one chapter to match the quality of previous chapters she'd written entirely herself, then extensively revised the AI output to better fit her vision and voice.
The cruelty of that phrase—"disability washing"—still gets to me. Here was someone being transparent about using a tool that helped her overcome barriers to creative expression. The response was to question the legitimacy of those barriers entirely. As I wrote in a follow-up note, the reaction to her disclosure contrasts with the one I got when I discussed how I used GenAI while recovering from a concussion. There is a broader issue of which disabilities are seen as “legitimate” enough to justify the use of assistive technologies (AT), including GenAI as AT, that deserves a whole post unto itself.
When we attack individual users like this, we're not fighting structural AI problems. We're making it less safe for people to be transparent about their tool use. This undermines the very thing we claim to want: honest disclosure about AI assistance.

III. The Collaboration We Don't Talk About
Here's what's particularly striking about the reaction to Elizabeth's transparency: the writing world is already full of collaborative practices that we either willfully ignore or accept without question, and that have always trodden difficult moral and legal ground.
Christopher Whittaker laid this out brilliantly in his response to Elizabeth's piece. Alexandre Dumas worked with a team of ghostwriters, most notably Auguste Maquet, who provided first drafts of The Three Musketeers and The Count of Monte Cristo. Dumas then reworked them in his distinctive style. Harper Lee's To Kill a Mockingbird was originally focused on Atticus and reflected his more racist attitudes. Her editor, Tay Hohoff, persuaded Lee to rewrite with Scout as the main character and was significantly involved in the development of the novel. F. Scott Fitzgerald's This Side of Paradise was substantially revised with help from editor Maxwell Perkins, transforming a bloated manuscript into his breakthrough novel.
Historical precedent goes back to Biblical times, when scribes (amanuenses) took dictation, edited, and sometimes composed letters based on general instructions. The Apostle Paul used amanuenses like Tertius, yet Paul remained fully responsible for the content. We've always had assistive writing technologies and collaborative processes.
These arrangements exist to this day. Some people use them to overcome the communication challenges presented by certain disabilities. One of the most famous cases is that of journalist Jean-Dominique Bauby's memoir, The Diving Bell and the Butterfly. After suffering from a stroke that left him with locked-in syndrome, he was only able to communicate by blinking his left eye. A transcriber helped put his words on paper after over 200,000 blinks.
Some people collaborate due to time or skill limitations. It's a given that most celebrities and politicians, limited in time and often without the training to write a book, use ghostwriters for their memoirs. This doesn't mean they don't have interesting stories to tell or that people won't want to read them.
Others collaborate due to the sheer complexity of a subject. This is especially true in non-fiction and research, where doing a deep dive would take extensive amounts of time. These collaborators often take the form of research assistants, PhD students, and teaching assistants.
However, even amongst these seemingly straightforward collaboration agreements, there is a lot of ambiguity regarding collaborators’ roles and credit. In my field—the policy world—I've seen this at many think tanks. Fellows often have one or several “research assistants” who may serve as translators, editors, and ghostwriters, frequently without ever being credited as such. The unspoken agreement is that the more established fellow will then provide mentorship and access proportionate to the research assistant's contribution. Needless to say, this doesn't always pan out. This type of engagement is normalized, rationalized, and compartmentalized to the point where some people in the field may scoff at the idea of hiring a ghostwriter but, in practice, use their army of researchers as exactly that. This isn't unique to policy, however — there are versions of these ethical grey areas across all fields.
What this all illustrates is that the writing ecosystem is already incredibly complex. One person's editor is another person's ghostwriter. And credit and compensation are not always proportionate to contribution.

IV. The Spectrum of Collaboration
It's important to distinguish the levels and types of contribution to writing, noting that where “support” ends and “authorship” begins can vary significantly.
Authorship usually hinges on the concept of “substantial intellectual contribution,” but there are several different interpretations of what this means. Looking at the different roles in collaboration can help map where the murkiness lies.
1. Human Collaboration's Ethical Lines and Debates
The Acknowledgment Ecosystem: The creation of published works typically involves networks of contributors whose efforts, while not meeting authorship thresholds, deserve recognition. These may include contributions like securing funding, general supervision, administrative support, writing assistance, technical editing, or routine data collection.
Translation creates its own category—derivative works that transform existing content while potentially earning independent copyright protection. We accept that translators make creative choices that shape meaning, yet we don't treat this as collaboration with the original author.
Editing: The distinction between editing and co-authorship fundamentally hinges on the nature and extent of intellectual contribution to a work's core content. While editors refine, polish, and improve existing text, co-authorship implies deeper, more generative involvement in creating the work's fundamental intellectual or creative substance. However, there is a murky territory where editors’ contributions shift from enhancing the author's vision to supplying the foundational creative elements of that vision. Here are some levels to think about that increasingly flirt with authorship as the amount of involvement veers from refinement and questioning to supplying and generating:
Copyediting and proofreading: only correcting grammar, spelling, punctuation, format, syntax, and style guide rules.
Standard developmental editing: big-picture edits/recommendations on structure, pacing, and consistency. Often uses the Socratic method to prompt author action.
Line editing and rewriting: from prompting/suggesting changes to implementing them more directly.
Co-plotting: developing substantive elements.
Ghost author: editor in name, ghostwriter in practice.
Ghostwriting presents perhaps the most ethically complex scenario in the collaboration spectrum, where legal permissibility under contractual arrangements doesn't necessarily equate to ethical acceptability. The core dilemma centers on potential reader deception: when credited "authors" haven't genuinely contributed core ideas, experiences, or voice, the audience is fundamentally misled. The spectrum of client involvement ultimately determines the ethical line between legitimate collaboration and fraudulent misrepresentation.
Some freelancers emphasize a useful ethical barometer: if discovery of the arrangement would cause shame to either party, it likely indicates underlying ethical problems with the collaboration.
2. Tools as Collaborators
Assistive technologies and scribes
The experiences of authors with disabilities using assistive technologies provide powerful lenses for examining core components of authorship, often separating intellectual creation from physical inscription. Jean-Dominique Bauby's composition of The Diving Bell and the Butterfly through blinking represents an extreme case where authorship was unequivocally affirmed not by physical writing ability, but by clear evidence of intellectual origination, creative control, and unique voice despite extreme mediation.
The acceptability of assistive collaboration often depends on the limitations of the author. There are also concerns, especially when it comes to severely disabled people, about the extent to which, and the way in which, their voice is being accurately represented.
Recurring Themes of Anxiety Surrounding Technologies’ Effects on Authorship
The anxieties surrounding GenAI echo long-running debates about the effects of new technologies on our cognitive abilities, craft, and economies. This could be a whole post unto itself, and I wrote about a version of this comparing AI to collage, but here is a quick and dirty summary table.
Note that just because these anxieties are old, it doesn't mean that they're illegitimate, or that GenAI doesn't have specific concerns that differ from other technologies.

V. Existing Authorship Frameworks
As we saw in the previous section, the levels and types of collaboration can be murky, leading to the need for more precise frameworks, such as contracts, laws, and guidelines, to define authorship.
There is no single, universally applied quantitative scale for measuring authorship contribution across all industries. Different fields have developed formal criteria for distinguishing authorship from other contributions. Here are a few guiding frameworks and some initial ways that these can/are being applied to AI usage:
1. Legal
Copyright law provides the foundational legal framework that already handles complex authorship scenarios. In the United States, copyright protects "original works of authorship" that meet two key requirements: originality (independent creation by a human with minimal creativity) and fixation (capture in a sufficiently permanent medium). Crucially, U.S. copyright law requires human authorship—works generated entirely by non-human entities without sufficient human creative input are not eligible for protection.
The "work made for hire" doctrine demonstrates how legal authorship can be separated from direct creation. Under this framework, employers or commissioning parties become the legal authors and copyright owners, not the individual creators. This principle already governs ghostwriting contracts and offers a potential model for AI scenarios where ownership and rights might be contractually allocated among developers, users, and employers.
International copyright frameworks vary in their approach to technologically mediated creation. While the Berne Convention establishes automatic copyright protection upon creation, specific criteria for authorship differ across jurisdictions. The UK's Copyright, Designs and Patents Act includes a unique provision allowing "the person by whom the arrangements necessary for the creation of the work are undertaken" to be deemed the author of computer-generated works—a notable divergence from the U.S. emphasis on direct human creative expression. These international variations mean that AI-assisted works could have different copyright status depending on jurisdiction, highlighting the need for ongoing dialogue to address AI's challenges to existing frameworks.
2. Industry-specific: A few examples
Academic publishing uses the most rigorous standards, with the caveat that different disciplines have their own norms, guidelines, and rules governing authorship. For instance, the International Committee of Medical Journal Editors (ICMJE) requires all four criteria for authorship:
Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
Drafting the work or reviewing it critically for important intellectual content; AND
Final approval of the version to be published; AND
Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
The Committee on Publication Ethics (COPE)'s minimum authorship definition requires substantial contribution to the work and accountability for the work that was done and its presentation in a publication. Both explicitly state that AI cannot be an author and must be disclosed in detail.
Creative industries take more pragmatic approaches. The Writers Guild of America uses quantitative thresholds—you need to contribute at least 33% of a final script to get credit, while directors need to rewrite 50% to get writing credit. They distinguish between "Story by" (narrative/character development) and "Written by" (full script contribution).
In terms of AI, the Authors Guild developed the “AI Best Practices for Authors,” emphasizing that AI should serve as a tool to support, not replace, human creativity and authorship. Among many thoughtful recommendations, writers are advised to avoid using AI to generate text directly and to rewrite any AI-assisted content in their own voice. They must also disclose AI use to both publishers and readers when significant AI-generated material is incorporated, since such content isn't considered original or copyrightable and may violate publishing contracts. Authors are also encouraged to use the Guild's "Human Authored Certification" mark for entirely human-written works and to negotiate contract clauses that prevent unauthorized use of their work for AI training, while distinguishing between problematic training uses and acceptable operational uses like editing or marketing by publishers.
VI. How Should We Think About Authorship and AI?
These historical, legal, and industry examples demonstrate that creative collaboration has always been more complex than our myths suggest. And legal fields, publishing, and authors’ organizations are working hard to create initial guidelines in a constantly evolving and dynamic environment.
But how do we apply these insights to our thinking about GenAI assistance in writing more generally? How does GenAI differ from earlier writing tools? How can we distill complex and diverse guidelines and frameworks into something more intuitive? How do we carve out when GenAI use is acceptable and when it isn't? And how do we push back and advocate effectively for individuals’ creative rights and livelihoods?
These are questions I will tackle in part 2. If you have any thoughts, ideas, concerns, or areas you would like me to develop, please let me know!