Note: If this article seems too long to you, ask an AI to summarize it.
Then, if you want, read it and you’ll see what you missed.
I have a friend who worked for many years at one of Spain’s leading publishing houses.
As she told me, there isn’t a book that doesn’t go through rounds of spelling, style, and rhythm editing. Editors, professionals who often aren’t even acknowledged in the credits, take the original manuscripts and turn them into the book that ultimately gets published.
She gave me examples of National Award winners whose original manuscripts were very far from the final published version. I’ll keep those names to myself, but you can take my word for it.
No one escapes editorial correction.
That’s why her valuable advice, when I wrote my book Optimización SQL en Oracle, was to hire a proofreader/editor. And I did.
The editor for my book, Raquel García Rojas, appears in the credits and in the list of authors and collaborators, and she was considered a co-author, on the same level as the technical reviewers Jetro Marco and Arturo Gutiérrez.
I still keep her corrections: “this sentence isn’t clear,” “this sentence can’t be read out loud,” “here you’re saying the opposite of what you’re trying to say.”
Pure gold.
To put this in dates: it took me two years to write the book, and the technical, grammatical, and style editing took another year.
Without all that, the final result wouldn’t have been publishable.
The same happens with music.
I know firsthand stories about famous musicians whose producers transformed their songs for release, resulting in versions that were completely different from the originals.
They call it arrangements.
It’s unthinkable that a studio album would be released with the band simply playing all together at once. Not a chance. Each instrument is recorded separately; each line is adjusted and equalized; and if the voice is out of tune, the pitch is corrected. No debate.
Here’s the framing that interests me, beyond any specific case.
In real life, there’s a growing tension: on one hand, we’re excited about using AI and the benefits it can bring; on the other, it’s clear we need to elevate the human voice, judgment, and experience behind what we publish.
Not because AI is “bad,” but because it’s too easy to produce content that sounds good without a real person behind it.
The role model’s responsibility.
In 2022 I was awarded the Oracle ACE Pro recognition. The ACE program promotes technical outreach by experts, recognizes their ability to communicate and transmit knowledge and technical culture, and values this altruistic work because it encourages Oracle professionals to keep growing and learning.
The award is ad honorem: there’s no financial compensation, but at higher levels, there are certain benefits, such as admission to Oracle’s annual event, certifications, and cloud credits to use in OCI, among others.
As in any program based on reputation and contributions, there are criteria for maintaining recognition. And in a space like that, where you sign with your first and last name, it’s vital that what you publish reflects your voice and your judgment as a professional.
Behind every ACE there is a real person, and it’s vital that that person’s identity stands behind all the work they do for the community—whether or not it’s recognized by a program or an award—and precisely because of that recognition, it’s on us, the ACEs, to be ethically responsible and to be role models for everyone else.
But to what extent should we limit the use of AI?
This is where I, a purist of writing by hand, finger on key, and a defender of imperfect, natural human expression, say: “It’s becoming necessary to create an ethical code around this.”
And for extra credit: write it down and make it public.
WIRED, a leading international publication on technology and digital culture, has been that bold: they’ve put it in writing and published it on their website, making it clear under what terms they accept, and do not, AI use.
Musicians plug their guitars into effects pedals, loopers, and so on, and the guitar sounds different. Voices are equalized, compressors and reverb are applied, and even the biggest artists use auto-tune. Writers, likewise, have editors who correct style and spelling, and translators who carry their books into other languages. So why wouldn’t it be acceptable to use AI to translate an original article I wrote into English, or to polish the writing and style without losing my voice?
AI is flooding everything.
Oracle has even renamed its flagship product, the database, as Oracle AI Database.
As part of the program, ACEs communicate and share the transformation AI represents and how it will improve the quality of our work, our learning, our businesses, and the way we solve problems or create new solutions.
It makes sense that we, the Oracle ACEs, should set the example of a responsible and “human” use of AI.
I believe we should establish foundational principles and statements about what we consider a reasonable and ethical use of AI in content creation. Because we can’t advocate for this technology and, at the same time, not use it responsibly.
We all know AI isn’t very good at creating content from scratch, but it is great at assisting with content that is already mature.
It’s obvious we don’t want other domains to end up like Amazon’s self-publishing, where the avalanche of auto-generated AI books, which looked like the real thing but lacked any human experience, forced Amazon to limit how many books a single author can self-publish to just three… per day!
It happened with travel guides like “Explore Rome on Foot in 5 Days”: an AI generator would churn out books by simply swapping the city: “Explore Milan on Foot in 5 Days,” “Berlin,” “Madrid,” “New York”… and the reviews (once those books had sold by the thousands) pointed out the obvious: the author probably never actually walked these routes, because it’s humanly impossible to complete these itineraries without dying in the attempt.
It also happened with cookbooks full of inedible recipes, or with dangerous books labeling poisonous mushrooms as edible.
Even learning platforms like Udemy are full of AI-generated courses with similar reviews: “It doesn’t seem like the teacher knows the syllabus; it feels more like they’re reading a script automatically.”
But we need to be careful. No one is safe from being marked with the scarlet letter, even when innocent.
Ronald Vargas, a leading figure in the Oracle world and someone I trust a lot, recently ran Oracle’s official “New Features” documentation through an AI-content detection tool. Big surprise: a 74% score for “suspected AI content.”
It’s almost impossible that it was generated by AI; we’re not even talking about variations of existing documentation, but brand-new content. Or did technical writers use AI internally to produce the official documentation?
We’re at a point where not even official documents, from official sources, with content that doesn’t come from a previously published base, can escape a false positive.
And all of this reminds me of what happened at Stack Overflow with the wave of AI-generated answers.
We all want human content, even AI does
Stack Overflow is a tech community where experts answer questions, building reputation and prestige through their contributions. They were overwhelmed because they couldn’t distinguish which answers were generated by AI and which were from real engineers. And they suspected that many answers were “artificially generated” because users complained that the texts seemed to make sense, but technically fell apart.
As Stack Overflow publicly communicated:
The main problem is that, while the answers produced by ChatGPT and other generative AI technologies have a high rate of being incorrect, they generally appear to be good and are very easy to produce.
The solution Stack Overflow proposed was to ban and suspend users suspected of having used AI.
And though everything up to that point seemed reasonable, they ultimately partnered with OpenAI to integrate their entire knowledge base and train ChatGPT. The result: users leaving en masse, others deleting their content, and Stack Overflow reminding everyone that, even if users delete their contributions, that content remains Stack Overflow’s, no matter how much of it was published under a Creative Commons license.
See for yourself: excerpts from Stack Overflow’s terms and conditions.
Section 6. Content Permissions, Restrictions, and Creative Commons Licensing, under “Subscriber content”:
You agree that all content (…) that you provide (…) is licensed perpetually and irrevocably to Stack Overflow worldwide, royalty-free, and non-exclusive.
You grant Stack Overflow the perpetual and irrevocable right and license to access, use, process, copy, distribute, export, display, and commercially exploit such Subscriber Content, even if such Subscriber Content has been contributed and later deleted by you.
This means that Stack Overflow’s permission to publish, distribute, store, and use such content cannot be revoked.
For several years now, I’ve been writing a newsletter. Recently, a student told me that the day my content is generated by AI, my project will be over—because what matters is the person who puts their face on the line, who has “skin in the game,” who publishes a book and exposes themselves to the public judgment of Amazon reviews.
I agree. We came into this world to show our faces.
The value of my contributions lies in creating and sharing my own content. I’m the one who has tried, tested, and studied what I sign with my name. Here, I’m claiming that the creation of the work is mine.
However, I can pay out of my own pocket for translation, style editing, and spelling and typographical correction; the ACE program can provide it; or I can let an AI assist me, simply because everyone wants and deserves to read a smooth article, free of mistakes, and, if they don’t know Spanish, in English or another language.
The 90% human rule
I don’t think I need it, but I feel like wearing a “badge” on my blog certifying that what I write and create has a human origin, and from a specific human (me).
I’m sure that whoever reads me can identify my expression. And maybe the day will come when AI-generated content won’t differ that much from human-generated content.
So I started looking for plugins for my blog that could certify that my articles are not created by AI, and all I found were “humanizers” for AI-generated text, accidental-plagiarism checkers, or AI-generation validators.
In short: tools to cheat without it showing.
I suppose what’s happening is this: people use ChatGPT to create articles and content, and then run these tools over them to remove the AI stink and make them sound more human.
If we don’t make a 180-degree turn, all content will end up being created by an intelligence with mediocre creativity and excellent writing. With enough filters, it will end up looking like mediocre “human” content without being so.
It would look like something imperfect made by a human, without actually being one.
Then I came across the “Not By AI” proposal, a movement that aims to put human content back in the spotlight, without sacrificing the benefit that a reasonable use of AI can bring to improving the quality of our work.
Not By AI proposes a badge certifying that 90% of the content has been generated by humans, not as a detector but as a statement of principles. It’s curious that this 90% rule completely leaves out (does not penalize) uses such as finding grammatical and typographical errors or translating content with AI.
It is assumed, therefore, that this is a reasonable and ethical use and that it does not constitute impersonation of the authorship of the main work.
A translation that, by the way, I also review afterwards.
That’s why I come back to the central idea: in spaces where there is personal reputation, it makes sense that we’re sensitive to impostors and to the production of “industrial content.”
That sensitivity should distinguish between “delegating authorship” and “getting editorial assistance.”
When someone delegates creation to AI, they stop being the author and therefore lose credit for that creation.
Once the work is created, the book is written, and the song is composed, then yes: let’s use those technologies that elevate the value and quality of our human contribution.
May this new framework serve to push, precisely, humans to follow our vital impulse to create—and to help us do it even more and even better.