I think I’m just going to accept that I won’t be reading any modern fiction, likely ever. It isn’t as if there isn’t already more pre-2010 fiction out there than I could read in multiple lifetimes. But the other side is that fiction has never been worse, because the commercial impetus to become a published fiction writer has never been lower (literally since before the 1600s, given functional literacy levels and how much fiction the average person reads). The Steinbecks of the world aren’t writing novels in 2025.
To be honest, I think the main reason why films get predictable as we get older is that we've seen enough of them and it's just hard to be surprised.
I catch myself thinking that even about films / books / games that try real hard to be original. You can't surprise me with a nonlinear time loop. Oh, the protagonist is also a villain but doesn't know it? Pfft, been there, done that.
Both your perspectives are supremely short-sighted. There are enough good films coming out to literally watch 1 or 2 every week. Of course if you limit yourself to superhero slop or Hollywood slop then yes, you're gonna have that impression.
It's entertainment; what's lost if someone decides to only consume stuff from before a cutoff point? As long as they're finding stuff they enjoy, they've already "won."
There's a level of involvement that most people have when it comes to entertainment. The more difficult it gets to find something you'll enjoy, the less interested people will be. Discovery is not a fun part of consuming media for most, I'd imagine.
Unfortunately it is an "Eff you, I got mine" thing. If someone's already resigned to sticking with things 20+ years old, they're not affected by media being bad. It's on culture to get people to buy in, not on individuals to contribute to voting on what new media is good or bad.
Sad but true, and I'm in the same boat. I'm sure you and I will miss a few gems of contemporary fiction, but wading through so much garbage and over hyped mediocrity just isn't worth it. The dreck of the past is mostly filtered out for us already, simply by the passage of time and the survival of quality.
I mean this is why things like the LRB and the Nobel, Booker, Pulitzer, and Hugo awards exist. Denying yourself the likes of Hilary Mantel or Zadie Smith or Colson Whitehead on the basis of chronology is the antithesis of what being a lover of literature is about.
Not that I like Goodreads — I don’t even have an account — but I always check the rating. Anything above 4 out of 5 with thousands of reviews is usually worth reading.
It’s very easy to filter out the weeds: read the classics and any book that breaks through the noise with consistently high reviews. We don’t need to waste time on low-quality literature or AI-generated slop.
So I see someone has taken Naomi Kanakia’s essay, “Contemporary Literary Novels Are Haunted by the Absence of Money”, to heart.
In fact, far from the contention that the "Steinbecks" of the world no longer exist, they are prolific, and commercially successful, in a wide variety of genres. Indeed, given that Saul Bellow published his last novel within the last 25 years, it seems somewhat callous to bifurcate the great from the good so chronologically.
Percival Everett immediately comes to mind - with the Pulitzer Prize-winning James, a nuanced and insightful retelling of Huckleberry Finn, or 'I Am Not Sidney Poitier', which works almost as an homage to Steinbeck.
'The Nickel Boys' by Colson Whitehead I'd argue surpasses most of Steinbeck's more popular canon (Of Mice and Men, Cannery Row etc...). A magnificent novel in the very best of the American tradition.
'A Visit from the Goon Squad' by Jennifer Egan is probably tied for the best 21st Century Pulitzer winner with Jayne Anne Phillips' 'Night Watch', both Steinbeck-esque in their charting of social mores in the face of an ever-changing culture, rendered as the symbol- and signifier-drenched shadows of capitalism against the cave wall of society.
Looking at the Booker Prize since Paul Beatty, I'd also highlight 'Shuggie Bain' - Douglas Stuart's opus about growing up with an alcoholic mother in the working class Glasgow of the 1980s, or 'Prophet Song' - the requiem for a mother of four trying to preserve her family as a far-right totalitarian regime takes control of Ireland and suspends the Irish constitution.
I don't think that it's necessarily true, but the big problem is that discoverability is almost impossible, and that the investment needed to find out how good a book might be is much higher than for other forms of media. It's also why you might get more out of books: you have to make some effort to ingest them. But this means it's a problem if you have no idea how good one might be.
For games, Steam effectively offers a trial: a purchase can be refunded in full if you've played less than two hours. It's a great feature for consumer protection imho.
I'd like to try a chapter or two of a book first, and if it doesn't grab, get a full refund. This is how you can prevent sinking time (and money, presumably) into a bad book.
I don't know where the divide comes from (cultural, generational, social class, or something else) but the idea of thinking "I want to get my money back" for something like a book, music, or a video game is strange to me.
Sometimes I make bad purchases, and that's just too bad.
The more that media becomes a product, the harder it is to feel like you're conning an artist by getting a refund on a purchase.
It's gotten incredibly easy to put media out there, and it's great that people are able to tell the stories they want through the medium they want. At the risk of sounding like I'm just bootlicking, traditional outlets used to be able to filter out some of the more low-effort content and it was easier to expect that you were at least getting mediocre stuff. At this point, a lot of really low effort and low quality junk is in the ecosystem and it's harder to just buy something that looks cool.
In all reality, I've eaten larger purchases as losses than some dumb $20 steam game or ebook or whatever. I just don't think that people are terribly unreasonable if they feel burnt badly enough to press for a refund. It's never been easier to do the old "if I can get x number of people to give me $5 each..." bit
This is compounded by the fact that reviews, awards, and any institution which formerly served to find good and worthy books or movies seem to have become completely detached from genuine popular interest and quality.
Let’s not resort to such exaggeration. There are a fuck of a lot more humans on Earth today than in the 1600s, something like 16x globally, with some regions growing more than others. Literacy was also generally low in 1600, as in sub-20%.
Further, up until recently the first few rungs of surviving as an author looked a lot like abject poverty today.
I like the term “organic literature.” A significant number of readers have no interest whatsoever in generated prose, so there is definitely a viable market in human provenance.
An independent certification body is quite an old-world solution for a problem like this, but I’m not sure this is something that can be done mathematically. A web of trust may be all we have.
Unfortunately, like most other kinds of commercial art, the mere presence of generated literature waters down the market enough to make actual literature essentially a leisure activity. Sure, there were always crap, derivative filler books — it’s just that the ratio will now be 1000x worse, and the better books just won’t justify the funding for intensive work and novel research that they used to, so even the good ones will probably be worse. Yet another example of the efficiency-obsessed "more, cheaper > less, more expensive" mentality making our world worse.
My last novel took over a year to write and edit, going through dozens of revisions. The novel before that took almost five years.
For a laugh I used Grok to generate a 35,000-word slop novel. It took twenty prompts and a few hours, and it even threw in a nice cover. From there it would have taken me another 30 minutes to release it as an ebook on Amazon under a different pen name. This is what I and the world of indie authors are up against. It is already hard for non-established authors; this may be the final nail in the coffin for most. My first book is now free, but good luck anyone ever finding it.
I had Grok write a post-apocalyptic novel as a test, and I thoroughly enjoyed the first half of it. The problem was running out of context. The quality fell off drastically and I tried to come up with ways to continue it, e.g. asking it to summarize each previous chapter and feeding the summaries in, but they always lacked some tiny detail I thought was key to a character's personality and it ended up being too much work.
A year from now though...?
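The rolling-summary workaround described above can be sketched roughly like this. Everything here is a hypothetical stand-in (the `generate` and `summarize` functions would be real model calls); the one extra idea is pinning hand-curated key details into every prompt so they can't fall out of the summary, which was the failure mode mentioned:

```python
# Sketch of rolling-summary generation: instead of feeding the whole
# novel back into the model, keep a compact running summary plus a list
# of key details that must never be dropped. All functions here are
# hypothetical stand-ins for real model calls.

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; a real version would hit an API.
    return f"[chapter written from: {prompt[:60]}...]"

def summarize(text: str) -> str:
    # Stand-in summarizer; a real one would also be a model call.
    return text[:200]

def write_novel(outline: list[str], key_details: list[str]) -> list[str]:
    chapters = []
    summary = ""
    for beat in outline:
        # Pin the curated details into every prompt so a lossy summary
        # can't silently drop a character's defining trait.
        prompt = (
            f"Story so far: {summary}\n"
            f"Never forget: {'; '.join(key_details)}\n"
            f"Write the next chapter: {beat}"
        )
        chapters.append(generate(prompt))
        summary = summarize(summary + " " + chapters[-1])
    return chapters
```

This keeps the prompt size roughly constant per chapter, at the cost of having to maintain the key-details list by hand.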
We're really cooked, though. Whenever I see a cool pic I wonder if it's AI and I have to spin up a TinEye or Google Images search and hope it was once posted to some random Facebook wall in 2011 so I can be pretty sure it's real.
A Markov chain or sufficiently advanced decision tree can only cargo-cult the insight into the human condition and the various contexts and lenses through which we interpret and shape our existence.
Where AI shines - and to the uninitiated, apparently subsumes - is in the fields of lexicon and grammar. However, we do not read Homer's Iliad and Odyssey as an exemplar of dactylic hexameter - we do so to engage with a structured expression of grief for the motivations of man.
Epic (Patrick Kavanagh - 1960)
I have lived in important places, times
When great events were decided; who owned
That half a rood of rock, a no-man’s land
Surrounded by our pitchfork-armed claims.
I heard the Duffys shouting ‘Damn your soul!’
And old McCabe stripped to the waist, seen
Step the plot defying blue cast-steel –
‘Here is the march along these iron stones’
That was the year of the Munich bother. Which
Was more important? I inclined
To lose my faith in Ballyrush and Gortin
Till Homer’s ghost came whispering to my mind.
He said: I made the Iliad from such
A local row. Gods make their own importance.
The world of literature is increasingly making itself inaccessible to broad audiences by turning this into a zero-sum game.
I wish OpenAI, Anthropic and Gemini would all figure out how to pay royalties to copyright holders anytime their content is used. I see absolutely no reason why they can't do this. It would really take all the steam out of these hardline anti-AI positions.
I think we will see a lot more businesses pop up that cater to people who are unhappy with AI. Especially if you consider the large number of inevitable layoffs, people will begin to resent everything AI. The intelligent machine was never supposed to replace laborers; it was supposed to do your dishes and laundry.
I'm down for this, but only if the people who are getting paid by OpenAI/etc also turn around and pay any inspiration they've had, any artist they've copied from, etc over their entire life. If we're going to do this, we need to do it in a logically consistent way; anyone who's derived art from pre-existing art needs to pay the pre-existing artist, and I mean ALL of it, for anything derivative.
> I'm down for this, but only if the people who are getting paid by OpenAI/etc also turn around and pay any inspiration they've had, any artist they've copied from, etc over their entire life.
Why? Things at scale have different rules (different laws as well) from things done individually or for personal reasons.
What is the argument for AI/LLM stuff getting an exemption in this regard?
I don't see why AI/LLMs should get exemptions or special treatment.
If copying someone is bad, and they should be paid for it, that should be universal.
We already have copyright laws, they already prevent people from distributing AI outputs that infringe on intellectual property. If you don't like those laws in the age of AI, get them changed consistently, don't take a broken system and put it on life support.
I find it funny that many people are pro-library and pro-archive, and will pay money to causes that try to do that with endangered culture, but get angry at AI as having somehow stolen something, when they're fulfilling an archival function as well.
What I find funny about your argument is how completely degraded fair use has become when using anything by a corporation capable of delaying and running up legal fees. It sure feels like there are a separate set of rules.
> If copying someone is bad, and they should be paid for it, that should be universal.
But we (i.e. society) don't agree that it is; the rules, laws and norms we have are that some things are bad at scale!
As a society, we've already decided that things at scale are regulated differently than things for personal use. That ship has sailed and it's too late now to argue for laws to apply universally regardless of scale.
I am asking why AI/LLMs should get a blanket exemption in this regard.
I have not seen any good arguments for why we society should make a special exemption for AI/LLMs.
"Scale" isn't a justification for regulatory differences; that's a straw man. We take shortcuts at scale because of resource constraints, and sometimes there are more differences than just scale, and we're oversimplifying because we're not as smart as we'd like to imagine. If there aren't resource constraints, and we have the cognitive bandwidth to handle something in a consistent way, we really should.
If we were talking algorithms, would you special case code because a lot of people hit it even if load wasn't a problem, or would you try to keep one unified function that works everywhere?
Distribution and possession are fundamentally different. Cops try to bust people who have large amounts for distribution even if they don't have any evidence of it, but that's a different issue.
Corporations are individuals and can engage in fair use (at least, as the law is written now). Neither corporations nor individuals can redistribute material in non-fair use applications.
School bake sales are regulated under cottage food laws, which are relaxed under the condition that a "safe" subset of foods is produced. That's why there are no bake sales that sell cured sausage, for instance. Food laws are in some part regulatory capture by big food, but thankfully there hasn't been political will to outlaw independent food production entirely.
You're misinformed about all the examples you cited, you should do more research before stating strong opinions.
> You're misinformed about all the examples you cited, you should do more research before stating strong opinions.
You've literally agreed with what I said[1]:
> School bake sales are regulated under cottage food laws, which are relaxed under the condition that a "safe" subset of foods is produced. That's why there are no bake sales that sell cured sausage, for instance. Food laws are in some part regulatory capture by big food, but thankfully there hasn't been political will to outlaw independent food production entirely.
Scale results in different regulation. You have, with this comment, agreed that it does yet are still pressing on the point that there should be an exemption for AI/LLM.
I don't understand your reasoning in pointing out that baking has different regulations depending on scale; I pointed out the same thing - the regulations are not universal.
-------------------
[1] Things I have said:
> Things at scale have different rules (different laws as well) from things done individually or for personal reasons.
> As a society, we've already decided that things at scale are regulated differently than things for personal use.
> You can hold a bake sale at school with fewer sanitation requirements than a cake store has to satisfy.
> many people are pro-library and pro-archive, but get angry at AI as having somehow stolen something
Yes! They're angry that there are two standards, an onerous one making life hell for archivists, librarians, artists, enthusiasts, and suddenly a free-for-all when it comes to these AI fad companies hoovering all the data in the world they can get their paws on.
I.e. protecting the interests of capital at the expense of artists and people in the former, and the interests of capital at the expense of artists and people in the latter.
So ... every time a model is used? Because it has been trained on these works so they have some influence on all its weights?
> I see absolutely no reason why they can't do this
They didn't even pay to access the works in the first place, frankly the chances of them paying now seems pretty minimal, without being forced to by the courts.
I've had this idea kicking around in my head now for a few months that this is an opportunity to update copyright / IP law generally, and use the size and scope of government to do something about both the energy costs of AI and compensation for people whose works are used. At a very rough draft and high level it goes something like this:
Update copyright to an initial 10-year limit, granted at publication without any need to register. This 10-year period works just like copyright today: the characters, places, everything is protected. After 10 years, your entire work falls into the public domain.
Alternatively, you can register your copyright with the government within the first 3 years. This requires submitting your entire work in a specified machine-readable format for integration into official training sets and models. These data sets and models will be licensed by the government for some fee to interested parties. As a creator with material submitted to this data set, you will receive some portion of those licensing fees, proportional to the quantity and amount of time your material has been in the set, with some caps set to prevent abuse. I imagine this would work something like broadcast licensing for radio does. You will receive these licensing fees for up to 20 years from the first date of copyright.
During the first 10 years, copyright is still enforced on your work for all the same things that would normally be covered. For the 10 years after that, in additional consideration for adding your work to the data sets, you will be granted an additional weaker copyright term. The details would vary by the work, but for a novel for example, this might still protect the specific characters and creatures you created, but no longer offer protection on the "universe" you created. If we imagine Star Wars being created under this scheme, while Darth Vader, Luke Skywalker and Leia Organa might still be protected from 1987-1997, The Empire, Tatooine, and Star Destroyers might not be.
What I envision here is that these government data sets would be known good, clean, properly categorized and in the case of models, the training costs have already been paid once. Rather than everyone doing a mad dash to scrape all the world's content, or buy up their own collection of books to be scanned and processed, all of that work could already have been done and it's just a license fee away. Additionally because we're building up an archive of media, we could also license custom data sets. Maybe someone wants to make a model trained on only cartoons, or only mystery novels or what have you. The data is already there, a nominal fee can get you that data, or maybe even have something trained up, and all the people who have contributed to that data are getting something for their work, but we're also not hamstringing our data sets to being decades or more out of date because Disney talked the government into century long copyrights decades ago.
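The payout mechanism proposed above (shares proportional to quantity and time in the set, with a cap to prevent abuse) could be sketched like this. The weighting formula, cap fraction, and all names are made up for illustration, not part of the proposal:

```python
# Toy sketch of the pro-rata licensing payout: each registered work
# earns a share of the license fees proportional to its size and its
# time in the data set, with a per-creator cap to prevent abuse.

def payouts(total_fees: float, works: dict[str, tuple[int, int]],
            cap_fraction: float = 0.05) -> dict[str, float]:
    """works maps creator -> (word_count, years_in_set).

    Each creator's weight is word_count * years_in_set; shares are
    pro-rata on weight, then capped at cap_fraction of the total fees.
    """
    weights = {name: wc * years for name, (wc, years) in works.items()}
    total = sum(weights.values())
    cap = total_fees * cap_fraction
    return {name: min(cap, total_fees * w / total)
            for name, w in weights.items()}
```

A real scheme would need to decide what happens to the fees withheld by the cap (redistribute, or return to the fund), which this toy version ignores.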
Why go after the AI company? If someone is using the AI-generated content for commercial purposes and it’s based off a copyrighted work, they are the ones who should be paying the royalty.
The AI company is really more like a vector search company that brings you relevant content, kind of like Google, but that does not mean the user will use those results for commercial purposes. Google doesn’t pay royalties for displaying your website in search results.
I suspect that, purely from a logistics standpoint, AI training works better when it's free to ingest all the content it can, and that in exchange for that freedom it pays some small royalty whenever a source is cited.
They'd simply pass that cost onto the customer. For universities or enterprises or law firms, or whatever, they would either include pre-existing agreements or pay for blanket access. Whatever terms OpenAI, Anthropic, and Gemini sign with these entities, they can work out the royalties there.
These are all solved problems for every other technology middle man company.
This only makes sense if we have open access to the training data so we can verify whether it’s copyrighted or not. Otherwise, how am I supposed to know it’s replicated someone’s IP?
It's not quite the same though, when I search Google I'm generally directed to the source (though the summary box stuff might cross the line a bit).
With AI, copyrighted material is often not obvious to the end user, so I don't think it's fair to go after them.
I think it's non-trivial to make the AI company pay per use though, they'd need a way to calculate what percent of the response is from which source. Let them pay at training time with consent from the copyright holder, or just omit it.
We need this for technical books. I was a chapter into something the other day before deciding I’d been hoodwinked into reading someone’s ChatGPT output
I've noticed entire publishers on Amazon which are just fly-by-night AI slop, probably printed on-demand too.
For example, I stumbled on https://www.amazon.com/dp/B0DT4TKY58 and had never heard of the author. Their page (https://www.amazon.com/stores/author/B004LUETE8) suggested they were incredibly prolific in a huge number of areas which already felt off. No information about "Robert Johnson" was available either. The publisher, HiTeX Press (https://www.amazon.com/s?k=HiTeX+Press) has a few other authors with similarly generic names and no information available about them, each the author of numerous books spanning a huge array of topics.
It feels even more bewildering and disheartening to see AI slop come into the physical world like this.
Yeah I really need to. I do occasionally go through a bout of trying to take my business elsewhere. I am sort of a used book junkie and Amazon still owns that market. Though I do often get my used books from Alibris when I can.
Recommendations for alternatives would be very welcome.
For mainstream books, I use Bookshop ( https://bookshop.org ) or Kobo. Bookshop supports indie bookstores. Both of them will let you know if a given eBook has DRM or not.
(Now, when a book does have DRM, I buy it from Kobo! I'll leave it to the reader to speculate why :) )
Sadly KDP Select makes that impossible. Preferential rates for authors in exchange for exclusivity. There’s a lot of (human) slop there, but enough stuff I want to read.
I recently saw this: technical subjects that previously had literally zero books written about them now have entire pages filled with books. The titles sound good, and the pages look decently good, but there's something slightly off, and when you look into it, the "author" has been writing a book a week...
It's disheartening because now I will lean much more heavily on reputable publishers, and so filter out independent writers who had nothing to do with this.
I disagree. AI use is diffuse. An author is specific. Having people label their work as AI free is accountable in a way trying to require AI-generated work be labeled is not.
> similar to those found in cigarettes
Hyperbole undermines your argument. We have decades of rigorous and international evidence for the harms from cigarettes. We don’t for AI.
Saying "I think X should have a warning, like cigarettes do" is not making the claim "X is harmful in a way that is comparable to cigarettes." The similarity is that cigarettes have warning labels, not that AI is harmful on the order of cigarettes.
> similarity is that cigarettes have warning labels and not that AI is harmful on the order of cigarettes
We put warnings on cigarettes because not only are they harmful, but we have ample evidence about how and by what mechanism they are harmful.
The logic that leads to labelling every harm is the one that causes everything in California to be labeled as a cancer hazard. You want tight causation of sufficient magnitude, and in a way that will cause actual behavior change, either through reduced demand or changed producer behavior. Until you have that, which we did for cigarettes, labelling is a bit silly.
That's fine but doesn't give you license to put words in other people's mouths. Maybe they want AI labeled for transparency. Maybe it's a matter of personal preference. Or maybe they're following the precautionary principle and don't want to wait for the evidence of harm, but rather for the evidence of the absence of harm.
There's an infinite number of positions they could hold, and the discussion works better if you ask rather than assume.
Even if every single page was hand written on camera, that could not prove that no AI was used.
Did the author come up with the main ideas, character arcs or plot devices himself? Did he ever seek assistance from AI to come up with new plot points, rewrite paragraphs, create dialog?
When people say AI isn't creating anything novel it's just predicting the next word I wonder whether my brain is just predicting the next word I should type here.
Right, the same assholes gaming the system with slop would just game whatever system you tried to put around them. It's not like you can stand over someone the whole time they work to ensure it's real.
This depends on the subject of the book, but there are enough books written pre-1970 (or some other year one is comfortable with, before the era of “book spinners”, AI etc) to last multiple lifetimes. I used to spend hours and hours in bookstores, but so many books these days (AI or otherwise) don’t seem that interesting. Many, many books could just be 3 page articles, but stretched to 150 page books.
So yeah, simply filtering by year published could be a start
I enjoy old fiction enough that sites like gutenberg.org have that covered, and I barely bother trying to find anything new that I want to read. That began long pre-AI-slop, so no real change for me.
For non-fiction it is a bit trickier. I buy DRM-free from some niche publishers, but I have no idea which ones can be trusted to not begin to mix in AI slop in their books.
The normal writing 'journey' is several months, or years, of hard work and multiple revisions. I invest a little of my time explaining the journey on my blog, and also include a note in the preface of my novels that the text was written by a real human. It is my way of saying I was invested in the story, but it is pretty naive to think this will work in the age of AI today.
I also wrote an article on my blog about how you are mainly writing for yourself and your family, friends and followers these days; the algorithm is very unlikely to get you beyond that word-of-mouth audience, unless you pay $$, go all-in promoting on social media (which may backfire), or are extremely lucky. With AI the algorithm has become the enemy, and finding genuine indie authors is unfortunately getting harder.
This has the same problems any DRM has. People who want to bypass the process will find a way, but legitimate people get caught up in weird messes.
I'm so happy I'm not doing any school/academic work anymore, because AI writing detection tools (I learned English through reading technical docs; of course my writing style is a bit clinical) and checking the edit history in a Google Docs document would've both fucked me over.
This idealistic objective is highly commendable, but the fight could be futile, as you would need AI to do the work of detection. Then there will be another movement to do "organic detection" of "organic content". And the story goes on.
Think of interview candidates rejected by AI and employees fired by AI, or that case where a snack pack was identified by AI as a weapon in a student's pocket. This will lead to "organic decision making".
I'm currently reading The Recognitions by William Gaddis -- it's a lot to get through, but it might change your mind on this point (not that it's for the better!).
In 2020 at the beginning of the pandemic, I set a timer, wrote for 10 minutes, live-streamed it, did it three times per day, for 35 days, and put everything unedited into a book.
It seems as if it may be more relevant in our AI writing times.
I too am open for business, for a modest fee I will arrange to meet a book publisher in nyc for a firm handshake to cement a declaration from them that they are publishing books not made with AI. I will then send a formal email saying they may publish a little gold star on their book, and my preeminence as a member of the literary elite should carry it through. I'm doing this for the people because I _care_.
I wonder how this works since authors are more and more likely to use AI to spell check, fix wording, find alternate words, and all manner of other things. It might be useful to understand the “rules” for what “human” means.
What is the point of this? Any publishing house can just "self certify" that no AI was used. Why would it be necessary to have an outside organization, who can not validate AI use anyway and just has to rely on the publisher.
Writing a book is, in most cases, something which happens between the author and their writing medium, how could any publisher verify anything about AI use, except in the most obvious cases?
The one thing which matters here is honesty and trust and I do not see how an outside organization could help in creating that honesty and maintaining that trust.
I don't care if AI wrote the book, if the book is good. The problem is that AI writes badly and pointlessly. It's not even a good editor, it 1) has no idea what you are talking about, and 2) if it catches some theme, it thinks the best thing to do is to repeat it over and over again and make it very, very clear. The reason you want to avoid LLM books is the same reason why you should avoid Gladwell books.
If a person who I know has taste signs off on a 100% AI book, I'll happily give it a spin. That person, to me, becomes the author as soon as they say that it's work that they would put their name on. The book has become an upside-down urinal. I'm not sure AI books are any different than cut-ups, other than somebody signed a cut-up. I've really enjoyed some cut-ups and stupid experiments, and really projected a lot onto them.
My experience in running things I've written through GPT-5 is that my angry reaction to its rave reviews, or its clumsy attempts to expand or rewrite, are stimulating in and of themselves. They often convince me to rewrite in order to throw the LLM even farther off the trail.
Maybe a lot of modern writers are looking for a certification because a lot of what they turn out is indistinguishable cliché, drawn from their experiences watching television in middle-class suburbs and reading the work of newspaper movie critics.
Lastly, everything about this site looks like it was created by AI.
> I don't care if AI wrote the book, if the book is good.
Not so sure. Books are not all just entertainment; they also develop one's outlook on life, relationships, morality, etc. I mean, of course books can also be written by "bad" people to propagate their view of things, but at least you're still peeking into the views and distilled experience of a fellow human who lived a personal life.
Who knows what values a book implicitly espouses that has no author and was optimized for being liked by readers. Do that on a large enough scale and it's really hard to tell what kind of effect it has.
> Who knows what values a book implicitly espouses that has no author and was optimized for being liked by readers.
There is some of this even without AI. Plenty of modern pulpy thriller and romance books for example are highly market-optimised by now.
There are thousands of data points out there for what works and doesn't and it would be a very principled author who ignores all the evidence of what demonstrably sells in favour of pure self-expression.
Then again, AI allows you to turbocharge the analysis and pluck out the variables that statistically trigger higher sales. I'd be surprised if someone isn't right now explicitly training a Content-o-matic model on the text of books along with detailed sales data and reviews. Perhaps a large pro-AI company with access to all the e-book versions, 20 years of detailed sales data, as well as all telemetry such as highlighted passages and page turns on their reader devices? Even if you didn't or couldn't use it to literally write the whole thing, you could have it optimise the output against expected sales.
This is a big problem, though I would be slow to trust anyone purporting to address this problem. (Though, to their credit, this Books by People team is more credible than the bog-standard pair of 20yo Bay Area techbro grifters I expected.)
Reportedly, Kindle has already been flooded with "AI" generated books. And I've heard complaints from authors about superficial AI rewritings of their own books being published by scammers. (So, not only "AI, write a YA novel, to the market, about a coming of age vampire young woman small town friends-to-lovers romance", but "AI, write a new novel in the style of Jane Smith, basically laundering previous things she's written" and "AI, copy the top-ranked fiction books in each category on Amazon, and substitute names of things, and how things are worded.")
For now, Kindle is already requiring publishers/authors to certify which aspects of the books AI tools were used for (e.g., text, illustrations, covers), something about how the tools were used (e.g., outright generation, assistive with heavy human work, etc.), and which tools were used. So that self-reporting is already being done somewhere, just not exposed to buyers yet.
That won't stop the dishonest, but at least it will help keep the honest writers honest. For example, if you, an honest writer, consider for a moment using generative AI to first-draft a scene, an awareness that you're required to disclose that generative AI use will give you pause, and maybe you decide that's not a direction you want to go with your work, nor how you want to be known.
Incidentally, I've noticed a lot of angry anti-generative-AI sentiment among creatives like writers and artists. Much more than among us techbros. Maybe the difference is that techbros are generally positioning ourselves to profit from AI, from copyright violations, selling AI products to others, and investment scams.
Just another rent-seeker. I mostly choose books based on word of mouth recommendations or liking other things by the same author. This is very resistant to slop from AI and to the large amounts of rubbish that has always been published.
Books written by AI are yet another case of an application of AI that does nothing to solve existing problems that consumers have (too few books, in this case) but instead focuses on the producer side of things.
Worse yet, increasing the quantity of books while simultaneously decreasing the quality just makes the situation worse for readers: more slop to filter out.
Why? Can't it be done the same way it's done with copyrighted material: by checking the author's process?
(Because, at least in the EU, the law permits writing basically the same thing if both authors reached it organically - i.e., each has a trail of drafts and other writing-process documents - as long as you prove you came upon it without influence from the other author.)
Proving that you did it without AI could be similar. For example: just videotaping the whole writing process.
Now, whether anyone cares about such proofs is another topic.
No technical ability required to verify humans as humans. You just have to close your laptop and meet at a coffee shop. Surprisingly many deals are done this way, because humans like other humans.
You have more respect for grifters on social media as humans expressing themselves than for authors of books...? That's very strange.
Doubly so because no one suggested "silencing them," they pointed out that people do presume to teach on social media. You acknowledge this but not how it relates to your original argument, you've pivoted to defending their right to speech instead of shoring up your original argument.
What is it about capitalism that created AI? China is not a purely capitalistic society, and they have AI too… I don't see anything specific about capitalism that brings about AI. In fact, much of the advances came about through academia, which is more of a socialist structure than a capitalist one.
I think for me I’m just going to accept that I won’t be reading any modern fiction, likely ever. It isn’t like there isn’t more than I could read in multiple lifetimes already out there that is pre, say, 2010. But the other side is that fiction has never been worse, because the commercial impetus to become a published fiction writer has never been lower (literally since before the 1600s, given functional literacy levels and the amount of fiction reading the average person does). The Steinbecks of the world aren’t writing novels in 2025.
> reading any modern fiction, likely ever. It isn’t like there isn’t more than I could read in multiple lifetimes already out there
Well said. It’s also true for movies these days, which are predictable and algorithm-tailored, minus a couple of directors.
To be honest, I think the main reason why films get predictable as we get older is that we've seen enough of them and it's just hard to be surprised.
I catch myself thinking that even about films / books / games that try real hard to be original. You can't surprise me with a nonlinear time loop. Oh, the protagonist is also a villain but doesn't know it? Pfft, been there, done that.
Both your perspectives are supremely short-sighted. There are enough good films coming out to literally watch 1 or 2 every week. Of course if you limit yourself to superhero slop or Hollywood slop then yes, you're gonna have that impression.
It's entertainment; what's lost if someone decides to only consume stuff from before a cutoff point? As long as they're finding stuff they enjoy, they've already "won."
There's a level of involvement that most people have when it comes to entertainment. The harder it gets to find something you'll enjoy, the less interested people will be. Discovery is not a fun part of consuming media for most, I'd imagine.
what's lost when culture becomes homogenized and commoditized? quite a lot actually
Unfortunately it is an "Eff you, I got mine" thing. If someone's already resigned to sticking with things 20+ years old, they're not affected by media being bad. It's on culture to get people to buy in, not on individuals to contribute to voting on what new media is good or bad.
Sad but true, and I'm in the same boat. I'm sure you and I will miss a few gems of contemporary fiction, but wading through so much garbage and over hyped mediocrity just isn't worth it. The dreck of the past is mostly filtered out for us already, simply by the passage of time and the survival of quality.
I mean this is why things like the LRB and the Nobel, Booker, Pulitzer, and Hugo awards exist. Denying yourself the likes of Hilary Mantel or Zadie Smith or Colson Whitehead on the basis of chronology is the antithesis of what being a lover of literature is about.
Not that I like Goodreads — I don’t even have an account — but I always check the rating. Anything above 4 out of 5 with thousands of reviews is usually worth reading.
It’s very easy to filter out the weeds: read the classics and any book that breaks through the noise with consistently high reviews. We don’t need to waste time on low-quality literature or AI-generated slop.
If you're giving up on art and cultural movements, even if things are bad, it still reflects on you.
So I see someone has taken Naomi Kanakia’s essay, “Contemporary Literary Novels Are Haunted by the Absence of Money”, to heart.
In fact, far from the contention that the "Steinbecks" of the world no longer exist - they are prolific, and commercially successful, in a wide variety of genres. Indeed, given Saul Bellow only published his last novel in the last 25 years, it seems somewhat callous to bifurcate the great from the good so chronologically.
Percival Everett immediately comes to mind - with the Pulitzer Prize-winning James, a nuanced and insightful retelling of Huckleberry Finn, or 'I Am Not Sidney Poitier', which works almost as an homage to Steinbeck.
'The Nickel Boys' by Colson Whitehead I'd argue surpasses most of Steinbeck's more popular canon (Of Mice and Men, Cannery Row, etc.). A magnificent novel in the very best of the American tradition.
'A Visit from the Goon Squad' by Jennifer Egan is probably tied for the best 21st Century Pulitzer winner with Jayne Anne Phillips' 'Night Watch' - both Steinbeck-esque in their charting of social mores in the face of an ever-changing culture, rendered as the symbol- and signifier-drenched shadows of capitalism against the cave wall of society.
Looking at the Booker Prize since Paul Beatty, I'd also highlight 'Shuggie Bain' - Douglas Stuart's opus about growing up with an alcoholic mother in the working class Glasgow of the 1980s, or 'Prophet Song' - the requiem for a mother of four trying to preserve her family as a far-right totalitarian regime takes control of Ireland and suspends the Irish constitution.
I don't think that's necessarily true, but the big problem is that discoverability is almost impossible, and the investment needed to know how good a book might be is much higher than for other forms of media. It's also why you might get more out of books - you have to make some effort to ingest them - but this means it's a problem if you have no idea how good a book might be.
> if you have no idea how good it might be.
for games, Steam offers what amounts to a trial: the game can be refunded in full if you do it within two hours of playtime. It's a great feature for consumer protection imho.
I'd like to try a chapter or two of a book first, and if it doesn't grab, get a full refund. This is how you can prevent sinking time (and money, presumably) into a bad book.
I don't know where the divide comes from (cultural, generational, social class, or something else) but the idea of thinking "I want to get my money back" for something like a book, music, or a video game is strange to me.
Sometimes I make bad purchases, and that's just too bad.
The more that media becomes a product, the harder it is to feel like you're conning an artist by getting a refund on a purchase.
It's gotten incredibly easy to put media out there, and it's great that people are able to tell the stories they want through the medium they want. At the risk of sounding like I'm just bootlicking, traditional outlets used to be able to filter out some of the more low-effort content and it was easier to expect that you were at least getting mediocre stuff. At this point, a lot of really low effort and low quality junk is in the ecosystem and it's harder to just buy something that looks cool.
I agree on the state of things but I still think that's just my problem.
Sometimes I read reviews for a restaurant, go, and come out thinking the other reviewers and I have a totally different take on things. It happens.
Same goes for movies, books, games, etc. I "do my research" and sometimes I'm wrong.
And sure, I absolutely sometimes feel "scammed" but to me that's just something that happens.
I'm not too bothered by the idea of demos (eg 2 free chapters), but I am a bit bothered by the idea of "I want a refund if I'm not satisfied enough".
I guess everyone would have a different threshold on what "satisfied enough" is.
In all reality, I've eaten larger purchases as losses than some dumb $20 steam game or ebook or whatever. I just don't think that people are terribly unreasonable if they feel burnt badly enough to press for a refund. It's never been easier to do the old "if I can get x number of people to give me $5 each..." bit
Yeah. I guess it's just a question of where you draw the line between a scam and a customer just making a bad choice.
This is literally the whole value proposition of brick and mortar book stores...
please, don't forget about your local library! you can probably read the entire book (digitally, too! or listen to it!) at no cost to yourself.
This is often (far from always) available on kindle. A one or two chapter sample for free.
compounded by the fact that reviews, awards, and any institution which formerly served to find good and worthy books or movies seem to have become completely detached from genuine popular interest and quality.
Let’s not resort to such exaggeration. There’s a fuck of a lot more humans on earth today than in the 1600s, something like 16x globally, with some regions growing more than others. Literacy was also generally low in 1600, as in sub-20%.
Further, up until recently the first few rungs of surviving as an author looked a lot like abject poverty today.
Are the Steinbecks/Austens/Joyces of the world even being created?
yes
cool
I like the term “organic literature.” A significant number of readers have no interest whatsoever in generated prose, so there is definitely a viable market in human provenance.
An independent certification body is quite an old-world solution for a problem like this, but I’m not sure this is something that can be done mathematically. A web of trust may be all we have.
Unfortunately, like most other kinds of commercial art, the mere presence of generated literature waters down the market enough to make actual literature essentially a leisure activity. Sure, there were always crap, derivative filler books - it's just that the ratio will now be 1000x worse, and the better books just won't justify the funding for intensive work and novel research that they used to, so even the good ones will probably be worse. Yet another example of the efficiency-obsessed "more, cheaper > fewer, more expensive" mentality making our world worse.
My last novel took over a year to write and edit, going through dozens of revisions. The novel before that took almost five years.
For a laugh I used Grok to generate a 35,000-word slop novel; it took twenty prompts and a few hours, and it even threw in a nice cover. From there it would have taken me another 30 minutes to release it as an ebook on Amazon under a different pen-name. This is what I and the world of indie authors are up against. It is already hard for non-established authors; this may be the final nail in the coffin for most. My first book is now free, but good luck anyone ever finding it.
I had Grok write a post-apocalyptic novel as a test, and I thoroughly enjoyed the first half of it. The problem was running out of context. The quality fell off drastically and I tried to come up with ways to continue it, e.g. asking it to summarize each previous chapter and feeding the summaries in, but they always lacked some tiny detail I thought was key to a character's personality and it ended up being too much work.
A year from now though...?
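The summarize-and-feed workaround described above can be sketched roughly like this. Everything here is illustrative: `write_long_story`, the prompt wording, and `fake_generate` are hypothetical stand-ins, not any actual Grok API.

```python
from typing import Callable

def write_long_story(outline: list[str],
                     generate: Callable[[str], str],
                     max_summary_chars: int = 2000) -> list[str]:
    """Generate chapters one at a time, carrying forward only a running
    summary instead of the full text, so each prompt stays inside the
    model's context window."""
    chapters: list[str] = []
    summary = ""
    for beat in outline:
        prompt = (f"Story so far (summary):\n{summary}\n\n"
                  f"Write the next chapter covering: {beat}")
        chapter = generate(prompt)
        chapters.append(chapter)
        # Re-summarize after every chapter. Small character details tend to
        # drop out at this step, which is exactly the failure mode above;
        # pinning key facts verbatim into `summary` is one mitigation.
        summary = generate("Summarize, keeping key character details:\n"
                           f"{summary}\n{chapter}")[:max_summary_chars]
    return chapters

# Stub model call so the control flow can be demonstrated end to end.
def fake_generate(prompt: str) -> str:
    return "output for: " + prompt.splitlines()[-1]

chapters = write_long_story(["the outbreak", "the bunker"], fake_generate)
```

The trade-off is lossy by construction: the summary is a fixed-size bottleneck, so anything the summarizer deems unimportant is gone for all later chapters.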
We're really cooked, though. Whenever I see a cool pic I wonder if it's AI and I have to spin up a TinEye or Google Images search and hope it was once posted to some random Facebook wall in 2011 so I can be pretty sure it's real.
A Markov chain or sufficiently advanced decision tree can only serve to cargo-cult the insight into the human condition and the various contexts and lenses through which we interpret and shape our existence.
Where AI shines - and to the uninitiated, apparently subsumes - is in the fields of lexicon and grammar. However, we do not read Homer's Iliad and Odyssey as an exemplar of dactylic hexameter - we do so to engage with a structured expression of grief for the motivations of man.
Epic (Patrick Kavanagh - 1960)
I have lived in important places, times
When great events were decided; who owned
That half a rood of rock, a no-man’s land
Surrounded by our pitchfork-armed claims.

I heard the Duffys shouting ‘Damn your soul!’
And old McCabe stripped to the waist, seen
Step the plot defying blue cast-steel –
‘Here is the march along these iron stones’

That was the year of the Munich bother. Which
Was more important? I inclined
To lose my faith in Ballyrush and Gortin

Till Homer’s ghost came whispering to my mind.
He said: I made the Iliad from such
A local row. Gods make their own importance.
> but good luck anyone ever finding it.
Get what you’re saying but hasn’t that always been the case without extensive marketing and promotion?
My understanding is that writing and finishing the book is just the start. Then you need to sell the thing.
You're right, it's the bit nobody tells you (or you ignore) at the start of the writing adventure. AI just makes it even harder to get noticed.
The world of literature is increasingly making itself inaccessible to broad audiences by turning this into a zero-sum game.
I wish OpenAI, Anthropic and Gemini would all figure out how to pay royalties to copyright holders anytime their content is used. I see absolutely no reason why they can't do this. It would really take all the steam out of these hardline anti-AI positions.
I think we will see a lot more businesses pop up that cater to people who are unhappy with AI. Especially if you consider the large number of inevitable layoffs, people will begin to resent everything AI. The intelligent machine was never supposed to replace laborers; it was supposed to do your dishes and laundry.
> I think we will see a lot more businesses pop up that cater to people who are unhappy with AI.
How will those potential customers know which businesses are providing an AI-generated product and which are not?
It's only a viable business if there is a way for potential customers to determine the amount of AI in whatever the product is.
My dishes and laundry are done by machines. Dumb ones though.
tbf dishes and laundry are also labourers' jobs.
but i agree i think there will be a small market for “vegan” content.
I'm down for this, but only if the people who are getting paid by OpenAI/etc also turn around and pay any inspiration they've had, any artist they've copied from, etc over their entire life. If we're going to do this, we need to do it in a logically consistent way; anyone who's derived art from pre-existing art needs to pay the pre-existing artist, and I mean ALL of it, for anything derivative.
Good luck with that.
> I'm down for this, but only if the people who are getting paid by OpenAI/etc also turn around and pay any inspiration they've had, any artist they've copied from, etc over their entire life.
Why? Things at scale have different rules (different laws as well) from things done individually or for personal reasons.
What is the argument for AI/LLM stuff getting an exemption in this regard?
I don't see why AI/LLMs should get exemptions or special treatment.
If copying someone is bad, and they should be paid for it, that should be universal.
We already have copyright laws, they already prevent people from distributing AI outputs that infringe on intellectual property. If you don't like those laws in the age of AI, get them changed consistently, don't take a broken system and put it on life support.
I find it funny that many people are pro-library and pro-archive, and will pay money to causes that try to do that with endangered culture, but get angry at AI as having somehow stolen something, when they're fulfilling an archival function as well.
What I find funny about your argument is how completely degraded fair use has become when using anything by a corporation capable of delaying and running up legal fees. It sure feels like there are a separate set of rules.
> If copying someone is bad, and they should be paid for it, that should be universal.
But we (i.e. society) don't agree that it is; the rules, laws, and norms we have are that some things are bad at scale!
As a society, we've already decided that things at scale are regulated differently than things for personal use. That ship has sailed and it's too late now to argue for laws to apply universally regardless of scale.
I am asking why AI/LLMs should get a blanket exemption in this regard.
I have not seen any good arguments for why we society should make a special exemption for AI/LLMs.
"Scale" isn't a justification for regulatory differences, that's a straw man. We take shortcuts at scale because of resource constraints, and sometimes there are more differences than just scale, and we're over simplifying because we're not as smart as we'd like to imagine. If there aren't resource constraints, and we have the cognitive bandwidth to handle something in a consistent way, we really should.
If we were talking algorithms, would you special case code because a lot of people hit it even if load wasn't a problem, or would you try to keep one unified function that works everywhere?
> "Scale" isn't a justification for regulatory differences, that's a straw man.
It's not a strawman - that's literally how things work.
You can hold a bake sale at school with fewer sanitation requirements than a cake store has to satisfy.
You can possess weed for personal use, but will get locked up if you possess a warehouse filled with 200 tons of the stuff.
You can reproduce snippets of copyrighted material, but you can't reproduce the entire thing.
(I can go on and on and on, but you get the idea)
Which laws, regulations or social norms did you have in mind when you thought that scale doesn't matter?
I'm unable to think of even one regulation that applies universally regardless of scale. Which one were you thinking of?
Distribution and possession are fundamentally different. Cops try to bust people who have large amounts for distribution even if they don't have any evidence of it, but that's a different issue.
Corporations are individuals and can engage in fair use (at least, as the law is written now). Neither corporations nor individuals can redistribute material in non-fair use applications.
School bake sales are regulated under cottage food laws, which are relaxed under the condition that a "safe" subset of foods is produced. That's why there are no bake sales that sell cured sausage, for instance. Food laws are in some part regulatory capture by big food, but thankfully there hasn't been political will to outlaw independent food production entirely.
You're misinformed about all the examples you cited, you should do more research before stating strong opinions.
> You're misinformed about all the examples you cited, you should do more research before stating strong opinions.
You've literally agreed with what I said[1]:
> School bake sales are regulated under cottage food laws, which are relaxed under the condition that a "safe" subset of foods is produced. That's why there are no bake sales that sell cured sausage, for instance. Food laws are in some part regulatory capture by big food, but thankfully there hasn't been political will to outlaw independent food production entirely.
Scale results in different regulation. You have, with this comment, agreed that it does yet are still pressing on the point that there should be an exemption for AI/LLM.
I don't understand your reasoning in pointing out that baking has different regulations depending on scale; I pointed out the same thing - the regulations are not universal.
-------------------
[1] Things I have said:
> Things at scale have different rules (different laws as well) from things done individually or for personal reasons.
> As a society, we've already decided that things at scale are regulated differently than things for personal use.
> You can hold a bake sale at school with fewer sanitation requirements than a cake store has to satisfy.
> many people are pro-library and pro-archive, but get angry at AI as having somehow stolen something
Yes! They're angry that there are two standards, an onerous one making life hell for archivists, librarians, artists, enthusiasts, and suddenly a free-for-all when it comes to these AI fad companies hoovering all the data in the world they can get their paws on.
I.e. protecting the interests of capital at the expense of artists and people in the former, and the interests of capital at the expense of artists and people in the latter.
Why stop differentiating between humans and machines?
> anytime their content is used
So ... every time a model is used? Because it has been trained on these works so they have some influence on all its weights?
> I see absolutely no reason why they can't do this
They didn't even pay to access the works in the first place, frankly the chances of them paying now seems pretty minimal, without being forced to by the courts.
I've had this idea kicking around in my head for a few months now that this is an opportunity to update copyright/IP law generally, and use the size and scope of government to do something about both the energy costs of AI and compensation for people whose works are used. At a very rough draft and high level, it goes something like this:
Update copyright to an initial 10-year limit, granted at publication without any need to register. This 10-year period also works just like copyright today: the characters, places, everything is protected. After 10 years, your entire work falls into the public domain.
Alternatively, you can register your copyright with the government within the first 3 years. This requires submitting your entire work in a specified machine-readable format for integration into official training sets and models. These data sets and models will be licensed by the government for some fee to interested parties. As a creator with material submitted to this data set, you will receive some portion of those licensing fees, proportional to the quantity and amount of time your material has been in the set, with some caps set to prevent abuse. I imagine this would work something like the way broadcast licensing works for radio. You will receive these licensing fees for up to 20 years from the first date of copyright.
During the first 10 years, copyright is still enforced on your work for all the same things that would normally be covered. For the 10 years after that, in additional consideration for adding your work to the data sets, you will be granted an additional weaker copyright term. The details would vary by the work, but for a novel for example, this might still protect the specific characters and creatures you created, but no longer offer protection on the "universe" you created. If we imagine Star Wars being created under this scheme, while Darth Vader, Luke Skywalker and Leia Organa might still be protected from 1987-1997, The Empire, Tatooine, and Star Destroyers might not be.
What I envision here is that these government data sets would be known good, clean, properly categorized and in the case of models, the training costs have already been paid once. Rather than everyone doing a mad dash to scrape all the world's content, or buy up their own collection of books to be scanned and processed, all of that work could already have been done and it's just a license fee away. Additionally because we're building up an archive of media, we could also license custom data sets. Maybe someone wants to make a model trained on only cartoons, or only mystery novels or what have you. The data is already there, a nominal fee can get you that data, or maybe even have something trained up, and all the people who have contributed to that data are getting something for their work, but we're also not hamstringing our data sets to being decades or more out of date because Disney talked the government into century long copyrights decades ago.
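The payout rule in the proposal above (share proportional to quantity × time in the set, with a per-work cap) could be sketched like this. All function names, weights, and figures are illustrative assumptions, not part of any actual scheme.

```python
def distribute_royalties(pool: float,
                         works: dict[str, tuple[int, int]],
                         cap_share: float = 0.05) -> dict[str, float]:
    """Split a licensing-fee pool among registered works, weighting each
    work by size * months-in-set, and capping any single work's share to
    prevent abuse. Any capped-off remainder is simply retained (e.g. to
    fund administration of the program)."""
    weights = {w: size * months for w, (size, months) in works.items()}
    total = sum(weights.values())
    if total == 0:
        return {w: 0.0 for w in works}
    return {w: pool * min(wt / total, cap_share)
            for w, wt in weights.items()}

payouts = distribute_royalties(
    pool=1_000_000.0,
    works={"novel_a": (90_000, 12),   # (word count, months in data set)
           "novel_b": (30_000, 12)},
    cap_share=0.60,
)
# novel_a's raw share is 0.75 but is capped at 0.60 of the pool.
```

The cap is the interesting design knob: without it, a handful of mega-sellers (or spammers flooding the set with bulk text) would absorb nearly the entire pool.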
Why go after the AI company? If someone is using the AI-generated content for commercial purposes and it’s based off a copyrighted work, they are the ones who should be paying the royalty.
The AI company is really more like a vector search company that brings you relevant content, kind of like Google, but that does not mean the user will use those results for commercial purposes. Google doesn’t pay royalties for displaying your website in search results.
Sure and that's one way to solve royalties.
I suspect, from a purely logistics standpoint, AI training is better when it's free to ingest all the content it can, and for that freedom it pays some small royalty amount when a source is cited.
They'd simply pass that cost onto the customer. For universities or enterprises or lawfirms, or whatever, they would either include pre-existing agreements, or pay for blanket access. Whatever terms OpenAI, Anthropic, and Gemini sign with these entities, they can workout the royalties there.
These are all solved problems for every other technology middle man company.
This only makes sense if we have open access to the training data, so we can verify whether it’s copyrighted or not. Otherwise, how am I supposed to know it’s replicated someone’s IP?
It's not quite the same though, when I search Google I'm generally directed to the source (though the summary box stuff might cross the line a bit).
With AI, copyrighted material is often not obvious to the end user, so I don't think it's fair to go after them.
I think it's non-trivial to make the AI company pay per use, though; they'd need a way to calculate what percent of the response comes from which source. Let them pay at training time with consent from the copyright holder, or just omit the work.
The AI company isn’t making money off the copyrighted material, they make money off finding the copyrighted material for you.
The end user is 100% to blame for any copyright violations.
This argument didn’t work out for Napster or Google Image Search.
Surely both of them should have some sort of valid license to the work in that case?
> We put an identifying mark on publishers committed to publishing only (human-written) literature
>hardline anti-AI position
Some people are beyond parody.
We need this for technical books. I was a chapter into something the other day before deciding I’d been hoodwinked into reading someone’s ChatGPT output
I've noticed entire publishers on Amazon which are just fly-by-night AI slop, probably printed on-demand too.
For example, I stumbled on https://www.amazon.com/dp/B0DT4TKY58 and had never heard of the author. Their page (https://www.amazon.com/stores/author/B004LUETE8) suggested they were incredibly prolific in a huge number of areas which already felt off. No information about "Robert Johnson" was available either. The publisher, HiTeX Press (https://www.amazon.com/s?k=HiTeX+Press) has a few other authors with similarly generic names and no information available about them, each the author of numerous books spanning a huge array of topics.
It feels even more bewildering and disheartening to see AI slop come into the physical world like this.
Give yourself a treat as a reader: stop going to Amazon.
There are lots of other ebook stores and physical book stores that don't enable scams, mistreat workers, or do this weird AI junk.
Yeah I really need to. I do occasionally go through a bout of trying to take my business elsewhere. I am sort of a used book junkie and Amazon still owns that market. Though I do often get my used books from Alibris when I can.
Recommendations for alternatives would be very welcome.
For mainstream books, I use Bookshop ( https://bookshop.org ) or Kobo. Bookshop supports indie bookstores. Both of them will let you know if a given eBook has DRM or not.
(Now, when a book does have DRM, I buy it from Kobo! I'll leave it to the reader to speculate why :) )
Do you have recs? Particularly where I can buy epub without drm with a fairly rich catalogue across niche and more mainstream writings?
Sadly KDP Select makes that impossible. Preferential rates for authors in exchange for exclusivity. There’s a lot of (human) slop there, but enough stuff I want to read.
I recently saw this, and technical subjects that actually have 0 books written about them now have entire pages filled with books. The titles sound good, and the pages look decently good, but there's something slightly off, and when you look closer the "author" has been writing a book a week...
It's disheartening because now I will look much more into reputable publishers, and so filter out independent writers who have nothing to do with this.
This is inverted. AI books should come with warning labels similar to those found on cigarette packs.
> AI books should come with warning labels
I disagree. AI use is diffuse. An author is specific. Having people label their work as AI free is accountable in a way trying to require AI-generated work be labeled is not.
> similar to those found in cigarettes
Hyperbole undermines your argument. We have decades of rigorous and international evidence for the harms from cigarettes. We don’t for AI.
Saying "I think X should have a warning, like cigarettes do" is not making the claim "X is harmful in a way that is comparable to cigarettes." The similarity is that cigarettes have warning labels, not that AI is harmful on the order of cigarettes.
> similarity is that cigarettes have warning labels and not that AI is harmful on the order of cigarettes
We put warnings on cigarettes because not only are they harmful, but we have ample evidence about how and by what mechanism they are harmful.
The logic that leads to labelling every harm is the one that causes everything in California to be labeled a cancer hazard. You want tight causation of sufficient magnitude, and in a way that will cause actual behavior change, either through reduced demand or changed producer behavior. Until you have that (which we did for cigarettes), labelling is a bit silly.
That's fine but doesn't give you license to put words in other people's mouths. Maybe they want AI labeled for transparency. Maybe it's a matter of personal preference. Or maybe they're following the precautionary principle and don't want to wait for the evidence of harm, but rather for the evidence of the absence of harm.
There's an infinite number of positions they could hold, and the discussion works better if you ask rather than assume.
I always wondered if there was some way to make a "proof" that some piece of work was human-created.
A recording of the entire process of its creation is one possible answer (though how would deepfakes be countered?).
But maybe there is some cryptographic solution involving one-way, provable timestamps...
Does anyone know of anyone working on such a thing?
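For what it's worth, the "one-way provable timestamps" idea can be sketched locally as a hash chain over successive drafts: each entry commits to the previous entry's hash, so drafts can't be reordered or backdated after the fact. To make the timestamps externally provable you'd still need to anchor each hash with a third party (e.g. an RFC 3161 timestamping authority), and none of this proves a human wrote the text, only that the drafts existed in a given order. A minimal, hypothetical Python sketch (all names are mine, not any real tool's API):

```python
import hashlib
import json
import time

def chain_draft(prev_hash: str, draft_text: str) -> dict:
    """Append one draft to a hash chain. The entry commits to the
    previous entry's hash, this draft's content hash, and a timestamp."""
    entry = {
        "prev": prev_hash,
        "draft_sha256": hashlib.sha256(draft_text.encode()).hexdigest(),
        "timestamp": int(time.time()),
    }
    # The entry's own hash (over a canonical serialization) is what
    # the next draft will commit to.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

def verify_chain(entries: list) -> bool:
    """Check that every entry links to its predecessor and that its
    self-hash matches its contents (i.e. nothing was tampered with)."""
    prev = "genesis"
    for e in entries:
        if e["prev"] != prev:
            return False
        body = {k: e[k] for k in ("prev", "draft_sha256", "timestamp")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Of course, as others point out below, this only proves an ordered trail of drafts existed; nothing stops someone from hand-copying LLM output into each draft.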
Even if every single page was hand written on camera, that could not prove that no AI was used.
Did the author come up with the main ideas, character arcs or plot devices himself? Did he ever seek assistance from AI to come up with new plot points, rewrite paragraphs, create dialog?
The only thing which really matters is trust.
That's interesting, I hadn't considered that a person might completely parrot a novel from an LLM and then claim it as their own..
I would almost give that a pass as long as we could prove that the person had sat down for 10s/100s of hours to type out the novel.
In that instance it would be almost the same as false authorship and stealing credit for other people's work. This has existed for centuries already.
As to being influenced by an LLM I think that is fine, even up to entire plot structure.. as with the above this could happen between people too.
By this point we can also discuss what is truly original, and whether all creative work is just "stealing" ideas that other people "created" before.
(I don't have an answer, just wondering.)
When people say AI isn't creating anything novel, it's just predicting the next word, I wonder whether my brain is just predicting the next word I should type here.
They keep trying this with digital cameras signing the data and it's always a complete failure.
It's a social problem at heart and piling on yet more technology won't fix it.
Right, the same assholes gaming the system with slop would just game whatever system you tried to put around them. It's not like you can stand over someone the whole time they work to ensure it's real.
> some piece of work was human created
Are thoughts and ideas creations? Or you just mean the literal typewriting?
How do you prove an idea is original and you have been in a vacuum not influenced _by anything at all_?
If anything The Hunger Games is the perfect example that you can get away with anything you want, and that was almost 20 years ago.
Everything is a remix https://www.youtube.com/watch?v=nJPERZDfyWc or if you hate your life https://tvtropes.org/
This depends on the subject of the book, but there are enough books written pre-1970 (or some other year one is comfortable with, before the era of “book spinners”, AI etc) to last multiple lifetimes. I used to spend hours and hours in bookstores, but so many books these days (AI or otherwise) don’t seem that interesting. Many, many books could just be 3 page articles, but stretched to 150 page books.
So yeah, simply filtering by year published could be a start
Buying a book scanner and frequenting used book stores seems like a pastime worth starting that'll pay off in the long term.
I enjoy old fiction enough that sites like gutenberg.org have that covered, and I barely bother trying to find anything new that I want to read. That began long before AI slop, so no real change for me.
For non-fiction it is a bit trickier. I buy DRM-free from some niche publishers, but I have no idea which ones can be trusted to not begin to mix in AI slop in their books.
The normal writing 'journey' is several months, or years, of hard work and multiple revisions. I invest a little of my time explaining the journey on my blog, and also include a note in the preface of my novels that they were written by a real human. It is my way of saying I was invested in the story, but it is pretty naive to think this will work in the age of AI today.
I also wrote an article on my blog arguing that you are mainly writing for yourself and your family, friends and followers these days; the algorithm is very unlikely to get you beyond that word-of-mouth audience unless you pay $$, go all-in promoting on social media (which may backfire), or are extremely lucky. With AI, the algorithm has become the enemy, and finding genuine indie authors is unfortunately getting harder.
An authoring device akin to this, perhaps? https://roc.camera/
This is great. For wider adoption I feel it needs to be an adaptable technology on existing hardware: phones, DSLRs, etc.
> wondered if there was some way to make a "proof" that some piece of work was human created
Self certification backed by a war chest to sue those who lie.
Maybe they could prove it using blockchain!!!
No need to invent more tech to mitigate techslop.
People will know by reputation alone, which cannot be fabricated.
This has the same problems any DRM has. People who want to bypass the process will find a way, but legitimate people get caught up in weird messes.
I'm so happy I'm not doing any school/academic work anymore, because AI writing detection tools (I learned English through reading technical docs; of course my writing style is a bit clinical) and checking the edit history in a Google Docs document would've both fucked me over.
This idealistic objective is highly commendable, but the fight could be futile, as you would need AI to do the work of detection. Then there will be another movement to do "organic detection" of "organic content". And the story goes on.
Think of interview candidates rejected by AI and employees fired by AI, or that case where a snack pack was identified by AI as a weapon in a student's pocket. This will lead to "organic decision making".
Nothing futile about defense of humanity! Art forgers and technofrauds will never be true participants in culture.
i'm currently reading The Recognitions by William Gaddis -- it's a lot to get through but it might change your mind on this point (not that it's for the better!).
[dead]
> you would need AI to do the work of detection
Why?
In 2020 at the beginning of the pandemic, I set a timer, wrote for 10 minutes, live-streamed it, did it three times per day, for 35 days, and put everything unedited into a book.
It seems as if it may be more relevant in our AI writing times.
I too am open for business, for a modest fee I will arrange to meet a book publisher in nyc for a firm handshake to cement a declaration from them that they are publishing books not made with AI. I will then send a formal email saying they may publish a little gold star on their book, and my preeminence as a member of the literary elite should carry it through. I'm doing this for the people because I _care_.
I wonder how this works since authors are more and more likely to use AI to spell check, fix wording, find alternate words, and all manner of other things. It might be useful to understand the “rules” for what “human” means.
What is the point of this? Any publishing house can just "self certify" that no AI was used. Why would it be necessary to have an outside organization, which cannot validate AI use anyway and just has to rely on the publisher?
Writing a book is, in most cases, something which happens between the author and their writing medium, how could any publisher verify anything about AI use, except in the most obvious cases?
The one thing which matters here is honesty and trust and I do not see how an outside organization could help in creating that honesty and maintaining that trust.
I don't care if AI wrote the book, if the book is good. The problem is that AI writes badly and pointlessly. It's not even a good editor, it 1) has no idea what you are talking about, and 2) if it catches some theme, it thinks the best thing to do is to repeat it over and over again and make it very, very clear. The reason you want to avoid LLM books is the same reason why you should avoid Gladwell books.
If a person who I know has taste signs off on a 100% AI book, I'll happily give it a spin. That person, to me, becomes the author as soon as they say that it's work that they would put their name on. The book has become an upside-down urinal. I'm not sure AI books are any different than cut-ups, other than somebody signed a cut-up. I've really enjoyed some cut-ups and stupid experiments, and really projected a lot onto them.
My experience in running things I've written through GPT-5 is that my angry reaction to its rave reviews, or its clumsy attempts to expand or rewrite, are stimulating in and of themselves. They often convince me to rewrite in order to throw the LLM even farther off the trail.
Maybe a lot of modern writers are looking for a certification because a lot of what they turn out is indistinguishable cliché, drawn from their experiences watching television in middle-class suburbs and reading the work of newspaper movie critics.
Lastly, everything about this site looks like it was created by AI.
> I don't care if AI wrote the book, if the book is good.
Not so sure. Books are not all just entertainment; they also develop one's outlook on life, relationships, morality, etc. Of course books can also be written by "bad" people to propagate their view of things, but at least you're still peeking into the views and distilled experience of a fellow human who lived a personal life.
Who knows what values a book implicitly espouses that has no author and was optimized for being liked by readers. Do that on a large enough scale and it's really hard to tell what kind of effect it has.
> Who knows what values a book implicitly espouses that has no author and was optimized for being liked by readers.
There is some of this even without AI. Plenty of modern pulpy thriller and romance books for example are highly market-optimised by now.
There are thousands of data points out there for what works and doesn't and it would be a very principled author who ignores all the evidence of what demonstrably sells in favour of pure self-expression.
Then again, AI makes it possible to turbocharge the analysis and pluck out the variables that statistically trigger higher sales. I'd be surprised if someone isn't right now explicitly training a Content-o-matic model on the text of books along with detailed sales data and reviews. Perhaps a large pro-AI company with access to all the e-book versions, 20 years of detailed sales data, as well as all telemetry such as highlighted passages and page turns on their reader devices? Even if you didn't or couldn't use it to literally write the whole thing, you could have it optimise the output against expected sales.
This is a big problem, though I would be slow to trust anyone purporting to address this problem. (Though, to their credit, this Books by People team is more credible than the bog-standard pair of 20yo Bay Area techbro grifters I expected.)
Reportedly, Kindle has already been flooded with "AI" generated books. And I've heard complaints from authors, of AI superficial rewritings of their own books being published by scammers. (So, not only "AI, write a YA novel, to the market, about a coming of age vampire young woman small town friends-to-lovers romance", but "AI, write a new novel in the style of Jane Smith, basically laundering previous things she's written" and "AI, copy the top-ranked fiction books in each category on Amazon, and substitute names of things, and how things are worded.")
For now, Kindle is already requiring publishers/authors to certify on which aspects of the books AI tools were used (e.g., text, illustrations, covers), something about how the tools were used (e.g., outright generation, assistive with heavy human work, etc.), and which tools were used. So that self-reporting is already being done somewhere, just not exposed to buyers yet.
That won't stop the dishonest, but at least it will help keep the honest writers honest. For example, if you, an honest writer, consider for a moment using generative AI to first-draft a scene, an awareness that you're required to disclose that generative AI use will give you pause, and maybe you decide that's not a direction you want to go with your work, nor how you want to be known.
Incidentally, I've noticed a lot of angry anti-generative-AI sentiment among creatives like writers and artists. Much more than among us techbros. Maybe the difference is that techbros are generally positioning ourselves to profit from AI, from copyright violations, selling AI products to others, and investment scams.
Just another rent-seeker. I mostly choose books based on word of mouth recommendations or liking other things by the same author. This is very resistant to slop from AI and to the large amounts of rubbish that has always been published.
Books written by AI is yet another case of an application of AI that does nothing to solve existing problems that consumers have (too few books in this case) but instead focuses on the producer side of things.
Worse yet, increasing the quantity of books while simultaneously decreasing the quality just makes the situation worse for readers: more slop to filter out.
An organization with zero technical capability charging publishers recurring fees to certify something they can't actually verify?
So this is the thing that Zitron and Doctorow are always talking about? Naked grifting in the AI industry?
> can't actually verify
Why? Can't it be done the same way it's done with copyrighted material: by checking the author's process?
(Because at least EU law permits writing basically the same thing if both authors reached it organically: have a trail of drafts and other writing-process documents, as long as you can prove you came upon it without influence from the other author.)
Proving that you did it without AI could be similar. For example: just videotaping the whole writing process.
Now, whether anyone cares about such proofs is another topic.
I sneak out to the toilet and ask chatgpt what should happen in the next chapter. Or do you stick a camera there too?
>Proving that you did it without AI could be similar. For example: just videotaping the whole writing process.
Which proves very little. It also would be something which authors would absolutely loathe to do.
No technical ability required to verify humans as humans. You just have to close your laptop and meet at a coffee shop. Surprisingly many deals are done this way, because humans like other humans.
[dead]
[flagged]
> atleast social media has no pretences about teaching you something
Are we reading the same social media? How many courses and coaches are running around?
grifting and shilling are mankind's collective yearning for wealth, fame, & fortune. it would be inhumane to silence these voices.
You have more respect for grifters on social media as humans expressing themselves than for authors of books...? That's very strange.
Doubly so because no one suggested "silencing them," they pointed out that people do presume to teach on social media. You acknowledge this but not how it relates to your original argument, you've pivoted to defending their right to speech instead of shoring up your original argument.
Books have the great advantage that they can't capture the zero attention span market and thus have not attempted to appeal to it.
Ugh, capitalism monetizes yet another problem it created.
What about capitalism created AI? China is not a purely capitalist society and they have AI too… I don't see anything specific about capitalism that brings about AI. In fact, many of the advances in it came about through academia, which is more of a socialist structure than a capitalist one.
State capitalism is still capitalism.
You’re really arguing a non-capitalist society would be incapable of developing LLMs?
No. You are making a big leap there. I just object to calling "communism with Chinese characteristics" anything but state capitalism.
Capitalism does, however, incentivize unhealthy and self-destructive exploitation, including of generative AI.
I’m curious why you see this as a problem created by capitalism, rather than another cause?
Would you have another cause you could posit? Surely the reason the marketplace is being flooded with AI generated books is because it's profitable.
Nothing besides greed could explain the extent to which people handwave and ignore the many obvious harms and failures and risks.
> Nothing besides greed
You’re really arguing greed (EDIT: and bad risk evaluation) didn’t exist before capitalism?
Is my cat capitalist?
Affirming the consequent.
Fair enough.
Projectiles existed before guns, but guns created new problems.
For fucks sake we can play word games all day but let's not.