8 Ways to Spot AI-Generated Text in Creative Submissions
The most reliable way to detect AI-generated writing is human intuition. Add some knowledge of the specific patterns LLMs leave behind, and you'll stand a far better chance of catching it. This guide covers eight tell-tale signs, from repetitive sentence structures and buzzword overload to hallucinated facts and odd placeholders.

Published on Jan 7th 2026
Reading time: ~5 minutes
Introduction
AI tools like ChatGPT and Claude are already being widely used in the creative industries, and written submissions are no exception. At Dapple, we process thousands of submissions through our platform, and we see the concern from programme coordinators, editors, and prize administrators firsthand. AI isn't a looming threat on the horizon; it's already in your inbox.
You can add a checkbox to your form that says 'I confirm AI wasn't used in the creation of this work.' You can put warnings on your website. You can rail against it publicly. None of it solves the problem. As LLMs become more capable and easier to use, the volume of AI-assisted submissions will only grow.
What's in This Article
✓ What are the best ways to detect AI-generated submissions?
✓ 8 tell-tale signs of AI-generated text
✓ How should organisations respond to AI-generated submissions?
✓ Quick reference: 8 signs of AI-generated text
What Are the Best Ways to Detect AI-Generated Submissions?
There are three practical approaches, and the strongest defence uses all three in combination:
- Deterrents — policy statements, declaration checkboxes, clear guidelines
- AI detection tools — software that flags suspicious text (Dapple is building this in)
- Human intuition — a trained eye that knows what to look for
This article focuses on the third. The eight signs below are patterns that experienced readers consistently report noticing. Often they can't articulate exactly why; something just feels off. Once you know what you're looking for, you can't unsee it.
8 Tell-Tale Signs of AI-Generated Text
1. Repetitive Sentence Structures and Predictable Connectors
Human writers vary sentence length, tone, and rhythm instinctively. It's a quality built up over years of reading and writing. AI, by contrast, tends to lean on a short list of connectors: “Additionally…”, “Moreover…”, “It is worth noting that…” These phrases appear repeatedly across a piece, and the argument rarely advances as efficiently as it should. Points get restated rather than built upon. The result is writing that feels padded and circular. It's competent on the surface but oddly weightless. In longer-form submissions, this is more obvious, but even in shorter pieces, it produces that nagging sense that something is just a little “off.”
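To make the pattern concrete, here is a minimal sketch of a connector-density check. The phrase list and the idea of measuring occurrences per 100 words are illustrative assumptions for this article, not a validated detector; treat a high score as a prompt for human review, never as proof.

```python
import re

# Assumed list of stock connectors LLM output tends to over-use.
CONNECTORS = [
    "additionally", "moreover", "furthermore",
    "it is worth noting that", "in conclusion",
]

def connector_density(text: str) -> float:
    """Return connector-phrase occurrences per 100 words."""
    lowered = text.lower()
    words = len(re.findall(r"\b\w+\b", lowered))
    hits = sum(lowered.count(phrase) for phrase in CONNECTORS)
    return 100.0 * hits / max(words, 1)

sample = (
    "Additionally, the project succeeded. Moreover, it is worth "
    "noting that the team delivered. Additionally, morale was high."
)
print(round(connector_density(sample), 1))  # prints 23.5
```

A score this high would be absurd in human prose; in practice you would tune the phrase list and threshold against a corpus of known-human submissions.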
2. Abrupt Shifts in Register or Style
A paragraph that opens like a personal essay and then suddenly reads like a Wikipedia entry is a strong signal that the model has pulled content from different training sources and stitched them together imperfectly. This tonal whiplash, colloquial one moment, clinical the next, is something human writers rarely do unintentionally. When it appears mid-paragraph with no clear rhetorical purpose, treat it as a flag.
3. Surface-Level Ideas with No Lived Experience
AI lacks lived experience. To quote Robin Williams' character in Good Will Hunting, addressing Will (essentially a walking LLM), AI might give you "the skinny on every art book ever written. Michelangelo…life's work, political aspirations." But when it comes to that deeper level: "I'll bet you can't tell me what it smells like in the Sistine Chapel. You've never actually stood there and looked up at that beautiful ceiling." AI is the same. It can regurgitate, but it can't feel. Text that seems formulaic, generic, or missing that emotional nuance can often be a red flag.
4. Buzzword Overload
Lexicographer Susie Dent (of UK's Countdown fame) has noted publicly that certain words appear with suspicious frequency in AI-generated text: “delve,” “transformative,” “dynamic,” “navigating,” “multifaceted.” Phrases like “rich tapestry,” “embarking on a journey,” and “game changer” are also common. Dent's observation: “AI absolutely loves the jargon that we are all used to using.” The tell is not any single word, but the density of them. A piece of work that deploys four or five of these in a single page is almost certainly AI-assisted.
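A density check like the one the paragraph describes can be sketched in a few lines. The word list mirrors the terms quoted above; the four-per-page threshold comes from the article's own rule of thumb. Both are heuristics, and the naive substring match (which would flag "dynamic" inside "thermodynamics") is a deliberate simplification.

```python
# Illustrative sketch: flag a page when several known buzzwords
# cluster together. Substring matching is deliberately crude.
BUZZWORDS = {
    "delve", "transformative", "dynamic", "navigating", "multifaceted",
    "rich tapestry", "embarking on a journey", "game changer",
}

def buzzword_hits(page_text: str) -> list[str]:
    """Return the sorted list of buzzwords found in one page of text."""
    lowered = page_text.lower()
    return sorted(term for term in BUZZWORDS if term in lowered)

page = ("Navigating this transformative landscape, we delve into the "
        "rich tapestry of dynamic, multifaceted ideas.")
hits = buzzword_hits(page)
print(len(hits) >= 4)  # the article's rule of thumb: four or more per page
```

Again, density is the signal: one "transformative" proves nothing, but six hits in two sentences is hard to ignore.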
5. Flawless Grammar with No Human Quirks
Despite the plethora of tools available to help with spelling and grammar, "to err is human", as Alexander Pope put it, and humans make mistakes. A fully polished work can itself be a red flag. Humans also like to break rules for emphasis and style; AI often doesn't. A flawless piece with no contractions, no quirks, and no syntactic somersaults that fail to stick the landing might not be human. There are also a few classic tell-tale signs, like frequent use of the em dash ( — ). That said, the fanfare this little punctuation mark has received has made many users wise to it, and those users will likely de-em their work.
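If you want to quantify the em-dash tell, a rate per 1,000 words is one way to do it. This is a sketch only: any sensible threshold is an assumption, and plenty of human writers love the em dash too, so use it alongside the other signs, never alone.

```python
import re

def em_dash_rate(text: str) -> float:
    """Em dashes (U+2014) per 1,000 words."""
    words = len(re.findall(r"\b\w+\b", text))
    return 1000.0 * text.count("\u2014") / max(words, 1)

# Tiny exaggerated sample to show the mechanics.
sample = "The plan\u2014bold, risky\u2014worked. Results\u2014at last\u2014arrived."
print(round(em_dash_rate(sample), 1))
```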
6. Hallucinated or Unverifiable Facts
AI models invent details with complete confidence. There are countless examples (often comical, occasionally deadly) of fabricated material: events that never happened, legal precedents that don't exist, statistics without sources, quotes attributed to the wrong person. Google's Bard, in its first public demonstration, stated incorrect information about the James Webb Space Telescope. A New York lawyer submitted AI-generated case citations that turned out to be entirely fabricated. If a submission contains a surprising fact, a specific statistic, or an obscure reference, check it. If it can't be verified with a credible source, that's a significant flag.
7. Information That Stops at a Cut-Off Date
Most AI models have a training cut-off, a point beyond which they have no direct knowledge of events. Most tools can now access the internet or process uploaded documents to partially bridge this gap, but the underlying model's reasoning still reflects its training window. Submissions that discuss a fast-moving field, a current cultural moment, or recent events with confident authority yet get details subtly wrong, or miss obvious recent developments, may be the work of a model extrapolating from older data. This is particularly noticeable in pieces about technology, policy, or current affairs.
8. Odd Placeholders or Suspiciously Paraphrased Phrases
Some AI users hit copy before reading what they've been given. The result: placeholders like "[Insert specific example here]" or mismatched details that make no sense in context. A related pattern comes from paraphrasing tools like QuillBot, which students and writers use to run AI-generated or plagiarised text through a secondary process to bypass detection. The result is often phrases that have been translated so literally that they lose all meaning, or sentences that are grammatically correct but semantically strange. Any time writing feels like it's been translated from another language, it's worth investigating.
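Unlike the stylistic signs, unfilled placeholders are mechanically checkable. Here is a minimal sketch; the regular expression is an assumption covering common bracketed-instruction forms ("[Insert…]", "[Your name…]" and so on), not an exhaustive rule.

```python
import re

# Assumed pattern for common unfilled template placeholders.
PLACEHOLDER = re.compile(
    r"\[(?:insert|add|your|company|name)[^\]]*\]",
    re.IGNORECASE,
)

def find_placeholders(text: str) -> list[str]:
    """Return any bracketed placeholder strings left in the text."""
    return PLACEHOLDER.findall(text)

submission = ("My novel explores loss. [Insert specific example here] "
              "It ends hopefully.")
print(find_placeholders(submission))  # prints ['[Insert specific example here]']
```

A match here is close to conclusive: human writers almost never leave template brackets in a finished submission.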
How Should Organisations Respond to AI-Generated Submissions?
No single approach is sufficient on its own. The most effective strategy combines clear policy (so submitters know the rules), detection tools (to flag suspicious submissions at scale), and human review (to make the final call on edge cases).
Detection tools are improving, but they are not infallible, and sophisticated AI use is increasingly hard for software to catch reliably. The eight patterns above are what the tools struggle with most: stylistic tells, the hollowness where lived experience should be, the particular texture of something that has been produced rather than felt.
That instinct, the sense that a piece is technically competent but somehow not quite present, is something experienced readers develop over time. It's not foolproof. But it's still the best instrument we have.
Quick Reference: 8 Signs of AI-Generated Text
| Sign | What to look for |
|---|---|
| Repetitive structures | Same connectors and sentence patterns repeating; argument doesn't advance |
| Register shifts | Sudden tonal change mid-paragraph with no rhetorical reason |
| Surface-level ideas | Accurate but textureless with no lived detail, no unexpected observation |
| Buzzword overload | High density of 'delve', 'transformative', 'rich tapestry' and similar |
| Flawless grammar | No contractions, no rule-breaking, no personality and heavy em dash use |
| Hallucinated facts | Specific claims that can't be verified or turn out to be fabricated |
| Cut-off date gaps | Confident but subtly wrong information about recent events |
| Odd placeholders | Unfilled template text, or phrases that read as literal translations of nothing |
Last updated: April 2026 | Dapple processes thousands of creative submissions annually. AI detection tooling is currently in development for the Dapple platform.