What I HAVE been using it for, however, is as a Beta Reader and Line Editor, having it go over my chapters with a fine-tooth comb and pick out structure and flow issues for me to correct.
It's been doing well, but ChatGPT-5 seems to have the memory of a goldfish.
Its contextual memory can't keep up with a long narrative, and it's got a bad habit of bleeding context from one narrative into another.
I've tried to combat this by creating Frameworks that control digests and context.
I HAVE been able to get it to output some rather good contextual summaries, typically covering 15-20 chapters each.
Right now, it works like this:
1 – I upload batches of 5-6 chapters, varying in length between 2000-2500 words each.
2 – Through the Framework, ChatGPT condenses those chapters into digests, complete with project name, Act, Arc, and Book labels, narrative summaries, and Continuity Threads. In the end, these "Act" Digests end up around 700-800 narrative words long.
3 – Once enough Act Digests have been collected (typically 3-4 per Arc), the Framework combines them into an Arc Digest that does the same as the Act Digests, typically ranging between 1200-1600 words long.
4 – Finally, once all the Arc Digests have been collected (3-4), it combines them in the same way, ending with a Book Digest, typically running between 3000-5000 words total.
In this way, I turn a 200,000-word novel into a 12,000-word Novel Summary that the AI can check and reference on multiple levels.
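For anyone curious, the pipeline above can be sketched in a few lines of code. This is just a minimal illustration of the hierarchy (chapters → Act Digests → Arc Digests → Book Digest); the `condense()` function is a stand-in for the actual LLM summarization step, and the word budgets are the rough figures from the steps above, not exact settings.

```python
def condense(texts, budget_words):
    # Placeholder for the real LLM summarization call (assumption):
    # here it just joins the inputs and truncates to the word budget.
    joined = " ".join(texts)
    return " ".join(joined.split()[:budget_words])

def build_book_digest(chapters, per_act=5, acts_per_arc=4):
    # Step 2: batches of ~5 chapters -> Act Digests (~700-800 words each)
    act_digests = [
        condense(chapters[i:i + per_act], 800)
        for i in range(0, len(chapters), per_act)
    ]
    # Step 3: 3-4 Act Digests -> Arc Digests (~1200-1600 words each)
    arc_digests = [
        condense(act_digests[i:i + acts_per_arc], 1600)
        for i in range(0, len(act_digests), acts_per_arc)
    ]
    # Step 4: all Arc Digests -> one Book Digest (~3000-5000 words)
    book_digest = condense(arc_digests, 5000)
    return act_digests, arc_digests, book_digest
```

With roughly 80-90 chapters, this yields on the order of 16 Act Digests, 4 Arc Digests, and a single Book Digest, which lines up with the counts described above.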
But getting it to KEEP those summaries in memory and context has been a nightmare.
Eventually, I decided that the best thing I can do is save all of these Digests as .txt files; then, when I start a conversation to beta read a new batch of chapters, I upload my Frameworks and the Digests beforehand so that each session starts fresh.
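Since I'm reassembling the same set of files at the start of every session, I also keep a tiny helper that stitches the saved .txt Digests into one priming block to paste or upload first. The directory name and file layout here are just my assumptions about how you might organize the files, not anything the tool requires.

```python
from pathlib import Path

def build_session_preamble(digest_dir="digests"):
    # Gather every saved framework/digest .txt file (sorted by name,
    # so e.g. "01_act1.txt" comes before "02_arc1.txt") and join them
    # into one labeled block for the start of a new conversation.
    parts = []
    for path in sorted(Path(digest_dir).glob("*.txt")):
        parts.append(f"=== {path.name} ===\n{path.read_text()}")
    return "\n\n".join(parts)
```

Naming the files so they sort in reading order (Acts before Arcs before the Book Digest, or the reverse) keeps the context in a predictable shape from session to session.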
Is this the best way to do this, or am I missing an alternative/something obvious?
Yes, I realize that even in these compressed forms, they still take up a good chunk of tokens.