PQS
WILD CORPUS · reddit

PQS 45 (D) - prompt from www.reddit.com

Source: www.reddit.com · Scraped 2026-05-04 · Scored 2026-05-04

Score

D
45 / 80
gemma4:latest · local · pqs-v2.0 · canonical
Clarity 8 / 10
Specificity 7 / 10
Context 8 / 10
Constraints 7 / 10
Output format 6 / 10
Role definition 2 / 10
Examples 3 / 10
CoT structure 4 / 10

The prompt

I kept running into the same issue using ChatGPT for emails.

The replies were technically correct…

but still off.

* Too polite
* Too long
* Kind of avoiding the actual point

So I’d end up rewriting them anyway.

 
What fixed it wasn’t a better prompt.

It was adding one missing piece:

*the actual goal of the email*

 
**Here’s the exact format I use now:**

Write a reply to this client email.

Context:

[paste email here]

Goal of this reply:

- set a clear deadline
- push back on scope
- keep the relationship positive

Tone:

casual but professional

Rules:

- keep it direct
- no unnecessary filler
- structure it clearly (acknowledge → respond → next step)
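If you reuse this template a lot, it's easy to generate programmatically. Here's a minimal sketch; the function name and parameters are illustrative, not anything the post prescribes:

```python
def build_reply_prompt(client_email, goals, tone, rules):
    """Assemble the reply-prompt template from its parts.

    client_email: the pasted email text (the "Context" section)
    goals: list of outcomes the reply should achieve
    tone: one-line tone description
    rules: list of constraints on the output
    """
    goal_lines = "\n".join(f"- {g}" for g in goals)
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        "Write a reply to this client email.\n\n"
        f"Context:\n{client_email}\n\n"
        f"Goal of this reply:\n{goal_lines}\n\n"
        f"Tone:\n{tone}\n\n"
        f"Rules:\n{rule_lines}"
    )
```

Swapping in different goals per email is then a one-line change instead of retyping the whole template.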

 
The difference is honestly bigger than I expected.

Before → safe, generic, not very useful
After → much more direct and actually aligned with what I needed

 
What seems to be happening:

If you don’t define the goal, the model just guesses.

And it usually defaults to:

* overly polite
* non-committal
* trying to please both sides

 
Once you give it a clear outcome, it stops guessing and just executes.

 
I’ve started using this structure for pretty much everything now:

* emails
* proposals
* follow-ups

Anything where the intent isn’t obvious from the input.

 
It’s a small change, but it removed a lot of the back-and-forth editing for me.

Still falls apart if the context is messy, but way more consistent overall.

 
I’ve been turning these into small reusable systems so I don’t have to think through them every time.

Made a free set of them if anyone wants to try → link in bio

This prompt was scraped from a public source. The score reflects the input as written, not the quality of any output it produced. The AI input quality problem is the gap between what people type and what the model can act on.