WILD CORPUS · reddit
PQS 37 (D) - prompt from www.reddit.com
Source: www.reddit.com · Scraped 2026-05-04 · Scored 2026-05-04
Score: D37 / 80
gemma4:latest · local · pqs-v2.0 · canonical

Clarity: 9 / 10
Specificity: 6 / 10
Context: 8 / 10
Constraints: 7 / 10
Output format: 3 / 10
Role definition: 1 / 10
Examples: 1 / 10
CoT structure: 2 / 10
The prompt
I build quantitative research tools and use AI daily for financial analysis, coding, and writing. After a year of trial and error, these are the patterns that consistently produce the best output regardless of model or task.

**1. Specific role > generic expert.** "You are an expert" does nothing. "Senior equity research analyst with 12 years covering Nordic tech, specializing in SaaS valuation" gives the model a real lens. It changes vocabulary, depth, and assumptions completely.

**2. Layered context.** Separate your industry context from your problem context from your audience context. Each layer narrows the output. Dump everything in one paragraph and the model picks what to focus on. Layer it and you decide.

**3. Numbered deliverables.** "Give me an analysis" produces filler. "Give me (1) root cause assessment, (2) three solutions ranked by cost, (3) a recommendation with reasoning, (4) risks for the top option" produces something usable. Always decompose.

**4. Model-specific formatting.** Claude handles XML tags best. ChatGPT works well with markdown headers. Gemini responds to bold labels and clean hierarchy. The same prompt formatted differently for each model gives noticeably different quality.

**5. Negative constraints.** "Don't hedge every statement. Don't give generic advice. Don't use filler phrases." This one pattern alone cut my iterations in half. It tells the model to skip its default safe-and-bland mode.

A short prompt with all five of these beats a long unstructured prompt every time. What patterns are working for you?
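The patterns above can be sketched as a small prompt-assembly helper. This is a minimal illustration, not anything from the original post: the function name `build_prompt`, its parameters, and the example values are all hypothetical, and the role/context strings are just echoes of the post's examples.

```python
# Hypothetical sketch: assemble a prompt from the post's patterns
# (specific role, layered context, numbered deliverables, negative
# constraints). All names and example strings are illustrative.

def build_prompt(role, context_layers, deliverables, negative_constraints, task):
    """Compose a structured prompt from labeled building blocks."""
    # Pattern 1: a specific role, not a generic "expert".
    parts = [f"You are a {role}."]
    # Pattern 2: each context layer is labeled so the layers stay distinct.
    for label, text in context_layers.items():
        parts.append(f"{label} context: {text}")
    parts.append(f"Task: {task}")
    # Pattern 3: numbered deliverables instead of "give me an analysis".
    parts.append("Deliver:")
    parts.extend(f"({i}) {d}" for i, d in enumerate(deliverables, start=1))
    # Pattern 5: explicit negative constraints.
    parts.extend(f"Don't {c}." for c in negative_constraints)
    return "\n".join(parts)

prompt = build_prompt(
    role="senior equity research analyst with 12 years covering Nordic tech",
    context_layers={
        "Industry": "European SaaS, post-2022 multiple compression",
        "Problem": "a mid-cap target trading below peers",
        "Audience": "an internal investment committee",
    },
    deliverables=[
        "root cause assessment",
        "three solutions ranked by cost",
        "a recommendation with reasoning",
        "risks for the top option",
    ],
    negative_constraints=["hedge every statement", "give generic advice"],
    task="Explain the valuation gap.",
)
print(prompt)
```

Pattern 4 (model-specific formatting) would be a final rendering step on the same parts list, e.g. wrapping each labeled section in XML tags for one model or markdown headers for another.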
This prompt was scraped from a public source. The score reflects the input as written, not the quality of any output it produced. The AI input quality problem is the gap between what people type and what the model can act on.