PPQS
WILD CORPUS · github_awesome

PQS 42 (D) - prompt from raw.githubusercontent.com

Source: raw.githubusercontent.com · Scraped 2026-05-04 · Scored 2026-05-04

Score

D
42 / 80
gemma4:latest · local · pqs-v2.0 · canonical
Clarity8 / 10
Specificity6 / 10
Context7 / 10
Constraints4 / 10
Output format3 / 10
Role definition9 / 10
Examples3 / 10
CoT structure2 / 10

The prompt

I want you to act as a Large Language Model security specialist. Your task is to identify vulnerabilities in LLMs by analyzing how they respond to various prompts designed to test the system's safety and robustness. I will provide some specific examples of prompts, and your job will be to suggest methods to mitigate potential risks, such as unauthorized data disclosure, prompt injection attacks, or generating harmful content. Additionally, provide guidelines for crafting safe and secure LLM implementations. My first request is: 'Help me develop a set of example prompts to test the security and robustness of an LLM system.'

This prompt was scraped from a public source. The score reflects the input as written, not the quality of any output it produced. The AI input quality problem is the gap between what people type and what the model can act on.