promptguard.schemas¶
Data Models¶
- class promptguard.RiskScore(is_malicious, probability, risk_level, confidence, explanation, metadata=None)[source]¶
Bases:
objectResult of a single-prompt security analysis.
- class promptguard.SanitizationResult(original, sanitized, was_modified, removed_patterns, strategy, confidence, risk_reduction)[source]¶
Bases:
objectOutcome of a single prompt sanitisation operation.
- strategy: SanitizationStrategy¶
The
SanitizationStrategythat was applied.
- class promptguard.SanitizeResponse(sanitization, original_analysis, sanitized_analysis, risk_before, risk_after, risk_reduction)[source]¶
Bases:
objectTyped result returned by
PromptGuard.sanitize().- sanitization: SanitizationResult¶
Detailed sanitisation outcome.
Enumerations¶
- class promptguard.RiskLevel(*values)[source]¶
-
Categorised risk level returned by the classifier.
- LOW = 'low'¶
- MEDIUM = 'medium'¶
- HIGH = 'high'¶
- class promptguard.Intent(*values)[source]¶
-
Detected intent of the analysed prompt.
- QUESTION = 'question'¶
- INSTRUCTION = 'instruction'¶
- CONVERSATION = 'conversation'¶
- JAILBREAK = 'jailbreak'¶
- INJECTION = 'injection'¶
- UNKNOWN = 'unknown'¶