Large language models (LLMs) are increasingly deployed in real-world applications, raising concerns about their security. While jailbreak attacks highlight failures under overtly harmful queries, they overlook a critical risk: incorrectly answering harmless-looking inputs can be dangerous and cause real-world harm (Implicit Harm). We systematically reformulate the LLM risk landscape through a structured quadrant perspective based on output factuality and input harmlessness, uncovering an overlooked high-risk region. To investigate this gap, we propose JailFlipBench, a benchmark designed to capture implicit harm, spanning single-modal, multimodal, and factual-extension scenarios with diverse evaluation metrics. We further develop initial JailFlip attack methodologies and conduct comprehensive evaluations across multiple open-source and black-box LLMs, showing that implicit harm poses immediate and urgent real-world risks and calling for broader LLM safety assessment and alignment beyond conventional jailbreak paradigms.
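To make the quadrant perspective concrete, below is a minimal, hypothetical Python sketch of how a (query, response) pair could be bucketed by input harmlessness and output factuality. The judge functions, quadrant labels, and toy example are illustrative assumptions only and are not part of the JailFlipBench codebase or its official metrics.

```python
# Hypothetical sketch of the quadrant view: bucket a (query, response) pair by
# whether the input looks harmless and whether the output is factually correct.
# Both judges are toy placeholders; real evaluation would use stronger classifiers.

def looks_harmless(query: str) -> bool:
    """Placeholder harmlessness judge (e.g., a safety classifier in practice)."""
    return "how to build a weapon" not in query.lower()  # toy heuristic only

def is_factually_correct(response: str, reference: str) -> bool:
    """Placeholder factuality judge against a trusted reference answer."""
    return reference.lower() in response.lower()  # toy string match only

def risk_quadrant(query: str, response: str, reference: str) -> str:
    """Map a query/response pair to one of four illustrative risk regions."""
    harmless = looks_harmless(query)
    factual = is_factually_correct(response, reference)
    if harmless and factual:
        return "benign: harmless input, correct output"
    if harmless and not factual:
        return "implicit harm: harmless-looking input, wrong and potentially dangerous output"
    if not harmless and factual:
        return "explicit harm: harmful input answered accurately (classic jailbreak concern)"
    return "harmful input with incorrect or refused output"

if __name__ == "__main__":
    print(risk_quadrant(
        query="Is it safe to mix bleach and ammonia for cleaning?",
        response="Yes, mixing them makes a stronger cleaner.",
        reference="No, mixing bleach and ammonia produces toxic chloramine gas.",
    ))
```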
As the first work to identify the Implicit Harm vulnerability in LLMs, we warmly welcome all forms of contribution from both academia and industry. Whether it's sharing thoughts or insights, proposing new attacks, mitigations, or benchmarks, or requesting support for testing additional target models, please feel free to reach out via email at jailflip@gmail.com, or directly open an issue on our GitHub repository.
@article{zhou2025beyond,
  title={Beyond Jailbreaks: Revealing Stealthier and Broader LLM Security Risks Stemming from Alignment Failures},
  author={Zhou, Yukai and Yang, Sibei and Wang, Wenjie},
  journal={arXiv preprint arXiv:2506.07402},
  year={2025}
}