When using Gemini AI for translation or speech recognition tasks, you may encounter errors such as "Response content was flagged."
This happens because Gemini applies safety restrictions to the content it processes. The code allows some adjustment, but even with the most lenient setting, Block None, the final decision on whether to filter rests with Gemini's own comprehensive evaluation.
The Gemini API's adjustable safety filters cover the following categories; content outside these categories cannot be adjusted through code:
| Category | Description |
|---|---|
| Harassment | Negative or harmful comments targeting identity and/or protected attributes. |
| Hate Speech | Rude, disrespectful, or profane content. |
| Sexually Explicit | References to sexual acts or other obscene content. |
| Dangerous Content | Promotes, facilitates, or encourages harmful acts. |
| Civic Integrity | Election-related queries. |
The table below describes the blocking settings available in the code for each category.
For example, if you set the Hate Speech category to Block Few, the system blocks any part with a high probability of containing hate speech, while parts with a medium or low probability of containing hate speech are allowed (see the sketch after the table below).
| Threshold (Google AI Studio) | Threshold (API) | Description |
|---|---|---|
| Block None | BLOCK_NONE | Always show, regardless of the probability of unsafe content. |
| Block Few | BLOCK_ONLY_HIGH | Block when the probability of unsafe content is high. |
| Block Some | BLOCK_MEDIUM_AND_ABOVE | Block when the probability of unsafe content is medium or high. |
| Block Most | BLOCK_LOW_AND_ABOVE | Block when the probability of unsafe content is low, medium, or high. |
| N/A | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold not specified; block using the default threshold. |
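The Block Few setting from the hate-speech example above maps to BLOCK_ONLY_HIGH in the API. A minimal sketch of a single-category setting, using the google-generativeai Python SDK:

```python
from google.generativeai.types import HarmCategory, HarmBlockThreshold

# Block Few: only parts rated highly likely to be hate speech are blocked;
# medium- and low-probability parts are allowed through.
block_few_hate_speech = [
    {
        "category": HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        "threshold": HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
]
```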
You can enable BLOCK_NONE in the code with the following settings:
```python
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")

# Set every adjustable category to the most lenient threshold.
safetySettings = [
    {
        "category": HarmCategory.HARM_CATEGORY_HARASSMENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
]

model = genai.GenerativeModel('gemini-2.0-flash-exp')
response = model.generate_content(
    message,  # the text to translate or transcribe
    safety_settings=safetySettings,
)
```
However, note that even with every category set to BLOCK_NONE, Gemini is not guaranteed to allow the content: it still evaluates safety from the context and may filter the response anyway.
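When a response is filtered, you can inspect the response object to see why. A minimal sketch, reusing `response` from the example above; `block_reason`, `finish_reason`, and `safety_ratings` are fields exposed by the google-generativeai SDK:

```python
# If the prompt itself was blocked, block_reason is set.
if response.prompt_feedback.block_reason:
    print("Prompt blocked:", response.prompt_feedback.block_reason)

# If generation stopped because of safety filtering, the candidate's
# finish_reason and per-category safety ratings show which filter fired.
for candidate in response.candidates:
    print("Finish reason:", candidate.finish_reason)
    for rating in candidate.safety_ratings:
        print(rating.category, rating.probability)
```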
How to Reduce the Probability of Safety Restrictions?
Generally, the flash-series models have more safety restrictions, while the pro and thinking series have relatively fewer, so you can try switching to a different model. In addition, when the content may be sensitive, sending less content per request and shortening the context can also reduce how often safety filtering triggers, as in the sketch below.
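One way to shorten the context is to split long input into chunks and translate each chunk separately. A minimal sketch; `chunk_text`, the 1000-character limit, and `long_text` are illustrative assumptions, not part of the Gemini API:

```python
def chunk_text(text: str, chunk_size: int = 1000) -> list[str]:
    """Split text into chunks of roughly chunk_size characters,
    breaking on line boundaries so sentences stay intact."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > chunk_size:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

# Translate each chunk independently to keep every request short.
translations = [
    model.generate_content(
        "Translate to English:\n" + chunk,
        safety_settings=safetySettings,
    ).text
    for chunk in chunk_text(long_text)
]
```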
How to Completely Disable Gemini's Safety Checks and Allow All of the Above Content?
Link a foreign credit card and switch to a paid premium account.