
When using Gemini AI for translation or speech recognition tasks, you may encounter errors such as "Response content was flagged."

This happens because Gemini applies safety restrictions to the content it processes. Although the code allows some adjustment, even when the most lenient "Block None" setting is applied, the final filtering decision rests with Gemini's own overall evaluation.

The Gemini API's adjustable safety filters cover the following categories; filtering of content outside these categories cannot be adjusted through code:

| Category | Description |
| --- | --- |
| Harassment | Negative or harmful comments targeting identity and/or protected attributes. |
| Hate Speech | Rude, disrespectful, or profane content. |
| Sexually Explicit | Contains references to sexual acts or other obscene content. |
| Dangerous Content | Promotes, facilitates, or encourages harmful behavior. |
| Civic Integrity | Election-related queries. |
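
For reference, these categories correspond to HarmCategory enum members in the Python SDK. The sketch below lists them; note that the Civic Integrity member is only present in newer SDK versions, so its availability is an assumption about your installed version:

```python
from google.generativeai.types import HarmCategory

# Categories that accept a per-request threshold:
ADJUSTABLE_CATEGORIES = [
    HarmCategory.HARM_CATEGORY_HARASSMENT,         # Harassment
    HarmCategory.HARM_CATEGORY_HATE_SPEECH,        # Hate Speech
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,  # Sexually Explicit
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,  # Dangerous Content
]
# HARM_CATEGORY_CIVIC_INTEGRITY (Civic Integrity) exists only in newer
# SDK/API versions; check your installed version before relying on it.
```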

The table below describes the blocking settings available in the code for each category.

For example, if you set the blocking setting for the Hate Speech category to Block Few, the system blocks everything with a high probability of being hate speech, while anything with a lower probability of being hate speech is allowed.

| Threshold (Google AI Studio) | Threshold (API) | Description |
| --- | --- | --- |
| Block None | BLOCK_NONE | Always shown, regardless of the probability of unsafe content. |
| Block Few | BLOCK_ONLY_HIGH | Blocks when the probability of unsafe content is high. |
| Block Some | BLOCK_MEDIUM_AND_ABOVE | Blocks when the probability of unsafe content is medium or high. |
| Block Most | BLOCK_LOW_AND_ABOVE | Blocks when the probability of unsafe content is low, medium, or high. |
| N/A | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold not specified; the default threshold is used. |
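
To make the earlier Hate Speech example concrete, here is a minimal sketch (using the same google.generativeai SDK as the code below) that sets only that category to Block Few:

```python
from google.generativeai.types import HarmCategory, HarmBlockThreshold

# "Block Few" in Google AI Studio corresponds to BLOCK_ONLY_HIGH in the API:
# only parts with a HIGH probability of hate speech are blocked.
safety_settings = [
    {
        "category": HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        "threshold": HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
]
```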

You can enable BLOCK_NONE in the code with the following settings:

```python
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # replace with your own key

# Set every adjustable category to the most lenient threshold.
safety_settings = [
    {
        "category": HarmCategory.HARM_CATEGORY_HARASSMENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
]

message = "Text to translate or transcribe goes here."

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content(
    message,
    safety_settings=safety_settings,
)
```

However, note that even with every category set to BLOCK_NONE, Gemini will not necessarily allow the content: it still assesses safety from the overall context and may filter the response anyway.
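
When a request is filtered despite these settings, you can inspect the returned object to see what triggered the block. A minimal sketch, assuming response is the object returned by generate_content above:

```python
# If the prompt itself was blocked, prompt_feedback carries the reason.
if response.prompt_feedback.block_reason:
    print("Prompt blocked:", response.prompt_feedback.block_reason)

# If a candidate was stopped by the safety filter, its finish_reason is
# SAFETY and its safety_ratings give the per-category probabilities.
for candidate in response.candidates:
    print("finish_reason:", candidate.finish_reason)
    for rating in candidate.safety_ratings:
        print(rating.category, rating.probability)
```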

How to Reduce the Probability of Triggering Safety Filtering?

Generally, the flash series has more safety restrictions, while the pro and thinking series models are relatively less restricted, so you can try switching models. In addition, when content may be sensitive, sending less text per request and shortening the context can also reduce how often safety filtering triggers; see the sketch below.
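
As a sketch of the chunking idea (the helper name, prompt wording, and chunk size below are all hypothetical; adapt them to your task):

```python
def translate_in_chunks(model, text, safety_settings=None, chunk_size=1000):
    """Send short chunks so one flagged passage doesn't block everything."""
    parts = []
    for start in range(0, len(text), chunk_size):
        chunk = text[start:start + chunk_size]
        response = model.generate_content(
            "Translate the following into English:\n" + chunk,
            safety_settings=safety_settings,
        )
        # response.text raises if this chunk was still blocked; a real
        # implementation would catch that and retry or skip the chunk.
        parts.append(response.text)
    return "".join(parts)
```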

How to Completely Disable Gemini's Safety Checks and Allow All of the Above Content?

Bind an international credit card and switch to a paid premium account.