
For accuracy and completeness, some content in this article is excerpted from the internet, such as Google Gemini's official documentation; copyright requirements have been followed.

When using Gemini AI to perform translation or speech recognition tasks, you may encounter errors such as "Response content has been flagged."


This happens because Gemini applies safety restrictions to the content it processes. Although the code allows some adjustment, even with the most lenient "Block None" setting the final decision on whether to filter is still made by Gemini's own comprehensive evaluation.

The Gemini API's adjustable safety filters cover the following categories; content in categories not listed here cannot be adjusted through code:

| Category | Description |
| --- | --- |
| Harassment | Negative or harmful comments targeting identity and/or protected attributes. |
| Hate Speech | Rude, disrespectful, or profane content. |
| Sexually Explicit | Contains references to sexual acts or other obscene content. |
| Dangerous Content | Promotes, facilitates, or encourages harmful acts. |
| Civic Integrity | Queries related to elections. |

The table below describes the blocking settings that can be configured in the code for each category.

For example, if you set the blocking setting for the Hate Speech category to Block Few, the system will block any part that has a high probability of containing hate speech, but it will allow any part that has only a low probability of containing hate speech.

| Threshold (Google AI Studio) | Threshold (API) | Description |
| --- | --- | --- |
| Block None | BLOCK_NONE | Always show, regardless of the likelihood of unsafe content |
| Block Few | BLOCK_ONLY_HIGH | Block when there is a high probability of unsafe content |
| Block Some | BLOCK_MEDIUM_AND_ABOVE | Block when the likelihood of unsafe content is medium or high |
| Block Most | BLOCK_LOW_AND_ABOVE | Block when the likelihood of unsafe content is low, medium, or high |
| Not Applicable | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold not specified; block using the default threshold |

The following settings can be used in the code to enable BLOCK_NONE for every adjustable category:

import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

# Disable blocking for each of the four adjustable harm categories.
safety_settings = [
    {
        "category": HarmCategory.HARM_CATEGORY_HARASSMENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
]

model = genai.GenerativeModel("gemini-2.0-flash-exp")
# message: the text to translate or transcribe (defined elsewhere).
response = model.generate_content(
    message,
    safety_settings=safety_settings,
)

However, note that even with every setting at BLOCK_NONE, Gemini will not necessarily allow the relevant content; it still infers safety from the context and filters accordingly.
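When a request is filtered, the SDK's response object usually says why: the prompt-level reason lives in `prompt_feedback.block_reason`, and a filtered output shows up as a candidate whose `finish_reason` is SAFETY. The helper below is a sketch of how you might inspect those fields; `explain_block` is a hypothetical name, not part of the SDK.

```python
def explain_block(response):
    """Return a short diagnostic for a possibly safety-filtered response."""
    # If the *prompt* itself was blocked, prompt_feedback carries the reason.
    feedback = getattr(response, "prompt_feedback", None)
    if feedback and getattr(feedback, "block_reason", None):
        return f"Prompt blocked: {feedback.block_reason}"
    # Otherwise check each candidate; a finish_reason of SAFETY means the
    # *generated output* was filtered after the fact.
    for candidate in getattr(response, "candidates", None) or []:
        reason = getattr(candidate, "finish_reason", None)
        if reason is not None and str(reason).endswith("SAFETY"):
            return "Candidate filtered for safety"
    return "Not blocked"
```

Calling this right after `model.generate_content(...)` makes it easy to log whether a failure came from the prompt filter or the output filter.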

How to reduce the probability of security restrictions?

Generally, the Flash series models have more security restrictions, while the Pro and thinking-series models have relatively fewer, so you can try switching models. In addition, when potentially sensitive content is involved, sending less content per request and reducing the context length can also lower the frequency of safety filtering to some extent.
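The "send less content at a time" advice can be sketched as a simple chunking step before the API call. `chunk_text` below is a hypothetical helper, not part of the Gemini SDK; it splits on line boundaries so each request stays small.

```python
def chunk_text(text, max_chars=1000):
    """Split text into chunks of at most max_chars, breaking on line boundaries.

    A single line longer than max_chars is kept whole rather than split mid-line.
    """
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be sent to `model.generate_content` separately and the results concatenated, instead of submitting the whole document in one request.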

How to completely disable Gemini from making security judgments and allow all of the above content?

Bind a foreign credit card and switch to a paid premium account.