Generating content

The Gemini API supports content generation with images, audio, code, tools, and more. For details on each of these features, read on and check out the task-focused sample code, or read the comprehensive guides.

Method: models.generateContent

Generates a model response given an input GenerateContentRequest. Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the model guide and tuning guide for details.

Endpoint

POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:generateContent

Path parameters

model string

Required. The name of the Model to use for generating the completion.

Format: models/{model}
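Since malformed model names fail only at request time, it can help to validate the models/{model} resource format up front. The helper below is a hypothetical client-side check, not part of the API:

```python
import re

# Hypothetical helper: checks that a model name matches the required
# "models/{model}" resource format before a request is sent.
MODEL_NAME_RE = re.compile(r"^models/[\w.-]+$")

def is_valid_model_name(name: str) -> bool:
    return bool(MODEL_NAME_RE.match(name))

print(is_valid_model_name("models/gemini-2.0-flash"))  # True
print(is_valid_model_name("gemini-2.0-flash"))         # False: missing "models/" prefix
```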

Request body

The request body contains data with the following structure:

Fields
contents[] object (Content)

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries such as chat, this is a repeated field that contains the conversation history and the latest request.
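The JSON shape of the contents field can be sketched as plain data (field names follow the REST API; the example text is illustrative):

```python
# Multi-turn (chat): the repeated "contents" field carries the whole
# conversation history plus the latest user message, with alternating roles.
contents = [
    {"role": "user", "parts": [{"text": "Hello"}]},
    {"role": "model", "parts": [{"text": "Great to meet you. What would you like to know?"}]},
    {"role": "user", "parts": [{"text": "I have 2 dogs in my house."}]},
]

# Single-turn: just one Content instance.
single_turn = [{"role": "user", "parts": [{"text": "Write a story about a magic backpack."}]}]

print([c["role"] for c in contents])  # roles alternate: user, model, user
```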

tools[] object (Tool)

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the Model's knowledge and scope. Supported Tools are Function and codeExecution. Refer to the Function calling and Code execution guides to learn more.

toolConfig object (ToolConfig)

Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.

safetySettings[] object (SafetySetting)

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, and HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings. Also refer to the safety guidance to learn how to incorporate safety considerations into your AI applications.
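The "one setting per SafetyCategory" rule can be seen in a sketch of the safetySettings JSON (the thresholds shown are example HarmBlockThreshold values, chosen for illustration):

```python
# "safetySettings" as it appears in the request body: a list with at most
# one entry per harm category.
safety_settings = [
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
]

# A request builder might enforce uniqueness before sending, since the API
# rejects duplicate categories:
categories = [s["category"] for s in safety_settings]
assert len(categories) == len(set(categories)), "one setting per SafetyCategory"
```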

systemInstruction object (Content)

Optional. Developer-set system instruction(s). Currently, text only.

generationConfig object (GenerationConfig)

Optional. Configuration options for model generation and outputs.

cachedContent string

Optional. The name of the cached content to use as context to serve the prediction. Format: cachedContents/{cachedContent}
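Putting the optional fields together, a minimal request body can be sketched as a plain JSON document (values are illustrative, and "cachedContents/example-cache" is a hypothetical name; it must refer to a cache you have actually created):

```python
import json

# Sketch of a generateContent request body combining the fields above.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize the transcript."}]}
    ],
    "systemInstruction": {"parts": [{"text": "You are an expert analyzing transcripts."}]},
    "generationConfig": {"temperature": 0.2, "maxOutputTokens": 1024},
    "cachedContent": "cachedContents/example-cache",  # hypothetical cache name
}

print(json.dumps(request_body, indent=2))
```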

Example request

Text

Python

from google import genai

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash", contents="Write a story about a magic backpack."
)
print(response.text)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Write a story about a magic backpack.",
});
console.log(response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}
contents := []*genai.Content{
	genai.NewContentFromText("Write a story about a magic backpack.", genai.RoleUser),
}
response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[{"text": "Write a story about a magic backpack."}]
        }]
       }' 2> /dev/null

Java

Client client = new Client();

GenerateContentResponse response =
        client.models.generateContent(
                "gemini-2.0-flash",
                "Write a story about a magic backpack.",
                null);

System.out.println(response.text());

Image

Python

from google import genai
import PIL.Image

client = genai.Client()
organ = PIL.Image.open(media / "organ.jpg")
response = client.models.generate_content(
    model="gemini-2.0-flash", contents=["Tell me about this instrument", organ]
)
print(response.text)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const organ = await ai.files.upload({
  file: path.join(media, "organ.jpg"),
});

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Tell me about this instrument",
      createPartFromUri(organ.uri, organ.mimeType)
    ]),
  ],
});
console.log(response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "organ.jpg"),
	&genai.UploadFileConfig{
		MIMEType: "image/jpeg",
	},
)
if err != nil {
	log.Fatal(err)
}
parts := []*genai.Part{
	genai.NewPartFromText("Tell me about this instrument"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}
contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Shell

# Use a temporary file to hold the base64 encoded image data
TEMP_B64=$(mktemp)
trap 'rm -f "$TEMP_B64"' EXIT
base64 $B64FLAGS $IMG_PATH > "$TEMP_B64"

# Use a temporary file to hold the JSON payload
TEMP_JSON=$(mktemp)
trap 'rm -f "$TEMP_JSON"' EXIT

cat > "$TEMP_JSON" << EOF
{
  "contents": [{
    "parts":[
      {"text": "Tell me about this instrument"},
      {
        "inline_data": {
          "mime_type":"image/jpeg",
          "data": "$(cat "$TEMP_B64")"
        }
      }
    ]
  }]
}
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d "@$TEMP_JSON" 2> /dev/null

Java

Client client = new Client();

String path = media_path + "organ.jpg";
byte[] imageData = Files.readAllBytes(Paths.get(path));

Content content =
        Content.fromParts(
                Part.fromText("Tell me about this instrument."),
                Part.fromBytes(imageData, "image/jpeg"));

GenerateContentResponse response = client.models.generateContent("gemini-2.0-flash", content, null);

System.out.println(response.text());

Audio

Python

from google import genai

client = genai.Client()
sample_audio = client.files.upload(file=media / "sample.mp3")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this audio file.", sample_audio],
)
print(response.text)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const audio = await ai.files.upload({
  file: path.join(media, "sample.mp3"),
});

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Give me a summary of this audio file.",
      createPartFromUri(audio.uri, audio.mimeType),
    ]),
  ],
});
console.log(response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "sample.mp3"),
	&genai.UploadFileConfig{
		MIMEType: "audio/mpeg",
	},
)
if err != nil {
	log.Fatal(err)
}

parts := []*genai.Part{
	genai.NewPartFromText("Give me a summary of this audio file."),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Shell

# Use File API to upload audio data to API request.
MIME_TYPE=$(file -b --mime-type "${AUDIO_PATH}")
NUM_BYTES=$(wc -c < "${AUDIO_PATH}")
DISPLAY_NAME=AUDIO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload url is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${AUDIO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "audio/mpeg", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

Video

Python

from google import genai
import time

client = genai.Client()
# Video clip (CC BY 3.0) from https://peach.blender.org/download/
myfile = client.files.upload(file=media / "Big_Buck_Bunny.mp4")
print(f"{myfile=}")

# Poll until the video file is completely processed (state becomes ACTIVE).
while not myfile.state or myfile.state.name != "ACTIVE":
    print("Processing video...")
    print("File state:", myfile.state)
    time.sleep(5)
    myfile = client.files.get(name=myfile.name)

response = client.models.generate_content(
    model="gemini-2.0-flash", contents=[myfile, "Describe this video clip"]
)
print(f"{response.text=}")

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

let video = await ai.files.upload({
  file: path.join(media, 'Big_Buck_Bunny.mp4'),
});

// Poll until the video file is completely processed (state becomes ACTIVE).
while (!video.state || video.state.toString() !== 'ACTIVE') {
  console.log('Processing video...');
  console.log('File state: ', video.state);
  await sleep(5000);
  video = await ai.files.get({name: video.name});
}

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Describe this video clip",
      createPartFromUri(video.uri, video.mimeType),
    ]),
  ],
});
console.log(response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "Big_Buck_Bunny.mp4"),
	&genai.UploadFileConfig{
		MIMEType: "video/mp4",
	},
)
if err != nil {
	log.Fatal(err)
}

// Poll until the video file is completely processed (state becomes ACTIVE).
for file.State == genai.FileStateUnspecified || file.State != genai.FileStateActive {
	fmt.Println("Processing video...")
	fmt.Println("File state:", file.State)
	time.Sleep(5 * time.Second)

	file, err = client.Files.Get(ctx, file.Name, nil)
	if err != nil {
		log.Fatal(err)
	}
}

parts := []*genai.Part{
	genai.NewPartFromText("Describe this video clip"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Shell

# Use File API to upload video data to API request.
MIME_TYPE=$(file -b --mime-type "${VIDEO_PATH}")
NUM_BYTES=$(wc -c < "${VIDEO_PATH}")
DISPLAY_NAME=VIDEO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload url is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D "${tmp_header_file}" \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${VIDEO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

state=$(jq ".file.state" file_info.json)
echo state=$state

name=$(jq ".file.name" file_info.json)
echo name=$name

while [[ "$state" == *"PROCESSING"* ]]; do
  echo "Processing video..."
  sleep 5
  # Get the file of interest to check state
  curl https://generativelanguage.googleapis.com/v1beta/files/$name > file_info.json
  state=$(jq ".file.state" file_info.json)
done

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Transcribe the audio from this video, giving timestamps for salient events in the video. Also provide visual descriptions."},
          {"file_data":{"mime_type": "video/mp4", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

PDF

Python

from google import genai

client = genai.Client()
sample_pdf = client.files.upload(file=media / "test.pdf")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this document:", sample_pdf],
)
print(f"{response.text=}")

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "test.pdf"),
	&genai.UploadFileConfig{
		MIMEType: "application/pdf",
	},
)
if err != nil {
	log.Fatal(err)
}

parts := []*genai.Part{
	genai.NewPartFromText("Give me a summary of this document:"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, nil)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Shell

MIME_TYPE=$(file -b --mime-type "${PDF_PATH}")
NUM_BYTES=$(wc -c < "${PDF_PATH}")
DISPLAY_NAME=TEXT

echo $MIME_TYPE
tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload url is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D upload-header.tmp \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${PDF_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Now generate content using that file
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Can you add a few more lines to this poem?"},
          {"file_data":{"mime_type": "application/pdf", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

jq ".candidates[].content.parts[].text" response.json

Chat

Python

from google import genai
from google.genai import types

client = genai.Client()
# Pass initial history using the "history" argument
chat = client.chats.create(
    model="gemini-2.0-flash",
    history=[
        types.Content(role="user", parts=[types.Part(text="Hello")]),
        types.Content(
            role="model",
            parts=[
                types.Part(
                    text="Great to meet you. What would you like to know?"
                )
            ],
        ),
    ],
)
response = chat.send_message(message="I have 2 dogs in my house.")
print(response.text)
response = chat.send_message(message="How many paws are in my house?")
print(response.text)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const chat = ai.chats.create({
  model: "gemini-2.0-flash",
  history: [
    {
      role: "user",
      parts: [{ text: "Hello" }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. What would you like to know?" }],
    },
  ],
});

const response1 = await chat.sendMessage({
  message: "I have 2 dogs in my house.",
});
console.log("Chat response 1:", response1.text);

const response2 = await chat.sendMessage({
  message: "How many paws are in my house?",
});
console.log("Chat response 2:", response2.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

// Pass initial history using the History field.
history := []*genai.Content{
	genai.NewContentFromText("Hello", genai.RoleUser),
	genai.NewContentFromText("Great to meet you. What would you like to know?", genai.RoleModel),
}

chat, err := client.Chats.Create(ctx, "gemini-2.0-flash", nil, history)
if err != nil {
	log.Fatal(err)
}

firstResp, err := chat.SendMessage(ctx, genai.Part{Text: "I have 2 dogs in my house."})
if err != nil {
	log.Fatal(err)
}
fmt.Println(firstResp.Text())

secondResp, err := chat.SendMessage(ctx, genai.Part{Text: "How many paws are in my house?"})
if err != nil {
	log.Fatal(err)
}
fmt.Println(secondResp.Text())

Shell

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Hello"}]},
        {"role": "model",
         "parts":[{
           "text": "Great to meet you. What would you like to know?"}]},
        {"role":"user",
         "parts":[{
           "text": "I have two dogs in my house. How many paws are in my house?"}]}
      ]
    }' 2> /dev/null | grep "text"

Java

Client client = new Client();

Content userContent = Content.fromParts(Part.fromText("Hello"));
Content modelContent =
        Content.builder()
                .role("model")
                .parts(
                        Collections.singletonList(
                                Part.fromText("Great to meet you. What would you like to know?")
                        )
                ).build();

Chat chat = client.chats.create(
        "gemini-2.0-flash",
        GenerateContentConfig.builder()
                .systemInstruction(userContent)
                .systemInstruction(modelContent)
                .build()
);

GenerateContentResponse response1 = chat.sendMessage("I have 2 dogs in my house.");
System.out.println(response1.text());

GenerateContentResponse response2 = chat.sendMessage("How many paws are in my house?");
System.out.println(response2.text());

Caching

Python

from google import genai
from google.genai import types

client = genai.Client()
document = client.files.upload(file=media / "a11.txt")
model_name = "gemini-1.5-flash-001"

cache = client.caches.create(
    model=model_name,
    config=types.CreateCachedContentConfig(
        contents=[document],
        system_instruction="You are an expert analyzing transcripts.",
    ),
)
print(cache)

response = client.models.generate_content(
    model=model_name,
    contents="Please summarize this transcript",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const filePath = path.join(media, "a11.txt");
const document = await ai.files.upload({
  file: filePath,
  config: { mimeType: "text/plain" },
});
console.log("Uploaded file name:", document.name);
const modelName = "gemini-1.5-flash-001";

const contents = [
  createUserContent(createPartFromUri(document.uri, document.mimeType)),
];

const cache = await ai.caches.create({
  model: modelName,
  config: {
    contents: contents,
    systemInstruction: "You are an expert analyzing transcripts.",
  },
});
console.log("Cache created:", cache);

const response = await ai.models.generateContent({
  model: modelName,
  contents: "Please summarize this transcript",
  config: { cachedContent: cache.name },
});
console.log("Response text:", response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

modelName := "gemini-1.5-flash-001"
document, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "a11.txt"),
	&genai.UploadFileConfig{
		MIMEType: "text/plain",
	},
)
if err != nil {
	log.Fatal(err)
}
parts := []*genai.Part{
	genai.NewPartFromURI(document.URI, document.MIMEType),
}
contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}
cache, err := client.Caches.Create(ctx, modelName, &genai.CreateCachedContentConfig{
	Contents: contents,
	SystemInstruction: genai.NewContentFromText(
		"You are an expert analyzing transcripts.", genai.RoleUser,
	),
})
if err != nil {
	log.Fatal(err)
}
fmt.Println("Cache created:")
fmt.Println(cache)

// Use the cache for generating content.
response, err := client.Models.GenerateContent(
	ctx,
	modelName,
	genai.Text("Please summarize this transcript"),
	&genai.GenerateContentConfig{
		CachedContent: cache.Name,
	},
)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Tuned model

Python

# With Gemini 2 we're launching a new SDK. See the following doc for details.
# https://ai.google.dev/gemini-api/docs/migrate

JSON mode

Python

from google import genai
from google.genai import types
from typing_extensions import TypedDict

class Recipe(TypedDict):
    recipe_name: str
    ingredients: list[str]

client = genai.Client()
result = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="List a few popular cookie recipes.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json", response_schema=list[Recipe]
    ),
)
print(result)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "List a few popular cookie recipes.",
  config: {
    responseMimeType: "application/json",
    responseSchema: {
      type: "array",
      items: {
        type: "object",
        properties: {
          recipeName: { type: "string" },
          ingredients: { type: "array", items: { type: "string" } },
        },
        required: ["recipeName", "ingredients"],
      },
    },
  },
});
console.log(response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

schema := &genai.Schema{
	Type: genai.TypeArray,
	Items: &genai.Schema{
		Type: genai.TypeObject,
		Properties: map[string]*genai.Schema{
			"recipe_name": {Type: genai.TypeString},
			"ingredients": {
				Type:  genai.TypeArray,
				Items: &genai.Schema{Type: genai.TypeString},
			},
		},
		Required: []string{"recipe_name"},
	},
}

config := &genai.GenerateContentConfig{
	ResponseMIMEType: "application/json",
	ResponseSchema:   schema,
}

response, err := client.Models.GenerateContent(
	ctx,
	"gemini-2.0-flash",
	genai.Text("List a few popular cookie recipes."),
	config,
)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
    "contents": [{
      "parts":[
        {"text": "List 5 popular cookie recipes"}
        ]
    }],
    "generationConfig": {
        "response_mime_type": "application/json",
        "response_schema": {
          "type": "ARRAY",
          "items": {
            "type": "OBJECT",
            "properties": {
              "recipe_name": {"type":"STRING"}
            }
          }
        }
    }
}' 2> /dev/null | head

Java

Client client = new Client();

Schema recipeSchema = Schema.builder()
        .type(Array.class.getSimpleName())
        .items(Schema.builder()
                .type(Object.class.getSimpleName())
                .properties(
                        Map.of("recipe_name", Schema.builder()
                                        .type(String.class.getSimpleName())
                                        .build(),
                                "ingredients", Schema.builder()
                                        .type(Array.class.getSimpleName())
                                        .items(Schema.builder()
                                                .type(String.class.getSimpleName())
                                                .build())
                                        .build())
                )
                .required(List.of("recipe_name", "ingredients"))
                .build())
        .build();

GenerateContentConfig config =
        GenerateContentConfig.builder()
                .responseMimeType("application/json")
                .responseSchema(recipeSchema)
                .build();

GenerateContentResponse response =
        client.models.generateContent(
                "gemini-2.0-flash",
                "List a few popular cookie recipes.",
                config);

System.out.println(response.text());

Code execution

Python

from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",
    contents=(
        "Write and execute code that calculates the sum of the first 50 prime numbers. "
        "Ensure that only the executable code and its resulting output are generated."
    ),
)
# Each part may contain text, executable code, or an execution result.
for part in response.candidates[0].content.parts:
    print(part, "\n")

print("-" * 80)
# The .text accessor concatenates the parts into a markdown-formatted text.
print("\n", response.text)

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

response, err := client.Models.GenerateContent(
	ctx,
	"gemini-2.0-pro-exp-02-05",
	genai.Text(
		`Write and execute code that calculates the sum of the first 50 prime numbers.
		 Ensure that only the executable code and its resulting output are generated.`,
	),
	&genai.GenerateContentConfig{},
)
if err != nil {
	log.Fatal(err)
}

// Print the response.
printResponse(response)

fmt.Println("--------------------------------------------------------------------------------")
fmt.Println(response.Text())

Java

Client client = new Client();

String prompt = """
        Write and execute code that calculates the sum of the first 50 prime numbers.
        Ensure that only the executable code and its resulting output are generated.
        """;

GenerateContentResponse response =
        client.models.generateContent(
                "gemini-2.0-pro-exp-02-05",
                prompt,
                null);

for (Part part : response.candidates().get().getFirst().content().get().parts().get()) {
    System.out.println(part + "\n");
}

System.out.println("-".repeat(80));
System.out.println(response.text());

Function calling

Python

from google import genai
from google.genai import types

client = genai.Client()

def add(a: float, b: float) -> float:
    """returns a + b."""
    return a + b

def subtract(a: float, b: float) -> float:
    """returns a - b."""
    return a - b

def multiply(a: float, b: float) -> float:
    """returns a * b."""
    return a * b

def divide(a: float, b: float) -> float:
    """returns a / b."""
    return a / b

# Create a chat session; function calling (via tools) is enabled in the config.
chat = client.chats.create(
    model="gemini-2.0-flash",
    config=types.GenerateContentConfig(tools=[add, subtract, multiply, divide]),
)
response = chat.send_message(
    message="I have 57 cats, each owns 44 mittens, how many mittens is that in total?"
)
print(response.text)

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}
modelName := "gemini-2.0-flash"

// Create the function declarations for arithmetic operations.
addDeclaration := createArithmeticToolDeclaration("addNumbers", "Return the result of adding two numbers.")
subtractDeclaration := createArithmeticToolDeclaration("subtractNumbers", "Return the result of subtracting the second number from the first.")
multiplyDeclaration := createArithmeticToolDeclaration("multiplyNumbers", "Return the product of two numbers.")
divideDeclaration := createArithmeticToolDeclaration("divideNumbers", "Return the quotient of dividing the first number by the second.")

// Group the function declarations as a tool.
tools := []*genai.Tool{
	{
		FunctionDeclarations: []*genai.FunctionDeclaration{
			addDeclaration,
			subtractDeclaration,
			multiplyDeclaration,
			divideDeclaration,
		},
	},
}

// Create the content prompt.
contents := []*genai.Content{
	genai.NewContentFromText(
		"I have 57 cats, each owns 44 mittens, how many mittens is that in total?", genai.RoleUser,
	),
}

// Set up the generate content configuration with function calling enabled.
config := &genai.GenerateContentConfig{
	Tools: tools,
	ToolConfig: &genai.ToolConfig{
		FunctionCallingConfig: &genai.FunctionCallingConfig{
			// The mode equivalent to FunctionCallingConfigMode.ANY in JS.
			Mode: genai.FunctionCallingConfigModeAny,
		},
	},
}

genContentResp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
if err != nil {
	log.Fatal(err)
}

// Assume the response includes a list of function calls.
if len(genContentResp.FunctionCalls()) == 0 {
	log.Println("No function call returned from the AI.")
	return nil
}
functionCall := genContentResp.FunctionCalls()[0]
log.Printf("Function call: %+v\n", functionCall)

// Marshal the Args map into JSON bytes.
argsMap, err := json.Marshal(functionCall.Args)
if err != nil {
	log.Fatal(err)
}

// Unmarshal the JSON bytes into the ArithmeticArgs struct.
var args ArithmeticArgs
if err := json.Unmarshal(argsMap, &args); err != nil {
	log.Fatal(err)
}

// Map the function name to the actual arithmetic function.
var result float64
switch functionCall.Name {
case "addNumbers":
	result = add(args.FirstParam, args.SecondParam)
case "subtractNumbers":
	result = subtract(args.FirstParam, args.SecondParam)
case "multiplyNumbers":
	result = multiply(args.FirstParam, args.SecondParam)
case "divideNumbers":
	result = divide(args.FirstParam, args.SecondParam)
default:
	return fmt.Errorf("unimplemented function: %s", functionCall.Name)
}
log.Printf("Function result: %v\n", result)

// Prepare the final result message as content.
resultContents := []*genai.Content{
	genai.NewContentFromText("The final result is " + fmt.Sprintf("%v", result), genai.RoleUser),
}

// Use GenerateContent to send the final result.
finalResponse, err := client.Models.GenerateContent(ctx, modelName, resultContents, &genai.GenerateContentConfig{})
if err != nil {
	log.Fatal(err)
}

printResponse(finalResponse)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI, FunctionCallingConfigMode} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

/**
 * The add function returns the sum of two numbers.
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

/**
 * The subtract function returns the difference (a - b).
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function subtract(a, b) {
  return a - b;
}

/**
 * The multiply function returns the product of two numbers.
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function multiply(a, b) {
  return a * b;
}

/**
 * The divide function returns the quotient of a divided by b.
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function divide(a, b) {
  return a / b;
}

const addDeclaration = {
  name: "addNumbers",
  parameters: {
    type: "object",
    description: "Return the result of adding two numbers.",
    properties: {
      firstParam: {
        type: "number",
        description:
          "The first parameter which can be an integer or a floating point number.",
      },
      secondParam: {
        type: "number",
        description:
          "The second parameter which can be an integer or a floating point number.",
      },
    },
    required: ["firstParam", "secondParam"],
  },
};

const subtractDeclaration = {
  name: "subtractNumbers",
  parameters: {
    type: "object",
    description:
      "Return the result of subtracting the second number from the first.",
    properties: {
      firstParam: { type: "number", description: "The first parameter." },
      secondParam: { type: "number", description: "The second parameter." },
    },
    required: ["firstParam", "secondParam"],
  },
};

const multiplyDeclaration = {
  name: "multiplyNumbers",
  parameters: {
    type: "object",
    description: "Return the product of two numbers.",
    properties: {
      firstParam: { type: "number", description: "The first parameter." },
      secondParam: { type: "number", description: "The second parameter." },
    },
    required: ["firstParam", "secondParam"],
  },
};

const divideDeclaration = {
  name: "divideNumbers",
  parameters: {
    type: "object",
    description:
      "Return the quotient of dividing the first number by the second.",
    properties: {
      firstParam: { type: "number", description: "The first parameter." },
      secondParam: { type: "number", description: "The second parameter." },
    },
    required: ["firstParam", "secondParam"],
  },
};

// Step 1: Call generateContent with function calling enabled.
const generateContentResponse = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents:
    "I have 57 cats, each owns 44 mittens, how many mittens is that in total?",
  config: {
    toolConfig: {
      functionCallingConfig: {
        mode: FunctionCallingConfigMode.ANY,
      },
    },
    tools: [
      {
        functionDeclarations: [
          addDeclaration,
          subtractDeclaration,
          multiplyDeclaration,
          divideDeclaration,
        ],
      },
    ],
  },
});

// Step 2: Extract the function call.
// Assuming the response contains a 'functionCalls' array.
const functionCall =
  generateContentResponse.functionCalls &&
  generateContentResponse.functionCalls[0];
console.log(functionCall);

// Parse the arguments.
const args = functionCall.args;
// Expected args format: { firstParam: number, secondParam: number }

// Step 3: Invoke the actual function based on the function name.
const functionMapping = {
  addNumbers: add,
  subtractNumbers: subtract,
  multiplyNumbers: multiply,
  divideNumbers: divide,
};
const func = functionMapping[functionCall.name];
if (!func) {
  console.error("Unimplemented error:", functionCall.name);
  return generateContentResponse;
}
const resultValue = func(args.firstParam, args.secondParam);
console.log("Function result:", resultValue);

// Step 4: Use the chat API to send the result as the final answer.
const chat = ai.chats.create({ model: "gemini-2.0-flash" });
const chatResponse = await chat.sendMessage({
  message: "The final result is " + resultValue,
});
console.log(chatResponse.text);
return chatResponse;

Shell

cat > tools.json << EOF
{
  "function_declarations": [
    {
      "name": "enable_lights",
      "description": "Turn on the lighting system."
    },
    {
      "name": "set_light_color",
      "description": "Set the light color. Lights must be enabled for this to work.",
      "parameters": {
        "type": "object",
        "properties": {
          "rgb_hex": {
            "type": "string",
            "description": "The light color as a 6-digit hex string, e.g. ff0000 for red."
          }
        },
        "required": [
          "rgb_hex"
        ]
      }
    },
    {
      "name": "stop_lights",
      "description": "Turn off the lighting system."
    }
  ]
}
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -d @<(echo '
  {
    "system_instruction": {
      "parts": {
        "text": "You are a helpful lighting system bot. You can turn lights on and off, and you can set the color. Do not perform any other tasks."
      }
    },
    "tools": ['$(cat tools.json)'],

    "tool_config": {
      "function_calling_config": {"mode": "AUTO"}
    },

    "contents": {
      "role": "user",
      "parts": {
        "text": "Turn on the lights please."
      }
    }
  }
') 2>/dev/null | sed -n '/"content"/,/"finishReason"/p'

Java

Client client = new Client();

FunctionDeclaration addFunction =
        FunctionDeclaration.builder()
                .name("addNumbers")
                .parameters(
                        Schema.builder()
                                .type("object")
                                .properties(Map.of(
                                        "firstParam", Schema.builder().type("number").description("First number").build(),
                                        "secondParam", Schema.builder().type("number").description("Second number").build()))
                                .required(Arrays.asList("firstParam", "secondParam"))
                                .build())
                .build();

FunctionDeclaration subtractFunction =
        FunctionDeclaration.builder()
                .name("subtractNumbers")
                .parameters(
                        Schema.builder()
                                .type("object")
                                .properties(Map.of(
                                        "firstParam", Schema.builder().type("number").description("First number").build(),
                                        "secondParam", Schema.builder().type("number").description("Second number").build()))
                                .required(Arrays.asList("firstParam", "secondParam"))
                                .build())
                .build();

FunctionDeclaration multiplyFunction =
        FunctionDeclaration.builder()
                .name("multiplyNumbers")
                .parameters(
                        Schema.builder()
                                .type("object")
                                .properties(Map.of(
                                        "firstParam", Schema.builder().type("number").description("First number").build(),
                                        "secondParam", Schema.builder().type("number").description("Second number").build()))
                                .required(Arrays.asList("firstParam", "secondParam"))
                                .build())
                .build();

FunctionDeclaration divideFunction =
        FunctionDeclaration.builder()
                .name("divideNumbers")
                .parameters(
                        Schema.builder()
                                .type("object")
                                .properties(Map.of(
                                        "firstParam", Schema.builder().type("number").description("First number").build(),
                                        "secondParam", Schema.builder().type("number").description("Second number").build()))
                                .required(Arrays.asList("firstParam", "secondParam"))
                                .build())
                .build();

GenerateContentConfig config = GenerateContentConfig.builder()
        .toolConfig(ToolConfig.builder().functionCallingConfig(
                FunctionCallingConfig.builder().mode("ANY").build()
        ).build())
        .tools(
                Collections.singletonList(
                        Tool.builder().functionDeclarations(
                                Arrays.asList(
                                        addFunction,
                                        subtractFunction,
                                        divideFunction,
                                        multiplyFunction
                                )
                        ).build()
                )
        )
        .build();

GenerateContentResponse response =
        client.models.generateContent(
                "gemini-2.0-flash",
                "I have 57 cats, each owns 44 mittens, how many mittens is that in total?",
                config);

if (response.functionCalls() == null || response.functionCalls().isEmpty()) {
    System.err.println("No function call received");
    return null;
}

var functionCall = response.functionCalls().getFirst();
String functionName = functionCall.name().get();
var arguments = functionCall.args();

Map<String, BiFunction<Double, Double, Double>> functionMapping = new HashMap<>();
functionMapping.put("addNumbers", (a, b) -> a + b);
functionMapping.put("subtractNumbers", (a, b) -> a - b);
functionMapping.put("multiplyNumbers", (a, b) -> a * b);
functionMapping.put("divideNumbers", (a, b) -> b != 0 ? a / b : Double.NaN);

BiFunction<Double, Double, Double> function = functionMapping.get(functionName);

Number firstParam = (Number) arguments.get().get("firstParam");
Number secondParam = (Number) arguments.get().get("secondParam");
Double result = function.apply(firstParam.doubleValue(), secondParam.doubleValue());

System.out.println(result);

Generation config

Python

from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Tell me a story about a magic backpack.",
    config=types.GenerateContentConfig(
        candidate_count=1,
        stop_sequences=["x"],
        max_output_tokens=20,
        temperature=1.0,
    ),
)
print(response.text)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Tell me a story about a magic backpack.",
  config: {
    candidateCount: 1,
    stopSequences: ["x"],
    maxOutputTokens: 20,
    temperature: 1.0,
  },
});

console.log(response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

// Create local variables for parameters.
candidateCount := int32(1)
maxOutputTokens := int32(20)
temperature := float32(1.0)

response, err := client.Models.GenerateContent(
	ctx,
	"gemini-2.0-flash",
	genai.Text("Tell me a story about a magic backpack."),
	&genai.GenerateContentConfig{
		CandidateCount:  candidateCount,
		StopSequences:   []string{"x"},
		MaxOutputTokens: maxOutputTokens,
		Temperature:     &temperature,
	},
)
if err != nil {
	log.Fatal(err)
}

printResponse(response)

Shell

curl https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
        "contents": [{
            "parts": [
                {"text": "Explain how AI works"}
            ]
        }],
        "generationConfig": {
            "stopSequences": [
                "Title"
            ],
            "temperature": 1.0,
            "maxOutputTokens": 800,
            "topP": 0.8,
            "topK": 10
        }
    }' 2> /dev/null | grep "text"

Java

Client client = new Client();

GenerateContentConfig config =
        GenerateContentConfig.builder()
                .candidateCount(1)
                .stopSequences(List.of("x"))
                .maxOutputTokens(20)
                .temperature(1.0F)
                .build();

GenerateContentResponse response =
        client.models.generateContent(
                "gemini-2.0-flash",
                "Tell me a story about a magic backpack.",
                config);

System.out.println(response.text());

Safety settings

Python

from google import genai
from google.genai import types

client = genai.Client()
unsafe_prompt = (
    "I support Martians Soccer Club and I think Jupiterians Football Club sucks! "
    "Write an ironic phrase about them including expletives."
)
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=unsafe_prompt,
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category="HARM_CATEGORY_HATE_SPEECH",
                threshold="BLOCK_MEDIUM_AND_ABOVE",
            ),
            types.SafetySetting(
                category="HARM_CATEGORY_HARASSMENT", threshold="BLOCK_ONLY_HIGH"
            ),
        ]
    ),
)
try:
    print(response.text)
except Exception:
    print("No information generated by the model.")

print(response.candidates[0].safety_ratings)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const unsafePrompt =
  "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them including expletives.";

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: unsafePrompt,
  config: {
    safetySettings: [
      {
        category: "HARM_CATEGORY_HATE_SPEECH",
        threshold: "BLOCK_MEDIUM_AND_ABOVE",
      },
      {
        category: "HARM_CATEGORY_HARASSMENT",
        threshold: "BLOCK_ONLY_HIGH",
      },
    ],
  },
});

try {
  console.log("Generated text:", response.text);
} catch (error) {
  console.log("No information generated by the model.");
}
console.log("Safety ratings:", response.candidates[0].safetyRatings);
return response;

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

unsafePrompt := "I support Martians Soccer Club and I think Jupiterians Football Club sucks! " +
	"Write an ironic phrase about them including expletives."

config := &genai.GenerateContentConfig{
	SafetySettings: []*genai.SafetySetting{
		{
			Category:  "HARM_CATEGORY_HATE_SPEECH",
			Threshold: "BLOCK_MEDIUM_AND_ABOVE",
		},
		{
			Category:  "HARM_CATEGORY_HARASSMENT",
			Threshold: "BLOCK_ONLY_HIGH",
		},
	},
}
contents := []*genai.Content{
	genai.NewContentFromText(unsafePrompt, genai.RoleUser),
}
response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, config)
if err != nil {
	log.Fatal(err)
}

// Print the generated text.
text := response.Text()
fmt.Println("Generated text:", text)

// Print the finish reason and safety ratings from the first candidate.
if len(response.Candidates) > 0 {
	fmt.Println("Finish reason:", response.Candidates[0].FinishReason)
	safetyRatings, err := json.MarshalIndent(response.Candidates[0].SafetyRatings, "", "  ")
	if err != nil {
		return err
	}
	fmt.Println("Safety ratings:", string(safetyRatings))
} else {
	fmt.Println("No candidate returned.")
}

Shell

echo '{
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
    ],
    "contents": [{
        "parts": [{
            "text": "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them."}]}]}' > request.json

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d @request.json 2> /dev/null

Java

Client client = new Client();

String unsafePrompt = """
        I support Martians Soccer Club and I think Jupiterians Football Club sucks!
        Write an ironic phrase about them including expletives.
        """;

GenerateContentConfig config =
        GenerateContentConfig.builder()
                .safetySettings(Arrays.asList(
                        SafetySetting.builder()
                                .category("HARM_CATEGORY_HATE_SPEECH")
                                .threshold("BLOCK_MEDIUM_AND_ABOVE")
                                .build(),
                        SafetySetting.builder()
                                .category("HARM_CATEGORY_HARASSMENT")
                                .threshold("BLOCK_ONLY_HIGH")
                                .build()
                )).build();

GenerateContentResponse response =
        client.models.generateContent(
                "gemini-2.0-flash",
                unsafePrompt,
                config);

try {
    System.out.println(response.text());
} catch (Exception e) {
    System.out.println("No information generated by the model");
}

System.out.println(response.candidates().get().getFirst().safetyRatings());

System instruction

Python

from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Good morning! How are you?",
    config=types.GenerateContentConfig(
        system_instruction="You are a cat. Your name is Neko."
    ),
)
print(response.text)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Good morning! How are you?",
  config: {
    systemInstruction: "You are a cat. Your name is Neko.",
  },
});
console.log(response.text);

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

// Construct the user message contents.
contents := []*genai.Content{
	genai.NewContentFromText("Good morning! How are you?", genai.RoleUser),
}

// Set the system instruction as a *genai.Content.
config := &genai.GenerateContentConfig{
	SystemInstruction: genai.NewContentFromText("You are a cat. Your name is Neko.", genai.RoleUser),
}

response, err := client.Models.GenerateContent(ctx, "gemini-2.0-flash", contents, config)
if err != nil {
	log.Fatal(err)
}
printResponse(response)

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
  "system_instruction": {
    "parts": {
      "text": "You are a cat. Your name is Neko."
    }
  },
  "contents": {
    "parts": {
      "text": "Hello there"
    }
  }
}'

Java

Client client = new Client();

Part textPart = Part.builder().text("You are a cat. Your name is Neko.").build();

Content content = Content.builder().role("system").parts(ImmutableList.of(textPart)).build();

GenerateContentConfig config = GenerateContentConfig.builder()
        .systemInstruction(content)
        .build();

GenerateContentResponse response =
        client.models.generateContent(
                "gemini-2.0-flash",
                "Good morning! How are you?",
                config);

System.out.println(response.text());

Response body

If successful, the response body contains an instance of GenerateContentResponse.

Method: models.streamGenerateContent

Generates a streamed response from the model given an input GenerateContentRequest.

Endpoint

POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:streamGenerateContent

Path parameters

model string

Required. The name of the Model to use for generating the completion.

Format: models/{model}

Request body

The request body contains data with the following structure:

Fields
contents[] object (Content)

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries such as chat, this is a repeated field that contains the conversation history and the latest request.
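For illustration only (plain JSON-style dicts standing in for the REST payload; the example texts are made up), a multi-turn contents array places the conversation history first and the latest user request last, with user and model roles alternating:

```python
# Hypothetical multi-turn payload: history first, latest user request last.
history = [
    {"role": "user", "parts": [{"text": "Hello."}]},
    {"role": "model", "parts": [{"text": "Hi! How can I help you today?"}]},
]
latest = {"role": "user", "parts": [{"text": "Write a haiku about backpacks."}]}
contents = history + [latest]

# The request body would then be {"contents": contents}.
print([turn["role"] for turn in contents])
```

For a single-turn query, the same field would hold just one user entry.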

tools[] object (Tool)

Optional. A list of Tools the Model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the Model. Supported Tools are Function and codeExecution. Refer to the Function calling and Code execution guides to learn more.

toolConfig object (ToolConfig)

Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.

safetySettings[] object (SafetySetting)

Optional. A list of unique SafetySetting instances for blocking unsafe content.

This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, and HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings, and to the Safety guidance for how to incorporate safety considerations in your AI applications.
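As a small sketch of the uniqueness rule (the settings below are illustrative, written as plain REST-style dicts rather than SDK types), a client-side check that no SafetyCategory appears more than once might look like:

```python
# Hypothetical safetySettings list; each category may appear at most once.
safety_settings = [
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
]

categories = [setting["category"] for setting in safety_settings]
# A duplicate category would make the request invalid.
assert len(categories) == len(set(categories))
print("safetySettings OK")
```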

systemInstruction object (Content)

Optional. Developer-set system instruction. Currently, text only.

generationConfig object (GenerationConfig)

Optional. Configuration options for model generation and outputs.

cachedContent string

Optional. The name of the cached content to use as context to serve the prediction. Format: cachedContents/{cachedContent}
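As a sketch (the cache id below is hypothetical; real resource names are returned by the caching API), a request body that serves a prediction from cached context pairs the cachedContent resource name with the new contents:

```python
# Hypothetical cache resource name in the cachedContents/{cachedContent} format.
cached_content = "cachedContents/example-cache-id"

request_body = {
    "cachedContent": cached_content,
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize the cached document."}]}
    ],
}

# The resource name must use the cachedContents/{cachedContent} format.
assert request_body["cachedContent"].startswith("cachedContents/")
```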

Example request

Text

Python

from google import genai

client = genai.Client()
response = client.models.generate_content_stream(
    model="gemini-2.0-flash", contents="Write a story about a magic backpack."
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContentStream({
  model: "gemini-2.0-flash",
  contents: "Write a story about a magic backpack.",
});
let text = "";
for await (const chunk of response) {
  console.log(chunk.text);
  text += chunk.text;
}

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}
contents := []*genai.Content{
	genai.NewContentFromText("Write a story about a magic backpack.", genai.RoleUser),
}
for response, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(response.Candidates[0].Content.Parts[0].Text)
}

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=${GEMINI_API_KEY}" \
        -H 'Content-Type: application/json' \
        --no-buffer \
        -d '{ "contents":[{"parts":[{"text": "Write a story about a magic backpack."}]}]}'

Java

Client client = new Client();

ResponseStream<GenerateContentResponse> responseStream =
        client.models.generateContentStream(
                "gemini-2.0-flash",
                "Write a story about a magic backpack.",
                null);

StringBuilder response = new StringBuilder();
for (GenerateContentResponse res : responseStream) {
    System.out.print(res.text());
    response.append(res.text());
}

responseStream.close();

Image

Python

from google import genai
import PIL.Image

client = genai.Client()
organ = PIL.Image.open(media / "organ.jpg")
response = client.models.generate_content_stream(
    model="gemini-2.0-flash", contents=["Tell me about this instrument", organ]
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const organ = await ai.files.upload({
  file: path.join(media, "organ.jpg"),
});

const response = await ai.models.generateContentStream({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Tell me about this instrument",
      createPartFromUri(organ.uri, organ.mimeType),
    ]),
  ],
});
let text = "";
for await (const chunk of response) {
  console.log(chunk.text);
  text += chunk.text;
}

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}
file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "organ.jpg"),
	&genai.UploadFileConfig{
		MIMEType: "image/jpeg",
	},
)
if err != nil {
	log.Fatal(err)
}
parts := []*genai.Part{
	genai.NewPartFromText("Tell me about this instrument"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}
contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}
for response, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(response.Candidates[0].Content.Parts[0].Text)
}

Shell

cat > "$TEMP_JSON" << EOF
{
  "contents": [{
    "parts": [
      {"text": "Tell me about this instrument"},
      {
        "inline_data": {
          "mime_type": "image/jpeg",
          "data": "$(cat "$TEMP_B64")"
        }
      }
    ]
  }]
}
EOF

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d "@$TEMP_JSON" 2> /dev/null

Java

Client client = new Client();

String path = media_path + "organ.jpg";
byte[] imageData = Files.readAllBytes(Paths.get(path));

Content content =
        Content.fromParts(
                Part.fromText("Tell me about this instrument."),
                Part.fromBytes(imageData, "image/jpeg"));

ResponseStream<GenerateContentResponse> responseStream =
        client.models.generateContentStream(
                "gemini-2.0-flash",
                content,
                null);

StringBuilder response = new StringBuilder();
for (GenerateContentResponse res : responseStream) {
    System.out.print(res.text());
    response.append(res.text());
}

responseStream.close();

Audio

Python

from google import genai

client = genai.Client()
sample_audio = client.files.upload(file=media / "sample.mp3")
response = client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this audio file.", sample_audio],
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "sample.mp3"),
	&genai.UploadFileConfig{
		MIMEType: "audio/mpeg",
	},
)
if err != nil {
	log.Fatal(err)
}

parts := []*genai.Part{
	genai.NewPartFromText("Give me a summary of this audio file."),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}

for result, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(result.Candidates[0].Content.Parts[0].Text)
}

Shell

# Use the File API to upload audio data for use in an API request.
MIME_TYPE=$(file -b --mime-type "${AUDIO_PATH}")
NUM_BYTES=$(wc -c < "${AUDIO_PATH}")
DISPLAY_NAME=AUDIO

tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload url is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D "${tmp_header_file}" \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${AUDIO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts": [
          {"text": "Please describe this file."},
          {"file_data": {"mime_type": "audio/mpeg", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

Video

Python

from google import genai
import time

client = genai.Client()
# Video clip (CC BY 3.0) from https://peach.blender.org/download/
myfile = client.files.upload(file=media / "Big_Buck_Bunny.mp4")
print(f"{myfile=}")

# Poll until the video file is completely processed (state becomes ACTIVE).
while not myfile.state or myfile.state.name != "ACTIVE":
    print("Processing video...")
    print("File state:", myfile.state)
    time.sleep(5)
    myfile = client.files.get(name=myfile.name)

response = client.models.generate_content_stream(
    model="gemini-2.0-flash", contents=[myfile, "Describe this video clip"]
)
for chunk in response:
    print(chunk.text)
    print("_" * 80)

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

let video = await ai.files.upload({
  file: path.join(media, 'Big_Buck_Bunny.mp4'),
});

// Poll until the video file is completely processed (state becomes ACTIVE).
while (!video.state || video.state.toString() !== 'ACTIVE') {
  console.log('Processing video...');
  console.log('File state: ', video.state);
  await sleep(5000);
  video = await ai.files.get({name: video.name});
}

const response = await ai.models.generateContentStream({
  model: "gemini-2.0-flash",
  contents: [
    createUserContent([
      "Describe this video clip",
      createPartFromUri(video.uri, video.mimeType),
    ]),
  ],
});
let text = "";
for await (const chunk of response) {
  console.log(chunk.text);
  text += chunk.text;
}

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "Big_Buck_Bunny.mp4"),
	&genai.UploadFileConfig{
		MIMEType: "video/mp4",
	},
)
if err != nil {
	log.Fatal(err)
}

// Poll until the video file is completely processed (state becomes ACTIVE).
for file.State != genai.FileStateActive {
	fmt.Println("Processing video...")
	fmt.Println("File state:", file.State)
	time.Sleep(5 * time.Second)

	file, err = client.Files.Get(ctx, file.Name, nil)
	if err != nil {
		log.Fatal(err)
	}
}

parts := []*genai.Part{
	genai.NewPartFromText("Describe this video clip"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}

for result, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(result.Candidates[0].Content.Parts[0].Text)
}

Shell

# Use the File API to upload video data, then reference it in the request.
MIME_TYPE=$(file -b --mime-type "${VIDEO_PATH}")
NUM_BYTES=$(wc -c < "${VIDEO_PATH}")
DISPLAY_NAME=VIDEO_PATH
tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload url is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D "${tmp_header_file}" \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${VIDEO_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

file_name=$(jq -r ".file.name" file_info.json)   # e.g. "files/abc-123"
state=$(jq ".file.state" file_info.json)
echo state=$state

while [[ "$state" == *"PROCESSING"* ]]; do
  echo "Processing video..."
  sleep 5
  # Get the file of interest to check its state.
  curl "https://generativelanguage.googleapis.com/v1beta/${file_name}?key=${GEMINI_API_KEY}" > file_info.json
  state=$(jq ".file.state" file_info.json)
done

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Please describe this file."},
          {"file_data":{"mime_type": "video/mp4", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

PDF

Python

from google import genai

client = genai.Client()
sample_pdf = client.files.upload(file=media / "test.pdf")
response = client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents=["Give me a summary of this document:", sample_pdf],
)

for chunk in response:
    print(chunk.text)
    print("_" * 80)

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

file, err := client.Files.UploadFromPath(
	ctx,
	filepath.Join(getMedia(), "test.pdf"),
	&genai.UploadFileConfig{
		MIMEType: "application/pdf",
	},
)
if err != nil {
	log.Fatal(err)
}

parts := []*genai.Part{
	genai.NewPartFromText("Give me a summary of this document:"),
	genai.NewPartFromURI(file.URI, file.MIMEType),
}

contents := []*genai.Content{
	genai.NewContentFromParts(parts, genai.RoleUser),
}

for result, err := range client.Models.GenerateContentStream(
	ctx,
	"gemini-2.0-flash",
	contents,
	nil,
) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(result.Candidates[0].Content.Parts[0].Text)
}

Shell

MIME_TYPE=$(file -b --mime-type "${PDF_PATH}")
NUM_BYTES=$(wc -c < "${PDF_PATH}")
DISPLAY_NAME=TEXT

echo $MIME_TYPE
tmp_header_file=upload-header.tmp

# Initial resumable request defining metadata.
# The upload url is in the response headers; dump them to a file.
curl "${BASE_URL}/upload/v1beta/files?key=${GEMINI_API_KEY}" \
  -D "${tmp_header_file}" \
  -H "X-Goog-Upload-Protocol: resumable" \
  -H "X-Goog-Upload-Command: start" \
  -H "X-Goog-Upload-Header-Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Header-Content-Type: ${MIME_TYPE}" \
  -H "Content-Type: application/json" \
  -d "{'file': {'display_name': '${DISPLAY_NAME}'}}" 2> /dev/null

upload_url=$(grep -i "x-goog-upload-url: " "${tmp_header_file}" | cut -d" " -f2 | tr -d "\r")
rm "${tmp_header_file}"

# Upload the actual bytes.
curl "${upload_url}" \
  -H "Content-Length: ${NUM_BYTES}" \
  -H "X-Goog-Upload-Offset: 0" \
  -H "X-Goog-Upload-Command: upload, finalize" \
  --data-binary "@${PDF_PATH}" 2> /dev/null > file_info.json

file_uri=$(jq ".file.uri" file_info.json)
echo file_uri=$file_uri

# Now generate content using that file
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[
          {"text": "Can you add a few more lines to this poem?"},
          {"file_data":{"mime_type": "application/pdf", "file_uri": '$file_uri'}}]
        }]
       }' 2> /dev/null > response.json

cat response.json
echo

Chat

Python

from google import genai
from google.genai import types

client = genai.Client()
chat = client.chats.create(
    model="gemini-2.0-flash",
    history=[
        types.Content(role="user", parts=[types.Part(text="Hello")]),
        types.Content(
            role="model",
            parts=[
                types.Part(
                    text="Great to meet you. What would you like to know?"
                )
            ],
        ),
    ],
)
response = chat.send_message_stream(message="I have 2 dogs in my house.")
for chunk in response:
    print(chunk.text)
    print("_" * 80)
response = chat.send_message_stream(message="How many paws are in my house?")
for chunk in response:
    print(chunk.text)
    print("_" * 80)

print(chat.get_history())

Node.js

// Make sure to include the following import:
// import {GoogleGenAI} from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const chat = ai.chats.create({
  model: "gemini-2.0-flash",
  history: [
    {
      role: "user",
      parts: [{ text: "Hello" }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. What would you like to know?" }],
    },
  ],
});

console.log("Streaming response for first message:");
const stream1 = await chat.sendMessageStream({
  message: "I have 2 dogs in my house.",
});
for await (const chunk of stream1) {
  console.log(chunk.text);
  console.log("_".repeat(80));
}

console.log("Streaming response for second message:");
const stream2 = await chat.sendMessageStream({
  message: "How many paws are in my house?",
});
for await (const chunk of stream2) {
  console.log(chunk.text);
  console.log("_".repeat(80));
}

console.log(chat.getHistory());

Go

ctx := context.Background()
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	APIKey:  os.Getenv("GEMINI_API_KEY"),
	Backend: genai.BackendGeminiAPI,
})
if err != nil {
	log.Fatal(err)
}

history := []*genai.Content{
	genai.NewContentFromText("Hello", genai.RoleUser),
	genai.NewContentFromText("Great to meet you. What would you like to know?", genai.RoleModel),
}
chat, err := client.Chats.Create(ctx, "gemini-2.0-flash", nil, history)
if err != nil {
	log.Fatal(err)
}

for chunk, err := range chat.SendMessageStream(ctx, genai.Part{Text: "I have 2 dogs in my house."}) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(chunk.Text())
	fmt.Println(strings.Repeat("_", 64))
}

for chunk, err := range chat.SendMessageStream(ctx, genai.Part{Text: "How many paws are in my house?"}) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(chunk.Text())
	fmt.Println(strings.Repeat("_", 64))
}

fmt.Println(chat.History(false))

Shell

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Hello"}]},
        {"role": "model",
         "parts":[{
           "text": "Great to meet you. What would you like to know?"}]},
        {"role":"user",
         "parts":[{
           "text": "I have two dogs in my house. How many paws are in my house?"}]}
      ]
    }' 2> /dev/null | grep "text"

Response body

If successful, the response body contains a stream of GenerateContentResponse instances.

GenerateContentResponse

Response from the model supporting multiple candidate responses.

Safety ratings and content filtering are reported for both the prompt in GenerateContentResponse.prompt_feedback and for each candidate in finishReason and in safetyRatings. The API:

  • returns either all requested candidates or none of them
  • returns no candidates at all only if there was something wrong with the prompt (check promptFeedback)
  • reports feedback on each candidate in finishReason and safetyRatings.

Fields
candidates[] object (Candidate)

Candidate responses from the model.

promptFeedback object (PromptFeedback)

Returns the prompt's feedback related to the content filters.

usageMetadata object (UsageMetadata)

Output only. Metadata on the generation request's token usage.

modelVersion string

Output only. The model version used to generate the response.

responseId string

Output only. responseId is used to identify each response.

JSON representation
{   "candidates": [     {       object (Candidate)     }   ],   "promptFeedback": {     object (PromptFeedback)   },   "usageMetadata": {     object (UsageMetadata)   },   "modelVersion": string,   "responseId": string }
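For illustration, the structure above can be consumed directly as parsed JSON. The sketch below uses a hand-built response dict (not a real API reply) to show the common step of pulling the generated text out of the first candidate's content parts:

```python
# A hand-built GenerateContentResponse, shaped like the JSON representation above.
response = {
    "candidates": [
        {
            "content": {"role": "model", "parts": [{"text": "Hello "}, {"text": "world."}]},
            "finishReason": "STOP",
            "index": 0,
        }
    ],
    "usageMetadata": {"promptTokenCount": 4, "candidatesTokenCount": 3, "totalTokenCount": 7},
    "modelVersion": "gemini-2.0-flash",
    "responseId": "abc123",
}

def first_candidate_text(resp):
    """Concatenate the text parts of the first candidate, if any."""
    candidates = resp.get("candidates", [])
    if not candidates:
        return ""
    parts = candidates[0].get("content", {}).get("parts", [])
    return "".join(p.get("text", "") for p in parts)

print(first_candidate_text(response))  # Hello world.
```

Note that a candidate can contain multiple parts, so joining all text parts is safer than reading only the first one.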

PromptFeedback

A set of the feedback metadata for the prompt specified in GenerateContentRequest.content.

Fields
blockReason enum (BlockReason)

Optional. If set, the prompt was blocked and no candidates are returned. Rephrase the prompt.

safetyRatings[] object (SafetyRating)

Ratings for the safety of the prompt. There is at most one rating per category.

JSON representation
{   "blockReason": enum (BlockReason),   "safetyRatings": [     {       object (SafetyRating)     }   ] }

BlockReason

Specifies the reason why the prompt was blocked.

Enums
BLOCK_REASON_UNSPECIFIED Default value. This value is unused.
SAFETY The prompt was blocked due to safety reasons. Inspect safetyRatings to understand which safety category blocked it.
OTHER The prompt was blocked due to unknown reasons.
BLOCKLIST The prompt was blocked because it contains terms from the terminology blocklist.
PROHIBITED_CONTENT The prompt was blocked due to prohibited content.
IMAGE_SAFETY Candidates were blocked because of unsafe image-generation content.
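Because a blocked prompt returns no candidates, callers typically inspect promptFeedback.blockReason before reading candidates. A small illustrative check over hand-built dicts (not SDK objects):

```python
def check_blocked(resp):
    """Return the block reason if the prompt was blocked, else None."""
    feedback = resp.get("promptFeedback", {})
    reason = feedback.get("blockReason")
    if reason and reason != "BLOCK_REASON_UNSPECIFIED":
        return reason
    return None

# Hand-built examples of a blocked and a normal response.
blocked = {"promptFeedback": {"blockReason": "SAFETY", "safetyRatings": []}, "candidates": []}
ok = {"candidates": [{"content": {"parts": [{"text": "hi"}]}}]}

print(check_blocked(blocked))  # SAFETY
print(check_blocked(ok))       # None
```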

UsageMetadata

Metadata on the generation request's token usage.

Fields
promptTokenCount integer

Number of tokens in the prompt. When cachedContent is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.

cachedContentTokenCount integer

Number of tokens in the cached part of the prompt (the cached content)

candidatesTokenCount integer

Total number of tokens across all the generated response candidates.

toolUsePromptTokenCount integer

Output only. Number of tokens present in tool-use prompt(s).

thoughtsTokenCount integer

Output only. Number of tokens of thoughts for thinking models.

totalTokenCount integer

Total token count for the generation request (prompt + response candidates).

promptTokensDetails[] object (ModalityTokenCount)

Output only. List of modalities that were processed in the request input.

cacheTokensDetails[] object (ModalityTokenCount)

Output only. List of modalities of the cached content in the request input.

candidatesTokensDetails[] object (ModalityTokenCount)

Output only. List of modalities that were returned in the response.

toolUsePromptTokensDetails[] object (ModalityTokenCount)

Output only. List of modalities that were processed for tool-use request inputs.

JSON representation
{   "promptTokenCount": integer,   "cachedContentTokenCount": integer,   "candidatesTokenCount": integer,   "toolUsePromptTokenCount": integer,   "thoughtsTokenCount": integer,   "totalTokenCount": integer,   "promptTokensDetails": [     {       object (ModalityTokenCount)     }   ],   "cacheTokensDetails": [     {       object (ModalityTokenCount)     }   ],   "candidatesTokensDetails": [     {       object (ModalityTokenCount)     }   ],   "toolUsePromptTokensDetails": [     {       object (ModalityTokenCount)     }   ] }
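As the field descriptions note, promptTokenCount already includes any cached tokens, so the freshly-billed portion of the prompt is the difference. A minimal sketch over a hand-built usageMetadata dict (values are illustrative):

```python
usage = {
    "promptTokenCount": 1200,        # total effective prompt, cached tokens included
    "cachedContentTokenCount": 1000,
    "candidatesTokenCount": 150,
    "thoughtsTokenCount": 50,
    "totalTokenCount": 1400,
}

# Tokens actually sent fresh in this request (prompt minus the cached part).
non_cached_prompt = usage["promptTokenCount"] - usage.get("cachedContentTokenCount", 0)
print(non_cached_prompt)  # 200
```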

Candidate

A response candidate generated from the model.

Fields
content object (Content)

Output only. Generated content returned from the model.

finishReason enum (FinishReason)

Optional. Output only. The reason why the model stopped generating tokens.

If empty, the model has not stopped generating tokens.

safetyRatings[] object (SafetyRating)

List of ratings for the safety of a response candidate.

There is at most one rating per category.

citationMetadata object (CitationMetadata)

Output only. Citation information for the model-generated candidate.

This field may be populated with recitation information for any text included in content. These are passages that are "recited" from copyrighted material in the foundational LLM's training data.

tokenCount integer

Output only. Token count for this candidate.

groundingAttributions[] object (GroundingAttribution)

Output only. Attribution information for sources that contributed to a grounded answer.

This field is populated for GenerateAnswer calls.

groundingMetadata object (GroundingMetadata)

Output only. Grounding metadata for the candidate.

This field is populated for GenerateContent calls.

avgLogprobs number

Output only. Average log probability score of the candidate.

logprobsResult object (LogprobsResult)

Output only. Log-likelihood scores for the response tokens and top tokens

urlContextMetadata object (UrlContextMetadata)

Output only. Metadata related to the URL context retrieval tool.

index integer

Output only. Index of the candidate in the list of response candidates.

JSON representation
{   "content": {     object (Content)   },   "finishReason": enum (FinishReason),   "safetyRatings": [     {       object (SafetyRating)     }   ],   "citationMetadata": {     object (CitationMetadata)   },   "tokenCount": integer,   "groundingAttributions": [     {       object (GroundingAttribution)     }   ],   "groundingMetadata": {     object (GroundingMetadata)   },   "avgLogprobs": number,   "logprobsResult": {     object (LogprobsResult)   },   "urlContextMetadata": {     object (UrlContextMetadata)   },   "index": integer }

FinishReason

Defines the reason why the model stopped generating tokens.

Enums
FINISH_REASON_UNSPECIFIED Default value. This value is unused.
STOP Natural stop point of the model or a provided stop sequence.
MAX_TOKENS The maximum number of tokens as specified in the request was reached.
SAFETY The response candidate content was flagged for safety reasons.
RECITATION The response candidate content was flagged for recitation reasons.
LANGUAGE The response candidate content was flagged for using an unsupported language.
OTHER Unknown reason.
BLOCKLIST Token generation stopped because the content contains forbidden terms.
PROHIBITED_CONTENT Token generation stopped for potentially containing prohibited content.
SPII Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).
MALFORMED_FUNCTION_CALL The function call generated by the model is invalid.
IMAGE_SAFETY Token generation stopped because generated images contain safety violations.
UNEXPECTED_TOOL_CALL The model generated a tool call but no tools were enabled in the request.
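A typical caller treats STOP and MAX_TOKENS as usable output (MAX_TOKENS may be truncated) and everything else as filtered or failed; an empty finishReason during streaming means generation is still in progress. A minimal illustrative sketch:

```python
# Finish reasons whose output is generally usable (MAX_TOKENS may be truncated).
USABLE = {"STOP", "MAX_TOKENS"}

def candidate_outcome(candidate):
    """Classify a hand-built candidate dict by its finishReason."""
    reason = candidate.get("finishReason")
    if reason is None:
        return "in_progress"   # streaming: the model has not stopped yet
    if reason in USABLE:
        return "usable"
    return "filtered_or_error"

print(candidate_outcome({"finishReason": "STOP"}))    # usable
print(candidate_outcome({"finishReason": "SAFETY"}))  # filtered_or_error
print(candidate_outcome({}))                          # in_progress
```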

GroundingAttribution

Attribution for a source that contributed to an answer.

Fields
sourceId object (AttributionSourceId)

Output only. Identifier for the source contributing to this attribution.

content object (Content)

Grounding source content that makes up this attribution.

JSON representation
{   "sourceId": {     object (AttributionSourceId)   },   "content": {     object (Content)   } }

AttributionSourceId

Identifier for the source contributing to this attribution.

Fields
source Union type
source can be only one of the following:
groundingPassage object (GroundingPassageId)

Identifier for an inline passage.

semanticRetrieverChunk object (SemanticRetrieverChunk)

Identifier for a Chunk fetched via Semantic Retriever.

JSON representation
{    // source   "groundingPassage": {     object (GroundingPassageId)   },   "semanticRetrieverChunk": {     object (SemanticRetrieverChunk)   }   // Union type }

GroundingPassageId

Identifier for a part within a GroundingPassage.

Fields
passageId string

Output only. ID of the passage matching the GenerateAnswerRequest's GroundingPassage.id.

partIndex integer

Output only. Index of the part within the GenerateAnswerRequest's GroundingPassage.content.

JSON representation
{   "passageId": string,   "partIndex": integer }

SemanticRetrieverChunk

Identifier for a Chunk retrieved via the Semantic Retriever specified in the GenerateAnswerRequest using SemanticRetrieverConfig.

Fields
source string

Output only. Name of the source matching the request's SemanticRetrieverConfig.source. Example: corpora/123 or corpora/123/documents/abc

chunk string

Output only. Name of the Chunk containing the attributed text. Example: corpora/123/documents/abc/chunks/xyz

JSON representation
{   "source": string,   "chunk": string }

GroundingMetadata

Metadata returned to the client when grounding is enabled.

Fields
groundingChunks[] object (GroundingChunk)

List of supporting references retrieved from the specified grounding source.

groundingSupports[] object (GroundingSupport)

List of grounding supports.

webSearchQueries[] string

Web search queries for the follow-up web search.

searchEntryPoint object (SearchEntryPoint)

Optional. Google Search entry for the follow-up web searches.

retrievalMetadata object (RetrievalMetadata)

Metadata related to retrieval in the grounding flow.

JSON representation
{   "groundingChunks": [     {       object (GroundingChunk)     }   ],   "groundingSupports": [     {       object (GroundingSupport)     }   ],   "webSearchQueries": [     string   ],   "searchEntryPoint": {     object (SearchEntryPoint)   },   "retrievalMetadata": {     object (RetrievalMetadata)   } }

SearchEntryPoint

Google Search entry point.

Fields
renderedContent string

Optional. Web content snippet that can be embedded in a web page or an app webview.

sdkBlob string (bytes format)

Optional. Base64-encoded JSON representing an array of <search term, search url> tuples.

A base64-encoded string.

JSON representation
{   "renderedContent": string,   "sdkBlob": string }

GroundingChunk

Grounding chunk.

Fields
chunk_type Union type
The chunk type. chunk_type can be only one of the following:
web object (Web)

Grounding chunk from the web.

JSON representation
{    // chunk_type   "web": {     object (Web)   }   // Union type }

Web

Chunk from the web.

Fields
uri string

URI reference of the chunk.

title string

Title of the chunk.

JSON representation
{   "uri": string,   "title": string }

GroundingSupport

Grounding support.

Fields
groundingChunkIndices[] integer

A list of indices (into grounding_chunk) specifying the citations associated with the claim. For instance, [1,3,4] means that grounding_chunk[1], grounding_chunk[3], and grounding_chunk[4] are the retrieved content attributed to the claim.

confidenceScores[] number

Confidence scores of the supporting references. Ranges from 0 to 1, where 1 is the most confident. This list must have the same size as groundingChunkIndices.

segment object (Segment)

Segment of the content this support belongs to.

JSON representation
{   "groundingChunkIndices": [     integer   ],   "confidenceScores": [     number   ],   "segment": {     object (Segment)   } }
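Since groundingChunkIndices point back into groundingMetadata.groundingChunks, resolving a support to its cited sources is a list lookup. A sketch with hand-built dicts (URIs and titles are illustrative):

```python
grounding_metadata = {
    "groundingChunks": [
        {"web": {"uri": "https://example.com/a", "title": "Source A"}},
        {"web": {"uri": "https://example.com/b", "title": "Source B"}},
        {"web": {"uri": "https://example.com/c", "title": "Source C"}},
    ],
    "groundingSupports": [
        {
            "groundingChunkIndices": [0, 2],
            "confidenceScores": [0.97, 0.81],
            "segment": {"startIndex": 0, "endIndex": 21, "text": "The sky appears blue."},
        }
    ],
}

def sources_for_support(metadata, support):
    """Resolve a support's chunk indices to the cited web titles."""
    chunks = metadata["groundingChunks"]
    return [chunks[i]["web"]["title"] for i in support["groundingChunkIndices"]]

support = grounding_metadata["groundingSupports"][0]
print(sources_for_support(grounding_metadata, support))  # ['Source A', 'Source C']
```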

Segment

Segment of the content.

Fields
partIndex integer

Output only. The index of a Part object within its parent Content object.

startIndex integer

Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.

endIndex integer

Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.

text string

Output only. The text corresponding to the segment from the response.

JSON representation
{   "partIndex": integer,   "startIndex": integer,   "endIndex": integer,   "text": string }
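Because startIndex and endIndex are byte offsets, slicing must happen on the UTF-8 encoded bytes rather than on the character string, or multi-byte characters will shift the indices. A small sketch (example text and offsets are illustrative):

```python
text = "Gemini 支援 grounding."   # mixed ASCII and multi-byte characters

def slice_segment(full_text, segment):
    """Extract the text a Segment refers to, using byte offsets."""
    data = full_text.encode("utf-8")
    return data[segment["startIndex"]:segment["endIndex"]].decode("utf-8")

# "支援" starts at byte 7 (after "Gemini ") and spans 6 bytes (3 per character).
segment = {"partIndex": 0, "startIndex": 7, "endIndex": 13}
print(slice_segment(text, segment))  # 支援
```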

RetrievalMetadata

Metadata related to retrieval in the grounding flow.

Fields
googleSearchDynamicRetrievalScore number

Optional. Score indicating how likely it is that information from Google Search could help answer the prompt. The score is in the range [0, 1], where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval are enabled. It is compared against the threshold to determine whether to trigger Google Search.

JSON representation
{   "googleSearchDynamicRetrievalScore": number }

LogprobsResult

Logprobs result.

Fields
topCandidates[] object (TopCandidates)

Length = total number of decoding steps.

chosenCandidates[] object (Candidate)

Length = total number of decoding steps. The chosen candidates may or may not be in topCandidates.

JSON representation
{   "topCandidates": [     {       object (TopCandidates)     }   ],   "chosenCandidates": [     {       object (Candidate)     }   ] }

TopCandidates

Candidates with the top log probabilities at each decoding step.

Fields
candidates[] object (Candidate)

Sorted by log probability in descending order.

JSON representation
{   "candidates": [     {       object (Candidate)     }   ] }

Candidate

Candidate for the logprobs token and score.

Fields
token string

The candidate's token string value.

tokenId integer

The candidate's token id value.

logProbability number

The candidate's log probability.

JSON representation
{   "token": string,   "tokenId": integer,   "logProbability": number }
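logProbability is a natural logarithm, so math.exp recovers the model's probability for that token. A sketch over a hand-built TopCandidates entry (tokens, ids, and values are illustrative):

```python
import math

top_candidates = {
    "candidates": [
        {"token": "blue", "tokenId": 1234, "logProbability": -0.105360515657826},
        {"token": "red", "tokenId": 5678, "logProbability": -2.3025850929940455},
    ]
}

# Convert each log probability back to a plain probability.
probs = {c["token"]: math.exp(c["logProbability"]) for c in top_candidates["candidates"]}
print(round(probs["blue"], 2))  # 0.9
print(round(probs["red"], 2))   # 0.1
```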

UrlContextMetadata

Metadata related to the URL context retrieval tool.

Fields
urlMetadata[] object (UrlMetadata)

List of url context.

JSON representation
{   "urlMetadata": [     {       object (UrlMetadata)     }   ] }

UrlMetadata

Context of a single url retrieval.

Fields
retrievedUrl string

The URL retrieved by the tool.

urlRetrievalStatus enum (UrlRetrievalStatus)

Status of the url retrieval.

JSON representation
{   "retrievedUrl": string,   "urlRetrievalStatus": enum (UrlRetrievalStatus) }

UrlRetrievalStatus

Status of the url retrieval.

Enums
URL_RETRIEVAL_STATUS_UNSPECIFIED Default value. This value is unused.
URL_RETRIEVAL_STATUS_SUCCESS The url retrieval was successful.
URL_RETRIEVAL_STATUS_ERROR The url retrieval failed due to an error.

CitationMetadata

A collection of source attributions for a piece of content.

Fields
citationSources[] object (CitationSource)

Citations to sources for a specific response.

JSON representation
{   "citationSources": [     {       object (CitationSource)     }   ] }

CitationSource

A citation to a source for a portion of a specific response.

Fields
startIndex integer

Optional. Start of the segment of the response that is attributed to this source.

The index indicates the start of the segment, measured in bytes.

endIndex integer

Optional. End of the attributed segment, exclusive.

uri string

Optional. URI that is attributed as a source for a portion of the text.

license string

Optional. License for the GitHub project that is attributed as a source for the segment.

License info is required for code citations.

JSON representation
{   "startIndex": integer,   "endIndex": integer,   "uri": string,   "license": string }
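Since each CitationSource carries byte offsets into the response plus an optional uri, a common post-processing step is grouping the cited spans by source. An illustrative sketch (URIs and offsets are hand-built examples):

```python
citation_metadata = {
    "citationSources": [
        {"startIndex": 0, "endIndex": 40, "uri": "https://example.com/x"},
        {"startIndex": 60, "endIndex": 90, "uri": "https://example.com/x"},
        {"startIndex": 100, "endIndex": 120, "uri": "https://example.com/y", "license": "MIT"},
    ]
}

def spans_by_uri(metadata):
    """Group (startIndex, endIndex) byte ranges by the source URI."""
    grouped = {}
    for src in metadata.get("citationSources", []):
        grouped.setdefault(src.get("uri"), []).append((src["startIndex"], src["endIndex"]))
    return grouped

print(spans_by_uri(citation_metadata))
# {'https://example.com/x': [(0, 40), (60, 90)], 'https://example.com/y': [(100, 120)]}
```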

GenerationConfig

Configuration options for model generation and outputs. Not all parameters are configurable for every model.

Fields
stopSequences[] string

Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.

responseMimeType string

Optional. MIME type of the generated candidate text. Supported MIME types are: text/plain: (default) Text output. application/json: JSON response in the response candidates. text/x.enum: ENUM as a string response in the response candidates. Refer to the docs for a list of all supported text MIME types.

responseSchema object (Schema)

Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives, or arrays.

If set, a compatible responseMimeType must also be set. Compatible MIME types: application/json: schema for the JSON response. Refer to the JSON text generation guide for more details.

responseJsonSchema value (Value format)

Optional. Output schema of the generated response. This is an alternative to responseSchema that accepts JSON Schema.

If set, responseSchema must be omitted, but responseMimeType is required.

While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported:

  • $id
  • $defs
  • $ref
  • $anchor
  • type
  • format
  • title
  • description
  • enum (for strings and numbers)
  • items
  • prefixItems
  • minItems
  • maxItems
  • minimum
  • maximum
  • anyOf
  • oneOf (interpreted the same as anyOf)
  • properties
  • additionalProperties
  • required

The non-standard propertyOrdering property may also be set.

Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If $ref is set on a sub-schema, no other properties may be set, except for those starting with a $.

responseModalities[] enum (Modality)

Optional. The requested modalities of the response. Represents the set of modalities that the model can return and that should be expected in the response. This is an exact match to the modalities of the response.

A model may support multiple combinations of modalities. If the requested modalities do not match any of the supported combinations, an error is returned.

An empty list is equivalent to requesting only text.

candidateCount integer

Optional. Number of generated responses to return. If unset, this defaults to 1. Note that this does not work for previous-generation models (the Gemini 1.0 family).

maxOutputTokens integer

Optional. The maximum number of tokens to include in a response candidate.

Note: The default value varies by model; see the Model.output_token_limit attribute of the Model returned from the getModel function.

temperature number

Optional. Controls the randomness of the output.

Note: The default value varies by model; see the Model.temperature attribute of the Model returned from the getModel function.

Values can range over [0.0, 2.0].

topP number

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and Top-p (nucleus) sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on their cumulative probability.

Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests.

topK integer

Optional. The maximum number of tokens to consider when sampling.

Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of topK most probable tokens. Models running with nucleus sampling don't allow a topK setting.

Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests.

seed integer

Optional. Seed used in decoding. If not set, the request uses a randomly generated seed.

presencePenalty number

Optional. Presence penalty applied to the next token's logprobs if the token has already been seen in the response.

This penalty is binary on/off and does not depend on the number of times the token is used (after the first). Use frequencyPenalty for a penalty that increases with each use.

A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary.

A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary.

frequencyPenalty number

Optional. Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far.

A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: the more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses.

Caution: A negative penalty encourages the model to reuse tokens proportional to the number of times each token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the maxOutputTokens limit.

responseLogprobs boolean

Optional. If true, export the logprobs results in the response.

logprobs integer

Optional. Only valid if responseLogprobs=True. This sets the number of top logprobs to return at each decoding step in Candidate.logprobs_result.

enableEnhancedCivicAnswers boolean

Optional. Enables enhanced civic answers. It may not be available for all models.

speechConfig object (SpeechConfig)

Optional. The speech generation config.

thinkingConfig object (ThinkingConfig)

Optional. Config for thinking features. An error will be returned if this field is set for models that don't support thinking.

mediaResolution enum (MediaResolution)

Optional. If specified, the media resolution specified will be used.

JSON representation
{   "stopSequences": [     string   ],   "responseMimeType": string,   "responseSchema": {     object (Schema)   },   "responseJsonSchema": value,   "responseModalities": [     enum (Modality)   ],   "candidateCount": integer,   "maxOutputTokens": integer,   "temperature": number,   "topP": number,   "topK": integer,   "seed": integer,   "presencePenalty": number,   "frequencyPenalty": number,   "responseLogprobs": boolean,   "logprobs": integer,   "enableEnhancedCivicAnswers": boolean,   "speechConfig": {     object (SpeechConfig)   },   "thinkingConfig": {     object (ThinkingConfig)   },   "mediaResolution": enum (MediaResolution) }
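Tying the fields above together, here is a sketch of a generationConfig as it would appear in a request body. The values are illustrative and must respect the per-model limits returned by getModel; the responseJsonSchema uses only the supported JSON Schema keywords listed above:

```python
import json

generation_config = {
    "stopSequences": ["\n\n"],          # stop at the first blank line (max 5 sequences)
    "responseMimeType": "application/json",
    "responseJsonSchema": {             # requires responseMimeType; replaces responseSchema
        "type": "object",
        "properties": {"title": {"type": "string"}, "year": {"type": "integer"}},
        "required": ["title"],
    },
    "candidateCount": 1,
    "maxOutputTokens": 1024,
    "temperature": 0.7,                 # must lie in [0.0, 2.0]
    "topP": 0.95,
    "seed": 42,                         # fixed seed for more repeatable decoding
}

request_body = {
    "contents": [{"role": "user", "parts": [{"text": "Describe one sci-fi novel."}]}],
    "generationConfig": generation_config,
}

# The body is plain JSON, ready to POST to :generateContent.
print(json.dumps(request_body, indent=2))
```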

Modality

Supported modalities of the response.

Enums
MODALITY_UNSPECIFIED Default value.
TEXT Indicates the model should return text.
IMAGE Indicates the model should return images.
AUDIO Indicates the model should return audio.

SpeechConfig

The speech generation config.

Fields
voiceConfig object (VoiceConfig)

The configuration in the case of single-voice output.

multiSpeakerVoiceConfig object (MultiSpeakerVoiceConfig)

Optional. The configuration for the multi-speaker setup. It is mutually exclusive with the voiceConfig field.

languageCode string

Optional. Language code (in BCP 47 format, e.g. "en-US") for speech synthesis.

Valid values are: de-DE, en-AU, en-GB, en-IN, en-US, es-US, fr-FR, hi-IN, pt-BR, ar-XA, es-ES, fr-CA, id-ID, it-IT, ja-JP, tr-TR, vi-VN, bn-IN, gu-IN, kn-IN, ml-IN, mr-IN, ta-IN, te-IN, nl-NL, ko-KR, cmn-CN, pl-PL, ru-RU, and th-TH.

JSON representation
{   "voiceConfig": {     object (VoiceConfig)   },   "multiSpeakerVoiceConfig": {     object (MultiSpeakerVoiceConfig)   },   "languageCode": string }

VoiceConfig

The configuration for the voice to use.

Fields
voice_config Union type
The configuration for the speaker to use. voice_config can be only one of the following:
prebuiltVoiceConfig object (PrebuiltVoiceConfig)

The configuration for the prebuilt voice to use.

JSON representation
{    // voice_config   "prebuiltVoiceConfig": {     object (PrebuiltVoiceConfig)   }   // Union type }

PrebuiltVoiceConfig

The configuration for the prebuilt speaker to use.

Fields
voiceName string

The name of the preset voice to use.

JSON representation
{   "voiceName": string }

MultiSpeakerVoiceConfig

The configuration for the multi-speaker setup.

Fields
speakerVoiceConfigs[] object (SpeakerVoiceConfig)

Required. All the enabled speaker voices.

JSON representation
{   "speakerVoiceConfigs": [     {       object (SpeakerVoiceConfig)     }   ] }

SpeakerVoiceConfig

The configuration for a single speaker in a multi-speaker setup.

Fields
speaker string

Required. The name of the speaker to use. Should be the same as in the prompt.

voiceConfig object (VoiceConfig)

Required. The configuration for the voice to use.

JSON representation
{   "speaker": string,   "voiceConfig": {     object (VoiceConfig)   } }
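Here is a sketch of a multi-speaker speechConfig. The speaker labels must match how the speakers are referred to in the prompt; the voice names ("Kore", "Puck") are examples of prebuilt voices, so check the current voice list before relying on them:

```python
speech_config = {
    "multiSpeakerVoiceConfig": {
        "speakerVoiceConfigs": [
            {
                "speaker": "Alice",   # must match the speaker label used in the prompt
                "voiceConfig": {"prebuiltVoiceConfig": {"voiceName": "Kore"}},
            },
            {
                "speaker": "Bob",
                "voiceConfig": {"prebuiltVoiceConfig": {"voiceName": "Puck"}},
            },
        ]
    },
    "languageCode": "en-US",
}

# multiSpeakerVoiceConfig is mutually exclusive with a top-level voiceConfig.
assert "voiceConfig" not in speech_config

speakers = [s["speaker"] for s in speech_config["multiSpeakerVoiceConfig"]["speakerVoiceConfigs"]]
print(speakers)  # ['Alice', 'Bob']
```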

ThinkingConfig

Config for thinking features.

Fields
includeThoughts boolean

Indicates whether to include thoughts in the response. If true, thoughts are returned only when available.

thinkingBudget integer

The number of thought tokens that the model should generate.

JSON representation
{   "includeThoughts": boolean,   "thinkingBudget": integer }

MediaResolution

Media resolution for the input media.

Enums
MEDIA_RESOLUTION_UNSPECIFIED Media resolution has not been set.
MEDIA_RESOLUTION_LOW Media resolution set to low (64 tokens).
MEDIA_RESOLUTION_MEDIUM Media resolution set to medium (256 tokens).
MEDIA_RESOLUTION_HIGH Media resolution set to high (zoomed reframing with 256 tokens).

HarmCategory

The category of a rating.

These categories cover various kinds of harms that developers may wish to adjust.

Enums
HARM_CATEGORY_UNSPECIFIED Category is unspecified.
HARM_CATEGORY_DEROGATORY PaLM - Negative or harmful comments targeting identity and/or protected attributes.
HARM_CATEGORY_TOXICITY PaLM - Content that is rude, disrespectful, or profane.
HARM_CATEGORY_VIOLENCE PaLM - Describes scenarios depicting violence against an individual or group, or general descriptions of gore.
HARM_CATEGORY_SEXUAL PaLM - Contains references to sexual acts or other lewd content.
HARM_CATEGORY_MEDICAL PaLM - Promotes unchecked medical advice.
HARM_CATEGORY_DANGEROUS PaLM - Dangerous content that promotes, facilitates, or encourages harmful acts.
HARM_CATEGORY_HARASSMENT Gemini - Harassment content.
HARM_CATEGORY_HATE_SPEECH Gemini - Hate speech and content.
HARM_CATEGORY_SEXUALLY_EXPLICIT Gemini - Sexually explicit content.
HARM_CATEGORY_DANGEROUS_CONTENT Gemini - Dangerous content.
HARM_CATEGORY_CIVIC_INTEGRITY Gemini - Content that may be used to harm civic integrity.

ModalityTokenCount

Represents token-counting info for a single modality.

Fields
modality enum (Modality)

The modality associated with this token count.

tokenCount integer

Number of tokens.

JSON representation
{   "modality": enum (Modality),   "tokenCount": integer }

Modality

Content part modality.

Enums
MODALITY_UNSPECIFIED Unspecified modality.
TEXT Plain text.
IMAGE Image.
VIDEO Video.
AUDIO Audio.
DOCUMENT Document, e.g. PDF.

SafetyRating

Safety rating for a piece of content.

The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories, and the probability of the harm classification is included here.

Fields
category enum (HarmCategory)

Required. The category for this rating.

probability enum (HarmProbability)

Required. The probability of harm for this content.

blocked boolean

Was this content blocked because of this rating?

JSON representation
{   "category": enum (HarmCategory),   "probability": enum (HarmProbability),   "blocked": boolean }

HarmProbability

The probability that a piece of content is harmful.

The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.

Enums
HARM_PROBABILITY_UNSPECIFIED Probability is unspecified.
NEGLIGIBLE Content has a negligible chance of being unsafe.
LOW Content has a low chance of being unsafe.
MEDIUM Content has a medium chance of being unsafe.
HIGH Content has a high chance of being unsafe.

SafetySetting

Safety setting, affecting the safety-blocking behavior.

Passing a safety setting for a category changes the allowed probability at which content is blocked.

Fields
category enum (HarmCategory)

Required. The category for this setting.

threshold enum (HarmBlockThreshold)

Required. Controls the probability threshold at which harm is blocked.

JSON representation
{   "category": enum (HarmCategory),   "threshold": enum (HarmBlockThreshold) }
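Putting SafetySetting together with the enums below, here is a sketch of a safetySettings list as it would appear in a request body; the category/threshold pairings are illustrative, and at most one setting per category is allowed:

```python
# One SafetySetting per category; a request may not repeat a category.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]

# Sanity check: no category appears twice.
categories = [s["category"] for s in safety_settings]
assert len(categories) == len(set(categories))
print(len(safety_settings))  # 3
```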

HarmBlockThreshold

Block at and beyond a specified harm probability.

Enums
HARM_BLOCK_THRESHOLD_UNSPECIFIED Threshold is unspecified.
BLOCK_LOW_AND_ABOVE Content with NEGLIGIBLE will be allowed.
BLOCK_MEDIUM_AND_ABOVE Content with NEGLIGIBLE and LOW will be allowed.
BLOCK_ONLY_HIGH Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed.
BLOCK_NONE All content will be allowed.
OFF Turn off the safety filter.