Train an AutoML Edge model using the Vertex AI API

You can create an AutoML model directly in the Google Cloud console, or you can create a training pipeline programmatically by using the API or one of the Vertex AI client libraries.

This model is created using a prepared dataset that you provide through the console or the Vertex AI API. The Vertex AI API uses the items from the dataset to train the model, test it, and evaluate model performance. Review the evaluation results, adjust the training dataset as needed, and create a new training pipeline with the improved dataset.

Model training can take several hours to complete. The Vertex AI API lets you get the status of the training job.

Create an AutoML Edge training pipeline

When you have a dataset with a representative set of training items, you are ready to create an AutoML Edge training pipeline.

Select your data type.

Images

Select the tab below for your objective:

Classification

At training time, choose the AutoML Edge model type that fits your use case:

  • Low latency (MOBILE_TF_LOW_LATENCY_1)
  • General purpose (MOBILE_TF_VERSATILE_1)
  • Higher prediction quality (MOBILE_TF_HIGH_ACCURACY_1)

Select the tab below for your language or environment:

REST

Before using any of the request data, make the following replacements:

  • LOCATION: Region where the dataset is located and the model is created. For example, us-central1.
  • PROJECT: Your project ID.
  • TRAININGPIPELINE_DISPLAYNAME: Required. A display name for the trainingPipeline.
  • DATASET_ID: The ID number of the dataset to use for training.
  • fractionSplit: Optional. One of the possible ML-use split options for your data. The fractionSplit values must sum to 1. For example:
    • {"trainingFraction": "0.7","validationFraction": "0.15","testFraction": "0.15"}
  • MODEL_DISPLAYNAME*: A display name for the model uploaded (created) by the TrainingPipeline.
  • MODEL_DESCRIPTION*: A description of the model.
  • modelToUpload.labels*: Any set of key-value pairs for organizing your models. For example:
    • "env": "prod"
    • "tier": "backend"
  • EDGE_MODELTYPE: The type of Edge model to train. The options are:
    • MOBILE_TF_LOW_LATENCY_1
    • MOBILE_TF_VERSATILE_1
    • MOBILE_TF_HIGH_ACCURACY_1
  • NODE_HOUR_BUDGET: The actual training cost will be equal to or less than this value. For Edge models the budget must be between 1,000 and 100,000 milli node hours, inclusive.
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines

Request JSON body:

{
  "displayName": "TRAININGPIPELINE_DISPLAYNAME",
  "inputDataConfig": {
    "datasetId": "DATASET_ID",
    "fractionSplit": {
      "trainingFraction": "DECIMAL",
      "validationFraction": "DECIMAL",
      "testFraction": "DECIMAL"
    }
  },
  "modelToUpload": {
    "displayName": "MODEL_DISPLAYNAME",
    "description": "MODEL_DESCRIPTION",
    "labels": {
      "KEY": "VALUE"
    }
  },
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml",
  "trainingTaskInputs": {
    "multiLabel": "false",
    "modelType": ["EDGE_MODELTYPE"],
    "budgetMilliNodeHours": NODE_HOUR_BUDGET
  }
}
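Before sending the request, it can help to assemble and sanity-check the JSON body programmatically. The following Python sketch is illustrative only (the helper name and the example values are not part of the API); it builds the body shown above and verifies the documented constraints: the fractionSplit values must sum to 1, and an Edge image model's budget must fall in the 1,000–100,000 milli-node-hour range.

```python
import json


def build_request_body(display_name, dataset_id, model_display_name,
                       model_type, budget_milli_node_hours,
                       fractions=(0.7, 0.15, 0.15)):
    """Assemble the trainingPipelines request body for an AutoML Edge
    image classification job and validate the documented constraints."""
    train, val, test = fractions
    # fractionSplit values must sum to 1.
    assert abs(train + val + test - 1.0) < 1e-9, "fractions must sum to 1"
    # Edge image models accept budgets of 1,000-100,000 milli node hours.
    assert 1_000 <= budget_milli_node_hours <= 100_000, "budget out of range"
    return {
        "displayName": display_name,
        "inputDataConfig": {
            "datasetId": dataset_id,
            "fractionSplit": {
                "trainingFraction": str(train),
                "validationFraction": str(val),
                "testFraction": str(test),
            },
        },
        "modelToUpload": {"displayName": model_display_name},
        "trainingTaskDefinition": (
            "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
            "automl_image_classification_1.0.0.yaml"
        ),
        "trainingTaskInputs": {
            "multiLabel": "false",
            "modelType": [model_type],
            "budgetMilliNodeHours": budget_milli_node_hours,
        },
    }


# Example values only; replace with your own pipeline and dataset names.
body = build_request_body("my-pipeline", "1234567890", "my-edge-model",
                          "MOBILE_TF_LOW_LATENCY_1", 8_000)
with open("request.json", "w") as f:
    json.dump(body, f, indent=2)
```

Writing the validated body to request.json matches the curl and PowerShell instructions that follow.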

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and then run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines"

PowerShell

Save the request body in a file named request.json, and then run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines" | Select-Object -Expand Content

The response contains information about specifications, as well as the TRAININGPIPELINE_ID.

You can use the TRAININGPIPELINE_ID to get the status of the trainingPipeline job.

Classification (multi-label)

At training time, choose the AutoML Edge model type that fits your use case:

  • Low latency (MOBILE_TF_LOW_LATENCY_1)
  • General purpose (MOBILE_TF_VERSATILE_1)
  • Higher prediction quality (MOBILE_TF_HIGH_ACCURACY_1)

Select the tab below for your language or environment:

REST

Before using any of the request data, make the following replacements:

  • LOCATION: Region where the dataset is located and the model is created. For example, us-central1.
  • PROJECT: Your project ID.
  • TRAININGPIPELINE_DISPLAYNAME: Required. A display name for the trainingPipeline.
  • DATASET_ID: The ID number of the dataset to use for training.
  • fractionSplit: Optional. One of the possible ML-use split options for your data. The fractionSplit values must sum to 1. For example:
    • {"trainingFraction": "0.7","validationFraction": "0.15","testFraction": "0.15"}
  • MODEL_DISPLAYNAME*: A display name for the model uploaded (created) by the TrainingPipeline.
  • MODEL_DESCRIPTION*: A description of the model.
  • modelToUpload.labels*: Any set of key-value pairs for organizing your models. For example:
    • "env": "prod"
    • "tier": "backend"
  • EDGE_MODELTYPE: The type of Edge model to train. The options are:
    • MOBILE_TF_LOW_LATENCY_1
    • MOBILE_TF_VERSATILE_1
    • MOBILE_TF_HIGH_ACCURACY_1
  • NODE_HOUR_BUDGET: The actual training cost will be equal to or less than this value. For Edge models the budget must be between 1,000 and 100,000 milli node hours, inclusive.
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines

Request JSON body:

{
  "displayName": "TRAININGPIPELINE_DISPLAYNAME",
  "inputDataConfig": {
    "datasetId": "DATASET_ID",
    "fractionSplit": {
      "trainingFraction": "DECIMAL",
      "validationFraction": "DECIMAL",
      "testFraction": "DECIMAL"
    }
  },
  "modelToUpload": {
    "displayName": "MODEL_DISPLAYNAME",
    "description": "MODEL_DESCRIPTION",
    "labels": {
      "KEY": "VALUE"
    }
  },
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml",
  "trainingTaskInputs": {
    "multiLabel": "true",
    "modelType": ["EDGE_MODELTYPE"],
    "budgetMilliNodeHours": NODE_HOUR_BUDGET
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and then run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines"

PowerShell

Save the request body in a file named request.json, and then run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines" | Select-Object -Expand Content

The response contains information about specifications, as well as the TRAININGPIPELINE_ID.

You can use the TRAININGPIPELINE_ID to get the status of the trainingPipeline job.

Object detection

At training time, choose the AutoML Edge model type that fits your use case:

  • Low latency (MOBILE_TF_LOW_LATENCY_1)
  • General purpose (MOBILE_TF_VERSATILE_1)
  • Higher prediction quality (MOBILE_TF_HIGH_ACCURACY_1)

Select the tab below for your language or environment:

REST

Before using any of the request data, make the following replacements:

  • LOCATION: Region where the dataset is located and the model is created. For example, us-central1.
  • PROJECT: Your project ID.
  • TRAININGPIPELINE_DISPLAYNAME: Required. A display name for the trainingPipeline.
  • DATASET_ID: The ID number of the dataset to use for training.
  • fractionSplit: Optional. One of the possible ML-use split options for your data. The fractionSplit values must sum to 1. For example:
    • {"trainingFraction": "0.7","validationFraction": "0.15","testFraction": "0.15"}
  • MODEL_DISPLAYNAME*: A display name for the model uploaded (created) by the TrainingPipeline.
  • MODEL_DESCRIPTION*: A description of the model.
  • modelToUpload.labels*: Any set of key-value pairs for organizing your models. For example:
    • "env": "prod"
    • "tier": "backend"
  • EDGE_MODELTYPE: The type of Edge model to train. The options are:
    • MOBILE_TF_LOW_LATENCY_1
    • MOBILE_TF_VERSATILE_1
    • MOBILE_TF_HIGH_ACCURACY_1
  • NODE_HOUR_BUDGET: The actual training cost will be equal to or less than this value. For Cloud models the budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000, which represents one day in wall-clock time (assuming 9 nodes are used).
  • PROJECT_NUMBER: Your project's automatically generated project number.
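The milli-node-hour unit can be confusing; the default budget of 216,000 corresponds to one wall-clock day on 9 nodes, because one node hour equals 1,000 milli node hours. A tiny sketch (the helper name is illustrative, not part of the API):

```python
def milli_node_hours(nodes: int, hours: float) -> int:
    """Convert a node count and a wall-clock duration into the
    budgetMilliNodeHours unit used by the API (1 node hour = 1,000)."""
    return round(nodes * hours * 1000)


# One day on 9 nodes reproduces the documented default budget of 216,000.
assert milli_node_hours(9, 24) == 216_000
```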

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines

Request JSON body:

{
  "displayName": "TRAININGPIPELINE_DISPLAYNAME",
  "inputDataConfig": {
    "datasetId": "DATASET_ID",
    "fractionSplit": {
      "trainingFraction": "DECIMAL",
      "validationFraction": "DECIMAL",
      "testFraction": "DECIMAL"
    }
  },
  "modelToUpload": {
    "displayName": "MODEL_DISPLAYNAME",
    "description": "MODEL_DESCRIPTION",
    "labels": {
      "KEY": "VALUE"
    }
  },
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml",
  "trainingTaskInputs": {
    "modelType": ["EDGE_MODELTYPE"],
    "budgetMilliNodeHours": NODE_HOUR_BUDGET
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and then run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines"

PowerShell

Save the request body in a file named request.json, and then run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines" | Select-Object -Expand Content

The response contains information about specifications, as well as the TRAININGPIPELINE_ID.

You can use the TRAININGPIPELINE_ID to get the status of the trainingPipeline job.

Video

Select the tab below for your objective:

Action recognition

At training time, select the following AutoML Edge type:

  • MOBILE_VERSATILE_1: General purpose

REST

Before using any of the request data, make the following replacements:

  • PROJECT: Your project ID.
  • LOCATION: Region where the dataset is located and the model is created. For example, us-central1.
  • TRAINING_PIPELINE_DISPLAY_NAME: Required. A display name for the TrainingPipeline.
  • DATASET_ID: The ID of the training dataset.
  • TRAINING_FRACTION and TEST_FRACTION: The fractionSplit object is optional; you use it to control your data split. For more information about controlling the data split, see About data splits for AutoML models. For example:
    • {"trainingFraction": "0.8","validationFraction": "0","testFraction": "0.2"}
  • MODEL_DISPLAY_NAME: A display name for the trained model.
  • MODEL_DESCRIPTION: A description of the model.
  • MODEL_LABELS: Any set of key-value pairs for organizing your models. For example:
    • "env": "prod"
    • "tier": "backend"
  • EDGE_MODEL_TYPE:
    • MOBILE_VERSATILE_1: General purpose
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines

Request JSON body:

{
  "displayName": "TRAINING_PIPELINE_DISPLAY_NAME",
  "inputDataConfig": {
    "datasetId": "DATASET_ID",
    "fractionSplit": {
      "trainingFraction": "TRAINING_FRACTION",
      "validationFraction": "0",
      "testFraction": "TEST_FRACTION"
    }
  },
  "modelToUpload": {
    "displayName": "MODEL_DISPLAY_NAME",
    "description": "MODEL_DESCRIPTION",
    "labels": {
      "KEY": "VALUE"
    }
  },
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_action_recognition_1.0.0.yaml",
  "trainingTaskInputs": {
    "modelType": ["EDGE_MODEL_TYPE"]
  }
}
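The video request bodies are simpler than the image ones: there is no budget field, and the validation fraction is pinned to "0", so the training and test fractions alone must sum to 1. The following Python sketch is illustrative only (the helper name and example values are not part of the API); it assembles a video request body and checks that constraint.

```python
def build_video_body(display_name, dataset_id, model_display_name,
                     training_task_definition,
                     training_fraction=0.8, test_fraction=0.2,
                     model_type="MOBILE_VERSATILE_1"):
    """Assemble the trainingPipelines request body for an AutoML Edge
    video pipeline. validationFraction is always "0" for video tasks,
    so the two remaining fractions must sum to 1."""
    assert abs(training_fraction + test_fraction - 1.0) < 1e-9, \
        "training and test fractions must sum to 1"
    return {
        "displayName": display_name,
        "inputDataConfig": {
            "datasetId": dataset_id,
            "fractionSplit": {
                "trainingFraction": str(training_fraction),
                "validationFraction": "0",
                "testFraction": str(test_fraction),
            },
        },
        "modelToUpload": {"displayName": model_display_name},
        "trainingTaskDefinition": training_task_definition,
        "trainingTaskInputs": {"modelType": [model_type]},
    }


# Example values only; swap in your own names and dataset ID.
body = build_video_body(
    "my-video-pipeline", "1234567890", "my-video-edge-model",
    "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
    "automl_video_action_recognition_1.0.0.yaml")
```

The same helper applies to the video classification and object tracking tabs by passing their respective trainingTaskDefinition schema URIs.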

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and then run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines"

PowerShell

Save the request body in a file named request.json, and then run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines" | Select-Object -Expand Content

The response contains information about specifications, as well as the TRAININGPIPELINE_ID.

You can get the status of the trainingPipeline's progress to see when it finishes.

Classification

At training time, select the following AutoML Edge type:

  • MOBILE_VERSATILE_1: General purpose

REST

Before using any of the request data, make the following replacements:

  • PROJECT: Your project ID.
  • LOCATION: Region where the dataset is located and the model is created. For example, us-central1.
  • TRAINING_PIPELINE_DISPLAY_NAME: Required. A display name for the TrainingPipeline.
  • DATASET_ID: The ID of the training dataset.
  • TRAINING_FRACTION and TEST_FRACTION: The fractionSplit object is optional; you use it to control your data split. For more information about controlling the data split, see About data splits for AutoML models. For example:
    • {"trainingFraction": "0.8","validationFraction": "0","testFraction": "0.2"}
  • MODEL_DISPLAY_NAME: A display name for the trained model.
  • MODEL_DESCRIPTION: A description of the model.
  • MODEL_LABELS: Any set of key-value pairs for organizing your models. For example:
    • "env": "prod"
    • "tier": "backend"
  • EDGE_MODEL_TYPE:
    • MOBILE_VERSATILE_1: General purpose
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines

Request JSON body:

{
  "displayName": "TRAINING_PIPELINE_DISPLAY_NAME",
  "inputDataConfig": {
    "datasetId": "DATASET_ID",
    "fractionSplit": {
      "trainingFraction": "TRAINING_FRACTION",
      "validationFraction": "0",
      "testFraction": "TEST_FRACTION"
    }
  },
  "modelToUpload": {
    "displayName": "MODEL_DISPLAY_NAME",
    "description": "MODEL_DESCRIPTION",
    "labels": {
      "KEY": "VALUE"
    }
  },
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_classification_1.0.0.yaml",
  "trainingTaskInputs": {
    "modelType": ["EDGE_MODEL_TYPE"]
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and then run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines"

PowerShell

Save the request body in a file named request.json, and then run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines" | Select-Object -Expand Content

The response contains information about specifications, as well as the TRAININGPIPELINE_ID.

You can get the status of the trainingPipeline's progress to see when it finishes.

Object tracking

At training time, select an AutoML Edge type:

  • MOBILE_VERSATILE_1: General purpose
  • MOBILE_CORAL_VERSATILE_1: Higher prediction quality for Google Coral
  • MOBILE_CORAL_LOW_LATENCY_1: Lower latency for Google Coral
  • MOBILE_JETSON_VERSATILE_1: Higher prediction quality for NVIDIA Jetson
  • MOBILE_JETSON_LOW_LATENCY_1: Lower latency for NVIDIA Jetson

REST

Before using any of the request data, make the following replacements:

  • PROJECT: Your project ID.
  • LOCATION: Region where the dataset is located and the model is created. For example, us-central1.
  • TRAINING_PIPELINE_DISPLAY_NAME: Required. A display name for the TrainingPipeline.
  • DATASET_ID: The ID of the training dataset.
  • TRAINING_FRACTION and TEST_FRACTION: The fractionSplit object is optional; you use it to control your data split. For more information about controlling the data split, see About data splits for AutoML models. For example:
    • {"trainingFraction": "0.8","validationFraction": "0","testFraction": "0.2"}
  • MODEL_DISPLAY_NAME: A display name for the trained model.
  • MODEL_DESCRIPTION: A description of the model.
  • MODEL_LABELS: Any set of key-value pairs for organizing your models. For example:
    • "env": "prod"
    • "tier": "backend"
  • EDGE_MODEL_TYPE: One of the following:
    • MOBILE_VERSATILE_1: General purpose
    • MOBILE_CORAL_VERSATILE_1: Higher prediction quality for Google Coral
    • MOBILE_CORAL_LOW_LATENCY_1: Lower latency for Google Coral
    • MOBILE_JETSON_VERSATILE_1: Higher prediction quality for NVIDIA Jetson
    • MOBILE_JETSON_LOW_LATENCY_1: Lower latency for NVIDIA Jetson
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines

Request JSON body:

{
  "displayName": "TRAINING_PIPELINE_DISPLAY_NAME",
  "inputDataConfig": {
    "datasetId": "DATASET_ID",
    "fractionSplit": {
      "trainingFraction": "TRAINING_FRACTION",
      "validationFraction": "0",
      "testFraction": "TEST_FRACTION"
    }
  },
  "modelToUpload": {
    "displayName": "MODEL_DISPLAY_NAME",
    "description": "MODEL_DESCRIPTION",
    "labels": {
      "KEY": "VALUE"
    }
  },
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_object_tracking_1.0.0.yaml",
  "trainingTaskInputs": {
    "modelType": ["EDGE_MODEL_TYPE"]
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and then run the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines"

PowerShell

Save the request body in a file named request.json, and then run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT/locations/LOCATION/trainingPipelines" | Select-Object -Expand Content

The response contains information about specifications, as well as the TRAININGPIPELINE_ID.

You can get the status of the trainingPipeline's progress to see when it finishes.

Get the trainingPipeline status

Use the following code to programmatically get the status of the trainingPipeline creation.

REST

Before using any of the request data, make the following replacements:

  • LOCATION: Region where the TrainingPipeline is located.
  • PROJECT: Your project ID.
  • TRAININGPIPELINE_ID: The ID of the specific TrainingPipeline.
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines/TRAININGPIPELINE_ID

To send your request, choose one of these options:

curl

Run the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines/TRAININGPIPELINE_ID"

PowerShell

Run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines/TRAININGPIPELINE_ID" | Select-Object -Expand Content

The "state" field shows the current state of the operation. A trainingPipeline whose creation has finished successfully reports the state PIPELINE_STATE_SUCCEEDED.
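When polling the state from your own tooling, it helps to distinguish terminal states from in-progress ones. The sketch below assumes the PIPELINE_STATE_* string values used by the Vertex AI API; the helper name is illustrative.

```python
# Terminal states for a Vertex AI trainingPipeline: once one of these is
# reached, the pipeline's state will not change again.
TERMINAL_STATES = {
    "PIPELINE_STATE_SUCCEEDED",
    "PIPELINE_STATE_FAILED",
    "PIPELINE_STATE_CANCELLED",
}


def is_done(state: str) -> bool:
    """Return True when a polled "state" value means the pipeline finished."""
    return state in TERMINAL_STATES


assert is_done("PIPELINE_STATE_SUCCEEDED")
assert not is_done("PIPELINE_STATE_RUNNING")
```

A polling loop would sleep between GET requests and stop as soon as is_done returns True, then inspect the error field on failure.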

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.aiplatform.v1.DeployedModelRef;
import com.google.cloud.aiplatform.v1.EnvVar;
import com.google.cloud.aiplatform.v1.FilterSplit;
import com.google.cloud.aiplatform.v1.FractionSplit;
import com.google.cloud.aiplatform.v1.InputDataConfig;
import com.google.cloud.aiplatform.v1.Model;
import com.google.cloud.aiplatform.v1.ModelContainerSpec;
import com.google.cloud.aiplatform.v1.PipelineServiceClient;
import com.google.cloud.aiplatform.v1.PipelineServiceSettings;
import com.google.cloud.aiplatform.v1.Port;
import com.google.cloud.aiplatform.v1.PredefinedSplit;
import com.google.cloud.aiplatform.v1.PredictSchemata;
import com.google.cloud.aiplatform.v1.TimestampSplit;
import com.google.cloud.aiplatform.v1.TrainingPipeline;
import com.google.cloud.aiplatform.v1.TrainingPipelineName;
import com.google.rpc.Status;
import java.io.IOException;

public class GetTrainingPipelineSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String trainingPipelineId = "YOUR_TRAINING_PIPELINE_ID";
    getTrainingPipeline(project, trainingPipelineId);
  }

  static void getTrainingPipeline(String project, String trainingPipelineId) throws IOException {
    PipelineServiceSettings pipelineServiceSettings =
        PipelineServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (PipelineServiceClient pipelineServiceClient =
        PipelineServiceClient.create(pipelineServiceSettings)) {
      String location = "us-central1";
      TrainingPipelineName trainingPipelineName =
          TrainingPipelineName.of(project, location, trainingPipelineId);

      TrainingPipeline trainingPipelineResponse =
          pipelineServiceClient.getTrainingPipeline(trainingPipelineName);

      System.out.println("Get Training Pipeline Response");
      System.out.format("\tName: %s\n", trainingPipelineResponse.getName());
      System.out.format("\tDisplay Name: %s\n", trainingPipelineResponse.getDisplayName());
      System.out.format(
          "\tTraining Task Definition: %s\n", trainingPipelineResponse.getTrainingTaskDefinition());
      System.out.format(
          "\tTraining Task Inputs: %s\n", trainingPipelineResponse.getTrainingTaskInputs());
      System.out.format(
          "\tTraining Task Metadata: %s\n", trainingPipelineResponse.getTrainingTaskMetadata());
      System.out.format("\tState: %s\n", trainingPipelineResponse.getState());
      System.out.format("\tCreate Time: %s\n", trainingPipelineResponse.getCreateTime());
      System.out.format("\tStart Time: %s\n", trainingPipelineResponse.getStartTime());
      System.out.format("\tEnd Time: %s\n", trainingPipelineResponse.getEndTime());
      System.out.format("\tUpdate Time: %s\n", trainingPipelineResponse.getUpdateTime());
      System.out.format("\tLabels: %s\n", trainingPipelineResponse.getLabelsMap());

      InputDataConfig inputDataConfig = trainingPipelineResponse.getInputDataConfig();
      System.out.println("\tInput Data Config");
      System.out.format("\t\tDataset Id: %s\n", inputDataConfig.getDatasetId());
      System.out.format("\t\tAnnotations Filter: %s\n", inputDataConfig.getAnnotationsFilter());

      FractionSplit fractionSplit = inputDataConfig.getFractionSplit();
      System.out.println("\t\tFraction Split");
      System.out.format("\t\t\tTraining Fraction: %s\n", fractionSplit.getTrainingFraction());
      System.out.format("\t\t\tValidation Fraction: %s\n", fractionSplit.getValidationFraction());
      System.out.format("\t\t\tTest Fraction: %s\n", fractionSplit.getTestFraction());

      FilterSplit filterSplit = inputDataConfig.getFilterSplit();
      System.out.println("\t\tFilter Split");
      System.out.format("\t\t\tTraining Filter: %s\n", filterSplit.getTrainingFilter());
      System.out.format("\t\t\tValidation Filter: %s\n", filterSplit.getValidationFilter());
      System.out.format("\t\t\tTest Filter: %s\n", filterSplit.getTestFilter());

      PredefinedSplit predefinedSplit = inputDataConfig.getPredefinedSplit();
      System.out.println("\t\tPredefined Split");
      System.out.format("\t\t\tKey: %s\n", predefinedSplit.getKey());

      TimestampSplit timestampSplit = inputDataConfig.getTimestampSplit();
      System.out.println("\t\tTimestamp Split");
      System.out.format("\t\t\tTraining Fraction: %s\n", timestampSplit.getTrainingFraction());
      System.out.format("\t\t\tTest Fraction: %s\n", timestampSplit.getTestFraction());
      System.out.format("\t\t\tValidation Fraction: %s\n", timestampSplit.getValidationFraction());
      System.out.format("\t\t\tKey: %s\n", timestampSplit.getKey());

      Model modelResponse = trainingPipelineResponse.getModelToUpload();
      System.out.println("\t\tModel to upload");
      System.out.format("\t\tName: %s\n", modelResponse.getName());
      System.out.format("\t\tDisplay Name: %s\n", modelResponse.getDisplayName());
      System.out.format("\t\tDescription: %s\n", modelResponse.getDescription());
      System.out.format("\t\tMetadata Schema Uri: %s\n", modelResponse.getMetadataSchemaUri());
      System.out.format("\t\tMeta Data: %s\n", modelResponse.getMetadata());
      System.out.format("\t\tTraining Pipeline: %s\n", modelResponse.getTrainingPipeline());
      System.out.format("\t\tArtifact Uri: %s\n", modelResponse.getArtifactUri());
      System.out.format(
          "\t\tSupported Deployment Resources Types: %s\n",
          modelResponse.getSupportedDeploymentResourcesTypesList().toString());
      System.out.format(
          "\t\tSupported Input Storage Formats: %s\n",
          modelResponse.getSupportedInputStorageFormatsList().toString());
      System.out.format(
          "\t\tSupported Output Storage Formats: %s\n",
          modelResponse.getSupportedOutputStorageFormatsList().toString());
      System.out.format("\t\tCreate Time: %s\n", modelResponse.getCreateTime());
      System.out.format("\t\tUpdate Time: %s\n", modelResponse.getUpdateTime());
      System.out.format("\t\tLabels: %s\n", modelResponse.getLabelsMap());

      PredictSchemata predictSchemata = modelResponse.getPredictSchemata();
      System.out.println("\tPredict Schemata");
      System.out.format("\t\tInstance Schema Uri: %s\n", predictSchemata.getInstanceSchemaUri());
      System.out.format(
          "\t\tParameters Schema Uri: %s\n", predictSchemata.getParametersSchemaUri());
      System.out.format(
          "\t\tPrediction Schema Uri: %s\n", predictSchemata.getPredictionSchemaUri());

      for (Model.ExportFormat supportedExportFormat :
          modelResponse.getSupportedExportFormatsList()) {
        System.out.println("\tSupported Export Format");
        System.out.format("\t\tId: %s\n", supportedExportFormat.getId());
      }

      ModelContainerSpec containerSpec = modelResponse.getContainerSpec();
      System.out.println("\tContainer Spec");
      System.out.format("\t\tImage Uri: %s\n", containerSpec.getImageUri());
      System.out.format("\t\tCommand: %s\n", containerSpec.getCommandList());
      System.out.format("\t\tArgs: %s\n", containerSpec.getArgsList());
      System.out.format("\t\tPredict Route: %s\n", containerSpec.getPredictRoute());
      System.out.format("\t\tHealth Route: %s\n", containerSpec.getHealthRoute());

      for (EnvVar envVar : containerSpec.getEnvList()) {
        System.out.println("\t\tEnv");
        System.out.format("\t\t\tName: %s\n", envVar.getName());
        System.out.format("\t\t\tValue: %s\n", envVar.getValue());
      }

      for (Port port : containerSpec.getPortsList()) {
        System.out.println("\t\tPort");
        System.out.format("\t\t\tContainer Port: %s\n", port.getContainerPort());
      }

      for (DeployedModelRef deployedModelRef : modelResponse.getDeployedModelsList()) {
        System.out.println("\tDeployed Model");
        System.out.format("\t\tEndpoint: %s\n", deployedModelRef.getEndpoint());
        System.out.format("\t\tDeployed Model Id: %s\n", deployedModelRef.getDeployedModelId());
      }

      Status status = trainingPipelineResponse.getError();
      System.out.println("\tError");
      System.out.format("\t\tCode: %s\n", status.getCode());
      System.out.format("\t\tMessage: %s\n", status.getMessage());
    }
  }
}

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

from google.cloud import aiplatform


def get_training_pipeline_sample(
    project: str,
    training_pipeline_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.PipelineServiceClient(client_options=client_options)
    name = client.training_pipeline_path(
        project=project, location=location, training_pipeline=training_pipeline_id
    )
    response = client.get_training_pipeline(name=name)
    print("response:", response)

Get model information

After the training pipeline finishes, you can use the model's display name to get more detailed model information.

REST

Before using any of the request data, make the following replacements:

  • LOCATION: Region where your model is located. For example, us-central1.
  • PROJECT: Your project ID.
  • MODEL_DISPLAYNAME: The display name of your model, which you specified when creating the trainingPipeline job.
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models?filter=display_name=MODEL_DISPLAYNAME
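When the display name contains spaces or other reserved characters, the filter value in the query string must be URL-encoded. The following sketch uses only the Python standard library (the project and model names are placeholders):

```python
from urllib.parse import urlencode


def models_list_url(location: str, project: str, display_name: str) -> str:
    """Build the models.list URL with a display_name filter; the filter
    expression is percent-encoded so spaces and '=' survive the query string."""
    base = (f"https://{location}-aiplatform.googleapis.com/v1/"
            f"projects/{project}/locations/{location}/models")
    return f"{base}?{urlencode({'filter': f'display_name={display_name}'})}"


url = models_list_url("us-central1", "my-project", "my edge model")
```

The resulting URL can be passed directly to the curl or Invoke-WebRequest commands that follow.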

To send your request, choose one of these options:

curl

Run the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models?filter=display_name=MODEL_DISPLAYNAME"

PowerShell

Run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models?filter=display_name=MODEL_DISPLAYNAME" | Select-Object -Expand Content

After your AutoML Edge model has finished training, the response contains the detailed information for the model.

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.aiplatform.v1.DeployedModelRef;
import com.google.cloud.aiplatform.v1.EnvVar;
import com.google.cloud.aiplatform.v1.Model;
import com.google.cloud.aiplatform.v1.Model.ExportFormat;
import com.google.cloud.aiplatform.v1.ModelContainerSpec;
import com.google.cloud.aiplatform.v1.ModelName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import com.google.cloud.aiplatform.v1.Port;
import com.google.cloud.aiplatform.v1.PredictSchemata;
import java.io.IOException;

public class GetModelSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    getModelSample(project, modelId);
  }

  static void getModelSample(String project, String modelId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelName modelName = ModelName.of(project, location, modelId);

      Model modelResponse = modelServiceClient.getModel(modelName);
      System.out.println("Get Model response");
      System.out.format("\tName: %s\n", modelResponse.getName());
      System.out.format("\tDisplay Name: %s\n", modelResponse.getDisplayName());
      System.out.format("\tDescription: %s\n", modelResponse.getDescription());

      System.out.format("\tMetadata Schema Uri: %s\n", modelResponse.getMetadataSchemaUri());
      System.out.format("\tMetadata: %s\n", modelResponse.getMetadata());
      System.out.format("\tTraining Pipeline: %s\n", modelResponse.getTrainingPipeline());
      System.out.format("\tArtifact Uri: %s\n", modelResponse.getArtifactUri());

      System.out.format(
          "\tSupported Deployment Resources Types: %s\n",
          modelResponse.getSupportedDeploymentResourcesTypesList());
      System.out.format(
          "\tSupported Input Storage Formats: %s\n",
          modelResponse.getSupportedInputStorageFormatsList());
      System.out.format(
          "\tSupported Output Storage Formats: %s\n",
          modelResponse.getSupportedOutputStorageFormatsList());

      System.out.format("\tCreate Time: %s\n", modelResponse.getCreateTime());
      System.out.format("\tUpdate Time: %s\n", modelResponse.getUpdateTime());
      System.out.format("\tLabels: %s\n", modelResponse.getLabelsMap());

      PredictSchemata predictSchemata = modelResponse.getPredictSchemata();
      System.out.println("\tPredict Schemata");
      System.out.format("\t\tInstance Schema Uri: %s\n", predictSchemata.getInstanceSchemaUri());
      System.out.format(
          "\t\tParameters Schema Uri: %s\n", predictSchemata.getParametersSchemaUri());
      System.out.format(
          "\t\tPrediction Schema Uri: %s\n", predictSchemata.getPredictionSchemaUri());

      for (ExportFormat exportFormat : modelResponse.getSupportedExportFormatsList()) {
        System.out.println("\tSupported Export Format");
        System.out.format("\t\tId: %s\n", exportFormat.getId());
      }

      ModelContainerSpec containerSpec = modelResponse.getContainerSpec();
      System.out.println("\tContainer Spec");
      System.out.format("\t\tImage Uri: %s\n", containerSpec.getImageUri());
      System.out.format("\t\tCommand: %s\n", containerSpec.getCommandList());
      System.out.format("\t\tArgs: %s\n", containerSpec.getArgsList());
      System.out.format("\t\tPredict Route: %s\n", containerSpec.getPredictRoute());
      System.out.format("\t\tHealth Route: %s\n", containerSpec.getHealthRoute());

      for (EnvVar envVar : containerSpec.getEnvList()) {
        System.out.println("\t\tEnv");
        System.out.format("\t\t\tName: %s\n", envVar.getName());
        System.out.format("\t\t\tValue: %s\n", envVar.getValue());
      }

      for (Port port : containerSpec.getPortsList()) {
        System.out.println("\t\tPort");
        System.out.format("\t\t\tContainer Port: %s\n", port.getContainerPort());
      }

      for (DeployedModelRef deployedModelRef : modelResponse.getDeployedModelsList()) {
        System.out.println("\tDeployed Model");
        System.out.format("\t\tEndpoint: %s\n", deployedModelRef.getEndpoint());
        System.out.format("\t\tDeployed Model Id: %s\n", deployedModelRef.getDeployedModelId());
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */
// const modelId = 'YOUR_MODEL_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModel() {
  // Configure the model resource name
  const name = `projects/${project}/locations/${location}/models/${modelId}`;
  const request = {
    name,
  };
  // Get and print out the details of the model resource
  const [response] = await modelServiceClient.getModel(request);

  console.log('Get model response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tDisplayName : ${response.displayName}`);
  console.log(`\tDescription : ${response.description}`);
  console.log(`\tMetadata schema uri : ${response.metadataSchemaUri}`);
  console.log(`\tMetadata : ${JSON.stringify(response.metadata)}`);
  console.log(`\tTraining pipeline : ${response.trainingPipeline}`);
  console.log(`\tArtifact uri : ${response.artifactUri}`);
  console.log(
    `\tSupported deployment resources types : \
      ${response.supportedDeploymentResourcesTypes}`
  );
  console.log(
    `\tSupported input storage formats : \
      ${response.supportedInputStorageFormats}`
  );
  console.log(
    `\tSupported output storage formats : \
      ${response.supportedOutputStorageFormats}`
  );
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tUpdate time : ${JSON.stringify(response.updateTime)}`);
  console.log(`\tLabels : ${JSON.stringify(response.labels)}`);

  const predictSchemata = response.predictSchemata;
  console.log('\tPredict schemata');
  console.log(`\tInstance schema uri : ${predictSchemata.instanceSchemaUri}`);
  console.log(
    `\tParameters schema uri : ${predictSchemata.parametersSchemaUri}`
  );
  console.log(
    `\tPrediction schema uri : ${predictSchemata.predictionSchemaUri}`
  );

  const [supportedExportFormats] = response.supportedExportFormats;
  console.log('\tSupported export formats');
  console.log(`\t${supportedExportFormats}`);

  const containerSpec = response.containerSpec;
  console.log('\tContainer Spec');
  if (!containerSpec) {
    console.log(`\t\t${JSON.stringify(containerSpec)}`);
    console.log('\t\tImage uri : {}');
    console.log('\t\tCommand : {}');
    console.log('\t\tArgs : {}');
    console.log('\t\tPredict route : {}');
    console.log('\t\tHealth route : {}');
    console.log('\t\tEnv');
    console.log('\t\t\t{}');
    console.log('\t\tPort');
    console.log('\t\t{}');
  } else {
    console.log(`\t\t${JSON.stringify(containerSpec)}`);
    console.log(`\t\tImage uri : ${containerSpec.imageUri}`);
    console.log(`\t\tCommand : ${containerSpec.command}`);
    console.log(`\t\tArgs : ${containerSpec.args}`);
    console.log(`\t\tPredict route : ${containerSpec.predictRoute}`);
    console.log(`\t\tHealth route : ${containerSpec.healthRoute}`);
    const env = containerSpec.env;
    console.log('\t\tEnv');
    console.log(`\t\t\t${JSON.stringify(env)}`);
    const ports = containerSpec.ports;
    console.log('\t\tPort');
    console.log(`\t\t\t${JSON.stringify(ports)}`);
  }

  const [deployedModels] = response.deployedModels;
  console.log('\tDeployed models');
  console.log('\t\t', deployedModels);
}
getModel();

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

from google.cloud import aiplatform


def get_model_sample(project: str, location: str, model_name: str):

    # Initialize the SDK with the project and region that hold the model
    aiplatform.init(project=project, location=location)

    # Retrieve the model resource and print its identifying fields
    model = aiplatform.Model(model_name=model_name)

    print(model.display_name)
    print(model.resource_name)
    return model
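The `model_name` argument can be either a bare model ID or a fully qualified resource name. As a minimal sketch (the helper below is hypothetical, not part of the SDK), the fully qualified form follows the same `projects/.../locations/.../models/...` pattern that the Java sample builds with `ModelName.of` and the Node.js sample builds with a template string:

```python
def model_resource_name(project: str, location: str, model_id: str) -> str:
    """Assemble a fully qualified Vertex AI model resource name
    from its project, region, and model ID components."""
    return f"projects/{project}/locations/{location}/models/{model_id}"


# Example: a model with ID 1234567890 in us-central1
print(model_resource_name("my-project", "us-central1", "1234567890"))
# → projects/my-project/locations/us-central1/models/1234567890
```

Passing a name in this form makes `project` and `location` explicit, which is useful when the model lives in a different project or region than the one passed to `aiplatform.init`.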

What's next