Now, click on Send. In Postman, the response should look something like this:
// SNIPP SNIPP
{
  "name": "asia-east1.13405238892700014097",
  "metadata": {
    ...
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoResponse",
    "annotationResults": [
      {
        "segmentLabelAnnotations": [
          {
            "entity": {
              "entityId": "...",
              "description": "...",
              "languageCode": "..."
            },
            "segments": [
              {
                "segment": {
                  "startTimeOffset": "...",
                  "endTimeOffset": "..."
                },
                "confidence": ...
              }
            ]
          },
          ...
        ],
        "shotLabelAnnotations": [
          {
            "entity": {
              "entityId": "...",
              "description": "...",
              "languageCode": "..."
            },
            "segments": [
              {
                "segment": {
                  "startTimeOffset": "...",
                  "endTimeOffset": "..."
                },
                "confidence": ...
              }
            ]
          },
          ...
        ]
      }
    ]
  }
}
// SNIPP SNIPP
I have removed a lot of data for brevity. The name and metadata properties return information about the request, done indicates whether the request is still being processed or has completed, and the response property contains the actual results. For label detection, we are interested in segmentLabelAnnotations and shotLabelAnnotations. segmentLabelAnnotations lists the labels detected over time segments of the video, each with a confidence score. shotLabelAnnotations provides the same entity information, but per camera shot rather than per segment.
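To make the shape of the response concrete, here is a minimal sketch of walking annotationResults to collect label descriptions with their confidence scores. The sample dictionary below is hypothetical and heavily trimmed; a real response carries many more fields.

```python
def extract_labels(operation_response):
    """Collect (description, confidence) pairs from segmentLabelAnnotations."""
    labels = []
    for result in operation_response["response"]["annotationResults"]:
        for annotation in result.get("segmentLabelAnnotations", []):
            description = annotation["entity"]["description"]
            # Each annotation can match several time segments of the video.
            for segment in annotation["segments"]:
                labels.append((description, segment["confidence"]))
    return labels

# Hypothetical sample shaped like the response shown above.
sample = {
    "done": True,
    "response": {
        "annotationResults": [
            {
                "segmentLabelAnnotations": [
                    {
                        "entity": {
                            "entityId": "...",
                            "description": "performance art",
                            "languageCode": "en-US",
                        },
                        "segments": [
                            {
                                "segment": {
                                    "startTimeOffset": "0s",
                                    "endTimeOffset": "38.7s",
                                },
                                "confidence": 0.92,
                            }
                        ],
                    }
                ]
            }
        ]
    },
}

print(extract_labels(sample))  # [('performance art', 0.92)]
```

The same loop works for shotLabelAnnotations; only the key name changes.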
This means it takes at least two API calls to get the actual response, not one: one to start the annotation and one to fetch the operation status. Imagine that our operation status request comes back with done not yet true (the done field may be absent from the JSON entirely while the request is still being processed); we then make another request after, say, 10 seconds. At that point we have made three API calls and still have no response.
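The polling described above can be sketched as a small loop. Note that fetch_operation_status is a hypothetical stand-in for a real HTTP GET against the operations endpoint; this is only a sketch of the polling pattern, not the integration we build later.

```python
import time

def wait_for_annotation(fetch_operation_status, poll_interval=10, max_polls=30):
    """Poll the operation status until it reports done, then return it."""
    for _ in range(max_polls):
        operation = fetch_operation_status()
        # While processing, "done" may be absent from the JSON entirely,
        # so treat a missing key the same as done == false.
        if operation.get("done"):
            return operation
        time.sleep(poll_interval)
    raise TimeoutError("annotation did not finish in time")

# Simulate an operation that completes on the third status check.
responses = iter([{}, {"done": False}, {"done": True, "response": {"annotationResults": []}}])
result = wait_for_annotation(lambda: next(responses), poll_interval=0)
print(result["done"])  # True
```

Each iteration of the loop is one more billable API call, which is exactly why the call count climbs while done stays false.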
You can view the total number of API calls made on the Dashboard of the APIs & Services section of the console. Now that we have an idea of the API, let's get started with integrating the Video Intelligence API with SmartExchange.