Updating the controller

Now that we are done defining the routes and updating the model, we will work on the logic discussed in the Solution design section earlier. We are going to add a new method named uploadAudio to CloudAIAPI. Open server/controllers/cloud-ai-api.ts; we will first add the required imports and the SpeechClient configuration. Before the class definition, add this code:

// SNIPP SNIPP
const speech = require('@google-cloud/speech');
const speechClient = new speech.SpeechClient({
    credentials: JSON.parse(process.env.GCP_SK_CREDENTIALS)
});
// SNIPP SNIPP

Here, we use the GCP_SK_CREDENTIALS environment variable we set in the .env file to initialize a new SpeechClient. Next, we are going to create a new method named uploadAudio and start by converting the audio file uploaded by the user to a base64 string, as we did for the image upload:

// SNIPP SNIPP
uploadAudio = (req, res) => {
    // console.log('req.file', req.file);
    const filePath = req.file.path;
    this.base64_encode(filePath).then((BASE64_CONTENT) => {
        // CODE BELOW
    });
}
// SNIPP SNIPP
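The base64_encode() helper referenced above is the same one we set up for the image upload. In case you need a refresher, a minimal sketch of it could look like the following (the exact implementation in your project may differ; in the class it is an instance method, hence the this.base64_encode() call):

```typescript
import * as fs from 'fs';

// Sketch of a base64_encode helper: reads a file from disk and resolves
// with its contents encoded as a base64 string.
function base64_encode(filePath: string): Promise<string> {
    return new Promise((resolve, reject) => {
        fs.readFile(filePath, (err, data) => {
            if (err) {
                return reject(err);
            }
            resolve(data.toString('base64'));
        });
    });
}
```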

Inside the base64_encode() callback, we start by constructing a request to send to the Cloud Speech API:

// SNIPP SNIPP
const config = {
    encoding: 'LINEAR16',
    sampleRateHertz: 44100,
    languageCode: 'en-US',
};
const audio = {
    content: BASE64_CONTENT
};
const request = {
    config: config,
    audio: audio,
};
speechClient
    .recognize(request)
    .then((data) => {
        // CODE BELOW
    })
    .catch(err => {
        console.error('ERROR:', err);
        return res.status(500).json(err);
    });
// SNIPP SNIPP

Using recognize() on the instance of SpeechClient, we make a request submitting the audio and the config. The code inside the then() callback will be as follows:

// SNIPP SNIPP
const transcriptions = [];
const response = data[0];
response.results.forEach((result) => {
    let o: any = {};
    o.transcript = result.alternatives[0].transcript;
    o.words = result.alternatives[0].words;
    o.confidence = result.alternatives[0].confidence;
    transcriptions.push(o);
});
cloudinary.v2.uploader.upload(filePath, {
    resource_type: 'auto'
}, (error, result) => {
    // CODE BELOW
});
// SNIPP SNIPP

In the previous code, we extracted the results from the response and, using forEach, processed alternatives[0] of each result, storing its transcript, words, and confidence. Now that we have the text for the audio, we will upload the audio file itself to Cloudinary:

// SNIPP SNIPP
if (error) {
    return res.status(400).json({
        message: error.message
    });
}
let msg: any = {};
msg.thread = req.params.threadId;
msg.createdBy = req.user;
msg.lastUpdatedBy = req.user;
msg.transcriptions = transcriptions;
msg.cloudinaryProps = result;
msg.description = `<div align="center" class="embed-responsive-16by9">
    <audio class="embed-responsive-item" controls>
        <source src="${result.secure_url}">
        Your browser does not support the audio tag.
    </audio>
</div>`;
let message = new Message(msg);
message.save((err, msg) => {
    if (err) {
        console.log(err);
        return this.respondErrorMessage(res, err);
    }
    res.status(200).json(msg);
});
// Delete the local file so we don't clutter
this.deleteFile(filePath);
// SNIPP SNIPP

Once the upload is complete, we extract the audio URL, build a message description that renders an HTML5 audio player, and then save the message to the database. This wraps up our controller logic and our server-side logic. In the next section, we are going to work on the client-side logic.
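The deleteFile() helper called at the end of uploadAudio is the same one used for the image upload. A minimal sketch of it could look like this (your implementation may vary); it asynchronously removes the temporary file and logs any failure rather than failing the request:

```typescript
import * as fs from 'fs';

// Sketch of a deleteFile helper: removes the temporary upload from disk.
// A failure here is logged but is not fatal to the request.
function deleteFile(filePath: string): void {
    fs.unlink(filePath, (err) => {
        if (err) {
            console.log('Unable to delete', filePath, err);
        }
    });
}
```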
