Inside the client/app/view-thread folder, create another folder named upload-audio-modal and inside that, create two files named upload-audio-modal.html and upload-audio-modal.ts. Update client/app/view-thread/upload-audio-modal/upload-audio-modal.html as shown here:
// SNIPP SNIPP
<div class="modal-header">
  <h4 class="modal-title">Reply with Audio</h4>
  <button type="button" class="close" aria-label="Close" (click)="activeModal.dismiss('x')">
    <span aria-hidden="true">&times;</span>
  </button>
</div>
<div class="modal-body">
  <div class="form-group">
    <div class="text-center">
      <audio #audio class="audio" controls></audio>
    </div>
    <br>
    <button type="button" class="btn btn-success" [disabled]="isRecording || isProcessing" (click)="startRecording()">Record</button>
    <button type="button" class="btn btn-warning" [disabled]="!isRecording || isProcessing" (click)="stopRecording()">Stop</button>
    <button type="button" class="btn btn-info" (click)="download()" [disabled]="!hasRecorded || isProcessing">Download</button>
  </div>
  <ngb-alert type="danger" [dismissible]="false" *ngIf="error">
    <strong>Error!</strong> {{error.details || error.message || error}}
  </ngb-alert>
</div>
<div class="modal-footer">
  <label *ngIf="isProcessing">This might take a couple of minutes. Please be patient.</label>
  <i *ngIf="isProcessing" class="fa fa-circle-o-notch fa-spin fa-3x"></i>
  <button type="button" class="btn btn-success" [disabled]="isProcessing || !hasRecorded" (click)="reply()">Reply</button>
</div>
// SNIPP SNIPP
Here, we have an audio tag, which we are going to use to show the audio preview. We have three buttons, one to start recording, one to stop recording, and one to download the recorded audio. Apart from that, we have the required error messages and loading indicators. For the required logic, we will get started by adding the imports to client/app/view-thread/upload-audio-modal/upload-audio-modal.ts:
// SNIPP SNIPP
import { Component, Input, Output, EventEmitter, ViewChild, AfterViewInit } from '@angular/core';
import { NgbActiveModal } from '@ng-bootstrap/ng-bootstrap';
import { ActivatedRoute } from '@angular/router';
import { AudioAPIService } from '../../services/audio.api.service';
import * as RecordRTC from 'recordrtc/RecordRTC.min';
// SNIPP SNIPP
We are going to create the missing dependencies in a moment. Next, we are going to define the UploadAudioModal component as shown here:
// SNIPP SNIPP
@Component({
  selector: 'sm-upload-audio-modal',
  templateUrl: './upload-audio-modal.html'
})
export class UploadAudioModal implements AfterViewInit {
  @Input() threadId; // fetched from the view-thread page
  @Output() updateThread = new EventEmitter<any>(); // update the main thread with the new message
  error: string = '';
  isProcessing: boolean = false;
  isRecording: boolean = false;
  hasRecorded: boolean = false;
  private stream: MediaStream;
  private recordRTC: any;
  @ViewChild('audio') audio;
}
// SNIPP SNIPP
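The three boolean flags declared above drive every `[disabled]` binding in the template. As a quick standalone sanity check (an illustrative sketch, not part of the component), the same logic can be expressed as a pure function:

```typescript
// Mirrors the [disabled] bindings from upload-audio-modal.html.
// Standalone sketch for illustration; the real state lives on the component.
interface ModalState {
  isRecording: boolean;
  isProcessing: boolean;
  hasRecorded: boolean;
}

function disabledButtons(s: ModalState) {
  return {
    record: s.isRecording || s.isProcessing,
    stop: !s.isRecording || s.isProcessing,
    download: !s.hasRecorded || s.isProcessing,
    reply: s.isProcessing || !s.hasRecorded
  };
}

// Initial state: only Record is enabled
console.log(disabledButtons({ isRecording: false, isProcessing: false, hasRecorded: false }));
// While recording: only Stop is enabled
console.log(disabledButtons({ isRecording: true, isProcessing: false, hasRecorded: false }));
```

Walking through the states this way also shows why we flip the flags in the order we do later on: `hasRecorded` only becomes true in `stopRecording()`, so Reply and Download stay disabled until a clip actually exists.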
The constructor will be as follows:
// SNIPP SNIPP
constructor(
  public activeModal: NgbActiveModal,
  public audioAPIService: AudioAPIService,
  private route: ActivatedRoute
) {}
// SNIPP SNIPP
After the component has been initialized, we fetch the audio element and set the default state as follows:
// SNIPP SNIPP
ngAfterViewInit() {
  // set the initial state of the audio element
  let audio: HTMLAudioElement = this.audio.nativeElement;
  audio.muted = false;
  audio.controls = true;
  audio.autoplay = false;
  audio.preload = 'auto';
}
// SNIPP SNIPP
Now, we will implement the logic to start recording, as shown here:
// SNIPP SNIPP
startRecording() {
  this.isRecording = true;
  const mediaConstraints: MediaStreamConstraints = {
    video: false, // audio-only recording
    audio: true
  };
  navigator
    .mediaDevices
    .getUserMedia(mediaConstraints)
    .then(this.successCallback.bind(this), this.errorCallback.bind(this));
  // allow users to record a maximum of 10 seconds of audio
  setTimeout(() => {
    // guard against stopping twice if the user has already pressed Stop
    if (this.isRecording) {
      this.stopRecording();
    }
  }, 10000);
}
// SNIPP SNIPP
As we can see, we are using navigator.mediaDevices.getUserMedia() to start the recording. Once we do that, the browser asks for permission to record audio, if it has not been granted already. Then, successCallback() or errorCallback() is called as appropriate. To keep the learning process simple and quick, we allow users to record a maximum of 10 seconds of audio. This way, uploads are faster and the Cloud Speech API responds quickly enough for us to see results in near real time. Next, we are going to set up the success and error callbacks:
// SNIPP SNIPP
successCallback(stream: MediaStream) {
  let options = {
    recorderType: RecordRTC.StereoAudioRecorder,
    mimeType: 'audio/wav',
    // Must be single channel: https://cloud.google.com/speech/reference/rest/v1/RecognitionConfig#AudioEncoding
    numberOfAudioChannels: 1
  };
  this.stream = stream;
  this.recordRTC = RecordRTC(stream, options);
  this.recordRTC.startRecording();
  let audio: HTMLAudioElement = this.audio.nativeElement;
  // URL.createObjectURL(MediaStream) is deprecated; assign the stream directly
  audio.srcObject = stream;
  this.toggleControls();
}
// SNIPP SNIPP
Here, using the RecordRTC API, we kick off the recording and attach the live stream to the audio tag so that the user can monitor the input. And here is errorCallback():
// SNIPP SNIPP
errorCallback(error) {
  console.error('Something went horribly wrong!!', error);
}
// SNIPP SNIPP
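errorCallback() also fires when the browser cannot expose a microphone at all, for example outside a secure (HTTPS) context. A small feature-detection guard could short-circuit before we ever call getUserMedia(). The helper below is a hypothetical sketch, written against a navigator-like parameter so it can be exercised outside the browser; it is not part of the component:

```typescript
// Hypothetical guard: returns true only when a getUserMedia-capable
// mediaDevices object is present on the given navigator-like object.
function supportsAudioCapture(nav: any): boolean {
  return !!(nav && nav.mediaDevices && typeof nav.mediaDevices.getUserMedia === 'function');
}

// Exercised with plain objects standing in for window.navigator:
console.log(supportsAudioCapture({ mediaDevices: { getUserMedia: () => {} } })); // true
console.log(supportsAudioCapture({})); // false
```

In the component, such a guard would let us set this.error with a friendly message instead of waiting for the getUserMedia() promise to reject.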
Now, we will set up stopRecording() as shown here:
// SNIPP SNIPP
stopRecording() {
  let recordRTC = this.recordRTC;
  recordRTC.stopRecording(this.processAudio.bind(this));
  // release the microphone; the stream is audio-only, so stopping all tracks suffices
  this.stream.getTracks().forEach(track => track.stop());
  this.hasRecorded = true;
  this.isRecording = false;
}
// SNIPP SNIPP
Using the recordRTC.stopRecording() API, we stop the recording and pass in a callback named processAudio to process the audio once recording has stopped. We also stop the stream's tracks to release the microphone. processAudio() looks as follows:
// SNIPP SNIPP
processAudio(audioURL) {
  let audio: HTMLAudioElement = this.audio.nativeElement;
  audio.srcObject = null; // detach the live stream before switching to the recorded clip
  audio.src = audioURL;
  this.toggleControls();
}
// SNIPP SNIPP
Here, we receive an object URL pointing at the recorded blob and set it on the audio element for a preview. For the user to download the audio, we will have the following method:
// SNIPP SNIPP
download() {
  this.recordRTC.save(this.genFileName());
}
// SNIPP SNIPP
We have a couple of helper functions, shown here:
// SNIPP SNIPP
genFileName() {
  return 'audio_' + (+new Date()) + '_.wav';
}

toggleControls() {
  let audio: HTMLAudioElement = this.audio.nativeElement;
  audio.muted = !audio.muted;
  audio.autoplay = !audio.autoplay;
  audio.preload = 'auto';
}
// SNIPP SNIPP
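The `+new Date()` expression in genFileName() coerces the current date to its epoch-millisecond timestamp, so the generated names look like audio_1516234567890_.wav. A standalone copy of the helper makes this easy to verify:

```typescript
// Standalone copy of the component's genFileName() helper, for illustration.
function genFileName(): string {
  // +new Date() coerces the Date to a number: milliseconds since the Unix epoch
  return 'audio_' + (+new Date()) + '_.wav';
}

const name = genFileName();
console.log(name); // e.g. audio_1516234567890_.wav
console.log(/^audio_\d+_\.wav$/.test(name)); // true
```

Because the timestamp has millisecond resolution, successive recordings get distinct file names without any extra bookkeeping.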
Once the recording is done and the user is ready to reply with that audio, we will use reply(), as shown here:
// SNIPP SNIPP
reply() {
  this.error = '';
  this.isProcessing = true;
  let recordedBlob = this.recordRTC.getBlob();
  recordedBlob.name = this.genFileName();
  this.audioAPIService.postFile(this.threadId, recordedBlob).subscribe(data => {
    console.log(data);
    this.updateThread.emit(data);
    this.isProcessing = false;
    this.activeModal.close();
  }, error => {
    console.log(error);
    this.error = error.error;
    this.isProcessing = false;
  });
}
// SNIPP SNIPP
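We will build AudioAPIService itself shortly. For context, browser file uploads of this kind are commonly sent as multipart/form-data, with the recorded blob appended under a field name the server expects. The sketch below only illustrates that general shape; it is an assumption, not the actual service, and the field name 'audio-reply' is made up for illustration:

```typescript
// Hypothetical sketch of a multipart upload body; the real AudioAPIService
// may build its request differently. Blob and FormData are available as
// globals in browsers and in Node 18+.
function buildUploadBody(blob: Blob, fileName: string): FormData {
  const form = new FormData();
  // 'audio-reply' is a made-up field name for illustration
  form.append('audio-reply', blob, fileName);
  return form;
}

const recordedBlob = new Blob(['fake wav bytes'], { type: 'audio/wav' });
const body = buildUploadBody(recordedBlob, 'audio_1516234567890_.wav');
console.log(body.has('audio-reply')); // true
```

When such a body is passed to Angular's HttpClient, the browser sets the multipart Content-Type header (including the boundary) automatically, which is why upload helpers like this usually do not set it by hand.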
This concludes our upload audio modal component. Before we proceed, we need to add this component to client/app/app.module.ts. First, let's import UploadAudioModal into client/app/app.module.ts:
// SNIPP SNIPP
import { UploadAudioModal } from './view-thread/upload-audio-modal/upload-audio-modal';
// SNIPP SNIPP
Next, we will add this modal to the declarations and entryComponents, as shown here:
// SNIPP SNIPP
declarations: [
  AppComponent,
  AboutComponent,
  RegisterComponent,
  LoginComponent,
  LogoutComponent,
  AccountComponent,
  AdminComponent,
  NotFoundComponent,
  HomeComponent,
  CreateThreadComponent,
  ViewThreadComponent,
  FilterThreadPipe,
  EditThreadComponent,
  UploadImageModal,
  UploadVideoModal,
  UploadAudioModal
],
entryComponents: [
  UploadImageModal,
  UploadVideoModal,
  UploadAudioModal
],
// SNIPP SNIPP
Save all the files, and we are good to move on.