Using nativescript-audio's TNSRecorder for Android in our RecordModel

We could use some fancy Android APIs and/or libraries for our recorder, but in this case, the nativescript-audio plugin we're using for our cross-platform multitrack player also provides a cross-platform recorder. We could have even used it with iOS, but we wanted to specifically work with AudioKit's powerful APIs there. However, here on Android, let's use the recorder from the plugin and make the following modifications to record.model.android.ts:

import { Observable } from 'data/observable';
import { IRecordModel, IRecordEvents, RecordState, documentsFilePath } from './common';
import { TNSRecorder, AudioRecorderOptions } from 'nativescript-audio';
import { Subject } from 'rxjs/Subject';
import * as permissions from 'nativescript-permissions';

declare var android: any;
const RECORD_AUDIO = android.Manifest.permission.RECORD_AUDIO;


export class RecordModel extends Observable implements IRecordModel {

  // available events to listen to
  private _events: IRecordEvents;

  // target as an Observable
  private _target$: Subject<number>;

  // recorder
  private _recorder: TNSRecorder;
  // recorder options
  private _options: AudioRecorderOptions;
  // recorder mix meter handling
  private _meterInterval: number;

  // state
  private _state: number = RecordState.readyToRecord;

  // tmp file path
  private _filePath: string;
  // the final saved path to use
  private _savedFilePath: string;

  constructor() {
    super();
    this._setupEvents();

    // prepare Observable as our target
    this._target$ = new Subject();

    // create recorder
    this._recorder = new TNSRecorder();
    this._filePath = documentsFilePath(`recording-${Date.now()}.m4a`);
    this._options = {
      filename: this._filePath,
      format: android.media.MediaRecorder.OutputFormat.MPEG_4,
      encoder: android.media.MediaRecorder.AudioEncoder.AAC,
      metering: true, // critical to feed our waveform view
      infoCallback: (infoObject) => {
        // just log for now
        console.log(JSON.stringify(infoObject));
      },
      errorCallback: (errorObject) => {
        console.log(JSON.stringify(errorObject));
      }
    };
  }

  public get events(): IRecordEvents {
    return this._events;
  }

  public get target() {
    return this._target$;
  }

  public get recorder(): any {
    return this._recorder;
  }

  public get audioFilePath(): string {
    return this._filePath;
  }

  public get state(): number {
    return this._state;
  }

  public set state(value: number) {
    this._state = value;
    this._emitEvent(this._events.stateChange, this._state);
  }

  public get savedFilePath() {
    return this._savedFilePath;
  }

  public set savedFilePath(value: string) {
    this._savedFilePath = value;
    if (this._savedFilePath)
      this.state = RecordState.saved;
  }

  public toggleRecord() {
    if (this._state !== RecordState.recording) {
      // just force ready to record
      // when coming from any state other than recording
      this.state = RecordState.readyToRecord;
    }

    switch (this._state) {
      case RecordState.readyToRecord:
        if (this._hasPermission()) {
          this._recorder.start(this._options).then((result) => {
            this.state = RecordState.recording;
            this._initMeter();
          }, (err) => {
            this._resetMeter();
          });
        } else {
          permissions.requestPermission(RECORD_AUDIO).then(() => {
            // simply engage again
            this.toggleRecord();
          }, (err) => {
            console.log('permissions error:', err);
          });
        }
        break;
      case RecordState.recording:
        this._resetMeter();
        this._recorder.stop();
        this.state = RecordState.readyToPlay;
        break;
    }
  }

  public togglePlay() {
    if (this._state === RecordState.readyToPlay) {
      this.state = RecordState.playing;
    } else {
      this.stopPlayback();
    }
  }

  public stopPlayback() {
    if (this.state !== RecordState.recording) {
      this.state = RecordState.readyToPlay;
    }
  }

  public save() {
    // With Android, filePath will be the same, just make it final
    this.savedFilePath = this._filePath;
  }

  public dispose() {
    if (this.state === RecordState.recording) {
      this._recorder.stop();
    }
    this._recorder.dispose();
  }

  public finish() {
    this.state = RecordState.finish;
  }

  private _initMeter() {
    this._resetMeter();
    this._meterInterval = setInterval(() => {
      let meters = this.recorder.getMeters();
      this._target$.next(meters);
    }, 200); // use 50 for production - perf is better on devices
  }

  private _resetMeter() {
    if (this._meterInterval) {
      clearInterval(this._meterInterval);
      this._meterInterval = undefined;
    }
  }

  private _hasPermission() {
    return permissions.hasPermission(RECORD_AUDIO);
  }

  private _emitEvent(eventName: string, data?: any) {
    let event = {
      eventName,
      data,
      object: this
    };
    this.notify(event);
  }

  private _setupEvents() {
    this._events = {
      stateChange: 'stateChange'
    };
  }
}
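For reference, here is a rough sketch of the shapes this file pulls in from './common', inferred purely from how they are used above (the actual shared file may differ in naming and detail):

// sketch only - inferred from usage in record.model.android.ts
import * as fs from 'file-system';

// events the record model can emit
export interface IRecordEvents {
  stateChange: string;
}

// states referenced throughout the model
export enum RecordState {
  readyToRecord,
  recording,
  readyToPlay,
  playing,
  saved,
  finish
}

// resolve a writable path inside the app's documents folder
export function documentsFilePath(filename: string): string {
  return fs.path.join(fs.knownFolders.documents().path, filename);
}

export interface IRecordModel {
  readonly events: IRecordEvents;
  readonly target: any;
  readonly recorder: any;
  readonly audioFilePath: string;
  state: number;
  savedFilePath: string;
  toggleRecord(): void;
  togglePlay(): void;
  stopPlayback(): void;
  save(): void;
  dispose(): void;
  finish(): void;
}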

Wow! Okay, a lot of interesting things going on here. Let's get one necessary thing out of the way for Android and make sure permissions are properly handled on API level 23 and above. For this, you can install the permissions plugin:

tns plugin add nativescript-permissions
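
The plugin's API is small; here is a minimal sketch of the two calls the model above relies on (the explanation message passed to requestPermission is our own and purely illustrative):

import * as permissions from 'nativescript-permissions';

declare var android: any;
const RECORD_AUDIO = android.Manifest.permission.RECORD_AUDIO;

// synchronous check for an already granted permission
if (!permissions.hasPermission(RECORD_AUDIO)) {
  // prompts the user on API level 23+ and resolves once granted
  permissions.requestPermission(RECORD_AUDIO, 'We need the microphone to record audio.')
    .then(() => console.log('RECORD_AUDIO granted'))
    .catch(err => console.log('RECORD_AUDIO denied', err));
}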

We also want to ensure that our manifest file contains the proper permission entries.

Open app/App_Resources/Android/AndroidManifest.xml and add the following in the correct place:

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>

We use the nativescript-audio plugin's TNSRecorder as our implementation and wire things up to its API accordingly. AudioRecorderOptions provides a metering option, which lets us monitor the microphone's meter levels on an interval.

What is most versatile about our overall design is that our model's target can literally be anything. In this case, we create an RxJS Subject as _target$, which is then returned from our target getter. This allows us to emit the microphone's meter value through the Subject for consumption by our Waveform. You will see in a moment how we take advantage of this.
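
For example, any consumer holding a reference to the model can subscribe to target and react to meter values as they arrive (a minimal sketch; the import path and the logging are illustrative):

import { RecordModel } from './record.model';

const model = new RecordModel();

// react to meter values emitted while recording
const sub = model.target.subscribe((meterValue: number) => {
  console.log('mic meter:', meterValue);
});

// later, when the consumer is torn down
sub.unsubscribe();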

We are now ready to move on to our Waveform implementation for Android.

Just like we did for the model, we will want to refactor the common bits into a shared file and handle the platform suffix.

Create app/modules/shared/native/waveform-common.ts:

import { View } from 'ui/core/view';

export type WaveformType = 'mic' | 'file';

export interface IWaveformModel {
  readonly target: any;
  dispose(): void;
}

export interface IWaveform extends View {
  type: WaveformType;
  model: IWaveformModel;
  createNativeView(): any;
  initNativeView(): void;
  disposeNativeView(): void;
}

Then, just adjust app/modules/shared/native/waveform.ts to use it:

...
import { IWaveform, IWaveformModel, WaveformType } from './waveform-common';

export class Waveform extends View implements IWaveform {
...

Before renaming our waveform to use an .ios suffix, let's generate a TypeScript definition file for it:

tsc app/modules/shared/native/waveform.ts references.d.ts -d true --lib es6,dom,es2015.iterable --target es5

You may again see TypeScript errors or warnings, but we don't need to worry about those; it should still have generated a waveform.d.ts file. Let's simplify it slightly so that it contains only the parts applicable to both iOS and Android:

import { View } from 'ui/core/view';
export declare type WaveformType = 'mic' | 'file';
export interface IWaveformModel {
  readonly target: any;
  dispose(): void;
}
export interface IWaveform extends View {
  type: WaveformType;
  model: IWaveformModel;
  createNativeView(): any;
  initNativeView(): void;
  disposeNativeView(): void;
}
export declare class Waveform extends View implements IWaveform {}

Okay, now, rename waveform.ts to waveform.ios.ts and create app/modules/shared/native/waveform.android.ts:

import { View } from 'ui/core/view';
import { Color } from 'color';
import { IWaveform, IWaveformModel, WaveformType } from './waveform-common';

export class Waveform extends View implements IWaveform {
  private _model: IWaveformModel;
  private _type: WaveformType;

  public set type(value: WaveformType) {
    this._type = value;
  }

  public get type() {
    return this._type;
  }

  public set model(value: IWaveformModel) {
    this._model = value;
  }

  public get model() {
    return this._model;
  }

  createNativeView() {
    switch (this.type) {
      case 'mic':
        // TODO: this.nativeView = ?
        break;
      case 'file':
        // TODO: this.nativeView = ?
        break;
    }
    return this.nativeView;
  }

  initNativeView() {
    // TODO
  }

  disposeNativeView() {
    if (this.model && this.model.dispose) this.model.dispose();
  }
}

Okay, excellent! This is the barebones setup we will need, but what native Android view should we use?

If you look around for open source Android libs, you may come across the very talented developers at Yalantis, a fantastic mobile development company based out of Ukraine. Roman Kozlov and his team created an open source project, Horizon, which provides beautiful audio visualizations:
https://github.com/Yalantis/Horizon
https://yalantis.com/blog/horizon-open-source-library-for-sound-visualization/

Just like on iOS, we want to prepare a multifaceted Waveform view that can also render a static waveform for a file. Looking further through the open source options, we come across another wonderfully talented team at Semantive, based in Warsaw, the sprawling capital of Poland. They created an incredibly powerful waveform view for Android:
https://github.com/Semantive/waveform-android

Let's integrate both of these libraries for our Android Waveform.

Similar to how we integrated AudioKit for iOS, let's create a folder in the project root called android-waveform-libs with the following setup, which provides an include.gradle:
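
The layout follows the standard NativeScript plugin convention, with include.gradle living under platforms/android (a sketch; the actual contents of include.gradle declare the repositories and compile dependencies needed for Horizon and waveform-android, so check each library's README for the exact coordinates):

android-waveform-libs
  package.json
  platforms
    android
      include.gradle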

Why deviate from the nativescript- prefix when including native libs?
The prefix is a good way to go if you plan to refactor the internal plugin into an open source plugin published via npm for the community down the road, using https://github.com/NathanWalker/nativescript-plugin-seed for instance.

Sometimes, you just need to integrate several native libs for a specific platform, as we are doing here, so we don't really need the nativescript- prefix on our folder.

We make sure to add a package.json so that we can install these native libs just like any other plugin:

{
  "name": "android-waveform-libs",
  "version": "1.0.0",
  "nativescript": {
    "platforms": {
      "android": "3.0.0"
    }
  }
}

Now, we simply add them as a plugin to our project:

tns plugin add android-waveform-libs

We are now ready to integrate these libs into our Waveform view.
Let's make the following modifications to the app/modules/shared/native/waveform.android.ts file:

import { View } from 'ui/core/view';
import { Color } from 'color';
import { Subscription } from 'rxjs/Subscription';
import { IWaveform, IWaveformModel, WaveformType } from './waveform-common';
import { screen } from 'platform';

declare var com;
declare var android;

const GLSurfaceView = android.opengl.GLSurfaceView;
const AudioRecord = android.media.AudioRecord;

// Horizon recorder waveform
// https://github.com/Yalantis/Horizon
const Horizon = com.yalantis.waves.util.Horizon;
// various recorder settings
const RECORDER_SAMPLE_RATE = 44100;
const RECORDER_CHANNELS = 1;
const RECORDER_ENCODING_BIT = 16;
const RECORDER_AUDIO_ENCODING = 3;
const MAX_DECIBELS = 120;

// Semantive waveform for files
// https://github.com/Semantive/waveform-android
const WaveformView = com.semantive.waveformandroid.waveform.view.WaveformView;
const CheapSoundFile = com.semantive.waveformandroid.waveform.soundfile.CheapSoundFile;
const ProgressListener = com.semantive.waveformandroid.waveform.soundfile.CheapSoundFile.ProgressListener;

export class Waveform extends View implements IWaveform {
  private _model: IWaveformModel;
  private _type: WaveformType;
  private _initialized: boolean;
  private _horizon: any;
  private _javaByteArray: Array<any>;
  private _waveformFileView: any;
  private _sub: Subscription;

  public set type(value: WaveformType) {
    this._type = value;
  }

  public get type() {
    return this._type;
  }

  public set model(value: IWaveformModel) {
    this._model = value;
    this._initView();
  }

  public get model() {
    return this._model;
  }

  createNativeView() {
    switch (this.type) {
      case 'mic':
        this.nativeView = new GLSurfaceView(this._context);
        this.height = 200; // GL view needs height
        break;
      case 'file':
        this.nativeView = new WaveformView(this._context, null);
        this.nativeView.setSegments(null);
        this.nativeView.recomputeHeights(screen.mainScreen.scale);

        // disable zooming and touch events
        this.nativeView.mNumZoomLevels = 0;
        this.nativeView.onTouchEvent = function (e) { return false; };
        break;
    }
    return this.nativeView;
  }

  initNativeView() {
    this._initView();
  }

  disposeNativeView() {
    if (this.model && this.model.dispose) this.model.dispose();
    if (this._sub) this._sub.unsubscribe();
  }

  private _initView() {
    if (!this._initialized && this.nativeView && this.model) {
      if (this.type === 'mic') {
        this._initialized = true;
        this._horizon = new Horizon(
          this.nativeView,
          new Color('#000').android,
          RECORDER_SAMPLE_RATE,
          RECORDER_CHANNELS,
          RECORDER_ENCODING_BIT
        );

        this._horizon.setMaxVolumeDb(MAX_DECIBELS);
        let bufferSize = 2 * AudioRecord.getMinBufferSize(
          RECORDER_SAMPLE_RATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING);
        this._javaByteArray = Array.create('byte', bufferSize);

        this._sub = this._model.target.subscribe((value) => {
          this._javaByteArray[0] = value;
          this._horizon.updateView(this._javaByteArray);
        });
      } else {
        let soundFile = CheapSoundFile.create(this._model.target,
          new ProgressListener({
            reportProgress: (fractionComplete: number) => {
              console.log('fractionComplete:', fractionComplete);
              return true;
            }
          }));

        setTimeout(() => {
          this.nativeView.setSoundFile(soundFile);
          this.nativeView.invalidate();
        }, 0);
      }
    }
  }
}

We begin our Android implementation by defining const references to the various packaged classes we need, so we don't have to reference the fully qualified package location each time in our Waveform. Just like on the iOS side, we design a dual-purpose Waveform by allowing the type ('mic' or 'file') to drive which rendering to use. This lets us reuse the same view with our record view for real-time microphone visualization, and also statically render our tracks as waveforms (more on that soon!).

The Horizon lib utilizes Android's GLSurfaceView as its primary rendering surface, hence:

this.nativeView = new GLSurfaceView(this._context);
this.height = 200; // GL view needs height

During development, we found that GLSurfaceView requires at least a height to constrain it; otherwise, it renders at full screen height. Therefore, we explicitly set a reasonable height of 200 on the custom NativeScript view, which will automatically handle measuring the native view for us. Interestingly, we also found that sometimes our model setter would fire before initNativeView and other times after. Because the model is a critical binding for initializing our Horizon view, we designed a custom internal _initView method with the appropriate conditional, which can be called from initNativeView as well as after our model setter fires. The condition (!this._initialized && this.nativeView && this.model) ensures it is only ever initialized once. This handles any potential race conditions around the sequence of these method calls.

The native Horizon.java class provides an update method that expects a Java byte array, with the following signature:

updateView(byte[] buffer)

In NativeScript, we retain a reference to a construct representing this native Java byte array, created with the following:

let bufferSize = 2 * AudioRecord.getMinBufferSize(
  RECORDER_SAMPLE_RATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING);
this._javaByteArray = Array.create('byte', bufferSize);

Utilizing Android's android.media.AudioRecord class, in conjunction with the various recorder settings we set up, we gather an initial bufferSize, which we use to size our byte array.

We then take advantage of our overall versatile design, wherein our model's target in this implementation is an RxJS Subject, allowing us to subscribe to its event stream. For the 'mic' type, this stream carries the metering value changes from the recorder, which we use to fill our byte array and, in turn, update the Horizon view:

this._sub = this._model.target.subscribe((value) => {
  this._javaByteArray[0] = value;
  this._horizon.updateView(this._javaByteArray);
});

This gives our recorder a nice visualization, which animates as the input level changes. The styling is still a little rough, since we have not applied any CSS polish just yet.

For our static audio file waveform rendering, we initialize WaveformView with the Android context, then configure it via its API during construction in createNativeView.

During initialization, we create an instance of CheapSoundFile as required by WaveformView and, interestingly, call setSoundFile inside a setTimeout, followed by this.nativeView.invalidate(), which calls invalidate on the WaveformView. This causes the native view to update and render the processed file (again, we will address the styling polish later).
