
edimuj / cordova-plugin-audioinput


This iOS/Android Cordova/PhoneGap plugin enables audio capture from the device microphone by forwarding audio to the web layer of your application in near real time. A typical usage scenario for this plugin would be to use the captured audio as a source for a Web Audio node chain, where it can then be analyzed, manipulated and/or played.

Home Page: https://github.com/edimuj/app-audioinput-demo

License: MIT License

JavaScript 45.90% Java 28.40% Objective-C 25.70%
cordova-plugin microphone audio cordova-android-plugin cordova-ios-plugin audionode capture microphone-capture audioinput device-microphone

cordova-plugin-audioinput's Introduction

cordova-plugin-audioinput

This Cordova plugin enables audio capture from the device microphone by forwarding raw audio data to the web layer of your web application in (near) real time. A typical usage scenario for this plugin would be to use the captured microphone audio as an audio source for Web Audio API based applications.

Since Navigator.getUserMedia() and Navigator.mediaDevices.getUserMedia() aren't supported by all browsers, this plugin provides similar functionality.
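
Because support differs between webviews, a common pattern is to feature-detect getUserMedia and fall back to this plugin when it is missing. The sketch below is illustrative only; startWithGetUserMedia and startWithAudioInputPlugin are hypothetical helpers you would implement yourself.

// Illustrative sketch: prefer the native browser API, fall back to the plugin.
document.addEventListener("deviceready", function () {
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        startWithGetUserMedia();          // hypothetical helper using getUserMedia
    } else if (window.audioinput) {
        startWithAudioInputPlugin();      // hypothetical helper using this plugin
    } else {
        console.warn("No microphone capture mechanism available.");
    }
}, false);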

The plugin supports two different methods for microphone capture:

  1. Let the plugin handle the encoding of raw data by using the audioinput object as an AudioNode, which can be connected to your Web audio API node chain.
  2. Subscribing to audioinput events in order to receive chunks of raw audio data, which then can be processed by your app. Using this method doesn't require Web audio support on the device.

Supported Platforms

  • Android
  • iOS
  • browser

Installation

From the Cordova Plugin Repository:

cordova plugin add cordova-plugin-audioinput

or by using the GitHub project URL:

cordova plugin add https://github.com/edimuj/cordova-plugin-audioinput.git

I haven't tested the plugin with PhoneGap Build or Ionic, so feel free to message me if you have tried it there successfully.

Events

When using the event based approach, the plugin emits the following window events:

  • audioinput
  • audioinputerror

Basic Usage Example - AudioNode

After the Cordova deviceready event has fired:

// Start with default values and let the plugin handle the conversion of
// raw data to web audio; in this mode the plugin will not send any audioinput events.
// If an audio context is not provided, the plugin will create one for you.

function startCapture() {
	audioinput.start({
		streamToWebAudio: true
	});
	
	// Connect the audioinput to the device speakers in order to hear the captured sound.
	audioinput.connect(audioinput.getAudioContext().destination);
}

// First check whether we already have permission to access the microphone.
window.audioinput.checkMicrophonePermission(function(hasPermission) {
	if (hasPermission) {
		console.log("We already have permission to record.");
		startCapture();
	} 
	else {	        
		// Ask the user for permission to access the microphone
		window.audioinput.getMicrophonePermission(function(hasPermission, message) {
			if (hasPermission) {
				console.log("User granted us permission to record.");
				startCapture();
			} else {
				console.warn("User denied permission to record.");
			}
		});
	}
});

Advanced Usage Example - Events

Use the event based method if you need more control over the capture process.

Subscribe to audioinput events: the event is fired continuously during capture, allowing the application to receive chunks of raw audio data.

You can also subscribe to audioinputerror events, as shown in the example below:

function onAudioInput( evt ) {
    // 'evt.data' is an integer array containing raw audio data
    //   
    console.log( "Audio data received: " + evt.data.length + " samples" );
    
    // ... do something with the evt.data array ...
}

// Listen to audioinput events
window.addEventListener( "audioinput", onAudioInput, false );

var onAudioInputError = function( error ) {
    alert( "onAudioInputError event received: " + JSON.stringify(error) );
};

// Listen to audioinputerror events
window.addEventListener( "audioinputerror", onAudioInputError, false );

After the Cordova deviceready event has fired (don't forget to first check/get microphone permissions as shown in the basic example above):

// Start capturing audio from the microphone
audioinput.start({
    // Here we've changed the bufferSize from the default to 8192 bytes.
    bufferSize: 8192 
});

// Stop capturing audio input
audioinput.stop()
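
As an illustration of processing the raw chunks (this is not part of the plugin API), the data delivered by each audioinput event can, for example, be used to compute a simple RMS level. With normalize set to true (the default), evt.data contains values roughly in the range [-1, 1].

// Illustrative sketch: compute the RMS level of each raw audio chunk.
function onAudioInputLevel(evt) {
    var data = evt.data;
    var sumOfSquares = 0;
    for (var i = 0; i < data.length; i++) {
        sumOfSquares += data[i] * data[i];
    }
    var rms = Math.sqrt(sumOfSquares / data.length);
    console.log("RMS level: " + rms.toFixed(3));
}

window.addEventListener("audioinput", onAudioInputLevel, false);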

Advanced Usage Example - Saving to files

Use fileUrl in the captureCfg if you want to save audio files directly to the file system.

This requires adding cordova-plugin-file to your project:

// Get access to the file system
window.requestFileSystem(window.TEMPORARY, 5*1024*1024, function(fs) {
    console.log("Got file system: " + fs.name);
    fileSystem = fs;

    // Now you can initialize audio, telling it about the file system you want to use.
    var captureCfg = {
        sampleRate: 16000,
        bufferSize: 8192,
        channels: 1,
        format: audioinput.FORMAT.PCM_16BIT,
        audioSourceType: audioinput.AUDIOSOURCE_TYPE.DEFAULT,
        fileUrl: cordova.file.cacheDirectory
    };

    // Initialize the audioinput plugin.
    window.audioinput.initialize(captureCfg, function() {
        // Now check whether we already have permission to access the microphone.
        window.audioinput.checkMicrophonePermission(function(hasPermission) {
            if (hasPermission) {
                console.log("Already have permission to record.");
            }
            else {
                // Ask the user for permission to access the microphone
                window.audioinput.getMicrophonePermission(function(hasPermission, message) {
                    if (hasPermission) {
                        console.log("User granted permission to record.");
                    } else {
                        console.warn("User denied permission to record.");
                    }
                });
            }
        });
    });
}, function (e) {
    console.log("Couldn't access file system: " + e.message);
});

// Later, when we want to record to a file...
var captureCfg = {
    fileUrl : cordova.file.cacheDirectory + "temp.wav"
};

// Start the capture.
audioinput.start(captureCfg);

// ...and when we're ready to stop recording.
audioinput.stop(function(url) {
    // Now you have the URL (which might be different to the one passed in to audioinput.start())
    // You might, for example, read the data into a blob.
    window.resolveLocalFileSystemURL(url, function (tempFile) {
        tempFile.file(function (tempWav) {
            var reader = new FileReader();
            reader.onloadend = function(e) {
                // Create the blob from the result.
                var blob = new Blob([new Uint8Array(this.result)], { type: "audio/wav" });
                // Delete the temporary file.
                tempFile.remove(function (e) { console.log("temporary WAV deleted"); }, fileError);
                // Do something with the blob.
                doSomethingWithWAVData(blob);
            };
            reader.readAsArrayBuffer(tempWav);
        });
    }, function(e) {
        console.log("Could not resolveLocalFileSystemURL: " + e.message);
    });
});
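
For completeness, here is one possible implementation of the doSomethingWithWAVData placeholder used above; it simply plays the recorded WAV blob through an HTML Audio element. This is an assumption about what your app might do, not part of the plugin.

// Possible implementation of the doSomethingWithWAVData placeholder above.
function doSomethingWithWAVData(blob) {
    var url = URL.createObjectURL(blob);
    var player = new Audio(url);
    player.onended = function () {
        URL.revokeObjectURL(url); // release the object URL when playback ends
    };
    player.play();
}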

Demo app

app-audioinput-demo is a Cordova app project that uses this plugin, based on the examples below.

Examples

The demo folder contains some usage examples.

Remember that unfiltered microphone output likely will create a nasty audio feedback loop, so lower the volume before trying out the demos!

  • webaudio-demo - How to use the audioinput object as a Web Audio API AudioNode that can be connected to your own chain of AudioNodes.
  • events-demo - How to subscribe to the audioinput events to get and handle chunks of raw audio data.
  • wav-demo - How to encode recorded data to WAV format and use the resulting blob as a source for Audio elements.
  • file-demo - How to encode recorded data to WAV format and save the resulting blob as a file. Running this demo requires the cordova-plugin-file plugin (cordova plugin add cordova-plugin-file).

Usage from TypeScript

Typings are included in the package to facilitate usage from TypeScript. The following typings are defined:

  • AudioInput class exposing all plugin functions
  • AudioInputConfiguration interface
  • AudioInputSettings namespace which contains all available settings:
    • BUFFERSIZE
    • FORMAT
    • CHANNELS
    • SAMPLERATE
    • AUDIOSOURCE_TYPE

The following example shows how to use it:

import { AudioInput, AudioInputConfiguration }  from 'cordova-plugin-audioinput';

declare var audioinput: AudioInput;

let audioCfg: AudioInputConfiguration = {
	sampleRate: audioinput.SAMPLERATE.CD_AUDIO_44100Hz,
	channels: audioinput.CHANNELS.STEREO,
	bufferSize: 4096
}

...
audioinput.start(audioCfg);
...

API

Prepare for capturing audio from the microphone. Performs any required preparation for recording audio on the given platform.

audioinput.initialize( captureCfg, onInitialized );
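
For example, a minimal initialization might look like this (the configuration values shown here are illustrative, not required defaults):

var captureCfg = {
    sampleRate: audioinput.SAMPLERATE.CD_AUDIO_44100Hz,
    channels: audioinput.CHANNELS.MONO
};

window.audioinput.initialize(captureCfg, function() {
    console.log("audioinput initialized");
});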

Check whether the module already has permission to access the microphone. The callback function has a single boolean argument, which is true if access to the microphone has been granted, and false otherwise. The check is silent - the user is not asked for permission if they haven't already granted it.

audioinput.checkMicrophonePermission( onComplete );

Obtains permission to access the microphone from the user. This function will prompt the user for access to the microphone if they haven't already granted it. The callback function has two arguments:

  • hasPermission - true if access to the microphone has been granted, and false otherwise.
  • message - an optional message string with additional information (this is the second argument of the callback shown in the basic usage example above).

audioinput.getMicrophonePermission( onComplete );

Start capturing audio from the microphone. Ensure that initialize and at least checkMicrophonePermission have been called before calling this. The captureCfg parameter can include more configuration than previously passed to initialize.

audioinput.start( captureCfg );

Where captureCfg can be empty, null, or contain/override any of the following parameters and their default values. Please note that not all audio configuration combinations are supported by all devices; the default settings seem to work on most devices, though:

var captureCfg = {

    // The Sample Rate in Hz.
    // For convenience, use the audioinput.SAMPLERATE constants to set this parameter.
    sampleRate: audioinput.SAMPLERATE.CD_AUDIO_44100Hz,
    
    // Maximum size in bytes of the capture buffer. Should be a power of two and <= 16384.
    bufferSize: 16384,
    
    // The number of channels to use: Mono (1) or Stereo (2).
    // For convenience, use the audioinput.CHANNELS constants to set this parameter.
    channels: audioinput.CHANNELS.MONO,
    
    // The audio format. Currently PCM_16BIT and PCM_8BIT are supported.
    // For convenience, use the audioinput.FORMAT constant to access the possible 
    // formats that the plugin supports.
    format: audioinput.FORMAT.PCM_16BIT,
    
    // Specifies if the audio data should be normalized or not.
    normalize: true,
    
    // Specifies the factor to use if normalization is performed.
    normalizationFactor: 32767.0,
    
    // If set to true, the plugin will handle all conversion of the data to 
    // web audio. The plugin can then act as an AudioNode that can be connected 
    // to your web audio node chain.
    streamToWebAudio: false,
    
    // Used in conjunction with streamToWebAudio. If no audioContext is given, 
    // one (prefixed) will be created by the plugin.
    audioContext: null,
    
    // Defines how many chunks will be merged each time; a lower value means lower latency
    // but requires more CPU resources.
    concatenateMaxChunks: 10,
    
    // Specifies the type of audio source your app requires.
    // For convenience, use the audioinput.AUDIOSOURCE_TYPE constants to set this parameter:
    // -DEFAULT
    // -CAMCORDER - Microphone audio source with same orientation as camera if available.
    // -UNPROCESSED - Unprocessed sound if available.
    // -VOICE_COMMUNICATION - Tuned for voice communications such as VoIP.
    // -MIC - Microphone audio source. (Android only)
    // -VOICE_RECOGNITION - Tuned for voice recognition if available (Android only)
    audioSourceType: audioinput.AUDIOSOURCE_TYPE.DEFAULT,

    // If you have your own error handler, you can set a callback to your function
    // using the onError parameter. The callback function will be called with a single
    // string parameter that contains the error message.
    onError: undefined,
	
    // Optionally specifies a file://... URL to which the audio should be saved.
    // If this is set, then no audioinput events will be raised during recording.
    // When stop is called, a single audioinputfinished event will be raised, with
    // a "file" argument that contains the URL to which the audio was written,
    // and the callback passed into stop() will be invoked.
    // Currently, only WAV format files are guaranteed to be supported on all platforms.
    // When calling initialize(), this should be a URL to the directory in which files will
    // be saved when calling start(), so that initialize() can ensure access to the directory
    // is available.
    fileUrl: null
    
};

Stop capturing audio from the microphone: The callback function has a single string argument, which is the url where the file was saved, if a fileUrl was passed in to start as part of captureCfg. Note that the url passed out from stop is not guaranteed to be the same as the fileUrl passed in.

audioinput.stop( onStopped );

Check if the plugin is capturing, i.e. if it is started or not:

audioinput.isCapturing(); // Returns true if it is started

Get the current configuration from the plugin:

audioinput.getCfg();
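
For example (assuming getCfg() returns the same fields as the captureCfg object described above):

// Assumes getCfg() returns the captureCfg fields described above.
if (audioinput.isCapturing()) {
    var cfg = audioinput.getCfg();
    console.log("Capturing at " + cfg.sampleRate + " Hz with buffer size " + cfg.bufferSize);
}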

When using the streamToWebAudio option, you can connect the plugin to your own Web audio node chain:

audioinput.connect( audioNode );

When using streamToWebAudio you can disconnect the previously connected plugin from your own Web audio node chain:

audioinput.disconnect();

When using streamToWebAudio and you have not supplied the plugin with an audio context, the following method returns the internally created Web Audio context:

audioinput.getAudioContext();
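
Putting these together, a sketch of routing the captured audio through your own node chain might look like this (the gain value is arbitrary and only meant to reduce the risk of feedback):

// Illustrative sketch: plugin output -> gain node -> speakers.
audioinput.start({ streamToWebAudio: true });

var ctx = audioinput.getAudioContext();   // context created by the plugin
var gainNode = ctx.createGain();
gainNode.gain.value = 0.5;                // attenuate to reduce feedback
gainNode.connect(ctx.destination);

audioinput.connect(gainNode);             // connect the plugin to the chain

// Later, when finished:
audioinput.disconnect();
audioinput.stop();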

Todo list

Enhancements

Motivate us!

Do you use this plugin in a published app? Feel free to star the project and/or message me about it. It is always super-exciting to see real-world applications using this plugin, and it helps us to prioritize new features and bug fixes.

And if you find this plugin useful, ensure that it is kept alive by donating:

paypal

Contributing

This project is open-source, so contributions are welcome. Just ensure that your changes don't break backward compatibility!

  1. Fork the project.
  2. Create your feature branch (git checkout -b my-new-feature).
  3. Commit your changes (git commit -am 'Add some feature').
  4. Push to the branch (git push origin my-new-feature).
  5. Create a new Pull Request.

Credits

  • The plugin is created by Edin Mujkanovic.

Other contributors

License

MIT License

cordova-plugin-audioinput's People

Contributors

ddddm, edimuj, j3k0, mreinstein, robertfromont, sertal70, stefek99, tattomoosa, taxilian, weareu, zyf0330


cordova-plugin-audioinput's Issues

Add functionality for saving the audio input to files

Being able to specify that the audio input from the microphone is simultaneously saved to a file would be a really nice feature. This should be optional though.

Target format should probably be M4A since it is compressed and supported by both Android and iOS.

If anybody out there has the knowledge and time to do this, it would be highly appreciated.

Using with Ionic Platform

I am currently testing with the Ionic platform, but the application stops once audioinput.start(captureCfg); has been called. It just closes the application, so I can't see the error message either.

iOS: how to play back from /var/mobile/Containers/Data/Application/.../Library/NoCloud/...

Hello,

The problem I have is that at the end of the recording I want to show a "Preview" of what I recorded, but it gives me an error when playing it.

It only happens when playing it back, because when I upload the file to the server with that same URL, it works.

I have seen in several forums that it is because the app doesn't have permission to play from that URL, which should be a temporary one.

But I don't know how to solve it, or how to change that URL to a temporary one in order to access it.

Thanks.

Capturing android speaker output ?!

Hi;
I am asking whether, in your experience, it is possible to use the Android speaker output as 'input' for the stream.
I saw an application doing this already, but I am wondering if it is possible. I can fork the project for this feature, but I want to discuss it first.

Thank you!

Getting the volume level

Hi, I like your plugin. Is it possible to get the volume level directly from the microphone? Or detect volume changes with the plugin?

Cannot read property '0' of undefined when encoding

I get this error and am seriously not getting why; it seems the encoder fails to encode.
It fails when calling the encode method. I'm using the same parameters as the config, which is CD_AUDIO_44100Hz and STEREO, and the array has some data in it (not zero length).
Any clues anyone? Thanks.
A screenshot of the failing portion of code was attached.

Very high latency

Hello,
I am getting high latency while running via the WebAudioAPI. I am testing on a Nexus 6P, Android 7.1.1 with a crosswalk 19 app.

audioinput.start({
    streamToWebAudio: true,
    normalize: true, //got noisy but still delayed audio
    concatenateMaxChunks: 1, // attempted to lower latency, went from 5 seconds (10) to 2 seconds (1)
    audioSourceType: audioinput.AUDIOSOURCE_TYPE.UNPROCESSED //got no audio
});
audioinput.connect(audioinput.getAudioContext().destination);

Is there anything else I can tweak?

Audio output low after initializing plugin on iOS

This is truly a great plugin which helped me a lot. However, I have a problem on iOS.
After initializing the plugin like so:

audioinput.start({bufferSize: 8192, sampleRate: audioinput.SAMPLERATE.VOIP_16000Hz});

the subsequent audio playback, for example through <audio> or <video> tags, has a very low volume, whereas the system-level volume is at 100%. I tested it on iPhone 4S, 5S and 6S and it happens on all devices. Any idea what could cause this?

Phonegap app not finding audio-input plugin

I have created a new PhoneGap project and added the cordova-plugin-audioinput plugin. Then I copied the demo folder from the plugin to the www directory of the PhoneGap app. When I try running the demos, I get the warning 'cordova-plugin-audioinput not found!'. Any help resolving this issue would be greatly appreciated.

No audio on Android N

Hello,
I am testing this plugin with my Nexus 6P and am not able to get any data from the microphone using the WebAudioAPI option. I tried the example too. It works on other devices, though.

Something abnormal on iOS

I am attempting to use this plugin on iOS.
When I set {streamToWebAudio: true} and start, nothing plays. When I inspect and debug the code, everything looks normal.
If I play the data with my own Web Audio code, it works. But one thing is unusual: when I click the button to stop recording, it only executes after a delay of about 200 ms or more, and it doesn't seem smooth.
I think the native iOS code performs differently from the Android code and obstructs the web JS code from executing.

Xcode Error: Can't find variable: audioinput

Hi, I like your plugin
It works in Android
However, when I tested in Xcode, the following message occurred:
" Error: Can't find variable: audioinput "

Xcode version is 7.3.1
Cordova version is 6.3.1

Please help.

Cordova Plugin Repository is not up to date

I first installed this plugin from the cordova plugin registry:

$ cordova plugin add cordova-plugin-audioinput

But I got an error that it was missing the initialize method, and after checking the code, I can verify that it is missing.

Installing from the GitHub page works, though.

(iOS) Odd Latency Issue

The latency starts off at less than half a second behind, but gets worse and worse the longer the app runs.

This is without doing any processing of the audio, and I am using the default settings for audioinput.start except 'streamToWebAudio' is true.

Maybe related:

XCode logs a warning:
2016-07-17 19:56:51.384 loops[5423:2056990] THREAD WARNING: ['AudioInputCapture'] took '373.946777' ms. Plugin should use a background thread.

I don't need low latency for my app, but I do need predictable latency. Is there any way to accomplish that?

Crashing in iOS after returning from background

I'm getting a crash when my app returns from the background. I'm able to record and stop recording all day long when the app first comes up, but hitting record after coming back from the background crashes the app.

It throws EXC_BAD_ACCESS(code=1, address=0xWtver123) at line 89 in
cordova-plugin-audioinput/src/ios/CDVAudioInputCapture.m

Here:
https://github.com/edimuj/cordova-plugin-audioinput/blob/master/src/ios/CDVAudioInputCapture.m#L89

I don't know ObjC from a hole in the ground, so any help would be much appreciated. I love the plugin; it does exactly what I needed. I was also a little confused as to how to connect it up to my WebAudio lib. I'm using Recorder.js (https://github.com/mattdiamond/Recorderjs) and this is how I got it to work... perhaps there is a better way of connecting the two, and I'm not sure if it's related?

Any advice would be much appreciated.

This is what my code essentially looks like minus some DOM stuff.

startRecording: function(){
        // Start audioinput streaming to WebAudio        
        audioinput.start({ streamToWebAudio: true});

        if(this.recorder){
            this.recorder.clear();
            this.recorder.record();
        } else {
            // Here I use audioinput._micGainNode as the source for a new recorder.js instance
            this.recorder = new Recorder(audioinput._micGainNode, {numChannels:1});
            this.recorder.record();
        }
},

stopRecording: function(){
        this.recorder.stop();
        // I stop audioinput after I stop recording
        audioinput.stop();
        this.recorder.exportWAV(this.setRecordedFile);
}

raw audio mode sends data as a string?

@edimuj I noticed these lines in the code:

var audioData = JSON.parse(audioInputData.data);

(from https://github.com/edimuj/cordova-plugin-audioinput/blob/master/www/audioInputCapture.js#L215)

NSString *str = [mutableArray componentsJoinedByString:@","];
NSString *dataStr = [NSString stringWithFormat:@"[%@]", str];
NSDictionary* audioData = [NSDictionary dictionaryWithObject:[NSString stringWithString:dataStr] forKey:@"data"];

(from https://github.com/edimuj/cordova-plugin-audioinput/blob/master/src/ios/CDVAudioInputCapture.m#L93-L95)

It seems that the obj-c code is converting an array of (short ints?) to a string, and then the javascript code is parsing that into a plain javascript array.

I'm wondering if there's an opportunity here to make this more efficient: Instead of casting to/from a string, can we somehow send the array directly to javascript?

need a way to receive a notification when audio stops recording

Looking at this line: https://github.com/edimuj/cordova-plugin-audioinput/blob/master/www/audioInputCapture.js#L136

exec(null, audioinput._audioInputErrorEvent, "AudioInputCapture", "stop", []);

The first parameter is the success callback according to https://cordova.apache.org/docs/en/latest/guide/hybrid/plugins/#the-javascript-interface

I'm wondering if there's a way we can fire an event when the microphone stops. I'm having an issue with my application where I call audioinput.stop(), and then I immediately play some web audio. The audio starts playing with reduced volume. Eventually the microphone stops, and then the playing audio goes to full volume. Having a way to know when the microphone is no longer acquiring data enables me to wait before playing the audio

window.audioinput is undefined

Hi

I tried adding this plugin but none of the methods seem to be defined.

I'm trying to initialise after the deviceready event (all the other plugins work)

Cordova version: 8
Plugin version: 1.0.0

Am I missing something?

Start then stop then start again failed!

The first time start is called, I receive data in my buffer.
Then I call stop, and capturing is false.

If I call start again, capturing says true, but I receive nothing in my buffer.
In fact, neither the success nor the error event listener is called...

I tried removing the listener each time and adding it again, but it changes nothing :(

Audio event not firing.

Hey everybody,

we wanted to include the plugin in our app, because we faced challenges with the webRTC possibilities on iOS.

After the plugin didn't work in our app, we built a demo app, but had the same problems there. I attached the files for this.

We are currently not getting any audioinput event fired, but also no errors. We tested on an iPad Mini 4 with iOS 11.3 and an iPhone 7 Plus with iOS 11.2.6.

Cordova: 8.0
Plugin-Version: 1.0.0

Do you have any ideas about this?

config.xml.txt
index.js.txt
index.html.txt

Help about choose speaker to play audio

When I use an iPhone to play a WAV through audioContext.destination, the sound comes from the earpiece. On Android, it comes from the loudspeaker.
So how do I choose which one is used?

Working with audioinput on ionic2

Hello, I've tried running the audioinput plugin in an Ionic 2 app, but there is no response or indication that the plugin is working. For example, I did console.log(window.audioinput); but the audioinput object does not print to the console log.
Does this plugin need to work together with another plugin, or will I have to write Web Audio API code to work with window.audioinput? Or is it completely standalone?
I would like to see a working demo with Ionic 2 or Angular 2.
Note: I couldn't figure out how to use the example demo code found in the plugin.
Edit: window.audioinput prints undefined in the console log.

Bug when working with different AudioContexts in WebAudio mode

Hi @edimuj ,

thanks for this nice plugin. While testing I discovered a minor bug that led to a crash of my webapp running in Cordova under iOS. I am using Web Audio and your plugin to record audio, and to avoid blocking hardware resources I close and release the audio context once the recording is finished. Subsequent calls to audioinput.start would fail with a strange error.

How to reproduce:

call to start recording

audioinput.start({
	streamToWebAudio: true,
	channels: audioinput.CHANNELS.STEREO,
	audioContext: recordingAudioContext,
	bufferSize: bufferSize
});

connect the audioinput to my audio graph, and once finished I call

audioinput.stop();

The first call is handled successfully, but the next calls, as mentioned before, will fail. I create a new recordingAudioContext on every new tap of the record button. The problem is that you create a gain node as the connecting node to the rest of the audio graph, which is totally fine, but when calling audioinput.stop() the gain node should eventually be discarded, since it is possible to pass a new audioContext on every call of audioinput.start.

Long story short:

  • the gainnode is created during the first call of start() with the provided audioContext
  • I pass a new audioContext on every subsequent call of start(), but the plugin wants to use the "old" gain node, which belongs to the old, already closed audioContext, and this fails with an exception

My current workaround:

manually discard the gain node via audioinput._micGainNode = null;, so a new one is created automatically, thus resolving the issue; but I guess the plugin should handle this internally :)

Multiple game breakers on Android

The plugin basically doesn't work when using events. AudioInputCapture.java has this in initialize and start:

String fileUrlString = args.getString(5);
if (fileUrlString != null) {
   this.fileUrl = new URI(fileUrlString);
  // ensure it's a file URL
   new File(this.fileUrl);
 }

I don't know much about Java, but that's clearly broken. This seems to work:

 if (!args.isNull(5)) {
   String fileUrlString = args.getString(5);
   this.fileUrl = new URI(fileUrlString);
   // ensure it's a file URL
   new File(this.fileUrl);
}

I'd PR it but even with the fix the window audioinput event only fires once and not repeatedly as expected.

Separate capture cfg parameter for bit depth

It is a bit ugly to use a format string (e.g. PCM_16BIT) in the start configuration. It would be better if we could have a separate parameter for bit depth (e.g. 16 or 8), but also keep the old format parameter for backwards compatibility. This could be handled in the JS part of the plugin, so that the native code would always get a bit depth instead of a format string.

Base64 encoding instead of Array.toString for data transport?

Don't know how much better it would be in terms of Performance. Does anybody know?

Ideally it would be best if we could stream data from the native to webview layer, but I haven't yet found a way to do this. Maybe it could be achieved by starting up a server port that the webview can stream from, but it would probably be a nightmare in terms of security constraints?!

volume compare from mic ?

Hi,
Is it possible to compare volume levels from the microphone? Something like detecting low or high volume directly from the microphone while we speak (iOS Cordova)?
thanks

Use it to get data from the mic and to play back audio

Hi, nice promising plugin, thank you

Is it possible to use this plugin to do these two things at the same time?

1- Get data from the mic using the events mode; for example: get the data from the buffer, copy it to my local buffer, and prepare/process the bytes to be sent over the network for a remote end to receive and play back.

2- Play back audio that I obtain from another buffer at the same time: I receive bytes from the network, put them in a buffer, and I want to play back the audio from that buffer without interrupting the collection of bytes from the mic described in step 1.

I have been able to do step 1 above on Android to get data from the microphone and send it over the network very close to real time, with very small latency. Now I need to play back (at the same time) data that I obtain from the network (I am making a VoIP app), using the events mode. How can I use this plugin to also play back the audio I get from the remote end?

Is the same functionality available for iOS? When I try my app on iOS (step 1), the audio sounds very bad and very choppy.
Please advise.

Trying to play a sound to signal audio recording has started IOS

I'm using the cordova Media plugin to play a simple beep sound to signal the start and end of the audio input. If I play the beep before I call audioinput.start(), it is very quiet which is not what I need. I need it to be full volume. If I call the beep after I call audioinput.start() it immediately ends the recording, which is also what I don't want. I've tried using a few different plugins to play the sound and this seems to be the case for all of them. It works fine on Android.

Anyone know what I'm doing wrong? I'm assuming there is something in the audioinput code that is doing this intentionally.

Missing: cordova-plugin-file or cordova-plugin-audioinput!

Hello,
I have installed the cordova-plugin-audioinput plugin and am trying several ways of playing audio based on the demo folder. When I try using file-demo.html, copy its CSS and JS files into the www directory, and run the emulator, I get the following error:

Missing: cordova-plugin-file or cordova-plugin-audioinput!

This error is coming from utils.js, but I am not sure what the actual source is.

Can you please suggest how I can resolve this issue? It's urgent!
Thank you in advance.

Microphone Permissions on iOS 10

Will you add a permission request message for the microphone to comply with the iOS 10 security upgrade? The app crashes when the plugin is initiated without a specified permission request alert.

Play remote audio file

Hello,

I am trying to play a remote audio file through this plugin. Here is my script: https://pastebin.com/K08PbXU2

I can confirm that my audio file URL is correct, but this plugin is still not playing the audio.

Next, I downloaded the audio file into the Cordova file system and tried to play it, but it's not playing either. I have checked that the audio file exists in the file system.

This is critical to my project; the app I am building will help blind people learn about their surroundings, so your answer is really very important.
Looking forward.

Native code formatting

Since I wrote the native code in a standard text editor, the formatting is pretty bad. It should be fixed, so that the code is easier to read.

cordova-plugin-audioinput not found

When I try to use your demo examples, I get the error message: "cordova-plugin-audioinput not found".

I'm using your script in version 0.5.0 with Android 7.0 and cordova 8.0.0.

Could you please give me additional help?

Error connecting to AudioNodes

I supplied an audio context to audioinput.start:
audioinput.start({ streamToWebAudio: true, audioContext: myContext });

but connecting audioinput like this doesn't work:
audioinput.connect(myContext.destination);
and this also doesn't work:
audioinput.connect(myGain);

it throws an error alert (I am using a try/catch set up just like the demo) which says 'startCapture exception: Error: SyntaxError: DOM Exception 12'

but this way does work, at least it gets mic input coming out of the speakers:
audioinput.connect(audioinput.getAudioContext().destination)

I dug around in the source a bit and then checked whether myContext == audioinput._audioContext, and it isn't. But it should be, right?

Both myContext and audioinput._audioContext are valid audio contexts. I have some backup getUserMedia code for testing in browser, and the media stream source node it creates connects to exactly the same nodes in myContext and works just fine.

I am using Ionic, building with Ionic CLI (ionic build ios) then running on an iPhone 6 through XCode if that matters, but this seems like an issue somewhere in the javascript.

Add support for cordova browser platform and/or cordova-simulate

For development purposes, so that the plugin can run in the browser when running in the browser platform:
cordova platform add browser

The browser proxy should either provide dummy audio data (i.e. white noise) or rely on getUserMedia if it is supported in the current browser.

Android microphone permissions

Although the RECORD_AUDIO permission is added in the AndroidManifest.xml,
the record function will not work until the permission is manually granted
(Settings -> Apps -> "appName" -> Permissions -> Microphone enable).
I believe this is new in Marshmallow.

[Android 7] audioinput event not firing

Hi @edimuj - first, thanks for the awesome project! ;)

For some odd reason the audioinput event is not firing on Android 7.0, my configuration is basic:

var audioCaptureCfg = {
        sampleRate: audioinput.SAMPLERATE.CD_AUDIO_44100Hz,
        bufferSize: 16384,
        channels: audioinput.CHANNELS.MONO,
        format: audioinput.FORMAT.PCM_16BIT,
        normalize: true,
        normalizationFactor: 32767.0,
        concatenateMaxChunks: 10,
        audioSourceType: audioinput.AUDIOSOURCE_TYPE.DEFAULT
    };

I'm pretty much following the file-demo.js example, which is pretty straightforward; no errors are being thrown either. (window.audioinput && !audioinput.isCapturing()) is not falsy, and start() gets executed.

Any tips on what might be causing the issue or how to debug it on my side?

Thanks

App crash on iOS 11

Hi. First, I love this plugin because it gets the microphone permission on iOS without opening a separate recorder app. We use it in a language learning app to parse speech with the Google Speech API.

Has the plugin been tested with iOS 11 yet? I'm using version 0.3.0 of the plugin, but I have tried master and get a different error. For version 0.3.0, my app is crashing on iOS 11 after the user gives permission for the microphone. I see the popup asking for permission to use the microphone, and click OK, and then the app crashes. I see the following log messages in the Xcode console:

-[__NSCFNumber length]: unrecognized selector sent to instance 0xb00000000306c645
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFNumber length]: unrecognized selector sent to instance 0xb00000000306c645'
*** First throw call stack:
(0x182951d04 0x181ba0528 0x18295f1c8 0x1829576b0 0x18283d01c 0x1830b5328 0x182f99608 0x182f30eb8 0x1012e6d2c 0x1012e1d30 0x1012e30b0 0x1012e2da4 0x1012e1bcc 0x1012b5518 0x1012e9830 0x1012dd088 0x1019a549c 0x1019a545c 0x1019b4280 0x1019a89a4 0x1019b5104 0x1019bc100 0x18257afd0 0x18257ac20)
libc++abi.dylib: terminating with uncaught exception of type NSException

I have tried quite a few things to pinpoint the problem in the Objective C code, but I don't know Objective C very well. Here is my JavaScript code:

    this.recorder = this.$window.audioinput;
    this.captureConfig = {
        sampleRate: 16000,
        channels: this.recorder ? this.recorder.CHANNELS.MONO : null,
        format: this.recorder ? this.recorder.FORMAT.PCM_16BIT : null,
        audioSourceType: this.recorder ? this.recorder.AUDIOSOURCE_TYPE.DEFAULT : null
    };
    this.recorder.start(this.captureConfig);
