
React Native Voice

A speech-to-text library for React Native.

Installation
yarn add @react-native-voice/voice

# or

npm i @react-native-voice/voice --save

Link the iOS package

npx pod-install

Linking

Manually or automatically link the NativeModule

react-native link @react-native-voice/voice

Manually Link Android

  • In android/settings.gradle
...
include ':@react-native-voice_voice', ':app'
project(':@react-native-voice_voice').projectDir = new File(rootProject.projectDir, '../node_modules/@react-native-voice/voice/android')
  • In android/app/build.gradle
...
dependencies {
    ...
    compile project(':@react-native-voice_voice')
}
  • In MainApplication.java
import android.app.Application;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactPackage;
...
import com.wenkesj.voice.VoicePackage; // <------ Add this!
...

public class MainApplication extends Application implements ReactApplication {
...
    @Override
    protected List<ReactPackage> getPackages() {
      return Arrays.<ReactPackage>asList(
        new MainReactPackage(),
        new VoicePackage() // <------ Add this!
        );
    }
}

Manually Link iOS

  • Drag the Voice.xcodeproj from the @react-native-voice/voice/ios folder into the Libraries group in Xcode in your project. (Manual linking)

  • Click on your main project file (the one that represents the .xcodeproj), select Build Phases, and drag the static library, lib.Voice.a, from the Libraries/Voice.xcodeproj/Products folder to Link Binary With Libraries.

Prebuild Plugin

This package cannot be used in the "Expo Go" app because it requires custom native code.

After installing this npm package, add the config plugin to the plugins array of your app.json or app.config.js:

{
  "expo": {
    "plugins": ["@react-native-voice/voice"]
  }
}

Next, rebuild your app as described in the "Adding custom native code" guide.

Props

The plugin provides props for extra customization. Every time you change the props or plugins, you'll need to rebuild (and prebuild) the native app. If no extra properties are added, defaults will be used.

  • speechRecognitionPermission (string | false): Sets the message for the NSSpeechRecognitionUsageDescription key in the Info.plist. When undefined, a default permission message will be used. When false, the permission will be skipped.
  • microphonePermission (string | false): Sets the message for the NSMicrophoneUsageDescription key in the Info.plist. When undefined, a default permission message will be used. When false, the android.permission.RECORD_AUDIO permission will not be added to the AndroidManifest.xml and the iOS permission will be skipped.

Example

{
  "plugins": [
    [
      "@react-native-voice/voice",
      {
        "microphonePermission": "CUSTOM: Allow $(PRODUCT_NAME) to access the microphone",
        "speechRecognitionPermission": "CUSTOM: Allow $(PRODUCT_NAME) to securely recognize user speech"
      }
    ]
  ]
}

Usage

Full example for Android and iOS.

Example

import Voice from '@react-native-voice/voice';
import React, {Component} from 'react';

class VoiceTest extends Component {
  constructor(props) {
    super(props);
    Voice.onSpeechStart = this.onSpeechStartHandler.bind(this);
    Voice.onSpeechEnd = this.onSpeechEndHandler.bind(this);
    Voice.onSpeechResults = this.onSpeechResultsHandler.bind(this);
  }
  onStartButtonPress(e){
    Voice.start('en-US');
  }
  ...
}
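
When the component unmounts, you should also release the native recognizer and the handlers assigned in the constructor. A minimal sketch of that cleanup, using only Voice.destroy() and Voice.removeAllListeners() from the API section below (add the method inside the component above):

  componentWillUnmount() {
    // Destroy the recognizer instance, then clear the assigned event handlers.
    Voice.destroy().then(Voice.removeAllListeners);
  }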

API

Static access to the Voice API.

All methods now return a new Promise for async/await compatibility.

  • Voice.isAvailable(): Checks whether a speech recognition service is available on the system. (Android, iOS)
  • Voice.start(locale): Starts listening for speech for a specific locale. Returns null if no error occurs. (Android, iOS)
  • Voice.stop(): Stops listening for speech. Returns null if no error occurs. (Android, iOS)
  • Voice.cancel(): Cancels the speech recognition. Returns null if no error occurs. (Android, iOS)
  • Voice.destroy(): Destroys the current SpeechRecognizer instance. Returns null if no error occurs. (Android, iOS)
  • Voice.removeAllListeners(): Cleans/nullifies overridden Voice static methods. (Android, iOS)
  • Voice.isRecognizing(): Returns whether the SpeechRecognizer is currently recognizing. (Android, iOS)
  • Voice.getSpeechRecognitionServices(): Returns a list of the speech recognition engines available on the device. (Example: ['com.google.android.googlequicksearchbox'] if Google is the only one available.) (Android)
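
Since every method returns a Promise, calls can be wrapped with async/await and try/catch. A minimal sketch (the 'en-US' locale and the error handling are illustrative, not required by the library):

// Start and stop recognition; failures surface as rejected Promises.
const startRecognizing = async () => {
  try {
    await Voice.start('en-US');
  } catch (e) {
    console.error('Failed to start speech recognition', e);
  }
};

const stopRecognizing = async () => {
  try {
    await Voice.stop();
  } catch (e) {
    console.error('Failed to stop speech recognition', e);
  }
};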

Events

Callbacks that are invoked when a native event is emitted.

  • Voice.onSpeechStart(event): Invoked when .start() is called without error. Event: { error: false } (Android, iOS)
  • Voice.onSpeechRecognized(event): Invoked when speech is recognized. Event: { error: false } (Android, iOS)
  • Voice.onSpeechEnd(event): Invoked when the SpeechRecognizer stops recognition. Event: { error: false } (Android, iOS)
  • Voice.onSpeechError(event): Invoked when an error occurs. Event: { error: Description of error as string } (Android, iOS)
  • Voice.onSpeechResults(event): Invoked when the SpeechRecognizer has finished recognizing. Event: { value: [..., 'Speech recognized'] } (Android, iOS)
  • Voice.onSpeechPartialResults(event): Invoked when any partial results are computed. Event: { value: [..., 'Partial speech recognized'] } (Android, iOS)
  • Voice.onSpeechVolumeChanged(event): Invoked when the recognized pitch changes. Event: { value: pitch in dB } (Android)
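
The handlers above are plain properties on the imported Voice module, so they can be assigned (or reassigned) anywhere. A short sketch of consuming the payloads described in this list, assuming the same import as the Usage example:

// onSpeechResults / onSpeechPartialResults receive { value: string[] };
// onSpeechError receives { error } describing the failure.
Voice.onSpeechResults = (e) => console.log('final results:', e.value);
Voice.onSpeechPartialResults = (e) => console.log('partial results:', e.value);
Voice.onSpeechError = (e) => console.warn('speech error:', e.error);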

Permissions

Arguably the most important part.

Android

While the included VoiceTest app works without explicit permissions checks and requests, it may be necessary to add a permission request for RECORD_AUDIO for some configurations. Since Android M (6.0), users need to grant the permission at runtime (and not during app installation). By default, calling the startSpeech method will prompt the user with the RECORD_AUDIO permission popup. This can be disabled by passing REQUEST_PERMISSIONS_AUTO: true in the options argument.
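
If you prefer to request the permission yourself before calling Voice.start, React Native's PermissionsAndroid module can be used. A minimal sketch; the helper name and rationale strings are illustrative:

import { PermissionsAndroid } from 'react-native';

// Ask for RECORD_AUDIO at runtime (Android 6.0+) before starting recognition.
async function requestMicrophonePermission() {
  const granted = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
    {
      title: 'Microphone permission',
      message: 'This app needs access to your microphone for speech recognition.',
      buttonPositive: 'OK',
    },
  );
  return granted === PermissionsAndroid.RESULTS.GRANTED;
}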

If you're running an ejected expo/expokit app, you may run into issues with permissions on Android and get the following error: host.exp.exponent.MainActivity cannot be cast to com.facebook.react.ReactActivity startSpeech. This can be resolved by prompting for permission using the expo-permissions package before starting recognition.

import { Permissions } from "expo";
async componentDidMount() {
	const { status, expires, permissions } = await Permissions.askAsync(
		Permissions.AUDIO_RECORDING
	);
	if (status !== "granted") {
		//Permissions not granted. Don't show the start recording button because it will cause problems if it's pressed.
		this.setState({showRecordButton: false});
	} else {
		this.setState({showRecordButton: true});
	}
}

Notes on Android

Even after all the permissions are correct on Android, there is one last thing to make sure this library works correctly. Please make sure the device has a Google speech recognition engine such as com.google.android.googlequicksearchbox by calling Voice.getSpeechRecognitionServices() (a minimal check is sketched at the end of this section). Since Android phones can be configured in many ways, even a device that has the googlequicksearchbox engine may be configured to use a different service. You can check which service is used as the voice assistive app with the following steps on most Android phones:

Settings > App Management > Default App > Assistive App and Voice Input > Assistive App

The flow above can vary depending on the Android model and manufacturer. For Huawei phones, there is a chance that the device cannot install Google services.

How can I get com.google.android.googlequicksearchbox on the device?

Please ask users to install the Google Search app.
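
To check for the Google engine from code, as mentioned above, you can inspect the list returned by Voice.getSpeechRecognitionServices() (Android only). A minimal sketch; how your app reacts to a missing engine is up to you:

// Android only: resolves to true when the Google recognition engine is installed.
async function hasGoogleRecognitionEngine() {
  const services = await Voice.getSpeechRecognitionServices();
  return services.includes('com.google.android.googlequicksearchbox');
}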

iOS

You need to include the NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription keys in your Info.plist on iOS. See the included VoiceTest for how to handle these cases.

<dict>
  ...
  <key>NSMicrophoneUsageDescription</key>
  <string>Description of why you require the use of the microphone</string>
  <key>NSSpeechRecognitionUsageDescription</key>
  <string>Description of why you require the use of speech recognition</string>
  ...
</dict>

For Android runtime permissions, please see the documentation provided by React Native: PermissionsAndroid

Contributors

  • @asafron
  • @BrendanFDMoore
  • @brudny
  • @chitezh
  • @ifsnow
  • @jamsch
  • @misino
  • @Noitidart
  • @ohtangza & @hayanmind
  • @rudiedev6
  • @tdonia
  • @wenkesj
