public class SpeechRecognitionServiceFactory
extends java.lang.Object
The SpeechRecognitionServiceFactory creates one of four kinds of client (a usage sketch follows the list):
(1) A DataRecognitionClient -- for speech recognition with data (for example, from a file or an audio source). The data is broken up into buffers, and each buffer is sent to the Speech Recognition Service. No modification is done to the buffers, so the user can apply their own Silence Detection. Returns only text recognition results. The audio must be PCM, mono, 16-bit samples, with a sample rate of 8000 Hz or 16000 Hz.
(2) A DataRecognitionClientWithIntent -- for speech recognition with data (for example, from a file or an audio source). The data is broken up into buffers, and each buffer is sent to the Speech Recognition Service. No modification is done to the buffers, so the user can apply their own Silence Detection. Returns text recognition results as well as structured intent results, in JSON form, from the LUIS service (see https://LUIS.ai). The audio must be PCM, mono, 16-bit samples, with a sample rate of 8000 Hz or 16000 Hz.
(3) A MicrophoneRecognitionClient -- for speech recognition from the microphone. The microphone is turned on and data from the microphone is sent to the Speech Recognition Service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. Returns only text recognition results.
(4) A MicrophoneRecognitionClientWithIntent -- for speech recognition from the microphone. The microphone is turned on and data from the microphone is sent to the Speech Recognition Service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. Returns text recognition results as well as structured intent results, in JSON form, from the LUIS service (see https://LUIS.ai).
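The sketch below shows the basic factory call pattern for the simplest case, a MicrophoneRecognitionClient created from an Activity. It is illustrative only: the SpeechRecognitionMode.ShortPhrase constant and the "en-us" locale tag are assumed values (this page only names the ShortPhrase and LongDictation modes), and the key strings are placeholders for your subscription keys.

```java
import android.app.Activity;

// Minimal usage sketch (not part of the SDK): create a microphone client via the factory.
// SpeechRecognitionMode.ShortPhrase and the "en-us" locale tag are assumed values, and the
// key strings are placeholders for your subscription keys.
final class MicClientSetup {
    static MicrophoneRecognitionClient create(Activity activity,
                                              ISpeechRecognitionServerEvents eventHandlers) {
        return SpeechRecognitionServiceFactory.createMicrophoneClient(
                activity,
                SpeechRecognitionMode.ShortPhrase,  // one final n-best result per utterance
                "en-us",                            // language of the speech being recognized
                eventHandlers,                      // your ISpeechRecognitionServerEvents implementation
                "<primary key>",                    // primary subscription key (placeholder)
                "<secondary key>");                 // secondary subscription key (placeholder)
    }
}
```

The data and WithIntent clients are created the same way, with the parameters described in the method details below.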
| Modifier and Type | Field and Description |
|---|---|
| static java.lang.String | DictationContext |
| Modifier and Type | Method and Description |
|---|---|
| static DataRecognitionClient | createDataClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey) Deprecated. |
| static DataRecognitionClient | createDataClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey) Create a DataRecognitionClient -- for speech recognition with data (for example from a file or audio source). |
| static DataRecognitionClient | createDataClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url) Create a DataRecognitionClient with Acoustic Model Adaptation -- for speech recognition with data (for example from a file or audio source). |
| static DataRecognitionClient | createDataClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey) Deprecated. |
| static DataRecognitionClient | createDataClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey) Create a DataRecognitionClient -- for speech recognition with data (for example from a file or audio source). |
| static DataRecognitionClient | createDataClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url) Create a DataRecognitionClient with Acoustic Model Adaptation -- for speech recognition with data (for example from a file or audio source). |
| static DataRecognitionClientWithIntent | createDataClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Deprecated. |
| static DataRecognitionClientWithIntent | createDataClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Create a DataRecognitionClientWithIntent -- for speech recognition with data (for example from a file or audio source). |
| static DataRecognitionClientWithIntent | createDataClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url) Create a DataRecognitionClientWithIntent with Acoustic Model Adaptation -- for speech recognition with data (for example from a file or audio source). |
| static DataRecognitionClientWithIntent | createDataClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Deprecated. |
| static DataRecognitionClientWithIntent | createDataClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Create a DataRecognitionClientWithIntent -- for speech recognition with data (for example from a file or audio source). |
| static DataRecognitionClientWithIntent | createDataClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url) Create a DataRecognitionClientWithIntent with Acoustic Model Adaptation -- for speech recognition with data (for example from a file or audio source). |
| static MicrophoneRecognitionClient | createMicrophoneClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey) Deprecated. |
| static MicrophoneRecognitionClient | createMicrophoneClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey) Create a MicrophoneRecognitionClient -- for speech recognition from the microphone. |
| static MicrophoneRecognitionClient | createMicrophoneClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url) Create a MicrophoneRecognitionClient with Acoustic Model Adaptation -- for speech recognition from the microphone. |
| static MicrophoneRecognitionClient | createMicrophoneClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey) Deprecated. |
| static MicrophoneRecognitionClient | createMicrophoneClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey) Create a MicrophoneRecognitionClient -- for speech recognition from the microphone. |
| static MicrophoneRecognitionClient | createMicrophoneClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url) Create a MicrophoneRecognitionClient with Acoustic Model Adaptation -- for speech recognition from the microphone. |
| static MicrophoneRecognitionClientWithIntent | createMicrophoneClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Deprecated. |
| static MicrophoneRecognitionClientWithIntent | createMicrophoneClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Create a MicrophoneRecognitionClientWithIntent -- for speech recognition from the microphone. |
| static MicrophoneRecognitionClientWithIntent | createMicrophoneClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url) Create a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation -- for speech recognition from the microphone. |
| static MicrophoneRecognitionClientWithIntent | createMicrophoneClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Deprecated. |
| static MicrophoneRecognitionClientWithIntent | createMicrophoneClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId) Create a MicrophoneRecognitionClientWithIntent -- for speech recognition from the microphone. |
| static MicrophoneRecognitionClientWithIntent | createMicrophoneClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url) Create a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation -- for speech recognition from the microphone. |
| static java.lang.String | getAPIVersion() |
public static final java.lang.String DictationContext
public static java.lang.String getAPIVersion()
@Deprecated
public static DataRecognitionClient createDataClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey)

Parameters:
activity - The hosting activity context.
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.

public static DataRecognitionClient createDataClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey)

Parameters:
activity - The hosting activity context.
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.

public static DataRecognitionClient createDataClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url)

Parameters:
activity - The hosting activity context.
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.

@Deprecated
public static DataRecognitionClient createDataClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey)

Parameters:
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.

public static DataRecognitionClient createDataClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey)

Parameters:
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.

public static DataRecognitionClient createDataClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url)

Parameters:
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.
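To make the overloads above concrete, here is a minimal sketch that creates a DataRecognitionClient and streams raw PCM audio to it buffer by buffer. The sendAudio(buffer, bytesRead) and endAudio() calls belong to the DataRecognitionClient class and are not documented on this page, so treat them as assumptions to verify against the client reference; the "en-us" locale tag is likewise assumed.

```java
import android.app.Activity;
import java.io.IOException;
import java.io.InputStream;

// Sketch: stream raw PCM audio (mono, 16-bit, 8000 or 16000 Hz) to a DataRecognitionClient.
final class DataClientExample {
    static void recognize(Activity activity,
                          ISpeechRecognitionServerEvents eventHandlers,
                          InputStream pcmAudio,        // e.g. a raw PCM asset or recording
                          String primaryKey,
                          String secondaryKey) throws IOException {
        DataRecognitionClient client = SpeechRecognitionServiceFactory.createDataClient(
                activity,
                SpeechRecognitionMode.LongDictation, // multiple final results at sentence pauses
                "en-us",                             // assumed locale tag
                eventHandlers,
                primaryKey,
                secondaryKey);

        // The buffers are sent unmodified, so apply your own silence detection if needed.
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = pcmAudio.read(buffer)) > 0) {
            client.sendAudio(buffer, bytesRead);     // assumed DataRecognitionClient method
        }
        client.endAudio();                           // assumed: signals the end of the audio stream
    }
}
```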
@Deprecated
public static DataRecognitionClientWithIntent createDataClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
activity - The hosting activity context.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static DataRecognitionClientWithIntent createDataClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
activity - The hosting activity context.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static DataRecognitionClientWithIntent createDataClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url)

Parameters:
activity - The hosting activity context.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.

@Deprecated
public static DataRecognitionClientWithIntent createDataClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static DataRecognitionClientWithIntent createDataClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static DataRecognitionClientWithIntent createDataClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url)

Parameters:
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.
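The intent variant is created the same way; the only additions are the LUIS application ID and subscription ID, and intent results arrive as JSON through the ISpeechRecognitionServerEvents handlers. A minimal sketch with placeholder GUIDs follows; audio is then fed exactly as with the plain DataRecognitionClient, and the locale tag is an assumed value.

```java
import android.app.Activity;

// Sketch: data recognition plus LUIS intent results (placeholders are illustrative).
final class DataIntentClientExample {
    static DataRecognitionClientWithIntent create(Activity activity,
                                                  ISpeechRecognitionServerEvents eventHandlers,
                                                  String primaryKey,
                                                  String secondaryKey) {
        return SpeechRecognitionServiceFactory.createDataClientWithIntent(
                activity,
                "en-us",                   // assumed locale tag
                eventHandlers,             // intent JSON arrives through these handlers
                primaryKey,
                secondaryKey,
                "<LUIS application ID>",   // GUID of your published LUIS intent model (placeholder)
                "<LUIS subscription ID>"); // LUIS subscription secret (placeholder)
    }
}
```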
@Deprecated
public static MicrophoneRecognitionClient createMicrophoneClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey)

Parameters:
activity - The hosting activity context.
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.

public static MicrophoneRecognitionClient createMicrophoneClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey)

Parameters:
activity - The hosting activity context.
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.

public static MicrophoneRecognitionClient createMicrophoneClient(android.app.Activity activity, SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url)

Parameters:
activity - The hosting activity context.
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.

@Deprecated
public static MicrophoneRecognitionClient createMicrophoneClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey)

Parameters:
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.

public static MicrophoneRecognitionClient createMicrophoneClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey)

Parameters:
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.

public static MicrophoneRecognitionClient createMicrophoneClient(SpeechRecognitionMode speechRecognitionMode, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String url)

Parameters:
speechRecognitionMode - In ShortPhrase mode the client gets one final multiple n-best choice result; in LongDictation mode the client receives multiple final results, based on where the server thinks sentence pauses are.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.
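The url overloads point the client at an adapted acoustic model. Here is a sketch using the overload without an Activity; the endpoint string is a placeholder, the locale tag is assumed, and startMicAndRecognition()/endMicAndRecognition() are methods of MicrophoneRecognitionClient that this page does not document, so treat them as assumptions to verify against the client reference.

```java
// Sketch: microphone recognition against an adapted acoustic-model endpoint.
final class MicClientExample {
    private MicrophoneRecognitionClient micClient;

    void start(ISpeechRecognitionServerEvents eventHandlers,
               String primaryKey, String secondaryKey, String adaptedModelUrl) {
        micClient = SpeechRecognitionServiceFactory.createMicrophoneClient(
                SpeechRecognitionMode.ShortPhrase, // one final n-best result per utterance
                "en-us",                           // assumed locale tag
                eventHandlers,
                primaryKey,
                secondaryKey,
                adaptedModelUrl);                  // endpoint from the Acoustic Model Specialization Service
        micClient.startMicAndRecognition();        // assumed client method: turns the microphone on
    }

    void stop() {
        micClient.endMicAndRecognition();          // assumed client method: turns the microphone off
    }
}
```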
@Deprecated
public static MicrophoneRecognitionClientWithIntent createMicrophoneClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
activity - The hosting activity context.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static MicrophoneRecognitionClientWithIntent createMicrophoneClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
activity - The hosting activity context.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static MicrophoneRecognitionClientWithIntent createMicrophoneClientWithIntent(android.app.Activity activity, java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url)

Parameters:
activity - The hosting activity context.
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.

@Deprecated
public static MicrophoneRecognitionClientWithIntent createMicrophoneClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryOrSecondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryOrSecondaryKey - The primary or the secondary key. Changing a key periodically (in case it leaks) would mean downtime if there were only one key, so you get two: the primary and the secondary. If you disable one key, the other key still works, giving you time to replace the disabled one.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static MicrophoneRecognitionClientWithIntent createMicrophoneClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId)

Parameters:
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.

public static MicrophoneRecognitionClientWithIntent createMicrophoneClientWithIntent(java.lang.String language, ISpeechRecognitionServerEvents eventHandlers, java.lang.String primaryKey, java.lang.String secondaryKey, java.lang.String luisAppId, java.lang.String luisSubscriptionId, java.lang.String url)

Parameters:
language - The language of the speech being recognized.
eventHandlers - An implementation of ISpeechRecognitionServerEvents that has all of the handler methods for each kind of event.
primaryKey - The primary key. It's a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.
secondaryKey - The secondary key. Intended to be used when the primary key has been disabled.
luisAppId - Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai), you will be given an Application ID GUID. Use that GUID here.
luisSubscriptionId - Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID. Use that secret here.
url - The endpoint with an Acoustic Model that you specially created with the Acoustic Model Specialization Service.
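For completeness, here is a sketch of the microphone-plus-intent case using the Activity overload. Note that the WithIntent factory methods take no SpeechRecognitionMode parameter. As above, the locale tag, the placeholder GUIDs, and the startMicAndRecognition() call are assumptions rather than part of this page.

```java
import android.app.Activity;

// Sketch: microphone recognition plus LUIS intent results (placeholders are illustrative).
final class MicIntentClientExample {
    static MicrophoneRecognitionClientWithIntent start(Activity activity,
                                                       ISpeechRecognitionServerEvents eventHandlers,
                                                       String primaryKey,
                                                       String secondaryKey) {
        // Unlike createMicrophoneClient, there is no SpeechRecognitionMode parameter here.
        MicrophoneRecognitionClientWithIntent client =
                SpeechRecognitionServiceFactory.createMicrophoneClientWithIntent(
                        activity,
                        "en-us",                   // assumed locale tag
                        eventHandlers,             // intent JSON arrives through these handlers
                        primaryKey,
                        secondaryKey,
                        "<LUIS application ID>",   // GUID of your published LUIS intent model (placeholder)
                        "<LUIS subscription ID>"); // LUIS subscription secret (placeholder)
        client.startMicAndRecognition();           // assumed client method: turns the microphone on
        return client;
    }
}
```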