| author | nia <nia@pkgsrc.org> | 2019-11-28 15:56:03 +0000 |
|---|---|---|
| committer | nia <nia@pkgsrc.org> | 2019-11-28 15:56:03 +0000 |
| commit | c1f7623a07053e883c44b569ead06fbfeea184ea (patch) | |
| tree | fd84da87bd8f2752b8fbb55929cd14c429f8860f /audio/speech-dispatcher | |
| parent | 0f93eae371e052608cbf13cc973543b04895f596 (diff) | |
| download | pkgsrc-c1f7623a07053e883c44b569ead06fbfeea184ea.tar.gz | |
speech-dispatcher: Slightly less ridiculous DESCR
Diffstat (limited to 'audio/speech-dispatcher')
-rw-r--r-- | audio/speech-dispatcher/DESCR | 72 |
1 files changed, 3 insertions, 69 deletions
diff --git a/audio/speech-dispatcher/DESCR b/audio/speech-dispatcher/DESCR
index 25729917905..2849e6f4522 100644
--- a/audio/speech-dispatcher/DESCR
+++ b/audio/speech-dispatcher/DESCR
@@ -1,69 +1,3 @@
-Speech Dispatcher:
-
-Key features:
-
- * Common interface to different TTS engines
- * Handling concurrent synthesis requests -- requests may come
-   assynchronously from multiple sources within an application and/or
-   from different applications
- * Subsequent serialization, resolution of conflicts and priorities of
-   incomming requests
- * Context switching -- state is maintained for each client connection
-   independently, event for connections from within one application
- * High-level client interfaces for popular programming languages
- * Common sound output handling -- audio playback is handled by Speech
-   Dispatcher rather than the TTS engine, since most engines have limited
-   sound output capabilities
-
-What is a very high level GUI library to graphics, Speech Dispatcher is
-to speech synthesis. The application neither needs to talk to the devices
-directly nor to handle concurrent access, sound output and other tricky
-aspects of the speech subsystem.
-
-Supported TTS engines:
-
- * Festival
- * Flite
- * Espeak
- * Cicero
- * IBM TTS
- * Espeak+MBROLA (through a generic driver)
- * Epos (through a generic driver)
- * DecTalk software (through a generic driver)
- * Cepstral Swift (through a generic driver)
- * Ivona
- * Pico
-
-Supported sound output subsystems:
-
- * OSS
- * ALSA
- * PulseAudio
- * NAS
-
-The architecture is based on a client/server model. The clients are all
-the applications in the system that want to produce speech (typically
-assistive technologies). The basic means of client communication with
-Speech Dispatcher is through a Unix socket or Inet TCP connection using
-the Speech Synthesis Interface Protocol (See the SSIP documentation for
-more information). High-level client libraries for many popular
-programming languages implement this protocol to make its usage as
-simple as possible.
-
-Supported client interfaces:
-
- * C/C++ API
- * Python 3 API
- * Java API
- * Emacs Lisp API
- * Common Lisp API
- * Guile API
- * Simple command line client
-
-Existing assistive technologies known to work with Speech Dispatcher:
-
- * speechd-el
- * Orca (see http://live.gnome.org/Orca/SpeechDispatcher)
- * Yasr
- * LSR
- * BrlTTY
+Speech Dispatcher project provides a high-level device independent layer
+for access to speech synthesis through a simple, stable and well documented
+interface.
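The removed DESCR text notes that clients talk to Speech Dispatcher over a Unix socket or TCP connection using SSIP, normally through one of the high-level client libraries such as the Python 3 API. A minimal sketch of that usage, assuming the Python bindings are installed as the speechd module and a speech-dispatcher daemon is reachable on its default socket; the client name "descr-example" is an arbitrary placeholder:

```python
# Minimal sketch: send one utterance through Speech Dispatcher's Python 3 API.
# Assumes the "speechd" bindings are installed and a daemon is running.
import speechd

client = speechd.SSIPClient("descr-example")  # client name reported to the server
client.set_rate(0)                            # default speaking rate
client.speak("Hello from Speech Dispatcher")  # queued, synthesized and played by the server
client.close()                                # close the SSIP connection
```

The simple command line client listed among the interfaces above covers the same round trip from a shell, e.g. spd-say "Hello".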