Spoken Language System Assessment

  • ISBN13: 9783110157352
  • ISBN10: 3110157357

  • Format: Paperback
  • Copyright: 1999-07-01
  • Publisher: Mouton De Gruyter

Table of Contents

User's guide 1(28)
Background 1(3)
EAGLES objectives 1(1)
EAGLES organisational structure 2(1)
EAGLES workplan 3(1)
Spoken Language systems, standards and resources 4(5)
Spoken Language systems 4(1)
Standards and resources for Spoken Language systems 5(4)
The EAGLES Spoken Language Working Group (WG5) 9(5)
Subgroups of the EAGLES Spoken Language Working Group 11(1)
Relationships with the other EAGLES Working Groups 11(1)
Workshops 11(2)
Production of the handbook 13(1)
Consultation with the R&D Community 13(1)
Overview of the handbook 14(11)
Intended readership 14(1)
Scope 15(6)
The main chapters of the handbook 21(4)
The current state of play 25(1)
Possible future actions 25(2)
Revision and completion of existing documentation 25(1)
Extended survey of existing practice 25(1)
Extension of language base 26(1)
Terminology 26(1)
Move to prescriptive recommendations 26(1)
Publication and dissemination 26(1)
Coordination with other bodies 26(1)
Contact points 27(1)
Acknowledgements 28(1)
Part III: Spoken language system assessment 29(274)
Assessment methodologies and experimental design 30(37)
Introduction 30(3)
How to read this chapter 30(2)
Role of statistical analysis and experimentation in Language Engineering Standards (LES) 32(1)
Statistical and experimental procedures for analysing data corpora 33(13)
Statistical analysis 33(1)
Populations, samples and other terminology 33(1)
Sampling 34(1)
Biases 34(1)
Estimating sample means, proportions and variances 35(5)
Hypothesis testing 40(6)
Experimental procedures 46(6)
Experimental selection of material 46(3)
Segmentation 49(2)
Classification 51(1)
Assessing recognisers 52(7)
Baseline performance 52(1)
Progress 53(3)
Functional adequacy and user acceptance 56(2)
Methodology 58(1)
Experimental design 59(1)
Assessing speaker verification and recognition systems 59(2)
Sampling rare events in speaker verification and recognition systems 60(1)
Employing expert judgments to augment speaker verification and assessment for forensic aspects of speaker verification and recognition 60(1)
Interactive dialogue systems 61(6)
Wizard of Oz (WOZ) 61(3)
Dialogue metrics 64(3)
Assessment of recognition systems 67(27)
Introduction 67(4)
Classification of recognition systems 67(2)
Speech quality and conditions 69(1)
Capability profile versus requirement profile 70(1)
Assessment purpose versus methodology 71(1)
Definitions and nomenclature 71(4)
The performance measure as percentage 71(1)
Recognition score 72(2)
Confusions 74(1)
Vocabulary 74(1)
Analysis of Variance design 75(1)
Description of methodologies 75(3)
Representative databases 75(1)
Reference methods 76(1)
Specific calibrated databases 77(1)
Diagnostic methods with a specific vocabulary 77(1)
Artificial test signals 78(1)
Parameters 78(4)
Pre-production parameters 78(1)
Post-production parameters 79(1)
Linguistic parameters 79(1)
Recogniser specific parameters 79(1)
Assessment parameters 80(2)
Experimental design of small vocabulary word recognition 82(5)
Technical set-up 82(2)
Training 84(1)
Test procedure 84(3)
Scoring the results 87(1)
Analysis of results 87(1)
Experimental design of large vocabulary continuous speech recognition 87(7)
Training material 88(2)
Development test 90(1)
Dry run 91(1)
Test material selection 91(1)
Evaluation protocol 92(1)
Scoring method 93(1)
Assessment of speaker verification systems 94(73)
Presentation 94(2)
Speaker classification tasks 94(1)
General definitions 95(1)
A taxonomy of speaker recognition systems 96(8)
Task typology 96(2)
Levels of text dependence 98(1)
Interaction mode with the user 98(1)
Definitions 99(2)
Examples 101(3)
Influencing factors 104(10)
Speech quality 104(1)
Temporal drift 105(1)
Speech quantity and variety 105(1)
Speaker population size and typology 106(1)
Speaker purpose and other human factors 107(2)
Recommendations 109(3)
Example 112(2)
Scoring procedures 114(33)
Notation 114(3)
Closed-set identification 117(5)
Verification 122(22)
Open-set identification 144(1)
Recommendations 145(2)
Comparative and indirect assessment 147(3)
Reference systems 147(2)
Human calibration 149(1)
Transformation of speech databases 150(1)
Applications, systems and products 150(9)
Terminology 151(1)
Typology of applications 152(3)
Examples of speaker verification systems 155(1)
Examples of speaker verification products 156(1)
Alternative techniques 157(2)
Conclusions 159(1)
System and product assessment 159(4)
System assessment 160(1)
Product assessment 161(1)
Recommendations 162(1)
Forensic applications 163(3)
Listener method 163(1)
Spectrographic method 164(1)
Semi-automatic method 165(1)
Recommendations 165(1)
Conclusions 166(1)
Assessment of synthesis systems 167(83)
Introduction 167(4)
What are speech output systems? 167(1)
Why speech output assessment? 168(1)
Users of this chapter 169(2)
Towards a taxonomy of assessment tasks and techniques 171(5)
Glass box vs. black box 171(2)
Laboratory vs. field 173(1)
Linguistic vs. acoustic 174(1)
Human vs. automated 174(1)
Judgment vs. functional testing 175(1)
Global vs. analytic assessment 176(1)
Methodology 176(12)
Subjects 177(3)
Test procedures 180(3)
Benchmarks 183(1)
Reference conditions 183(4)
Comparability across languages 187(1)
Black box approach 188(9)
Laboratory testing 188(6)
Field testing 194(3)
Glass box approach 197(29)
Linguistic aspects 197(7)
Acoustic aspects 204(22)
Further developments in speech output testing 226(10)
Introduction 226(1)
Long-term strategy: Towards predictive tests 227(3)
Linguistic testing: Creating test environments for linguistic interfaces 230(2)
Acoustic testing: Developments for the near future 232(4)
Conclusion: summary of test descriptions 236(14)
SAM Standard Segmental Test 237(1)
CLuster IDentification Test (CLID) 238(1)
The Bellcore Test 239(1)
Diagnostic Rhyme Test (DRT) 240(1)
Modified Rhyme Test (MRT) 241(1)
Haskins Syntactic Sentences 242(1)
SAM Semantically Unpredictable Sentences (SUS) 243(1)
Harvard Psychoacoustic Sentences 244(1)
SAM Prosodic Form Test 245(1)
SAM Prosodic Function Test 246(1)
SAM Overall Quality Test 247(1)
ITU-T Overall Quality Test 248(2)
Assessment of interactive systems 250(53)
Introduction 250(3)
About this chapter 250(1)
Reading guide 251(2)
Interactive dialogue systems 253(4)
Definitions 253(4)
Specification and design 257(23)
Design by intuition 259(5)
Design by observation 264(3)
Design by simulation 267(10)
Iterative design methodology for spoken language dialogue systems 277(3)
Readings in interactive dialogue system specification 280(1)
Evaluation 280(23)
Background 281(2)
Characterisation 283(4)
Assessment framework 287(12)
Recommendations on evaluation methodology 299(1)
Readings in interactive dialogue system evaluation 300(3)
Bibliographical references 303(18)
Glossary 321(14)
List of abbreviations 335(6)
Index 341
