6 - Speech communications
Published online by Cambridge University Press: 05 June 2016
Summary
Chapters 1 to 4 covered the foundations of speech signal processing, including the characteristics of audio signals, methods of handling and processing them, the human speech production mechanism and the human auditory system. Chapter 5 then looked in more detail at psychoacoustics – the difference between what a human perceives and what is actually physically present. This chapter now builds upon these foundations to explore the handling of speech in more depth, in particular the coding of speech for communications purposes.
The chapter will consider typical speech processing in terms of speech coding and compression (rather than in terms of speech classification and recognition, which we will describe separately in later chapters). We will first consider the important topic of quantisation, which assumes speech to be a general audio waveform (i.e. the technique does not incorporate any specialist knowledge of the characteristics of speech).
Knowledge of speech features and characteristics allows parameterisation of the speech signal, in particular through the important source-filter model. Perhaps the pinnacle of achievement in these approaches is the CELP (code-excited linear prediction) speech compression technique, which will be discussed in the final section.
Quantisation
As mentioned at the beginning of Chapter 1, audio samples need to be quantised in some way during the conversion from analogue quantities to their representations on computer. In effect, the quantisation process acts to reduce the amount of information stored: the fewer bits used to quantise the signal, the less audio information is preserved.
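The trade-off between bit depth and preserved information can be illustrated with a minimal uniform quantiser. This is an illustrative sketch only (the function name and signal are invented for the example, and real codecs use more sophisticated schemes); it shows how the signal-to-noise ratio falls as fewer bits are used:

```python
import numpy as np

def quantise(signal, bits):
    """Uniformly quantise samples in [-1, 1) to the given number of bits.

    Illustrative sketch only, not a production quantiser.
    """
    levels = 2 ** bits                    # number of quantisation levels
    step = 2.0 / levels                   # quantiser step size
    # Round each sample to the nearest level index, then map back to [-1, 1)
    idx = np.clip(np.round(signal / step), -levels // 2, levels // 2 - 1)
    return idx * step

# A short sine wave as a stand-in for an audio signal
t = np.linspace(0, 1, 1000, endpoint=False)
x = 0.8 * np.sin(2 * np.pi * 5 * t)

for bits in (16, 8, 4):
    err = x - quantise(x, bits)
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits:2d} bits: SNR = {snr:.1f} dB")
```

Each additional bit of resolution buys roughly 6 dB of signal-to-noise ratio, which is why halving the bit depth audibly degrades the signal.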
Most real-world systems are rate or storage constrained: an MP3 player may only be able to store 4 or 8 Gbytes of audio, and a Bluetooth-connected speaker may be limited to replaying 16-bit sound sampled at 44.1 kHz because that is the widest-bandwidth audio signal the Bluetooth wireless link can convey.
Manufacturers of MP3 devices may quote how many songs or how many hours of audio their devices can store – both considered more customer-friendly than specifying memory capacity in Gbytes – but it is the memory capacity in Gbytes that tends to determine the cost of the device. A method of reducing the size of audio recordings is therefore important, since it allows more songs to be stored on a device with a smaller memory capacity.
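The relationship between memory capacity and song count is simple arithmetic. The following back-of-envelope sketch assumes CD-quality figures (44.1 kHz, 16-bit, stereo, uncompressed) and a nominal 4-minute song; these numbers are assumptions for illustration, not taken from the text:

```python
# Assumed CD-quality parameters (not from the book)
sample_rate = 44_100      # samples per second, per channel
bits_per_sample = 16
channels = 2

bytes_per_second = sample_rate * bits_per_sample * channels // 8
mb_per_minute = bytes_per_second * 60 / 1e6
print(f"{mb_per_minute:.1f} MB per minute, uncompressed")   # ~10.6 MB/min

# How many uncompressed 4-minute songs fit on a 4 GB (4000 MB) player?
song_mb = mb_per_minute * 4
songs = 4000 / song_mb
print(f"about {songs:.0f} uncompressed songs on 4 GB")

# MP3 at ~128 kbit/s reduces the ~1411 kbit/s CD rate roughly 11-fold,
# so compression multiplies the song count by about an order of magnitude.
```

This is exactly why compression matters commercially: the same memory chip holds roughly ten times as many songs once the audio is compressed.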
- Type: Chapter
- Information: Speech and Audio Processing: A MATLAB-based Approach, pp. 140-194. Publisher: Cambridge University Press. Print publication year: 2016.