数字通信原理(英文版) [Principles of Digital Communication, English Edition], by Robert G. Gallager
- Category: electronics & communications…
- Author: Robert G. Gallager
- Publisher: Posts & Telecom Press (人民邮电出版社)
- ISBN: 9787115223364
- Publication date: 2010-04-01
- List price: ¥69.00
Book description

Basic product information (please rely on the details below)
- Title: 数字通信原理(英文版) [Principles of Digital Communication, English Edition]
- Author: Robert G. Gallager
- List price: ¥69.00
- Winxuan price: ¥55.90 (81% of list)
- ISBN: 9787115223364
- Publisher: Posts & Telecom Press (人民邮电出版社)
- Product type: book

Other reference information (subject to the physical copy)
- Binding: paperback
- Language: Chinese
- Publication date: 2010-04-01
- Print date: 2010-04-01
- Pages: 407
- Format, edition, printing, and word count: not listed
Note: for titles published more than three years ago, long storage may cause paper defects; we appreciate your understanding.
About the book
This book is a recent work by Dr. Robert G. Gallager, a world authority on communications and a leading figure in information theory. Distilled from the fundamentals of digital communication, it emphasizes the relationships among theory, problems, and engineering design. It covers the core problems of digital communication, including discrete source coding, quantization, channel waveforms, vector spaces and signal space, random processes and noise, coding, and decoding, and closes with a brief introduction to wireless digital communication.
The book is intended as a textbook for senior undergraduates and graduate students in communications, and also serves as a reference for practicing engineers.
Table of Contents

1 Introduction to digital communication 1
1.1 Standardized interfaces and layering 3
1.2 Communication sources 5
1.2.1 Source coding 6
1.3 Communication channels 7
1.3.1 Channel encoding (modulation) 10
1.3.2 Error correction 11
1.4 Digital interface 12
1.4.1 Network aspects of the digital interface 12
1.5 Supplementary reading 14
2 Coding for discrete sources 16
2.1 Introduction 16
2.2 Fixed-length codes for discrete sources 18
2.3 Variable-length codes for discrete sources 19
2.3.1 Unique decodability 20
2.3.2 Prefix-free codes for discrete sources 21
2.3.3 The Kraft inequality for prefix-free codes 23
2.4 Probability models for discrete sources 26
2.4.1 Discrete memoryless sources 26
2.5 Minimum L for prefix-free codes 27
2.5.1 Lagrange multiplier solution for the minimum L 28
2.5.2 Entropy bounds on L 29
2.5.3 Huffman’s algorithm for optimal source codes 31
2.6 Entropy and fixed-to-variable-length codes 35
2.6.1 Fixed-to-variable-length codes 37
2.7 The AEP and the source coding theorems 38
2.7.1 The weak law of large numbers 39
2.7.2 The asymptotic equipartition property 40
2.7.3 Source coding theorems 43
2.7.4 The entropy bound for general classes of codes 44
2.8 Markov sources 46
2.8.1 Coding for Markov sources 48
2.8.2 Conditional entropy 48
2.9 Lempel-Ziv universal data compression 51
2.9.1 The LZ77 algorithm 51
2.9.2 Why LZ77 works 53
2.9.3 Discussion 54
2.10 Summary of discrete source coding 55
2.11 Exercises 56
3 Quantization 67
3.1 Introduction to quantization 67
3.2 Scalar quantization 68
3.2.1 Choice of intervals for given representation points 69
3.2.2 Choice of representation points for given intervals 69
3.2.3 The Lloyd-Max algorithm 70
3.3 Vector quantization 72
3.4 Entropy-coded quantization 73
3.5 High-rate entropy-coded quantization 75
3.6 Differential entropy 76
3.7 Performance of uniform high-rate scalar quantizers 78
3.8 High-rate two-dimensional quantizers 81
3.9 Summary of quantization 84
3.10 Appendixes 85
3.10.1 Nonuniform scalar quantizers 85
3.10.2 Nonuniform 2D quantizers 87
3.11 Exercises 88
4 Source and channel waveforms 93
4.1 Introduction 93
4.1.1 Analog sources 93
4.1.2 Communication channels 95
4.2 Fourier series 96
4.2.1 Finite-energy waveforms 98
4.3 L2 functions and Lebesgue integration over [-T/2,T/2] 101
4.3.1 Lebesgue measure for a union of intervals 102
4.3.2 Measure for more general sets 104
4.3.3 Measurable functions and integration over [-T/2,T/2] 106
4.3.4 Measurability of functions defined by other functions 108
4.3.5 L1 and L2 functions over [-T/2,T/2] 108
4.4 Fourier series for L2 waveforms 109
4.4.1 The T-spaced truncated sinusoid expansion 111
4.5 Fourier transforms and L2 waveforms 114
4.5.1 Measure and integration over R 116
4.5.2 Fourier transforms of L2 functions 118
4.6 The DTFT and the sampling theorem 120
4.6.1 The discrete-time Fourier transform 121
4.6.2 The sampling theorem 122
4.6.3 Source coding using sampled waveforms 124
4.6.4 The sampling theorem for [Δ-W,Δ+W] 125
4.7 Aliasing and the sinc-weighted sinusoid expansion 126
4.7.1 The T-spaced sinc-weighted sinusoid expansion 127
4.7.2 Degrees of freedom 128
4.7.3 Aliasing ― a time-domain approach 129
4.7.4 Aliasing ― a frequency-domain approach 130
4.8 Summary 132
4.9 Appendix: Supplementary material and proofs 133
4.9.1 Countable sets 133
4.9.2 Finite unions of intervals over [-T/2,T/2] 135
4.9.3 Countable unions and outer measure over [-T/2,T/2] 136
4.9.4 Arbitrary measurable sets over [-T/2,T/2] 139
4.10 Exercises 143
5 Vector spaces and signal space 153
5.1 Axioms and basic properties of vector spaces 154
5.1.1 Finite-dimensional vector spaces 156
5.2 Inner product spaces 158
5.2.1 The inner product spaces Rn and Cn 158
5.2.2 One-dimensional projections 159
5.2.3 The inner product space of L2 functions 161
5.2.4 Subspaces of inner product spaces 162
5.3 Orthonormal bases and the projection theorem 163
5.3.1 Finite-dimensional projections 164
5.3.2 Corollaries of the projection theorem 165
5.3.3 Gram–Schmidt orthonormalization 166
5.3.4 Orthonormal expansions in L2 167
5.4 Summary 169
5.5 Appendix: Supplementary material and proofs 170
5.5.1 The Plancherel theorem 170
5.5.2 The sampling and aliasing theorems 174
5.5.3 Prolate spheroidal waveforms 176
5.6 Exercises 177
6 Channels, modulation, and demodulation 181
6.1 Introduction 181
6.2 Pulse amplitude modulation (PAM) 184
6.2.1 Signal constellations 184
6.2.2 Channel imperfections: a preliminary view 185
6.2.3 Choice of the modulation pulse 187
6.2.4 PAM demodulation 189
6.3 The Nyquist criterion 190
6.3.1 Band-edge symmetry 191
6.3.2 Choosing {p(t-kT); k∈Z} as an orthonormal set 193
6.3.3 Relation between PAM and analog source coding 194
6.4 Modulation: baseband to passband and back 195
6.4.1 Double-sideband amplitude modulation 195
6.5 Quadrature amplitude modulation (QAM) 196
6.5.1 QAM signal set 198
6.5.2 QAM baseband modulation and demodulation 199
6.5.3 QAM: baseband to passband and back 200
6.5.4 Implementation of QAM 201
6.6 Signal space and degrees of freedom 203
6.6.1 Distance and orthogonality 204
6.7 Carrier and phase recovery in QAM systems 206
6.7.1 Tracking phase in the presence of noise 207
6.7.2 Large phase errors 208
6.8 Summary of modulation and demodulation 208
6.9 Exercises 209
7 Random processes and noise 216
7.1 Introduction 216
7.2 Random processes 217
7.2.1 Examples of random processes 218
7.2.2 The mean and covariance of a random process 220
7.2.3 Additive noise channels 221
7.3 Gaussian random variables, vectors, and processes 221
7.3.1 The covariance matrix of a jointly Gaussian random vector 224
7.3.2 The probability density of a jointly Gaussian random vector 224
7.3.3 Special case of a 2D zero-mean Gaussian random vector 227
7.3.4 Z=AW, where A is orthogonal 228
7.3.5 Probability density for Gaussian vectors in terms of principal axes 228
7.3.6 Fourier transforms for joint densities 230
7.4 Linear functionals and filters for random processes 231
7.4.1 Gaussian processes defined over orthonormal expansions 232
7.4.2 Linear filtering of Gaussian processes 233
7.4.3 Covariance for linear functionals and filters 234
7.5 Stationarity and related concepts 235
7.5.1 Wide-sense stationary (WSS) random processes 236
7.5.2 Effectively stationary and effectively WSS random processes 238
7.5.3 Linear functionals for effectively WSS random processes 239
7.5.4 Linear filters for effectively WSS random processes 239
7.6 Stationarity in the frequency domain 242
7.7 White Gaussian noise 244
7.7.1 The sinc expansion as an approximation to WGN 246
7.7.2 Poisson process noise 247
7.8 Adding noise to modulated communication 248
7.8.1 Complex Gaussian random variables and vectors 250
7.9 Signal-to-noise ratio 251
7.10 Summary of random processes 254
7.11 Appendix: Supplementary topics 255
7.11.1 Properties of covariance matrices 255
7.11.2 The Fourier series expansion of a truncated random process 257
7.11.3 Uncorrelated coefficients in a Fourier series 259
7.11.4 The Karhunen–Loève expansion 262
7.12 Exercises 263
8 Detection, coding, and decoding 268
8.1 Introduction 268
8.2 Binary detection 271
8.3 Binary signals in white Gaussian noise 273
8.3.1 Detection for PAM antipodal signals 273
8.3.2 Detection for binary nonantipodal signals 275
8.3.3 Detection for binary real vectors in WGN 276
8.3.4 Detection for binary complex vectors in WGN 279
8.3.5 Detection of binary antipodal waveforms in WGN 281
8.4 M-ary detection and sequence detection 285
8.4.1 M-ary detection 285
8.4.2 Successive transmissions of QAM signals in WGN 286
8.4.3 Detection with arbitrary modulation schemes 289
8.5 Orthogonal signal sets and simple channel coding 292
8.5.1 Simplex signal sets 293
8.5.2 Biorthogonal signal sets 294
8.5.3 Error probability for orthogonal signal sets 294
8.6 Block coding 298
8.6.1 Binary orthogonal codes and Hadamard matrices 298
8.6.2 Reed–Muller codes 300
8.7 Noisy-channel coding theorem 302
8.7.1 Discrete memoryless channels 303
8.7.2 Capacity 304
8.7.3 Converse to the noisy-channel coding theorem 306
8.7.4 Noisy-channel coding theorem, forward part 307
8.7.5 The noisy-channel coding theorem for WGN 311
8.8 Convolutional codes 312
8.8.1 Decoding of convolutional codes 314
8.8.2 The Viterbi algorithm 315
8.9 Summary of detection, coding, and decoding 317
8.10 Appendix: Neyman–Pearson threshold tests 317
8.11 Exercises 322
9 Wireless digital communication 330
9.1 Introduction 330
9.2 Physical modeling for wireless channels 334
9.2.1 Free-space, fixed transmitting and receiving antennas 334
9.2.2 Free-space, moving antenna 337
9.2.3 Moving antenna, reflecting wall 337
9.2.4 Reflection from a ground plane 340
9.2.5 Shadowing 340
9.2.6 Moving antenna, multiple reflectors 341
9.3 Input/output models of wireless channels 341
9.3.1 The system function and impulse response for LTV systems 343
9.3.2 Doppler spread and coherence time 345
9.3.3 Delay spread and coherence frequency 348
9.4 Baseband system functions and impulse responses 350
9.4.1 A discrete-time baseband model 353
9.5 Statistical channel models 355
9.5.1 Passband and baseband noise 358
9.6 Data detection 359
9.6.1 Binary detection in flat Rayleigh fading 360
9.6.2 Noncoherent detection with known channel magnitude 363
9.6.3 Noncoherent detection in flat Rician fading 365
9.7 Channel measurement 367
9.7.1 The use of probing signals to estimate the channel 368
9.7.2 Rake receivers 373
9.8 Diversity 376
9.9 CDMA: the IS95 standard 379
9.9.1 Voice compression 380
9.9.2 Channel coding and decoding 381
9.9.3 Viterbi decoding for fading channels 382
9.9.4 Modulation and demodulation 383
9.9.5 Multiaccess interference in IS95 386
9.10 Summary of wireless communication 388
9.11 Appendix: Error probability for noncoherent detection 390
9.12 Exercises 391
References 398
Index 400
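As a small illustration of one topic from the contents above, Section 2.3.3 covers the Kraft inequality for prefix-free codes: a prefix-free D-ary code with codeword lengths l_1, ..., l_n exists if and only if the sum of D^(-l_i) over all i is at most 1. The short Python sketch below checks this condition; the function names and the example codeword lengths are hypothetical choices made here for illustration, not taken from the book.

```python
# Illustrative sketch (not from the book): checking the Kraft inequality
# for a candidate list of codeword lengths. A prefix-free D-ary code with
# lengths l_1, ..., l_n exists iff sum_i D**(-l_i) <= 1.
# The example lengths below are hypothetical.

def kraft_sum(lengths, D=2):
    """Sum of D**(-l) over the given codeword lengths."""
    return sum(D ** (-l) for l in lengths)

def prefix_free_code_exists(lengths, D=2):
    """True iff some prefix-free D-ary code has exactly these lengths."""
    return kraft_sum(lengths, D) <= 1

if __name__ == "__main__":
    print(kraft_sum([1, 2, 3, 3]))                # 1.0: the inequality is tight
    print(prefix_free_code_exists([1, 2, 3, 3]))  # True, e.g. {0, 10, 110, 111}
    print(prefix_free_code_exists([1, 1, 2]))     # False: 0.5 + 0.5 + 0.25 > 1
```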