I use ABB a lot to join audiobook tracks downloaded from a couple of online vendors. On one, the typical audiobook is 48 kbps, mono, with a sample rate of 24 kHz. (Titles where stereo would make a difference are usually 96 kbps, stereo, 32 kHz.) On the other, it's typically 64 kbps, stereo, 44.1 kHz. My goal in ABB is to get a file that's as close in quality to the original MP3s as possible.
I've found, through experimentation, that a happy medium within ABB is 48 kbps, mono, 32 kHz. My question is this: if I have a 64 kbps file and import it at 48 kbps, am I going to get roughly 48 kbps quality, or something less (for example, 64 x 0.48)? If the original sample rate was 44.1 kHz and I use 32 kHz, am I going to get "32" quality, or am I further degrading an already sparse file? What if it's the reverse -- originally sampled at 24 kHz, say, and I use a sample rate of 32 kHz to import it? What's the likely effect?
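To be concrete about what I mean by "64 x 0.48": I'm imagining two possible outcomes, sketched below in Python. The percentage interpretation in the second guess is pure speculation on my part, not anything I know about how encoders work:

```python
# My two guesses about what happens when a 64 kbps file is
# re-imported at a 48 kbps setting:

original_kbps = 64
abb_setting_kbps = 48

# Guess A: the output is simply written at the new rate,
# so I get genuine 48 kbps quality.
guess_a = abb_setting_kbps

# Guess B: the losses compound, so the result is only "worth"
# a fraction of the original -- roughly 64 x 0.48.
guess_b = original_kbps * (abb_setting_kbps / 100)

print(guess_a)  # 48
print(guess_b)  # 30.72
```

Is the truth closer to Guess A, Guess B, or something else entirely?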
I'll make this more specific. I used ABB to join a 200-track audiobook that was 64/stereo/44.1. When I imported it with the same settings in ABB, it sounded scratchy. When I re-imported it as 48/mono/32, it actually sounded better. It may be in this case that switching from stereo to mono was more important than any of the other factors. But I'm still curious.
If it's possible to describe the function of bit rate and sample rate in a way that makes sense to someone who barely made it through high school physics, I would be grateful. Which plays the bigger role in determining sound quality? How do they work together? And if I'm importing a file that's already been compressed for online distribution, should I always try to match the original bit rate and sample rate in ABB?