Digital Music File Help

I am running 128 Kbps wma files on my MP3 player, but would like to make the files smaller so I can fit more music on my 2 gig player. I have tried a 4 gig player and prefer the 2 gig as the batteries last 3 times as long in it.

I am told that wma files sound fine at 64 Kbps, as good as MP3 files running at 128 Kbps. Is that true?

I ran a quick test and saved a music file at 32 Kbps and couldn't tell the difference from the 128 Kbps on earbuds, but the real test would be in my truck with the 100 watt amp. Actually I may not go to 64 Kbps except on stuff that already is poor recording quality, but on my better music I will probably go no lower than 96 Kbps.
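For anyone wanting to sanity-check the trade-off, here's a rough sketch of how bitrate translates into songs-per-player. The 4-minute average track length and the decimal 2 GB figure are assumptions for illustration; real players and tracks vary.

```python
# Rough file-size math: constant-bitrate audio is just
# bits-per-second times seconds, divided by 8 for bytes.
def songs_that_fit(capacity_bytes, bitrate_kbps, minutes_per_song=4.0):
    bytes_per_song = bitrate_kbps * 1000 / 8 * minutes_per_song * 60
    return int(capacity_bytes // bytes_per_song)

capacity = 2 * 10**9  # assuming "2 gig" means 2 * 10^9 bytes
for kbps in (128, 96, 64, 32):
    print(kbps, songs_that_fit(capacity, kbps))
# 128 kbps -> ~520 songs; 64 kbps -> ~1041 songs
```

So halving the bitrate roughly doubles the song count, which is the whole appeal of dropping from 128 to 64 kbps.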
 
I would mix down to mono if using bitrates that low. On mp3 at least when the bitrate is below 200 or so the treble starts really sounding awful. Applause etc gets real muddy.

The WMA-beats-MP3 argument comes from Microsoft, and the comparison was "rigged" against the worst available MP3 encoder.

Battery life should not be a function of memory size, so the 2 GB player must be better engineered somehow.
 
You could try encoding .mp3 with VBR and setting the bitrate range at 32-128 kbps (you might also try a 32 kHz sample rate). The theory is that when there's music to be heard it comes through at 128, but when there's just white noise or silence no high bitrate gets wasted on it, thereby saving space. Obviously you're not going for audiophile-quality sound, but I think if you go much lower than this you'll really have trouble listening to it, barring a mono mixdown, which I wouldn't do unless of course the source is mono.
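The space savings from VBR can be sketched with some back-of-envelope math. The fraction of "quiet" audio here is a made-up figure for illustration; real VBR encoders make much finer-grained decisions frame by frame.

```python
# Hypothetical VBR estimate: quiet/simple passages get the low rate,
# everything else gets the high rate; the average decides file size.
def vbr_average_kbps(quiet_fraction, low=32, high=128):
    return quiet_fraction * low + (1 - quiet_fraction) * high

avg = vbr_average_kbps(0.25)   # assume 25% of the audio is "easy"
print(avg)                     # 104.0 kbps average
print(1 - avg / 128)           # 0.1875, i.e. ~19% smaller than CBR 128
```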

Also, I would look at getting a quality mp3 player with a much larger capacity when budget allows. Sony Walkmen are very good IMO.
 
On my old Palm Lifedrive, I went through the same thing trying to get as many songs on it as possible using only the 4GB internal memory...

It depends on the type(s) of music you're listening to...64kbit WMA is decent enough for uncomplex music for the average person. If you have lots of cymbals in the music, you will hear artifacts though...

For example, a 64 kbit WMA rip of Deborah Allen's (country singer) 90's CDs sounded decent in the car stereo, but not exceptional. Amy Grant's last couple of quieter releases were quite good at 64 kbit WMA, as they were, again, uncomplex arrangements. The last 3 Sara Evans CDs sound decent enough at this bit rate, too. Tim McGraw's "Circus" CD from a few years ago was also decent.

Not the case when trying the same WMA bitrate with anything by Rush, Dream Theater, etc. Far too many artifacts at 64KBit to be acceptable for these sorts of bands.

All in all, I think you'll be happy with 64KBit WMA on your 2GB player.
 
Whatever you do, do *not* re-encode .mp3's! The lossy artifacts are cumulative: every re-encode throws away more of the audio.

Also be aware that compressing the living bujeepers outta these files will possibly sound OK in ear buds and really cheap or noisy listening circumstances; but you've just consigned that file to be played only in those circumstances, as the suckitude will become clear in better environments. I cannot suggest enough that if you do opt to squish the files that heavily, that you keep a lossless or original copy somewhere.

If playing these files on a portable player is the point, then I'd also stick with the (VBR) .mp3 format (Every now and again you'll find a player that doesn't speak .wma). I honestly cannot see much music surviving anything less than 64kbps.
 
Originally Posted By: TallPaul
I am told that wma files sound fine at 64 Kbps, as good as MP3 files running at 128 Kbps. Is that true?


They couldn't possibly sound any worse!
 
Thanks. My 128 kbps wma files played fine in a friend's Jeep that had a subwoofer. He really cranked it out for a 2 hour drive and I loved it. I am running a 100 watt amp in my truck and they sound great to me. I think some of the live stuff that is not the greatest recording to begin with could drop to maybe 96 kbps, and I'll leave the rest alone. My 2 gig fits 260+ songs right now and it takes a long time to get through that much. I have another player with the rest of the songs I can switch to now and then.

I don't have many MP3s, only a few I bought as Amazon downloads. Most are ripped from CD to 128 kbps wma.
 
Originally Posted By: TallPaul
Thanks. My 128 kbps wma files played fine in a friend's Jeep that had a subwoofer. He really cranked it out for a 2 hour drive and I loved it. I am running a 100 watt amp in my truck and they sound great to me.


Compression distorts the high frequencies, so it shouldn't affect subwoofer performance. If there's distortion, you'll hear it from the tweeters.

I was just kidding in that other post. I apologize if I came across as offensive. It certainly can get far worse than a 128 kb/s mp3! Everyone has a different tolerance for compression. Mine is 192 kb/s using a good encoder like LAME. There shouldn't be any significant difference in sound quality between mp3 and wma at the same bitrate if a decent encoder is used.
 
Originally Posted By: rpn453
It certainly can get far worse than a 128 kb/s mp3!


Yes it can! So many online radio/music feeds are 32 kbit/sec MP3. Awful quality! Why broadcast such a pathetic bit rate for music? Ick!
 
Originally Posted By: ToyotaNSaturn

Yes it can! So many online radio/music feeds are 32Kbit/sec MP3. Awful quality! Why broadcast such a pathetic bit rate for music. ick!


Yeah, at 32 kbps, the only thing that still sounds decent is an AAC+ stream. DI.fm streams some of their stuff @ 24 kbps AAC+ and it's still relatively painless to listen to.
 
I agree, 24kb AAC+ is a worthwhile feed.

We like listening to WXRT's Saturday morning flashback show. But, since CBS owns the station, all we get is the dreaded 32Kb/sec MP3 stream.


Sure it's great for us to listen to it just about anywhere with the iPhone, but it's really sad when the iPhone's built-in speakers sound just as good as the 32Kbit feed!
 
Originally Posted By: rpn453
Compression distorts the high frequencies, so it shouldn't affect subwoofer performance. If there's distortion, you'll hear it from the tweeters.


Dynamic compression and data compression are two completely different things.

Dynamic compression at extreme levels will affect audio in differing ways depending on how it is applied. Compression applied during the (modern) mastering stage buggers up audio in all sorts of unseemly ways, across all areas of the frequency spectrum.

Data compression will bugger up top end first as well, but the thing to remember is that proper low end needs massive amounts of electrical current. Most listeners, especially those who listen to music on ear buds, car radios and ghetto blasters; not to mention low end home systems, have never heard proper low end (I'm not talking about applying a bazillion watts of muscle with a "sub woofer" to force *bad* low end up in level). Most data compression algorithms coagulate low end into a muddy, foggy pile of slop, but they do so in a way that is less than the destructive effects of most playback systems.

Low end distortion is as detectable as top end, if you can hear it to begin with. Intentionally allowing, or at least trying to craft, the nature of the low end distortion goes a *very long* way in determining the overall feel of a record. Putting a small-diaphragm, transformerless mic in front of a kick drum and then swapping in a large-diaphragm mic with a transformer the size of my fist is an eye-opening experience.
 
Originally Posted By: ToyotaNSaturn
Why broadcast such a pathetic bit rate for music. ick!



$$$$$$$$$$$$$$$

I think internet radio broadcasters know that most of their listeners are using computer speakers or ear buds. They're paying a boatload for bandwidth, and the savings in bandwidth and processing power from streaming a lower bitrate to a lot of people in a lot of different locations is considerable; especially given the prohibitive royalty fees these stations are paying nowadays.
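A quick sketch of why the bitrate matters so much to a broadcaster: total outbound bandwidth scales linearly with the listener count. The 10,000-listener figure here is hypothetical.

```python
# Each listener gets their own copy of the stream, so the server's
# outbound bandwidth is simply listeners * per-stream bitrate.
def total_mbps(listeners, bitrate_kbps):
    return listeners * bitrate_kbps / 1000

for kbps in (32, 128):
    print(kbps, total_mbps(10_000, kbps), "Mbps")
# 32 kbps -> 320 Mbps; 128 kbps -> 1280 Mbps for the same audience
```

Quadrupling the bitrate quadruples the bandwidth bill, which goes a long way toward explaining those 32 kbit feeds.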
 
Originally Posted By: TallPaul
I am running 128 Kbps wma files on my MP3 player, but would like to make the files smaller so I can fit more music on my 2 gig player. I have tried a 4 gig player and prefer the 2 gig as the batteries last 3 times as long in it.

I am told that wma files sound fine at 64 Kbps, as good as MP3 files running at 128 Kbps. Is that true?
...


No. Lossy music sucks to begin with, and as Frank Zappa once said (paraphrasing), "listening to digital music is like viewing a painting through a screen door." That's not as true as it used to be with "remastered" CDs and the like. On my iPod, I've actually increased the bitrates to 160 or 192 kbps. Does it make a huge difference? Probably not on a portable player. But you will notice a big drop in quality below 128 kbps IMO.
 
Originally Posted By: uc50ic4more
Most data compression algorithms coagulate low end into a muddy, foggy pile of slop, but they do so in a way that is less than the destructive effects of most playback systems.

Low end distortion is as detectable as top end, if you can hear it to begin with.


Interesting. I've never noticed that, but maybe that's just because I'd never crank up any of my serious stereos for music where I can hear distortion of the cymbals. That noise just grates on me. I'm willing to listen to low-quality music, like satellite radio or FM radio, if I really want to hear a particular song or as background music with good variety, but it's not something I want to hear beyond low volume. I do notice bass muddiness in FM radio, along with the complete lack of high frequencies (which is better than distortion of the high frequencies). What causes that muddiness?
 
Originally Posted By: uc50ic4more
Originally Posted By: ToyotaNSaturn
Why broadcast such a pathetic bit rate for music. ick!



$$$$$$$$$$$$$$$

I think internet radio broadcasters know that most of their listeners are using computer speakers or ear buds. They're paying a boatload for bandwidth, and any savings in streaming and the processing power required to stream the same thing to a lot of people in a lot of different locations is considerable; especially considering the prohibitive fees these stations are paying in royalties nowadays.


I agree with all that you say. It would be nice of them to offer a 32Kbit AAC+ feed rather than a 32Kbit MP3 stream.
 
Originally Posted By: rpn453
I do notice bass muddiness in FM radio, along with the complete lack of high frequencies (which is better than distortion of the high frequencies). What causes that muddiness?


Broadcast regulations in the States require your audio to be way down by 15 kHz. If I recall correctly, the stereo separation information is broadcast on a 19 kHz carrier signal, so the music part has to be long gone by then. To compensate for this, the FCC (Federal Communications Commission) allows for a +15 dB spike at 15 kHz. Add this to the main problem: stations are compressing and EQ'ing the *holy flippin' bujeepers* out of the dynamic range of the music (in order to be LOUDER THAN EVERYONE ELSE), which introduces scads of top end shrillness, and you've got yourself some bad audio; missing resolution, dynamic range and phase linearity.

Muddiness in the low end is due to the virtual impossibility of properly reproducing good low end in a small space like a car (small cubic volume), with ear buds (speakers too small to move considerable amounts of air), or on poor quality reproduction systems (big, fat, flabby woofers that cannot move fast enough to reproduce bass detail, and poor quality amplifiers that cannot supply enough current to do the same).
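For the curious, the FM stereo layout described above (mono program below 15 kHz, a 19 kHz pilot, and the left-minus-right difference on a 38 kHz subcarrier) can be sketched in a few lines of numpy, with pure tones standing in for music. The tone frequencies and pilot level are illustrative, and pre-emphasis is ignored.

```python
import numpy as np

fs = 200_000                  # baseband sample rate, well above 53 kHz
t = np.arange(fs) / fs        # one second of time
left = np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone as the left channel
right = np.sin(2 * np.pi * 2000 * t)   # 2 kHz tone as the right channel

pilot = 0.1 * np.sin(2 * np.pi * 19000 * t)        # stereo pilot
mpx = ((left + right) / 2                          # mono (L+R) part
       + pilot
       + (left - right) / 2 * np.sin(2 * np.pi * 38000 * t))  # L-R on 38 kHz

spectrum = np.abs(np.fft.rfft(mpx))
peaks = sorted(int(i) for i in np.argsort(spectrum)[-7:])
print(peaks)  # [1000, 2000, 19000, 36000, 37000, 39000, 40000]
```

The difference signal shows up as sidebands around 38 kHz, which is why the audible program has to be gone well before the 19 kHz pilot: anything up there would collide with the stereo machinery.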
 
Originally Posted By: ToyotaNSaturn

I agree with all that you say. It would be nice of them to offer a 32Kbit AAC+ feed rather than a 32Kbit MP3 stream.


Sadly, these codecs are patent-encumbered and are not free to use (which is why I try to stick with broadcasters that use .ogg, which is F/LOSS). Also, many of these broadcasters use dedicated hardware to encode the music stream in real time. To offer both MP3 and AAC streams, they'd have to pay royalties or license fees for two codecs, pay for twice the encoding hardware, and THEN deal with all of the extra bandwidth. Yuck!
 
Originally Posted By: Nickdfresh
Lossy music sucks to begin with, and as Frank Zappa once said (paraphrasing), "listening to digital music is like viewing a painting through a screen door." That's not as true as it used to be with "remastered" CDs and the like. On my iPod, I've actually increased the bitrates to 160 or 192 kbps. Does it make a huge difference? Probably not on a portable player. But you will notice a big drop in quality below 128 kbps IMO.


Early analog-to-digital converters, used to take the original analog master tapes and convert them to digital for CD manufacturing, were *awful* up until the late 90's/early 00's. You want to pass audio up to and including 20 kHz (a ceiling now commonly argued to be *inadequate* in its own right), but you have to be down -60 dB or something ridiculous by 22 kHz so as not to pass aliased sampling-frequency artifacts. Introducing that drastic a slope to a low-pass filter *destroyed* the top octave of audio (from 10 kHz - 20 kHz) with phase nonlinearities, AND chopped off anything above that severely. Add to that the general immaturity of the technology and the inadequate choice of format for CD (16-bit, 44.1 kHz) and you've got a decade and a half of poor sounding CDs.
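The aliasing that brick-wall filter guards against is easy to demonstrate: sample a tone above the 22.05 kHz Nyquist limit at 44.1 kHz and it folds back into the audible band. A minimal numpy sketch:

```python
import numpy as np

fs = 44100          # CD sample rate; Nyquist limit is 22050 Hz
f_in = 25000        # tone deliberately above Nyquist
n = np.arange(fs)   # one second of samples
x = np.sin(2 * np.pi * f_in * n / fs)

spectrum = np.abs(np.fft.rfft(x))
alias = int(np.argmax(spectrum))
print(alias)  # 19100 = fs - f_in: the 25 kHz tone folds into the audible band
```

Without the steep anti-alias filter, that inaudible 25 kHz content would land at a very audible 19.1 kHz, which is exactly why the filter has to be so brutal just above 20 kHz.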

I have an early CD of Van Morrison's "Astral Weeks", as well as a downloaded copy made by someone who sampled his or her vinyl pressing from the late 60's; *using consumer grade (but very modern) equipment to do so* and encoded it as a 24bit 96KHz .flac. The difference in detail is astounding. The CD sounds lifeless and flat.
 