The magic trick is called base64 encoding. The essence is that the e-mail body and its attachments are encoded so that they are represented in ASCII characters only. The encoded data becomes larger by about 33% (4 output characters for every 3 input bytes), plus a little more for the line breaks MIME requires.
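As a quick illustration of that overhead (plain Python, nothing Thunderbird-specific):

```python
import base64

# 3 raw bytes become 4 ASCII characters, so base64 adds ~33% overhead
raw = bytes(range(256)) * 32          # 8 KiB of arbitrary binary data
encoded = base64.b64encode(raw)

print(len(raw))                       # 8192
print(len(encoded))                   # 10924 = ceil(8192 / 3) * 4
print(len(encoded) / len(raw))        # ~1.33
```

The ratio lands at almost exactly 4/3; mail programs then also insert a line break every 76 characters, which is where the last few percent of the bloat comes from.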
I just checked with my own Thunderbird, added the size column, and I confirmed your observation. Files are "bloated". Another good reason for deleting attachments from the "sent files" in the Sent folder.
My explanation for that is that TB is doing "MIME encoding" of all attachments to ensure reliable transfer.
Apparently, if we do drag-and-drop of attachments, which I always do, it may fail to detect the correct MIME type and fall back to the default:
application/octet-stream
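That fallback behaviour is easy to mimic. A minimal sketch (the `guess_mime` helper is hypothetical, not Thunderbird's actual code) using Python's standard `mimetypes` table:

```python
import mimetypes

# Guess the MIME type from the file name, roughly as a mail client might;
# when nothing in the table matches, fall back to the generic default.
def guess_mime(filename, default="application/octet-stream"):
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or default

print(guess_mime("report.pdf"))      # application/pdf
print(guess_mime("data.xyz123"))     # application/octet-stream (unknown extension)
```

The point is just that the type comes from a lookup on the file name; if the lookup fails, everything becomes `application/octet-stream`.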
I just sent myself another email with the same 8.2 MB PDF file, using the file navigator to select it, as recommended by some, and it made no difference to the "bloat": still 11.3 MB. Looking at the email source, I get the following for the attachment part:
That shows the correct mapping to the PDF type, and it also shows the transfer encoding as base64, which my gut tells me might be a misfit. Does anyone know how to have it use base32 encoding instead?
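For what it's worth, base32 would make the bloat worse, not better: it emits 8 characters per 5 input bytes, versus base64's 4 per 3. A quick comparison:

```python
import base64

raw = b"\x00" * 3000                # arbitrary binary payload

b64 = base64.b64encode(raw)         # 4 chars per 3 bytes -> ~33% overhead
b32 = base64.b32encode(raw)         # 8 chars per 5 bytes -> 60% overhead

print(len(b64) / len(raw))          # ~1.33
print(len(b32) / len(raw))          # 1.6
```

The standard MIME alternative to base64 is quoted-printable, but that only pays off for data that is mostly plain text already; for a PDF, base64 is the compact choice.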
Thanks guys
This explanation will be useful to many
re: ” Another good reason for deleting attachments from the "sent files" in the Sent folder.”
Could make sense. But you know, many are keeping everything in their Sent folder to be able to search later for when they sent which version to whom - making their back-in-time research easier.
Matter of choice, here.
I’ve wondered for a very long time if this ASCII-only encoding is still needed. Supposedly back in the early days of the Internet, back in the pre-AOL days when you had to either work on a DARPA-sponsored project or “know somebody” to get Internet access, some of the main “backbone” links were 7-bit only, so this base64 expansion was developed as a way to send binary data over the possibly-7-bit backbone segments.
Like Linus asking on the kernel mailing list, “does anyone still have working MFM/RLL hard drives running? If not, I’d like to remove support for them from the kernel” - so I wonder if any 7-bit networking hardware is still in use? I doubt that streaming services base64 encode/decode the videos we watch, unless it is built into the codecs used.
IMO it was never actually needed. I never could really understand the need to distinguish text from binary data and files. Both of them are just byte values; the difference lies in the way they are interpreted, not in the data as such.
Let me disagree, please. 7-bit was not a network hardware limitation. It was a software limitation by design. The core information-interchange protocols (telnet, POP, SMTP, FTP, etc.) were designed for textual information interchange. Hence the lock to 7-bit ASCII.