Hex dump of Gibe-F worm.

Availability Tools

Information Availability

Of the CIA triad of information security — Confidentiality, Integrity, and Availability — this one is different.

We have encryption for confidentiality. This is defensive cryptographic technology, attempting to prevent an adversary from reading our information. It cannot guarantee that an adversary could not discover the decryption key or otherwise obtain our information, but it makes that attack very difficult.

We have cryptographic hash functions for integrity. This is detective cryptographic technology, attempting to tell us if an adversary has modified our information. It cannot guarantee that an adversary could not somehow modify our information in a way that changes its meaning without our noticing, but it makes that attack very difficult.

We have specific numbers in both cases for how much work it would take to successfully attack us. Attacks will always be possible in theory, but we can make them hard enough in practice that we do not need to worry.

Unfortunately, we have no cryptographic tools for availability. This means that we have no math, and so we have no numbers. We cannot rigorously prove the likelihood of data or any other resource remaining available. We cannot even say that any one data set is more likely to remain available than another.

The best thing we have is statistics on what has happened so far in a similar setting. If someone reports that a specific type of storage media has "a lifetime of 2 to 5 years", what they are really saying is that in some percentage of similar cases, maybe 95% of them, maybe 99% of them, the data was available for between two and five years. In a few cases, it did not last even two years, and in a few more it may have lasted for more than five. All you really know is that if you use a large number of these storage devices, most of your data will probably still be around two years later.
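For example, read "a lifetime of 2 to 5 years" as a claim that perhaps 95% of devices are still readable after two years. A few lines of Python show what that means for a large collection; this is purely illustrative, and the 95% figure and the collection size are my own assumptions:

# Hypothetical illustration: if each device independently has a 95% chance
# of still being readable after two years, what does that mean for a
# collection of 1,000 devices?

import math

p_survive = 0.95      # assumed per-device two-year survival probability
n_devices = 1000      # assumed size of the collection

expected = n_devices * p_survive
# Standard deviation of a binomial(n, p) distribution:
std_dev = math.sqrt(n_devices * p_survive * (1 - p_survive))

print(f"Expected survivors after two years: {expected:.0f}")
print(f"Roughly 95% of the time: between {expected - 2*std_dev:.0f} "
      f"and {expected + 2*std_dev:.0f} survivors")
# Expect about 950 surviving devices, typically between about 936 and 964.
# The statistics describe the population; they guarantee nothing about
# any one device.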

Availability simply cannot be guaranteed. Any unprivileged user on a Unix-family operating system can type the following:

$ a() { a|a & } ; a 

That defines a shell function a() which immediately calls itself and pipes that output into a second copy of itself. That would recurse down an endless hole, doubling the number of processes at each level. And then that line calls that disastrous function.

On Solaris 9 that immediately freezes the system.

On Linux with a 3.* kernel and a typical amount of RAM, you would have about one second before the system freezes.

On OpenBSD the system freezes for a few seconds before the kernel steps in and kills the out-of-control set of processes.

On Linux with a 4.* kernel, the system freezes for a few seconds, then is very sluggish for several more while a blizzard of error messages flies up the screen where you typed this, a mix of:
"-bash: fork: retry: No child processes"
and:
"-bash: fork: retry: Resource temporarily unavailable"
messages. The load average can climb over 100 within a few seconds. I tested this on a Raspberry Pi with just 512 MB of RAM. In another window, where I was connected over SSH, I ran "top -d 0.2" to observe the freeze, the sluggishness, and the load average spike.

The systemd project is taking over more and more of the Linux operating system environment. Thanks to its reckless design, you may be able to freeze a system with this single line:

$ NOTIFY_SOCKET=/run/systemd/notify systemd-notify "" 

See that attack's explanation here, along with discussion of how systemd's creeping takeover of the Linux operating system is a bad idea.

Media Longevity and Failure Rates

Have you considered the longevity of your storage media? The article "Ensuring the Longevity of Digital Documents" [Scientific American, January 1995, pg 42] discussed this. An updated revision is available for download. More recently, "Avoiding a Digital Dark Age" [American Scientist, v98, n2 (Mar-Apr 2010), pg 106] and "Now we know it..." [New Scientist, 30 Jan 2010, pp 37-39] also discussed it. Scientific American had another short feature on this topic in April 2011, "Seeing Forever", which pointed out that no electronic data format had been around for even fifty years (both ASCII and EBCDIC being first standardized in 1963). Also see Vivek Navale's paper "Predicting the Life Expectancy of Modern Tape and Optical Media" in RLG DigiNews, Aug 15, 2005, 9:4.

The U.S. Library of Congress studies these problems at its Center for the Library's Analytical Science Samples; their work was described in an Atlantic article.

"The Machine Stops", E. M. Forster, 1909: What if The Machine stopped?

Summary: All media erodes, ink on paper is far better than any magnetic media, and we just don't have enough information to really say how long optical media is likely to last. The longevity of the logical format may be more important. Consider that the last hieroglyphs were carved in 396 AD, but soon after that we lost the ability to read the still very sharp and distinct writing. Similarly, Sumerian was used as a sacred, literary and scientific language in Mesopotamia until the 1st century AD, when it was quickly forgotten. Both Egyptian and Sumerian were deciphered in the 1800s through the use of trilingual inscriptions, although Egyptian is far better understood today. Also consider that the Dead Sea Scrolls are ink-on-parchment and ink-on-papyrus media about 1900 years old but still readable. Egyptian papyrus is up to twice as old and is also readable. The oldest true paper we have is from 868 AD. But computer media from a decade ago is often useless.

Estimated longevity of electronic storage media, in years:
  • CD-R (cyanine & azo dyes, used by Taiyo Yuden and Verbatim): 7
  • CD-RW, DVD-RW, DVD+RW: 7
  • Flash RAM: 10
  • Digital tape: 13
  • Analogue tape: 20
  • Audio CD, DVD movie, CD-R (phthalocyanine dye and silver metal layer), DVD-R, DVD+R: 25
    (Most CD-R media uses phthalocyanine, although Taiyo Yuden uses cyanine and Verbatim uses azo compound dyes.)
  • CD-R (phthalocyanine dye and gold metal layer): 100
From "Now we know it...", New Scientist, 30 Jan 2010, pp 37-39. Our storage media longevity gets worse over time.

Paleolithic art, including the Venus figurines and especially the Venus of Hohle Fels, dates from up to 40,000 years ago.
Clay tablets were developed about 8,000 BC and have expected lifetimes of 4,000 years and more.
Pigment on paper or papyrus came along about 3,500 BC and lasts at least 2,000 years.
Oil-based paintings were developed about 600 AD and can be expected to last for centuries.
Silver halide monochrome photographic film was developed around 1820 and lasts over 100 years, but modern color photo films (early examples of which were developed around 1860) only last for decades.

Mitsumi QuickDisk media. Ancient!

The article "Are We Losing Our Memory? Or: The Museum of Obsolete Technology", from Lost magazine, discussed this problem as experienced by the U.S. National Archives.

I have a personal story about attempts to recover data from old media in which both the logical format and the physical media had problems.


Another relic from the collection of obsolete storage media: 120 MB DC2120 QIC 80 magnetic tape.

Available Digital Media Types

The three major categories are magnetic, flash, and optical.

Magnetic media comes in the form of tape and disk. Disks can be installed inside a computer system case, or they can be placed in small self-contained cases and used as portable external devices. Some require their own power supply, while others can be powered over the same USB cable carrying the data connection.

Flash memory is electronic. It can be in the form of a small "chip" or "card" that slides into a slot in a camera, smart phone, or other device. Or, it can be in the form of a "USB stick" or "USB thumb drive". They are increasingly being used to replace or supplement magnetic disk storage inside computers. Internally, the memory cells are floating-gate MOSFET transistors, with the data stored as electric charge trapped on the electrically isolated floating gate.

Optical media takes the form of optical discs, usually spelled that way and not "disk". CD or Compact Disc, DVD or Digital Video Disc, and BD or Blu-Ray Disc media have identical physical dimensions, but very different optical and data storage characteristics. CD holds just 700 MB, DVD holds 4.7 GB per layer (6.7 times one CD), and BD holds 25 GB per layer (35.7 times one CD), with two-layer discs the current industry standard.
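Those capacity ratios are simple arithmetic. As a quick sanity check, assuming 700 MB per CD and the per-layer figures above:

# Quick check of the per-layer capacity ratios quoted above.
cd_mb  = 700          # CD capacity in MB
dvd_mb = 4.7 * 1000   # DVD single layer, 4.7 GB expressed in MB
bd_mb  = 25 * 1000    # Blu-ray single layer, 25 GB expressed in MB

print(f"DVD layer / CD: {dvd_mb / cd_mb:.1f}x")   # about 6.7x
print(f"BD layer / CD: {bd_mb / cd_mb:.1f}x")     # about 35.7x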


1 GB xD memory card, 16 GB MicroSD memory card, USB flash drive.

Lifespan of Flash Memory

This is the media with the least public information on storage lifetime. Usage lifetime is the more useful measurement for most applications.

Flash memory has a finite number of program-erase cycles, which you will see described as P/E cycles. Most of the flash products you can buy are guaranteed for around 100,000 P/E cycles before the memory wear begins to degrade data integrity. Some chip firmware or operating system drivers can count the writes and remap write operations across sectors; this is called wear leveling. Another technique verifies write operations and remaps I/O to spare sectors; this is called bad block management.

Either way, you will start to lose data after about 100,000 program-erase cycles.
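As a rough illustration of the wear-leveling idea, a controller can track per-block erase counts and steer each new write to the least-worn free block. This is a deliberately naive sketch of my own, not how any particular flash controller actually works:

# Simplified wear-leveling sketch: map logical blocks onto whichever
# physical block has the fewest erases so far. Real flash controllers
# are far more sophisticated; this only illustrates the idea.

class NaiveWearLeveler:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.mapping = {}                   # logical block -> physical block

    def write(self, logical_block):
        # Free the previously mapped physical block, if any.
        self.mapping.pop(logical_block, None)
        in_use = set(self.mapping.values())
        # Choose the free physical block with the fewest erase cycles so far.
        candidates = [b for b in range(len(self.erase_counts)) if b not in in_use]
        target = min(candidates, key=lambda b: self.erase_counts[b])
        self.erase_counts[target] += 1      # each write here costs an erase cycle
        self.mapping[logical_block] = target
        return target

leveler = NaiveWearLeveler(num_physical_blocks=8)
for i in range(100):
    leveler.write(i % 4)                    # hammer four logical blocks repeatedly
print(leveler.erase_counts)                 # the wear is spread across all 8 blocks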

There is also a problem called read disturb, in which a large number of read operations on some data blocks can cause changes to nearby cells if those nearby cells are not re-written. These errors also begin to appear after hundreds of thousands of operations.

Now, what if you store data in flash memory and set it aside? For how long will you be able to read that data back out? The articles cited above, which were used to build the table of estimated longevity, say that about 10 years is what you can expect.

With the rapidly dropping cost of flash memory, and the corresponding rapid growth of storage capacity you can get for a fixed price, I think that the industry sees this as a somewhat silly question. They would ask: "Instead of worrying about how long your data will safely reside in that old 128 MB thumb drive, why haven't you copied it into your brand-new, much cheaper device with over 100 times the storage capacity?"

Blank DVD+R media shows its translucency.

Much recordable optical media is translucent.

Lifespan of Optical Memory

First, realize that factory produced optical media is entirely different from what you record at home.

Factory produced optical media have names like CD-ROM and DVD-ROM to indicate that they are read-only memory storage. A glass master is used to mold a polycarbonate disc with the required pattern of pits. That surface is then metallized, with a thin layer of mostly aluminum plus traces of other metals sputtered onto it in a vacuum chamber. UV-curable lacquer is then applied to the metallized surface and cured under high intensity UV illumination. The result should be useful for 20 years or more.

Hold a factory produced CD or DVD up to a bright light. You will not see any light coming through the disc as it contains a thin but solid metal layer.

Compare this to a piece of writable or re-writable media, as in the picture at left. There is a thin reflective metal layer within the disc, but it is usually so thin that it is somewhat translucent. There are many specific forms: CD-R, CD-RW, DVD-R, DVD+R, DVD-RW, DVD+RW, BD-R, BD-RE. Most of these rely on optically sensitive dyes to allow recording data. The chemistry of these dyes gives them widely varying stability, all of it significantly worse than the metal layer of a factory version. Exposure to direct sunlight greatly accelerates data loss, as does high or varying temperature and humidity.

Executive Summary
CD-R
  • Small capacity and degrades quickly, especially if exposed to sunlight.
DVD-R
  • Should be good for a decade or so if you start with good media (stable dye, good metal reflective layer) and keep them away from sunlight.
  • A little more portable than DVD+R.
  • A better choice if you're sharing.
  • A better choice if you're copying a DVD to play in a DVD player.
DVD+R
  • Should be good for a decade or so if you start with good media (stable dye, good metal reflective layer) and keep them away from sunlight.
  • A little less portable than DVD-R, but you will waste fewer blanks due to burning errors.
  • A better choice if you keep them and use them only in the system where you burn them.
BD-R
  • Should be about like DVD-R with a little over five times the storage capacity.
CD-RW
DVD-RW
DVD+RW
BD-RE
  • Useful for short-term storage, as long as you don't try to re-use the media too many times.

CD-R discs rely on photosensitive dyes. Initially, cyanine dyes and hybrid dyes based on cyanine were used. They would fade and become unreadable in a few years even if carefully stored. They would become unreadable in just a few days if exposed to direct sunlight, with the "stabilized" ones lasting a week in the sun before losing data.

Azo and phthalocyanine dyes are more stable, with azo CD-Rs typically rated for decades and phthalocyanine CD-Rs rated for a hundred years or more (although recent studies seriously question these claims). Both are sensitive to UV radiation and therefore quickly degrade when exposed to sunlight. Phthalocyanine CD-Rs begin to degrade after two weeks of direct sunlight exposure, and azo CD-Rs after three to four weeks. Other factors leading to early degradation include the quality of the polycarbonate forming the disc and the metallic reflective layer behind the dye. Writer calibration and quality also affect the longevity of the recorded disc. A more marginal disc with recoverable errors will more quickly degrade to the point where its errors are no longer recoverable.

Quality writeable optical media will cost more, but it's worth it.

DVD-R and DVD+R are similar to CD-R in their reliance on chemical dyes that can fade or otherwise degrade over time. The laser wavelength is shorter, in order to read and write smaller pits on narrower tracks and therefore pack more data onto the disc. CD-R uses a near-infrared 780 nm laser, while DVD uses a red 650 nm laser. So, the DVD dyes are different from those used on CD-R.

DVD-R and DVD+R differ in some non-chemical details not affecting their lifetime. DVD+R may be a little more reliable when burning or recording, but DVD-R is a little more portable as some drives can read DVD-R but not DVD+R.

DVD-R DL and DVD+R DL are dual-layer versions, storing twice as much data as single-layer DVDs.

CD-RW uses an AgInSbTe alloy as its recording layer. Its original state is polycrystalline and reflective, and would be read as a "1". To write a "0", the laser uses its maximum power of 8-14 mW to heat the material to 500-700 °C, liquefying the alloy and making it amorphous and non-reflective. To later change that bit back to a "1", the laser heats the bit with low power to about 200 °C, at which point the alloy returns to its polycrystalline and reflective state. This can only be done a limited number of times, long-term data retention is quite poor, and the resulting media cannot be read in many drives. DVD-RW and DVD+RW are very similar to CD-RW in technical details and in their poor lifetime and portability, usually using a different alloy, GeSbTe. DVD-RW and DVD+RW differ in some non-chemical details not affecting their lifetime.

BD-R seems to be similar to DVD-R, again with changes in dye chemistry as now the laser illumination is blue, at 405 nm.

BD-RE seems to be similar to DVD-RW, possibly with changes in alloy chemistry.

500 GB PATA 3.5 inch disk drive and 1 TB SATA 2.5 inch disk drive.

Left: 500 GB PATA 3.5 inch internal-mount disk drive
Right: 1 TB SATA 2.5 inch external disk drive

Lifespan of Magnetic Memory

How long do you expect a magnetic disk drive to last before it fails? Is one brand better than another?

Who knows, and not especially....

Disk manufacturers do studies, but they are accelerated failure tests on their own systems only under very specific conditions. Any manufacturer can have a short run of worse or better devices, and comparisons between various manufacturers' products haven't been very meaningful.

Two papers presented at the 5th USENIX Conference on File And Storage Technology (FAST '07) have gotten quite a bit of attention.

The first is "Failure Trends in a Large Disk Drive Population", by Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz Andre Barroso, of Google.

The second is "Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean To You?" by Bianca Schroeder and Garth A Gibson, of Carnegie Mellon University. You can read their CMU Technical Report or the FAST '07 paper.

Here is my summary of the Google paper:

Their study was based on over 100,000 disk drives, a variety of PATA and SATA drives from a variety of manufacturers, 80-400 GB and 5400-7200 RPM. They do not provide information about the specific manufacturers, but that really isn't all that important. All manufacturers have short runs of worse and better quality, and an attempt to measure who was better would probably be overwhelmed by measurement noise.

Some SMART parameters are highly correlated with disk failures. However, SMART parameters alone are not all that useful for predicting individual drive failures.

Contrary to common assumptions, temperature and activity are not highly correlated to drive failure.

Drive manufacturers quote yearly failure rates below 2%, but user studies report up to 6%. Many apparent failures in the field don't seem to be failures in the lab; maybe the problem was with a specific controller or data cable. They also cite other studies of failure rates.

Some SMART data is clearly bogus. I agree: one of my disks seems to consistently report its temperature in degrees Fahrenheit instead of the expected Celsius, and so it appears to always be somewhere above the boiling temperature of water.

A significant number of drives fail within the first 3 months. The weak ones die quickly.... Then the failure rate climbs again after the first year. Annualized failure rates, approximated from their Figure 2:

  • Months 0-3: 2.8%
  • Months 4-6: 1.7%
  • Months 7-12: 1.7%
  • Months 13-24: 8.1%
  • Months 25-35: 8.6%
  • Months 36-48: 6.0%
  • Months 49-60: 7.8%

Four SMART parameters were significantly correlated with increased failure rates.

After the first occurrence of each of these errors, a drive is this many times more likely to fail within 60 days than a drive without that error:
  • Scan error: drives typically scan the disk surface in the background and report errors as they are found. Large scan error counts may indicate surface defects. 39 times more likely to fail.
  • Reallocation counts: the drive's logic has remapped a faulty sector number to a new physical sector drawn from its pool of spares, because of recurring soft errors or a hard error. May indicate drive surface wear. 14 times more likely to fail.
  • Offline reallocation: a subset of the reallocation counts, counting only reallocated sectors found during background analysis. Should exclude sectors reallocated due to errors during actual I/O. 21 times more likely to fail.
  • Probational counts: suspect bad sectors put "on probation". A weaker indication of possible problems. 16 times more likely to fail.

But while that looks impressive, over 56% of the failed drives had zero counts in all four of those SMART parameters! So, models based only on those four signals will predict less than half the failed drives.
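If you want to watch for these warning signs on your own drives, the smartmontools package can report the raw SMART attributes. Here is a minimal sketch that flags nonzero counts in the attributes I assume roughly correspond to the four parameters above; the attribute-ID mapping is my own guess and varies by vendor, and you will probably need to run this as root:

# Flag drives reporting nonzero counts in SMART attributes that roughly
# correspond to the four warning signs discussed above. Requires the
# smartctl tool from smartmontools.

import subprocess
import sys

# Assumed mapping (check your drive's documentation, vendors differ):
#   5   Reallocated_Sector_Ct   (reallocation counts)
#   196 Reallocated_Event_Count (offline reallocation, roughly)
#   197 Current_Pending_Sector  (probational counts)
#   198 Offline_Uncorrectable   (scan / surface errors, roughly)
WATCHED = {"5", "196", "197", "198"}

def check(device):
    output = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True).stdout
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0] in WATCHED:
            attr_id, name, raw = fields[0], fields[1], fields[9]
            if raw.isdigit() and int(raw) > 0:
                print(f"{device}: attribute {attr_id} ({name}) has raw value {raw}")

if __name__ == "__main__":
    for dev in sys.argv[1:] or ["/dev/sda"]:
        check(dev)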

The Google report said that there was a strong correlation with manufacturer, but they did not publish those details. That's fair enough, because the clusters of good and bad disks seem to follow manufacturing batches rather than manufacturers. In other words, any manufacturer has both good and bad runs of disks.

If you want to see names, a Russian study included them. It was on the net at pro.sunrise.ru but the article is no longer there. You can, however, find it through the archive.org Wayback Machine.


Archiving data in AWS Glacier

Amazon's Glacier storage service is a great value. Long-term storage in Glacier costs just US$ 0.01 per gigabyte per month. It's meant for backups and archives retrieved infrequently or never; you pay a penalty for frequent access and downloads.

How do you upload data to Glacier? That's the non-obvious part. It's not point-and-click as with S3. Solutions include:

My limited experience with these tools is:

I saved the SAGU Java archive file and then created this shell script in ~/bin/glacier-sagu

#!/bin/sh
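# Launch the SAGU graphical uploader in the background so the shell
# prompt returns immediately.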

java -jar ~/bin/SimpleGlacierUploaderV0746.jar & 

Expect the occasional error message like the following:

Sep 30, 2016 7:18:05 AM org.apache.http.impl.client.DefaultHttpClient tryExecute
INFO: I/O exception (java.net.SocketException) caught when processing request: Broken pipe
Sep 30, 2016 7:18:05 AM org.apache.http.impl.client.DefaultHttpClient tryExecute
INFO: Retrying request 
SAGU or Simple Amazon Glacier Uploader main window.

The first image is the main SAGU window. The second is the "progress bar" window, which is neither exciting nor informative.

SAGU or Simple Amazon Glacier Uploader 'progress bar' window.

Here is the result of signing into AWS, selecting one of my vaults, and viewing the details.

To delete archives and vaults, first you must request an inventory with SAGU or similar and wait 4 hours. You will get a file with a 138-character ArchiveID. Select the vault in SAGU, then Delete in the top menu, then paste that ArchiveID into the new window that appears.

Once the vault is empty, wait for a few hours. Then you will be able to use the AWS dashboard to delete the vault.

This page has details on how to use Boto by writing your own Python code to get more than the rather limited command-line options.
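For a rough idea of what such Python code looks like with boto3, the current successor to Boto, here is a minimal sketch; the vault name and file name are placeholders, and it assumes your AWS credentials are already configured:

# Minimal boto3 sketch: upload one file to a Glacier vault and request
# a vault inventory. The vault name and file name are placeholders.

import boto3

glacier = boto3.client("glacier")

# Upload an archive. Save the returned archiveId; you need it later
# to retrieve or delete the archive.
with open("backup-2016-09.tar.gz", "rb") as f:
    response = glacier.upload_archive(
        vaultName="my-backups",
        archiveDescription="September 2016 backup",
        body=f)
print("ArchiveId:", response["archiveId"])

# Request a vault inventory. Glacier jobs are asynchronous; expect to
# wait roughly four hours before the job output is ready.
job = glacier.initiate_job(
    vaultName="my-backups",
    jobParameters={"Type": "inventory-retrieval"})
print("Inventory JobId:", job["jobId"])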

Amazon Glacier vault description.

DDoS (Distributed Denial of Service)

Google has built a live all-Internet visualization of DDoS attacks. Also see Gizmodo's description of the tool. It's an interesting page, although it's very resource hungry.

DDoS is awfully hard to fight because you can't tell where it's really coming from. The very short description of the amplification type of attack is:

  1. The attacker is at home on their control system.
  2. The attacker gains access to a number of trigger systems, each on a network which allows source IP address spoofing. That is, the trigger system's ISP does not sanity-check the source addresses of outbound packets with egress filtering.
  3. A program running on each trigger system sends forged packets to a number of amplifier systems. The forged source IP address of these packets is that of the target. For a system to be an amplifier there must be a UDP service running with some combination of outdated software, misconfigured software, and/or missing or misconfigured packet filtering between the server and the Internet.
  4. Each packet requests information to be sent from the amplifier to the apparent sender, which is the target of the DDoS attack. The amplification effect is caused by the logic of the abused protocol making the response much larger than the request. The amplification effect is up to 8× in DNS, 206× in NTP, and 650× and higher in SNMP. (A quick look at the arithmetic follows this list.)
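The arithmetic behind those amplification factors shows why the attacker's own bandwidth can be modest. A quick sketch using the factors quoted above:

# Back-of-the-envelope: how much forged request traffic does an attacker
# need to generate to saturate a 1 Gbps link, given the amplification
# factors quoted above?

amplification = {"DNS": 8, "NTP": 206, "SNMP": 650}
target_link_gbps = 1.0

for protocol, factor in amplification.items():
    needed_mbps = target_link_gbps * 1000 / factor
    print(f"{protocol}: about {needed_mbps:.1f} Mbps of forged requests "
          f"fills a {target_link_gbps:g} Gbps link at the target")
# With NTP's 206x factor, roughly 5 Mbps of spoofed requests is enough
# to fill a 1 Gbps ISP link.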
Traffic graphs for NTP amplification DDoS attack on three 1Gbps ISP links.

The traffic graphs above are from a victim organization that had all three of their GigE ISP links completely saturated with an NTP amplification attack.

The forged packets were from UDP ports 80 and 443, so the amplified flood was directed to those ports. Only one ISP would implement an ACL to block UDP/80 and UDP/443 to them; the other two would only blackhole the six IP addresses being attacked. As two of those blackholed IP addresses were DNS servers, they could no longer talk to the root servers or any other DNS servers, and so external name resolution was completely broken.

Another organization I talked to was cut off when 3 Gbps of NTP traffic was directed at their 1 Gbps ISP link.

For good explanations of DDoS attacks in more detail see Cloudflare's introductory Understanding and mitigating NTP-based DDoS attacks, and the more detailed and specific Technical Details Behind a 400Gbps NTP Amplification DDoS Attack. Earlier, they wrote Deep Inside a DNS Amplification DDoS Attack and The DDoS That Almost Broke the Internet, and before that and at a more basic level, How to Launch a 65Gbps DDoS, and How to Stop One.

Also see The New Normal: 200–400 Gbps DDoS Attacks at KrebsOnSecurity.

More recently, Arbor Networks' 10th Annual Worldwide Infrastructure Security Report reported a 50× increase in DDoS attack size over the past decade, with a 400 Gbps attack in December 2014.

SSDP, the Simple Service Discovery Protocol, was the top mechanism for DDoS attacks in early 2015.

Akamai reported on RIPv1 reflection attacks in mid 2015.

NTP amplification was behind late 2015 DDoS attacks.

The Register described a November 2015 attack on the DNS root servers, many of which were hit with 5 million queries per second.

In July 2016 Arbor announced that a study of the first half of 2016 included a peak attack size of 579 Gbps, and 274 attacks over 100 Gbps. That's about two per day. The average attack size in the first half of 2016 was 986 Mbps, projected to grow to 1.15 Gbps by the end of the year. This means that the average DDoS attack can knock most organizations offline.


Where not to place telco pedestals

Do not place them where this one was in Herndon, Virginia — right along a road winding through office parks, where the anxious commuters hit speeds around 50 m.p.h. despite that being almost twice the posted limit.

And especially not where a sidewalk ramp makes it so easy to drift off the road while texting and smash into the poor pedestal.

Telco pedestal smashed open by a car.
Telco pedestal smashed open by a car.
Telco pedestal smashed open by a car.
Telco pedestal smashed open by a car.

Archiving

How can Amazon claim such high availability for their cloud storage?

Their S3 and Glacier storage services store multiple copies of your data, in multiple physical locations. Three copies are stored in at least two physical locations. The hash value of each is periodically calculated. If any one is ever found to differ from the other two, it is recreated from the presumed good pair.
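The repair logic is conceptually a majority vote over cryptographic hashes. Here is a minimal sketch of the idea; this is my own illustration, not Amazon's actual implementation:

# Illustration of three-copy integrity repair by majority vote over
# cryptographic hashes. A conceptual sketch only, not how S3 or Glacier
# is actually implemented.

import hashlib

def repair(copies):
    """copies: a list of three byte strings. Returns the repaired list."""
    hashes = [hashlib.sha256(c).hexdigest() for c in copies]
    for i in range(3):
        others = [j for j in range(3) if j != i]
        # If the other two copies agree and this one differs,
        # recreate it from the presumed good pair.
        if hashes[others[0]] == hashes[others[1]] and hashes[i] != hashes[others[0]]:
            copies[i] = copies[others[0]]
    return copies

good = b"important archive data"
stored = [good, b"important archive dat\x00", good]   # middle copy corrupted
print(repair(stored))                                  # all three copies match again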

Meanwhile, those stored data objects are periodically re-written onto different physical storage media. And the underlying storage devices are rotated out of service after a specified period of time.

This process of periodic rewriting onto reasonably fresh hardware and comparing cryptographic hash values for the three current copies is designed to provide an average annual durability of 99.999999999% for an archive of data.

This estimate is based on the details of their design (frequency of disk replacement, frequency of re-writing the archive copies) and the probabilities of the scenario and environment (probability of RAID array failure leading to data loss, likelihood of cryptographic hash collisions).

Data Risk Management has a very interesting model for data archiving: www.datariskmgmt.com.


Data Loss Costs

Data loss is a huge problem. This table shows the causes of data loss according to Ontrack engineers (who seem to have lost no data to malicious intruders), based on figures from ontrack.com:

  • Hardware or Systems Malfunction: 59%
  • Human Error: 28%
  • Software Program Malfunction: 9%
  • Viruses: 4%
  • Natural Disaster: 2%

According to a Gallup poll, most businesses value 100 megabytes of data at US$ 1,000,000.


Counter-Availability and Destroying Media

If you want to quickly and easily destroy a CD or DVD, place it in a microwave for just a second or so.

Below you see the result of putting a commercial CD into a microwave oven for just one second. The oven was a General Electric E640J 002 nearly twenty years old, and it probably doesn't generate its original 970 watts of power at 2.45 GHz. However, just one second rendered this disc unreadable by most if not all adversaries.

Yes, some heavy-duty office shredders can also eat CDs and DVDs, but they can make a huge mess of metal foil slivers and plastic chips, and the resulting mix of paper, plastic and metal is not recyclable.

Original CD/DVD from a retailer.
CD/DVD destroyed in a microwave, lying on a paper towel.
CD/DVD destroyed in a microwave, silhouetted against a light.

Laptop Theft Prevention

Security cables:
  • Kensington
  • Philadelphia Security Products
  • American Power Conversion
  • PC Guardian
  • Secure-It Inc

"Phone home" style laptop tracking, Windows only as far as I know:
  • PC PhoneHome
  • zTrace
  • ComputracePlus


Spam, or Unwanted Junk E-Mail

It's a denial-of-service attack.

How can you tell where spam was injected? Read the "Received:" fields in reverse, looking for the inconsistency where the promiscuous relayer accepted the spam from the source. Using a real example I received, with my comments inserted below the relevant lines:

	From Bio-Med5241_a@linux.com.pk Thu Oct 26 15:38 EST
No, the message did not come from Pakistan (.pk), see below
	Received: from sclera.ecn.purdue.edu (root@sclera.ecn.purdue.edu [128.46.144.159])
		by rvl3.ecn.purdue.edu (8.9.3/8.9.3moyman) with ESMTP id PAA16066
		for <cromwell@rvl3.ecn.purdue.edu> Thu, 26 Oct 15:38:34 -0500 (EST)
Hop #3 — sclera forwarded my mail to rvl3.ecn.purdue.edu
	From: Bio-Med5241_a@linux.com.pk
	Received: from glasgow3.blackid.com ([212.250.136.251])
		by sclera.ecn.purdue.edu (8.9.3/8.9.3moyman) with ESMTP id PAA13819
		for <cromwell@sclera.ecn.purdue.edu>; Thu, 26 Oct 15:38:24 -0500 (EST)
 Hop #2 — glasgow3.blackid.com, the spam relayer, hands the spam to sclera.ecn.purdue.edu
	Date: Thu, 26 Oct 15:38:24 -0500 (EST)
	Message-Id: <XXXX10262038.PAA13819@sclera.ecn.purdue.edu>
	Received: from geo5 (host-216-77-220-220.fll.bellsouth.net [216.77.220.220])
		by glasgow3.blackid.com with SMTP (Microsoft Exchange Internet Mail Service Version 5.5.2650.21)
		id 449GZRTV; Thu, 26 Oct 21:33:01 +0100
Hop #1 — glasgow3.blackid.com, the spam relayer, accepts mail from the source,
a dial-in client of bellsouth.net using the IP address 216.77.220.220.  The
dial-in client undoubtedly got its IP address via DHCP, and so any system using
that IP address right now is not necessarily the original spam source.  However,
bellsouth.net should be able to figure out which of their clients used this
IP address at this particular time.
	To: customer@aol.com
That's odd — I'm not sure how they're getting SMTP to send it to me but with this
bogus address in the "To:" field — maybe I was a blind carbon-copy recipient...
	Subject: A New Dietary Supplement That Can Change Your Life....
	MIME-Version: 1.0
	Content-Type: text/plain; charset=unknown-8bit
	Content-Length: 5463
	Status: R

	[ long pseudo-medical nonsense deleted.... ]

Further investigation could use traceroute or whois to figure out where 216.77.220.220 really is, in case the reverse resolution above either failed or was faked. As per the GNU version of whois:

% whois 216.77.220.220
NetRange:   216.76.0.0 - 216.79.255.255
CIDR:       216.76.0.0/14
NetName:    BELLSNET-BLK5
NetHandle:  NET-216-76-0-0-1
Parent:     NET-216-0-0-0-0
NetType:    Direct Allocation
NameServer: NS.BELLSOUTH.NET
NameServer: NS.ATL.BELLSOUTH.NET
Comment:
Comment:    For Abuse Issues, email abuse@bellsouth.net. NO ATTACHMENTS. Include IP
Comment:    address, time/date, message header, and attack logs.
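Reading the Received: chain in reverse can also be automated. Here is a minimal sketch using Python's standard email module; the file name is a placeholder for a message saved from your mailbox:

# Print the Received: headers of a saved message in reverse order,
# that is, from the first hop (closest to the spam source) to the last.

from email import message_from_binary_file

with open("suspect-message.eml", "rb") as f:
    msg = message_from_binary_file(f)

received = msg.get_all("Received") or []
for hop_number, header in enumerate(reversed(received), start=1):
    print(f"Hop #{hop_number}: {' '.join(header.split())}")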

Back to the Security Page