Update: 12/30, 10 AM – problem appears fixed. Will call to find out what it was.
The backstory:
So on 12/22, Win noticed that email to Cathy wasn’t being delivered. She’s using an IMAP server here at Serissa Galactic HQ, and our mail gateway, hosted on a virtual machine at Rackspace, normally delivers her mail to the IMAP server.
By two days ago, we had figured out that we can’t, in fact, establish TCP connections between the mail gateway and any system at Serissa that happens to use a particular one of our five static IP addresses. The others work fine.
This is just weird, but then the VZ-supplied Actiontec MI424 router is, well, just weird . . . except that the problem isn’t the router. After several hours of trying various port forwarding and static NAT setups in the router, I called Verizon tech support. After about 2 hours of phone hell, I got through to a fellow who was, well, clueful. It turns out you can set up screen sharing with them and jointly click around in the router configuration screens. The support rep eventually agreed with me that the problem existed, but at midnight on December 23 there was not much help to be had from Actiontec. He suggested connecting our system upstream of the router with a switch, or using a different router if we had one.
I did not know that this is how Verizon FIOS static IP service works, but it makes a lot of sense. There is an Ethernet link between the optical network terminal (ONT) and the Verizon-supplied router, but you don’t have to use their router. I unplugged the router and plugged in my MacBook. I said to Win, “OK, I’m on the Internet… wait. I am ON the Internet!” I have actually never before been directly connected, not through a firewall, since Arpanet days. Cool.
We have five IP addresses, and with the MacBook running tcpdump, it was easy to see what wasn’t happening. With the MacBook configured with our .10 address, we would attempt to open a TCP connection to our cloud system, but never got any replies. Attempts to open connections from the cloud end never showed up. By running netstat on the cloud end, we could see connections stuck in “SYN_RCVD” state, never reaching ESTABLISHED. Packets were going out, but not coming in. Incidentally, and strangely, ping, traceroute, and other ICMP stuff worked fine.
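For the record, the by-hand test boils down to something like this little Python sketch: bind a socket to a specific source address and see whether the connect ever completes. The addresses and hostname below are placeholders, not our real ones.

    import socket

    # Placeholders: substitute the real static IPs and the actual cloud host.
    SOURCES = ["192.0.2.10", "192.0.2.11"]   # stand-ins for our .10 and .11 addresses
    TARGET = ("cloud.example.com", 25)       # stand-in for the mail gateway at Rackspace

    for src in SOURCES:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        s.bind((src, 0))              # force the outgoing source address
        try:
            s.connect(TARGET)         # sends the SYN; hangs if no SYN/ACK comes back
            print(src, "->", TARGET[0], "connected")
        except socket.timeout:
            print(src, "->", TARGET[0], "SYN sent, no reply (timed out)")
        finally:
            s.close()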
Changing the MacBook’s IP address from .10 to .11 (another of our static IPs) made everything work fine.
This was enough evidence to open a trouble ticket at Verizon. We were told that they would get back to us in 24 hours…NOT.
In the meantime, we changed our IMAP server to static NAT on a working IP address, and changed the port forwarding for inbound SMTP to match. Now future email could be delivered, but 400-odd messages were stuck in the queue. Win figured out how to add new Postfix rules to rerun the queue and translate the addresses, and we cleared the backlog.
Win also noticed that we can’t talk to www.dropbox.com, which may be hosted by Rackspace as well. Its IP address is on a different class B, but not a very different one. Same symptom: we can’t talk to Dropbox via our .10 address, but we can via .11 or the others.
Christmas evening, after about 48 hours of silence from Verizon, I tried to get the trouble ticket status. This is quite difficult: there is evidently no online way to do it; you have to go through phone hell. After a few tries, I again got a skillful and helpful tech. He told me that the ticket was assigned to the network techs, but there were no comments indicating anyone was working on it. However, he searched around and found an outage report saying, roughly, that Massachusetts business FIOS static-IP customers can’t talk to certain websites, and this outage report now had 75 trouble tickets linked to it. He said he couldn’t tell me about other customers, but did mention trouble contacting www.experian.com, so I tried it. We can’t talk to www.experian.com from .10, but we can from .11.
Our trouble ticket is now number 76, but there is no clue about who might work on the problem, or when. Evidently other folks are much worse off than we are, with their credit card processing machines unable to talk to the processors.
I will call back tomorrow or Monday to see what is going on.
I find this fascinating, but now fairly stress-free since Cathy’s email has been delivered. What could cause reliable lossage of TCP connection setup between stable but seemingly random pairs of addresses? It works fine for ICMP, but fails every time for SYN packets. fios-10.serissa.com fails, but fios-11.serissa.com works. www.experian.com fails, but www.google.com works. Maybe a corrupted hash table somewhere? It seems like a very subtle and mysterious kind of thing.
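Just to illustrate the hash-table hunch (pure speculation, with made-up addresses): routers doing equal-cost multipath typically pick an outgoing link by hashing the flow tuple, so one bad bucket, or bad memory behind one link, would consistently break a stable subset of (source, destination) pairs while everything else keeps working. And since the flow hash usually includes the protocol and ports, ICMP could easily land in a different, healthy bucket. A toy sketch of the idea:

    import hashlib

    def ecmp_bucket(src, dst, proto, nlinks=8):
        # Real routers use a cheaper hardware hash, but the idea is the same:
        # the same flow always maps to the same outgoing link.
        h = hashlib.md5(("%s|%s|%s" % (src, dst, proto)).encode()).digest()
        return h[0] % nlinks

    BROKEN_LINK = 3   # pretend the memory behind this one link is corrupted

    for src in ["192.0.2.10", "192.0.2.11"]:           # stand-ins for our .10 and .11
        for dst in ["www.stewart.org", "www.experian.com", "www.google.com"]:
            for proto in ["tcp", "icmp"]:
                link = ecmp_bucket(src, dst, proto)
                print(src, "->", dst, proto, "link", link,
                      "FAILS" if link == BROKEN_LINK else "works")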
Oh. This blog is hosted by our cloud system, so I can’t talk to it via FIOS. I’ve changed my laptop’s default route to use Win’s Comcast DSL instead, which works fine. More proof that having a gigabit fiber between our houses is just a good idea.
One of the many problems with the Internet is that most people are at the mercy of their ISP. The ISP controls the last mile and you have no real alternative. Serissa happens to have both FIOS and Comcast links, but that isn’t as useful as you might think. Inbound traffic knows about one or the other, and failover is manual and tedious. I think we need an ASN so we can just let BGP deal with this, but that solution doesn’t scale well.
Update 12/26/2010 9 PM
We’ve found that our other IP addresses also don’t work … to different sets of sites. For example, .11 can’t reach www.patternreview.com.
I called Verizon at 888 244 8880 to report this and to find out ticket status. I was on hold for 35 minutes and reached a fairly clueless agent this time. He couldn’t get any information out of the network technician group, which probably means that no one is working on the problem. He was able to pull up the group outage report RIEH032H87.
I asked why I couldn’t get online status, and he said that because my trouble ticket is linked to a group ticket, I can’t see status anymore. That seems unlikely.
I’ve created a #fios hashtag on Twitter, just for fun.
Update Monday 12/27/2010 11 PM
I called Verizon again to find out if there is any progress. Evidently the problem has been passed up from the network technicians to IP Engineering, and the NOC. This seems good. However, according to the rep I talked to, they are looking into a theory that traceroutes along affected paths are showing the trouble outside the Verizon network.
That doesn’t match what I see. As an example, from our .10 IP address, we cannot reach www.stewart.org (this blog). However, traceroute works. From our .11 IP address, we cannot reach www.patternreview.com (never mind), but traceroute works. From .10, patternreview works fine, and from .11, stewart.org works fine.
Here’s (part of) the trace for .11 to patternreview.com
Here’s part of the trace for .10 to www.stewart.org
4 so-7-2-0-0.BOS-BB-RTR2.verizon-gni.net (130.81.29.174) 9.101 ms 9.121 ms 9.028 ms
5 0.so-0-2-0.XL4.BOS4.ALTER.NET (152.63.16.141) 18.682 ms 18.757 ms 21.011 ms
6 0.xe-4-1-0.XL4.NYC4.ALTER.NET (152.63.3.102) 21.096 ms 19.589 ms 19.308 ms
The only common elements there are Verizon (and the fact that both paths go into Alternet).
Both traceroutes work all the way to the destinations, it is just TCP SYN/ACK packets that don’t come back.
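One way to see exactly where the SYNs stop being answered would be a TCP traceroute, probing with SYN packets instead of the UDP/ICMP probes ordinary traceroute uses. Here’s a rough sketch with scapy (needs root); the destination and port are just examples.

    from scapy.all import IP, TCP, sr1   # pip install scapy; run as root

    DST = "www.stewart.org"              # example destination
    for ttl in range(1, 21):
        probe = IP(dst=DST, ttl=ttl) / TCP(dport=80, flags="S")
        reply = sr1(probe, timeout=2, verbose=0)
        if reply is None:
            print(ttl, "*")                                    # probe or reply lost
        elif reply.haslayer(TCP):
            print(ttl, reply.src, "TCP reply, flags=%s" % reply[TCP].flags)
            break                                              # reached the destination's TCP stack
        else:
            print(ttl, reply.src)                              # ICMP time-exceeded from a hop

If the per-hop replies keep coming back all the way out but the final SYN/ACK never shows up, that would match what netstat on the cloud end already suggests: the SYN gets through, and only the return TCP traffic is being eaten.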
I’ve heard a theory that someone is blacklisting fios addresses. Until yesterday, we never used .11 for outbound connections, so I am skeptical.
In other news, we got about 14 inches of snow here. The kids are happy.
Update Tuesday 12/28/2010
Today’s wait on 888-244-8880 was 28 minutes. Verizon needs better music on hold.
The representative today said the problem affects 71.x.x.x addresses (true), because when the 71 addresses were assigned to Verizon, website admins were notified to unblock them, but sometimes they don’t bother.
This is a fairly pathetic claim. We’ve had the addresses for 5 years, they worked fine until a week ago, I control a machine I can’t talk to from one of my addresses, and ICMP traffic works fine, just not TCP.
It sounds like Verizon still has a theory about websites blacklisting Verizon addresses. I think it is much more likely that some fancy router in the broken paths has a bad memory module. My guess about which one is based on the rather small differences between traceroutes of working paths and non-working paths. All of the non-working paths I know about pass from Verizon to Alternet in New York, for example, before branching off into other networks. Try rebooting
6 0.ae1.BR2.NYC4.ALTER.NET (152.63.18.37) 26.290 ms 24.964 ms 24.691 ms
and see if that helps…
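To make that comparison a bit more systematic, something like this would do: collect the hop lists from traceroutes of broken and working paths, and look for routers that appear in every broken path but in no working one. The lists below are just illustrative stubs (the broken-path hops are abbreviated from the traces above); the real input would come from actual traceroute runs.

    def suspects(broken_paths, working_paths):
        # Hops present in every broken path but absent from all working paths.
        common_broken = set.intersection(*(set(p) for p in broken_paths))
        seen_working = set().union(*(set(p) for p in working_paths))
        return common_broken - seen_working

    # Illustrative stubs: fill these in from real traceroute output.
    broken = [
        ["BOS-BB-RTR2.verizon-gni.net", "XL4.BOS4.ALTER.NET",
         "XL4.NYC4.ALTER.NET", "BR2.NYC4.ALTER.NET"],
    ]
    working = [
        ["BOS-BB-RTR2.verizon-gni.net", "XL4.BOS4.ALTER.NET"],
    ]

    for hop in sorted(suspects(broken, working)):
        print(hop)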
Update 12/29 at 11 PM
I called Verizon again. As expected, there was a 35 minute wait on hold, and the representative said “they are still working on it”. I asked for a supervisor and got very little more. There are now 120 tickets linked to the group outage (up from 57), but there have been no comments added to the log since 12/27. I suggested that this certainly gave me the impression that Verizon wasn’t taking the problem very seriously.