Last month’s Low-Latency Summit in New York City featured two morning panels, one on connectivity and the other on computing. Here are some quotes and highlights from the connectivity panel …
“Folks have pretty much moved from 1 gigabit ethernet to 10 gig, and a lot are moving to 40 gig. Will see more next year” – Cisco’s Dave Malik.
Malik also noted that users increasingly want analytics from their infrastructure – buffer usage, queue depth – so they can manage it proactively.
In a follow-up, Joe Brunner of Affirmed Systems noted that Google is increasingly interested in offerings for the financial markets, has a global network, and is clearly a leader in analytics.
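Neither comment went into implementation detail, but for a rough sense of what that kind of telemetry – buffer usage, queue depth – looks like in practice, here is a minimal host-side sketch (our illustration, not anything described on the panel). It polls the backlog figures Linux reports per qdisc via iproute2’s `tc`; the interface name is a placeholder, and switch-level analytics of the sort Malik describes would come from the vendor’s own telemetry interfaces instead.

```python
# Hypothetical host-side analogue of queue-depth telemetry: poll Linux qdisc
# statistics via iproute2's `tc` and extract the reported backlog per qdisc.
import re
import subprocess
import time

BACKLOG_RE = re.compile(r"backlog (\d+)b (\d+)p")

def queue_backlog(interface: str) -> list[tuple[int, int]]:
    """Return the (bytes, packets) backlog figures from `tc -s qdisc`."""
    out = subprocess.run(
        ["tc", "-s", "qdisc", "show", "dev", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    return [(int(b), int(p)) for b, p in BACKLOG_RE.findall(out)]

if __name__ == "__main__":
    while True:
        print(queue_backlog("eth0"))   # interface name is an assumption
        time.sleep(1)
```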
“Queuing delay really messes you up,” said Solarflare Communications’ David Riddoch. When system components cannot keep up, latency can get pushed from microseconds to milliseconds. Even so, buffering packets is usually preferable to dropping them, since a drop forces a re-transmit – with TCP/IP, that could mean a 200 millisecond latency bump.
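That 200 millisecond figure matches the minimum retransmission timeout in common Linux TCP stacks. As a rough, Linux-only illustration (ours, not the panel’s), the kernel exposes each connection’s current retransmission timeout and retransmit count through the TCP_INFO socket option, which shows what a single drop would cost:

```python
# Rough Linux-only sketch: read a TCP connection's current retransmission
# timeout (tcpi_rto, in microseconds) and retransmit count via TCP_INFO.
# A dropped packet stalls the sender for roughly the RTO before it retries;
# on Linux the floor is about 200 ms, dwarfing microsecond network latencies.
import socket
import struct

TCP_INFO = getattr(socket, "TCP_INFO", 11)   # option value 11 on Linux

def rto_and_retransmits(sock: socket.socket) -> tuple[float, int]:
    info = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    # struct tcp_info starts with eight 1-byte fields, then tcpi_rto (u32, usec)
    fields = struct.unpack("8B3I", info[:20])
    return fields[8] / 1000.0, fields[2]      # (RTO in ms, tcpi_retransmits)

if __name__ == "__main__":
    s = socket.create_connection(("example.com", 80))   # placeholder endpoint
    print("current RTO: %.1f ms, retransmits so far: %d" % rto_and_retransmits(s))
```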
Also commenting on the move from 1 gig to 10 gig Ethernet, he noted that for small packets the higher clock rate of 10gE network adaptors is the bigger factor – much more so than the improvement in serialisation time.
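To put rough numbers on that (our arithmetic, not Riddoch’s): serialising a minimal 64-byte frame takes about 512 ns at 1 Gb/s and about 51 ns at 10 Gb/s, so the serialisation saving on its own is well under half a microsecond.

```python
# Back-of-envelope serialisation delay for a small frame on 1GbE vs 10GbE.
FRAME_BYTES = 64   # minimum Ethernet frame, ignoring preamble and inter-frame gap

def serialisation_ns(link_gbps: float, frame_bytes: int = FRAME_BYTES) -> float:
    return frame_bytes * 8 / link_gbps    # bits divided by Gb/s gives nanoseconds

print(serialisation_ns(1))    # 512.0 ns on 1GbE
print(serialisation_ns(10))   #  51.2 ns on 10GbE -- the saving is under 0.5 us,
                              # so faster adaptor silicon dominates for small packets
```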
Somewhat astonishingly, Riddoch noted that InfiniBand does indeed still have an edge over 10gE – perhaps by as much as a microsecond per network hop. But this can be achieved only by using RDMA verbs. When comparing socket-level communications, 10gE is faster, he reckoned.
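The socket-level comparison Riddoch refers to is usually made with a ping-pong test: one side echoes a small message, and half the round-trip time approximates the one-way latency. Below is a minimal sketch of that kind of harness for the TCP side only – the port, payload size and iteration count are placeholders, and Python’s own overhead swamps the microsecond differences being discussed, so it only illustrates the shape of the measurement; the RDMA-verbs equivalent needs libibverbs queue-pair setup that is well beyond a sketch.

```python
# Minimal TCP ping-pong latency sketch: start "server" on one host, then
# "client <server-ip>" on another; the client prints its median half-RTT.
import socket
import statistics
import sys
import time

PORT = 9000        # placeholder port
MSG = b"x" * 64    # small payload, matching the small-packet discussion
ITERS = 10_000

def recv_exact(conn: socket.socket, n: int) -> None:
    got = 0
    while got < n:                      # TCP is a byte stream, so loop until
        got += len(conn.recv(n - got))  # the whole message has arrived

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            while data := conn.recv(len(MSG)):
                conn.sendall(data)      # echo straight back

def client(host: str) -> None:
    with socket.create_connection((host, PORT)) as conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        samples = []
        for _ in range(ITERS):
            t0 = time.perf_counter_ns()
            conn.sendall(MSG)
            recv_exact(conn, len(MSG))
            samples.append((time.perf_counter_ns() - t0) / 2)   # half of the RTT
        print("median one-way estimate: %.2f us" % (statistics.median(samples) / 1000))

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```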
Riddoch also commented that RDMA is better at transporting large payloads between servers than it is at more generalised messaging. And Brunner made the day for Riddoch and Malik when he said that “10gE is a thousand times easier to manage than InfiniBand.”
On the downside, though, Brunner noted that large banks’ policy of installing firewalls makes it hard for them to compete with more specialised firms that have determined they can do without them.