
The network piece seems incomplete and outdated compared to what's being discussed for compute and storage. Most new networking fabrics being built at this point are 100G, with emerging server connectivity focused on nx25G in-rack. Longer-distance 25G is awaiting FEC implementations, but currently shipping gear can do 3-5m, which is about ideal for top-of-rack. The economics and development cycles of these link types tend to dictate that 40GE doesn't make sense for a new installation.
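To make the nx25G economics concrete, here's a toy oversubscription calculation for a top-of-rack switch. The port counts (48 x 25G down, 6 x 100G up) are purely illustrative assumptions, not numbers from the comment above:

```python
# Hypothetical ToR sizing: 48 x 25G server-facing ports, 6 x 100G uplinks.
# These port counts are illustrative, not a specific product.
downlink_gbps = 48 * 25   # 1200G of server-facing capacity
uplink_gbps = 6 * 100     # 600G toward the fabric
oversubscription = downlink_gbps / uplink_gbps
print(f"{oversubscription:.1f}:1 oversubscription")  # prints "2.0:1 oversubscription"
```

The same 100G uplink can also break out as 4x25G, which is part of why 40GE (which breaks out as 4x10G) loses on both cost-per-bit and upgrade path.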

The call for "SDN" is incredibly nebulous, to the point of being almost meaningless, IMO. What the big players tend to be after is a way to control the fabric via standardized API calls: support for YANG/NETCONF, or some mechanism for direct access to the underlying SDK. The other thing that's not addressed is how to efficiently get information about the network out of the network. Traditional polled mechanisms (SNMP/RMON, et al.) have been shown to lack both scale and adequate resolution, while legacy push-telemetry approaches (sFlow/NetFlow) miss the mark in terms of level of detail and compatibility with large-scale data processing needs.
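The resolution problem with polling is easy to show with toy numbers. This sketch (not any vendor's API — just simulated per-second rates) models a port that averages ~1 Gb/s but carries a 2-second 10 Gb/s microburst; a 30-second SNMP-style poll smears the burst into the average, while 1-second streamed samples expose it:

```python
import statistics

# Simulated per-second throughput (Gb/s) over one minute: a steady
# 1 Gb/s baseline with a 2-second microburst at 10 Gb/s.
per_second_gbps = [1.0] * 60
per_second_gbps[30] = per_second_gbps[31] = 10.0

# A 30-second counter poll only recovers the average over each interval.
poll_intervals = [statistics.mean(per_second_gbps[i:i + 30]) for i in (0, 30)]
print(max(poll_intervals))    # prints 1.6 -- the burst looks like mild load
print(max(per_second_gbps))   # prints 10.0 -- visible at 1s push resolution
```

This is the gap streaming telemetry (e.g., gNMI-style subscriptions) is meant to close: the device pushes high-frequency samples instead of waiting to be polled.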

The next point is the selection of topology and its integration with multi-site planning. There's a lot of cool stuff happening in this regard, and there still seems to be a pretty big disconnect between what the systems folks understand and what's happening in the network industry. That's a shame, because there's probably more opportunity for neat stuff (read: scalability, performance, fault resistance, manageability) than seems to be discussed, at least on HN.
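As one example of what topology selection buys you: the standard k-ary fat-tree result (my addition, not from the comment above) says a Clos fabric built entirely from identical k-port switches supports k³/4 hosts at full bisection bandwidth:

```python
def fat_tree_hosts(k: int) -> int:
    """Hosts in a k-ary fat-tree of k-port switches:
    k pods x (k/2) edge switches/pod x (k/2) host ports each = k**3 / 4."""
    assert k % 2 == 0, "fat-trees are defined for even k"
    return k ** 3 // 4

for k in (16, 32, 48):
    print(k, fat_tree_hosts(k))
# 48-port switches are enough for 27,648 hosts with no oversubscription
```

The striking part is that this scale comes from commodity fixed-form-factor switches rather than big chassis boxes, which is exactly the kind of systems/network crossover that doesn't get discussed much.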

Finally: there's a certain conventional wisdom among the systems crowd and some sections of the programmer crowd that network control planes are just another mostly-simple bit of software to be implemented. They're not. This is a hard problem, and it's the manifest reason why only a handful of organizations have been able to produce software that runs a meaningful percentage of the world's large-scale L3 infrastructure (hint: Arista is a great company but isn't in that number quite yet). Truly rugged, usefully-featured, performant network code is hard. Making that code work in the context of 30+ years of protocol implementations, morphing standards, and a world of bad/clueless actors is REALLY hard. There's an inverse relationship between the amount of money you spend on a solution and the amount of specialized expertise you need on staff: a more traditional commercial solution might be more expensive, but it also relieves you of the need to keep some relatively rare, likely expensive, and almost certainly non-revenue-producing skill sets on staff.


