Networking

This page will go over some of the production networks that I have worked on.  Note that specific implementation details (such as explicit commands used) have purposefully been left out.

Video Source Only
In this network, the only thing provided to the customer was a source for video feeds.  This used the video source control application that I wrote, described on the code page of this blog.  The middleware to control the set top boxes was managed by another provider.

This was one of the most basic setups.  There were typically a number of devices (the count depending on the channel lineup) streaming the multicast video source, along with a remote access server to control the video sources and provide local access for management.  All of these devices typically plugged into a single aggregation switch over cat5 ethernet cable, and there was an etherchannel connection to the customer core network, typically made up of two fibre connections.  All of the devices would typically be connected to access ports on the same non-management vlan, i.e. not vlan 1.
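Although the exact commands used in these deployments are deliberately omitted, a generic Cisco IOS-style sketch of this kind of aggregation switch gives the idea (interface names and vlan 100 are illustrative assumptions, not the real values):

```
! Illustrative sketch only - names and vlan numbers are assumptions
interface range GigabitEthernet1/0/1 - 23
 description video source devices
 switchport mode access
 switchport access vlan 100
!
! two fibre uplinks bundled into one etherchannel toward the customer core
interface range TenGigabitEthernet1/1/1 - 2
 channel-group 1 mode active
!
interface Port-channel1
 description uplink to customer core
 switchport mode access
 switchport access vlan 100
```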

The customer would typically perform the IGMP Querier function, which would result in the etherchannel becoming the mrouter port on the aggregation switch.  IGMP snooping would be enabled on all of the ports, and each of the video source ports would have bpdu guard enabled, have protected status (allowing communication with the control server and the fibre uplink, but not with each other), and block unknown unicast and multicast traffic.
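The per-port protections described above map to a handful of standard IOS interface commands; a generic sketch might look like this (interface range and vlan are assumptions):

```
! Illustrative sketch of the port protections described above
ip igmp snooping
!
interface range GigabitEthernet1/0/1 - 23
 switchport protected            ! no direct traffic between video source ports
 switchport block unicast        ! drop flooded unknown unicast
 switchport block multicast      ! drop flooded unknown multicast
 spanning-tree bpduguard enable  ! err-disable the port if a bpdu arrives
```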

If the customer network was flat, i.e. in the same broadcast domain as the video source/control server, the aggregation switch would have dhcp snooping turned on and trust only the port with the control server, where dhcp reservations were configured and handed out to the video source devices (they did not have the ability to have ip information statically configured).
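In IOS terms, that dhcp snooping arrangement is roughly the following (the port and vlan numbers are illustrative):

```
! Illustrative sketch - trust only the control server's port for dhcp
ip dhcp snooping
ip dhcp snooping vlan 100
!
interface GigabitEthernet1/0/24
 description control server (serves dhcp reservations)
 ip dhcp snooping trust
! every other port stays untrusted, so rogue dhcp offers are dropped
```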


Video Source and Middleware (Island)
In this scenario, both the video source and the middleware for controlling set top boxes were provided by the same company.  The middleware would not only control the set top boxes, but also groom the incoming multicast video feeds from the receivers.  The incoming multicast traffic from the receivers was delineated from the outgoing multicast traffic from the middleware by a change of multicast group destination, as well as a change of vlans, meaning the multicast traffic had to be routed.

Each customer distribution switch was typically on a separate vlan, and since dhcp addresses for set top boxes were given out by the middleware system, this required a dhcp helper (relay) configuration on the core switch.  Each distribution switch would have at least two vlans coming over a trunk link to the core, one for management and one for production.  More vlans would be added if the customer was using this infrastructure for internet access as well.  Each vlan had a custom acl applied to its svi on the core switch to further limit the access a guest might have, and to limit the damage from possibly compromised devices.
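A generic core-switch sketch of the trunk, dhcp helper, and per-vlan acl described above might look like this (vlan numbers, addresses, and the acl name and contents are all illustrative assumptions):

```
! Illustrative sketch - vlans, addresses, and acl entries are assumptions
interface GigabitEthernet1/0/1
 description trunk to distribution switch 1
 switchport mode trunk
 switchport trunk allowed vlan 10,110    ! management and production
!
ip access-list extended GUEST-PROD
 permit udp any any eq bootps            ! let dhcp reach the helper
 permit ip any host 10.0.0.10            ! middleware system
 deny   ip any 10.0.0.0 0.0.255.255      ! block the rest of the infrastructure
 permit ip any any
!
interface Vlan110
 description production vlan, distribution switch 1
 ip address 10.0.110.1 255.255.255.0
 ip helper-address 10.0.0.10             ! relay set top box dhcp to the middleware
 ip access-group GUEST-PROD in
```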

For the multicast routing, a loopback interface was typically defined to serve as the rendezvous point address.  The rendezvous point would be statically defined, with an acl defining which multicast groups it was responsible for.  The production svi connections to the distribution switches would then be configured for pim sparse mode.
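The loopback-as-rendezvous-point arrangement sketches out roughly as follows in generic IOS syntax (the addresses, group range, and acl number are illustrative, not the real values):

```
! Illustrative sketch - addresses, group range, and acl number are assumptions
ip multicast-routing
!
interface Loopback0
 ip address 10.255.255.1 255.255.255.255
!
! statically define the rp, scoped by acl to the groups it serves
access-list 10 permit 239.1.0.0 0.0.255.255
ip pim rp-address 10.255.255.1 10
!
interface Vlan110
 ip pim sparse-mode
```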