ABSTRACT

For a growing number of data-intensive research projects spanning a wide range of disciplines, high-speed network access to computation and storage — located either locally on campus or in the cloud (e.g., at national labs) — has become critical to the research. While aging, slow campus network infrastructure is a key contributor to poor performance, an equally important contributor is the bottlenecks that arise at security and network management policy enforcement points in the network.

This project aims to dramatically improve network performance for a wide range of researchers across the campus by removing many of the bottlenecks inherent in traditional network infrastructure. By replacing the existing network with modern software-defined networking (SDN) infrastructure, researchers benefit both from the increased speed of the underlying network hardware and from the ability to use SDN paths that avoid traditional policy enforcement bottlenecks. As a result, researchers across the campus will see significantly faster data transfers, enabling them to carry out their research more effectively.

This project builds on and extends a successful initial SDN deployment at the University of Kentucky, adding SDN switches to ten research buildings and connecting each of them to the existing SDN research network core with 40G uplinks. To ensure high-speed access to the Internet and cloud, the research core network is being linked through a new 100 Gbps link to Internet2. Research traffic in the enabled buildings is automatically routed onto the SDN research network, where SDN flow rules allow it to bypass legacy network infrastructure designed to police normal traffic. By extending SDN capabilities to several new buildings on campus, a wide range of researchers are able to achieve significantly higher network throughput for their research data.
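The bypass mechanism described above amounts to a classification decision at the campus edge: flows originating in SDN-enabled buildings are steered onto the research network, while everything else stays on the legacy path through the firewalls. The sketch below illustrates that decision in Python; the subnet, the path labels, and the `select_path` helper are hypothetical stand-ins, since the abstract does not give actual addressing or controller details.

```python
import ipaddress

# Hypothetical subnet for the SDN-enabled research buildings; the real
# campus addressing plan is not specified in the abstract.
RESEARCH_SUBNETS = [ipaddress.ip_network("10.10.0.0/16")]

LEGACY_PATH = "legacy"        # default path: firewalls, distribution, edge devices
RESEARCH_PATH = "sdn-bypass"  # SDN research network path avoiding those devices

def select_path(src_ip: str) -> str:
    """Illustrative flow classification: traffic sourced from an
    SDN-enabled building subnet is routed onto the research network,
    bypassing the policy enforcement points on the legacy path."""
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in RESEARCH_SUBNETS):
        return RESEARCH_PATH
    return LEGACY_PATH
```

In a real deployment this decision would be expressed as match/action flow rules pushed to the SDN switches by a controller, matching on source subnet (and possibly destination, e.g. the DTN) rather than evaluated per packet in software.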

Summary of the CC-NIE Project:

- 100G* Internet2 via UK regional research network
- 100G* Remote data center for research computing expansion
- Bypass of existing firewalls, distribution layers, and edge devices
- Deployment of GENI Racks
- Deployment of OpenStack cluster for education
- Deployment of 40G research network SDN switching core
- Deployment of 40/100G research network SDN routing core
- Deployment of 40G HPC Data Transfer Node (DTN)
- Deployment of 40G Network Address Translation (NAT) server
- Deployment of 40G Virtual Research Cluster (VRC)
- Consolidated replacement of the 100 Mb and 1 Gb access layer with a uniform 1 Gb access layer in 2.5 buildings (~1400 × 1 Gb ports)
