
BCNET News

Canadian Particle Physics Team Achieves First Terabit-Per-Second Data Transfer from a Canadian University to the SC25 Supercomputing Conference

UVic and SFU researchers collaborate with national partners to deliver the nation's first Tbps transfer.

On November 17-19, 2025, a multi-institutional project successfully transferred data at rates exceeding one terabit per second (Tbps) across the continent, from the University of Victoria in British Columbia, Canada, to the international Supercomputing Conference (SC25) in St. Louis, Missouri, USA.

Reaching peak speeds of 1.15 Tbps, this demonstration was made possible through the participation of research computing groups at the University of Victoria (UVic) and Simon Fraser University (SFU), brought together by BCNET and CANARIE. This high-speed data transfer showcases what can be accomplished through the collaborative strength of national, provincial, and international network infrastructure partners.
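
To put that rate in perspective, here is a quick back-of-the-envelope conversion (a Python sketch, assuming decimal SI units and that the 1.15 Tbps peak were sustained):

    # Rough conversion of the quoted 1.15 Tbps peak rate into data volumes.
    # Assumptions: decimal SI units, and that the peak rate is sustained.
    PEAK_RATE_TBPS = 1.15                         # terabits per second (10^12 bits/s)

    bytes_per_second = PEAK_RATE_TBPS * 1e12 / 8  # roughly 144 GB every second
    terabytes_per_hour = bytes_per_second * 3600 / 1e12

    print(f"~{bytes_per_second / 1e9:.0f} GB per second")
    print(f"~{terabytes_per_hour:.0f} TB per hour")

At that pace, a petabyte-scale dataset could cross the continent in roughly two hours.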

A National and International Collaboration

The project was led by Dr. Tristan Sullivan and Professor Randall Sobie of the Institute of Particle Physics at UVic. At last year’s conference, they attempted a 400 Gbps transfer. This year, the goal was to transfer data at rates exceeding 1 Tbps while showcasing how research traffic can be marked for additional visibility (using the SciTags packet marking approach).

Why? Well, as Sobie shares, they are attempting to solve a larger problem facing researchers around the globe.

“We participate in a number of international experiments, like the ATLAS experiment at the CERN laboratory in Geneva,” says Sobie. “Data is collected at the lab, but the lab cannot host all the data, so we use centres around the world (including the Canadian computing centres from the Digital Research Alliance). And one of the first things we do is distribute the raw data to those centres. There is a lot of interest in making sure we have the network capacity to do this. These tests help us prepare.”

To reach St. Louis, the data travelled through an intricate, multi-network path involving several regional, national, and international partners:

  • BCNET, British Columbia’s Research and Education Network, delivered high-speed regional connectivity.
  • CANARIE, the national backbone of Canada’s National Research and Education Network (NREN), connecting provincial and territorial networks to each other and to the world, carried the traffic to the Pacific Wave exchange in Seattle.
  • From there, connections flowed through Internet2’s North America Research and Education Exchange (NA-REX) and iCAIR/Starlight to the SCinet network at SC25.

Three transmitting servers were located at UVic and one at SFU, while three receiving servers, provided by the International Center for Advanced Internet Research (iCAIR) at Northwestern University, were on the conference exhibition floor.

“Achieving the bandwidth and coordinating between all the different people is an important accomplishment,” says Sobie.

Sullivan agrees, adding that this kind of collaboration takes a lot of work to get right.

“The coordination between the involved parties led to the project’s success,” says Sullivan. “BCNET, CANARIE, and our international partners were all critical participants.”

“Terabit-scale data transfers are not just technical milestones—they are proof of the critical infrastructure and collaborations that enable Canadian science,” said Mike Tremblay, CANARIE’s President and CEO. “Our partnership with BCNET and our 12 other provincial and territorial partners, together with our resilient connections to the global web of Research and Education Networks, ensure the UVic team and researchers throughout Canada remain at the forefront of scientific discoveries.”

This achievement demonstrates the strength of the end-to-end global research and education network ecosystem required to deliver lossless, terabit-per-second performance across continents, and marks the first terabit-per-second transfer from a Canadian university.

Advancing Scientific Packet Marking

But beyond raw bandwidth, the demonstration highlighted a second innovation: packet marking for scientific workflows.

Led by Dr. Sullivan, Senior Research Associate at UVic, the team embedded metadata inside IPv6 packet fields, using flow labels and extension headers, to identify the type of scientific traffic being carried. This technique, known as SciTags, has the potential to become a new global standard for marking and managing research network traffic, beginning with LHC data flows.
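
As a rough illustration of the concept, the sketch below packs an experiment identifier and an activity identifier into the 20-bit IPv6 flow label; the field widths and positions used here are illustrative assumptions, not the actual SciTags bit layout.

    # Illustrative flow-label marking in the spirit of SciTags.
    # Assumed layout (for illustration only): 6-bit experiment ID,
    # 6-bit activity ID, 8-bit per-flow entropy = 20 bits total.
    import random

    FLOW_LABEL_BITS = 20  # size of the IPv6 flow label field

    def encode_flow_label(experiment_id: int, activity_id: int) -> int:
        """Pack experiment and activity identifiers into a 20-bit label."""
        assert 0 <= experiment_id < 2**6 and 0 <= activity_id < 2**6
        entropy = random.getrandbits(8)      # per-flow entropy for load balancing
        label = (experiment_id << 14) | (activity_id << 8) | entropy
        assert label < 2**FLOW_LABEL_BITS
        return label

    def decode_flow_label(label: int) -> tuple[int, int]:
        """Recover the experiment and activity identifiers from a marked label."""
        return (label >> 14) & 0x3F, (label >> 8) & 0x3F

    # Example: mark a flow as (hypothetical) experiment 2, activity 5.
    label = encode_flow_label(2, 5)
    print(f"flow label: {label:#07x} -> {decode_flow_label(label)}")

On Linux, a label produced this way can be attached to a socket’s outgoing traffic via the IPV6_FLOWLABEL_MGR and IPV6_FLOWINFO_SEND socket options, so routers and monitoring systems along the path can read the tag without inspecting the payload.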

“The raw bandwidth achieved was a critical milestone,” says Sullivan, “and the packet marking being able to keep up at that rate was an important part of the demonstration.”

Sullivan explains that the ability to tag scientific packets at terabit speeds will open the door to real-time visibility, measurement, and optimization of large-scale scientific data movement.

Participating Organizations and Contributors

Researchers from UVic and SFU engineered and tuned the systems that originated the transferred data, balancing the flows between sites to achieve the target throughput.

Research Team

  • Dr. Tristan Sullivan: Senior Research Associate, Institute of Particle Physics; Technical Manager, HEPNET/Canada, University of Victoria
  • Professor Randall Sobie: Principal Research Scientist, Institute of Particle Physics; Director, HEPNET/Canada, University of Victoria
  • Ryan Enge and Hans Yodis: UVic Research Computing
  • Lixin Liu: SFU Research Computing
  • Tom Samplonius: Director of Network Services, BCNET
  • Thomas Tam: Chief of Network Engineering, CANARIE
  • Joe Mambretti, Fei Yeh, Jim Chen: iCAIR, Northwestern University

Network Providers

  • BCNET
  • CANARIE
  • Pacific Wave
  • Internet2 / NA-REX
  • iCAIR/Starlight
  • SCinet (SC25 network)

Vendors

The data transfer relied on high-performance computing servers and network equipment provided by industry partners:

  • Lenovo contributed network servers and QSFP112 transceivers.
  • Dell provided additional server hardware.
  • Arista supplied switching hardware and QSFP-DD to QSFP112 cabling solutions.
  • NVIDIA provided network interface hardware used in the testing.

A Milestone for Canadian Science

The successful demonstration showcases the power of collaboration and establishes UVic, SFU, BCNET, and CANARIE as key contributors to global-scale particle physics data movement.

It also lays the groundwork for future capabilities, such as exascale LHC data distribution, real-time network diagnostics via SciTags, and collaborative development across international R&E networks.

To review the results and technical details of the project, visit: hepnetcanada.ca/SC25