Wednesday 1 November 2023

Advanced Optical Series: Part 3 - Exploring Broadcast-and-Select and Wavelength-Selective Architectures: Advancing Optical Networks

In the fast-paced world of telecommunications, the quest for faster, more efficient data transmission methods is relentless. Enter Broadcast-and-Select and Wavelength-Selective architectures, two groundbreaking technologies at the forefront of optical network innovation. In this blog post, we'll delve into the intricacies of these architectures, their applications, and the transformative impact they are poised to have on the future of connectivity.

Understanding Broadcast-and-Select Architecture

Broadcast-and-Select (B&S) architecture represents a fundamental shift in the way optical networks are structured. At its core, B&S architecture relies on the concept of broadcasting optical signals to multiple destinations simultaneously, followed by selective routing to the intended recipient. This approach offers several advantages:

  1. Efficient Resource Utilization: By broadcasting signals, B&S architecture eliminates the need for point-to-point connections, leading to more efficient utilization of network resources and reduced complexity in routing.

  2. Scalability: B&S architecture scales gracefully with network size and bandwidth demands, making it well-suited for large-scale optical networks such as metropolitan and backbone networks.

  3. Low Latency: With minimal routing overhead, B&S architecture ensures low latency transmission, making it ideal for applications that require real-time data delivery, such as video streaming and online gaming.
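
To make the broadcast-and-select idea concrete, here is a toy Python model (all wavelengths, port names, and payloads are made up for illustration): a passive splitter copies the full WDM signal to every drop port, and each port's tunable filter then selects a single wavelength.

```python
# Toy model of a broadcast-and-select node: a passive splitter copies the
# full WDM signal to every drop port, and each port's tunable filter then
# passes exactly one wavelength. Values are illustrative, not real channels.

def broadcast_and_select(wdm_channels, port_filters):
    """wdm_channels: dict mapping wavelength (nm) -> payload.
    port_filters: dict mapping drop-port name -> wavelength it is tuned to.
    Returns what each port receives after filtering the broadcast."""
    received = {}
    for port, tuned_wavelength in port_filters.items():
        # Every port sees the whole broadcast; the filter keeps one channel.
        received[port] = wdm_channels.get(tuned_wavelength)
    return received

channels = {1550.12: "video stream", 1550.52: "backup traffic", 1550.92: "VoIP"}
filters = {"drop-A": 1550.52, "drop-B": 1550.12}
print(broadcast_and_select(channels, filters))
# {'drop-A': 'backup traffic', 'drop-B': 'video stream'}
```

Note that the node never routes anything per-destination: selection happens entirely at the receiving port, which is what keeps the routing overhead minimal.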

Broadcast-And-Select OADM Architecture


Exploring Wavelength-Selective Architecture

Wavelength-Selective (WS) architecture leverages the unique properties of light to enable high-speed data transmission over optical fibers. Unlike the broadcast-and-select approach, which floods every port with the full spectrum and filters at the edges, WS architecture uses wavelength-selective switches (WSSs) to route each wavelength independently to its intended output port. Key features of WS architecture include:

  1. Wavelength Reuse: Because each wavelength is switched individually, the same wavelength can be reused on non-overlapping paths, maximizing spectral efficiency and enabling efficient utilization of the optical spectrum.

  2. Flexibility: WS architecture can reallocate wavelengths to different ports dynamically, allowing for adaptive resource allocation and optimized network performance.

  3. Interference Mitigation: By filtering at the switch rather than broadcasting the full spectrum everywhere, WS architecture mitigates crosstalk and signal interference, ensuring reliable data transmission even in dense wavelength-division multiplexing (DWDM) environments.
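
As a rough illustration of dynamic, per-wavelength allocation, here is a toy Python model of a wavelength-selective switch. The routing table, wavelengths, and port names are all invented for the sketch, not taken from any real device.

```python
# A minimal sketch of a wavelength-selective switch (WSS): a per-wavelength
# routing table maps each incoming wavelength to an output port, and the
# table can be rewritten at run time. All wavelengths/ports are illustrative.

class WavelengthSelectiveSwitch:
    def __init__(self):
        self.routing = {}  # wavelength (nm) -> output port

    def configure(self, wavelength, port):
        # Dynamic reconfiguration: retune a channel to a new port on the fly.
        self.routing[wavelength] = port

    def switch(self, wdm_input):
        """wdm_input: dict wavelength -> payload.
        Returns dict output port -> list of payloads delivered there."""
        outputs = {}
        for wavelength, payload in wdm_input.items():
            port = self.routing.get(wavelength, "blocked")
            outputs.setdefault(port, []).append(payload)
        return outputs

wss = WavelengthSelectiveSwitch()
wss.configure(1550.12, "east")
wss.configure(1550.52, "drop")
out = wss.switch({1550.12: "transit traffic", 1550.52: "local traffic"})
print(out)  # {'east': ['transit traffic'], 'drop': ['local traffic']}
```

Reallocating a wavelength is just another `configure()` call, which is the flexibility point made above.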


Wavelength-Selective OADM Architecture


Applications and Future Outlook

Both Broadcast-and-Select and Wavelength-Selective architectures find applications across a wide range of domains, including telecommunications, data centers, and high-performance computing. These architectures are instrumental in enabling high-speed data transmission, improving network scalability, and reducing operational costs.

Looking ahead, the future of optical networks is bright, with ongoing research and development aimed at further enhancing the performance and efficiency of B&S and WS architectures. Emerging technologies such as silicon photonics, coherent detection, and software-defined networking (SDN) are poised to unlock new capabilities and applications, driving the evolution of optical networks towards faster, more reliable, and energy-efficient communication infrastructures.

In conclusion, Broadcast-and-Select and Wavelength-Selective architectures represent significant milestones in the evolution of optical networking technology. By harnessing the power of light and innovative network designs, these architectures are poised to revolutionize the way we transmit and process data, paving the way for a more connected and digitally empowered future.

Tuesday 3 October 2023

Advanced Optical Series: Part 2 - Exploring the Optical-Electrical-Optical (O-E-O) Architecture: Powering the Future of Data Transmission


In the realm of data transmission and communication, the quest for faster, more efficient methods is ceaseless. Enter the Optical-Electrical-Optical (O-E-O) architecture, a technological marvel that promises to revolutionize the way we transmit and process data. In this blog post, we'll delve into the intricacies of O-E-O architecture, its applications, and the potential it holds for shaping the future of connectivity.

2-degree Node

Understanding O-E-O Architecture

At its core, the O-E-O architecture seamlessly integrates optical and electrical components to optimize data transmission. It comprises three main stages:


1. **Optical Conversion**: The journey begins with converting electrical signals into optical signals, typically achieved using a laser or light-emitting diode (LED). This step allows for the efficient transmission of data through optical fibers, which offer significantly higher bandwidth and lower latency compared to traditional electrical wires.


2. **Electrical Processing**: Once the data reaches its destination, it undergoes electrical processing, where it is decoded, analyzed, and manipulated as needed. This stage harnesses the computational power of electronic devices to perform tasks such as error correction, encryption, and protocol handling.


3. **Optical Regeneration**: Finally, the processed data is converted back into optical signals for onward transmission or storage. This optical regeneration ensures that the integrity and quality of the data are maintained, especially over long distances where signal attenuation may occur.
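
The three stages above can be sketched as a toy Python pipeline. This is purely illustrative: the "signal" is a string and degradation is modelled as a flag, with the 80 km threshold picked arbitrarily for the example.

```python
# Toy model of the three O-E-O stages: electrical-to-optical conversion,
# electrical processing at the far end, and optical regeneration for onward
# transmission. Everything here is a deliberately simplified illustration.

def electrical_to_optical(data):
    # Stage 1: a laser/LED converts the electrical signal to light.
    return {"payload": data, "domain": "optical", "degraded": False}

def transmit(signal, distance_km):
    # Long spans attenuate and distort the optical signal (threshold assumed).
    signal["degraded"] = distance_km > 80
    return signal

def electrical_processing(signal):
    # Stage 2: O-E conversion plus decoding, error correction, protocol handling.
    return {"payload": signal["payload"], "domain": "electrical"}

def optical_regeneration(processed):
    # Stage 3: re-emit a clean optical signal; accumulated degradation is reset.
    return {"payload": processed["payload"], "domain": "optical", "degraded": False}

hop1 = transmit(electrical_to_optical("frame-42"), distance_km=120)
print(hop1["degraded"])                      # True: the span was too long
clean = optical_regeneration(electrical_processing(hop1))
print(clean["degraded"], clean["payload"])   # False frame-42
```

The point of the sketch is the last line: after regeneration, the payload is intact and the degradation accumulated over the long span is gone.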


Three-degree Node

Applications of O-E-O Architecture


The versatility of O-E-O architecture makes it applicable across various domains, including telecommunications, data centers, and high-performance computing. Here are some key areas where O-E-O architecture shines:


1. **Telecommunications**: O-E-O architecture plays a pivotal role in long-haul and metropolitan optical networks, enabling high-speed data transmission over vast distances. Its ability to regenerate optical signals ensures reliable communication, making it indispensable for telecommunication providers worldwide.


2. **Data Centers**: In the era of big data and cloud computing, data centers are the backbone of digital infrastructure. O-E-O architecture enhances intra- and inter-data-center connectivity, facilitating rapid data transfer between servers and storage systems. This accelerates data processing and improves overall system performance.


3. **High-Performance Computing (HPC)**: O-E-O architecture is increasingly integrated into HPC clusters and supercomputers to meet the ever-growing demand for computational power. By leveraging optical interconnects, HPC systems can achieve higher bandwidth, lower latency, and reduced energy consumption, leading to significant performance gains in scientific simulations, AI training, and other compute-intensive tasks.


The Future Outlook


As we continue to push the boundaries of technology, O-E-O architecture is poised to play a pivotal role in shaping the future of data transmission and communication. Advancements in photonics, integrated circuitry, and signal processing algorithms will further enhance the performance and efficiency of O-E-O systems, paving the way for faster, more reliable, and energy-efficient networks.


Moreover, the integration of O-E-O architecture with emerging technologies such as quantum computing and 5G wireless networks holds immense promise for unlocking new capabilities and applications. From ultra-fast internet connectivity to real-time data analytics, the possibilities are limitless.


In conclusion, the Optical-Electrical-Optical (O-E-O) architecture stands as a testament to human ingenuity and innovation in the realm of data transmission. By seamlessly blending optical and electrical components, O-E-O architecture offers a glimpse into the future of connectivity, where speed, efficiency, and reliability converge to redefine the way we interact with and harness the power of data.

Tuesday 5 September 2023

Advanced Optical Series: Part 1 - Ultra-dense optical data transmission over standard fibre with a single chip source

Micro-combs (optical frequency combs generated by integrated micro-cavity resonators) offer the full potential of their bulk counterparts, but in an integrated footprint. They have enabled breakthroughs in many fields including spectroscopy, microwave photonics, frequency synthesis, optical ranging, quantum sources, metrology and ultrahigh-capacity data transmission. Here, by using a powerful class of micro-comb called soliton crystals, we achieve ultra-high data transmission over 75 km of standard optical fibre using a single integrated chip source. We demonstrate a line rate of 44.2 terabits per second using the telecommunications C-band at 1550 nm, with a spectral efficiency of 10.4 bits per second per hertz. Soliton crystals exhibit robust and stable generation and operation as well as a high intrinsic efficiency that, together with an extremely low soliton micro-comb spacing of 48.9 GHz, enables the use of a very high-order coherent modulation format (64 QAM, quadrature amplitude modulation). This work demonstrates the capability of optical micro-combs to perform in demanding and practical optical communications networks.


Introduction

The global optical fibre network currently carries hundreds of terabits per second at any instant, with capacity growing at ~25% annually. To dramatically increase bandwidth capacity, ultrahigh-capacity transmission links employ massively parallel wavelength division multiplexing (WDM) with coherent modulation formats and, in recent lab-based research, spatial division multiplexing (SDM) over multicore or multi-mode fibre. At the same time, there is a strong trend towards a greater number of shorter high-capacity links. Whereas core long-haul communications (spanning thousands of kilometres) dominated global networks 10 years ago, the emphasis has now squarely shifted to metro-area networks (linking across tens to hundreds of kilometres) and even data centres (<10 km). All of this is driving the need for increasingly compact, low-cost and energy-efficient solutions, with photonic integrated circuits emerging as the most viable approach. The optical source is central to every link and, as such, perhaps has the greatest need for integration. The ability to supply all wavelengths with a single, compact integrated chip, replacing many parallel lasers, would offer the greatest benefits.

Micro-combs, optical frequency combs based on micro-cavity resonators, have shown significant promise in fulfilling this role. They offer the full potential of their bulk counterparts, but in an integrated footprint. The discovery of temporal soliton states (DKS—dissipative Kerr solitons) as a means of mode-locking micro-combs has enabled breakthroughs in many fields including spectroscopy, microwave photonics, frequency synthesis, optical ranging, quantum sources, metrology and more. One of their most-promising applications has been optical fibre communications, where they have enabled massively parallel ultrahigh capacity multiplexed data transmission.

The success of micro-combs has been enabled by the ability to phase-lock, or mode-lock, their comb lines. This, in turn, has resulted from exploring novel oscillation states such as temporal soliton states, including feedback-stabilised Kerr combs, dark solitons and DKS. DKS states, in particular, have enabled transmission rates of 30 Tb/s with a single device and 55 Tb/s by combining two devices, using the full C and L telecommunication bands. For practical systems, achieving high spectral efficiency is critically important: it determines the fundamental limit of data-carrying capacity for a given optical communications bandwidth.

Recently, a powerful class of micro-comb termed soliton crystals was reported, and devices realised in a CMOS (complementary metal-oxide semiconductor) compatible platform have proven highly successful as the basis for microwave and RF photonic devices. Soliton crystals were so named because of their crystal-like profile in the angular domain: tightly packed self-localised pulses within micro-ring resonators (MRRs). They form naturally in micro-cavities with appropriate mode-crossings, without the complex dynamic pumping and stabilisation schemes required to generate self-localised DKS waves (described by the Lugiato-Lefever equation). The key to their stability lies in their intracavity power, which is very close to that of the spatiotemporal chaotic states they emerge from. Hence, when emerging from chaotic states there is very little change in intracavity power, and thus none of the thermal detuning or instability associated with the ‘soliton step’ that makes resonant pumping of DKS states more challenging. It is this combination of intrinsic stability (without the need for external aid), ease of generation and overall efficiency that makes them highly suited to demanding applications such as ultrahigh-capacity transmission beyond a terabit per second.

Here, we report ultrahigh bandwidth optical data transmission across standard fibre with a single integrated chip source. We employ soliton crystals realised in a CMOS-compatible platform to achieve a data line-rate of 44.2 Tb/s from a single source, along with a high spectral efficiency of 10.4 bits/s/Hz. We accomplish these results through the use of a high modulation format of 64 QAM (quadrature amplitude modulation), a low comb-free spectral range (FSR) spacing of 48.9 GHz, and by using only the telecommunications C-band. We demonstrate transmission over 75 km of fibre in the laboratory as well as in a field trial over an installed network in the greater metropolitan area of Melbourne, Australia. Our results stem from the soliton crystal’s extremely robust and stable operation/generation as well as its much higher intrinsic efficiency, all of which are enabled by an integrated CMOS-compatible platform.
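
A quick back-of-envelope check shows how the headline numbers above fit together. The channel count and per-channel symbol rate below are derived estimates from the quoted figures, not values reported by the experiment.

```python
# Consistency check using only the numbers quoted above: 44.2 Tb/s line rate,
# 10.4 b/s/Hz spectral efficiency, 48.9 GHz comb spacing, and 64 QAM
# (6 bits/symbol) on two polarisations. Derived values are estimates.

line_rate = 44.2e12          # b/s
spectral_eff = 10.4          # b/s/Hz
comb_spacing = 48.9e9        # Hz
bits_per_symbol = 6 * 2      # 64 QAM, assumed dual polarisation

occupied_bw = line_rate / spectral_eff      # ~4.25 THz, fits inside the C-band
n_channels = occupied_bw / comb_spacing     # ~87 comb lines
per_channel = line_rate / n_channels        # b/s per comb line
baud = per_channel / bits_per_symbol        # symbol rate per channel

print(f"occupied bandwidth: {occupied_bw / 1e12:.2f} THz")
print(f"comb lines used:    {n_channels:.0f}")
print(f"per-channel rate:   {per_channel / 1e9:.0f} Gb/s")
print(f"symbol rate:        {baud / 1e9:.1f} Gbaud (fits under 48.9 GHz spacing)")
```

The derived symbol rate of roughly 42 Gbaud per comb line sits below the 48.9 GHz spacing, which is why the tight comb grid and the high-order modulation format are compatible.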

Tuesday 8 August 2023

Elastic Stack for Network and Security Engineers

In recent years, network engineers have turned from CLI jockeys into a hybrid between an application developer and a networking expert… what acronym-loving people call NetDevOps. In practice, a NetDevOps engineer should be able to:
  • manage and troubleshoot networks;
  • help troubleshoot any information-system issue (including “slow” applications);
  • automate networking tasks;
  • monitor network and application performance;
  • continuously audit the infrastructure, possibly using (partial) automation to make it less time-consuming;
and do everything else not covered by other IT teams.

To get there, NetDevOps started to learn Linux, Python, and automation frameworks. In this article, we’ll add log management to the mix.

Log management is the ability to collect any event from information systems and get them automatically analyzed to help NetDevOps react faster to information system issues.

There are many commercial and open-source log management platforms; I would mention:

  • Graylog
  • Elastic Stack
  • InfluxDB

Each one of these has a different focus: Graylog was born as a log management solution, Elastic Stack is a Big Data solution, and InfluxDB is a time-series database.

We won’t go into discussing the pros and cons of these products. There are already plenty of blog posts doing that.

I chose to work with Elastic Stack because of its:

  • flexibility: I’m able to ingest logs from almost any device I have encountered;
  • scalability: I can manage hundreds or thousands of log messages per second without any issue;
  • integration: I’m able to build a very robust solution that includes other open-source components;
  • vision: I honestly like where the Elastic company is going, and I agree with its vision.
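
As a taste of what the first step of log management looks like, here is a minimal Python sketch that turns a raw syslog line into a structured JSON document ready to be indexed into Elasticsearch (for example via `POST /network-logs/_doc`). The regex, field names, and sample message are illustrative, not a production-grade syslog parser.

```python
# Parse a raw syslog line into a structured document suitable for indexing.
# The pattern below handles the classic "MMM d HH:MM:SS host process: message"
# shape only; real deployments would use a full parser (e.g. Logstash/Beats).
import json
import re

SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s"
    r"(?P<host>\S+)\s(?P<process>[\w\-/]+):\s(?P<message>.*)$"
)

def parse_syslog(line):
    match = SYSLOG_RE.match(line)
    if not match:
        return None           # unparseable lines would go to a dead-letter index
    doc = match.groupdict()
    doc["source"] = "syslog"  # illustrative metadata field
    return doc

raw = "Nov  1 09:15:02 core-sw1 LINK-3-UPDOWN: Interface Gi1/0/1, changed state to down"
doc = parse_syslog(raw)
print(json.dumps(doc, indent=2))
```

Once events are structured like this, the rest of the stack (storage, search, dashboards, alerting) can work on fields instead of raw strings, which is what makes automatic analysis possible.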

Wednesday 5 July 2023

LISP vs EVPN: Mobility in Campus Networks

TL&DR: The discussion on whether “LISP scales better than EVPN” became irrelevant when the bus between the switch CPU and the adjacent ASIC became the bottleneck. Modern switches can process more prefixes than they can install in the ASIC forwarding tables (or we wouldn’t be using prefix-independent convergence).

Now, let’s focus on the dynamics of campus mobility. There’s almost no endpoint mobility if a campus network uses wired infrastructure. If a campus is primarily wireless, we have two options:

  • The wireless access points use tunnels to a wireless controller (or an aggregation switch), and all the end-user traffic enters the network through that point. The rest of the campus network does not observe any endpoint mobility.
  • The wireless access points send user traffic straight into the campus network, and the endpoints (end user IP/MAC addresses) move as the users roam across access points.

Therefore, the argument seems to be that LISP is better than EVPN at handling a high churn rate. Let’s see how much churn BGP (the protocol used by EVPN) can handle, using data from a large-scale experiment called The Internet. According to Geoff Huston’s statistics (relevant graph), we experienced up to 400,000 daily updates in 2021, with the smoothed long-term average being above 250,000. That’s around four updates per second on average. I have no corresponding graph from an extensive campus network (but I would love to see one); as we usually don’t see users running around the campus, the roaming rate might not be much higher.
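
The arithmetic behind that “around four updates per second” claim is trivial enough to show:

```python
# Converting the daily BGP update counts quoted above into average
# per-second rates (numbers taken from the text, not re-measured).

seconds_per_day = 24 * 60 * 60           # 86,400
peak_daily, average_daily = 400_000, 250_000

print(f"peak:    {peak_daily / seconds_per_day:.1f} updates/s")
print(f"average: {average_daily / seconds_per_day:.1f} updates/s")
# peak ~4.6/s, long-term average ~2.9/s
```

Even the 2021 peak stays below five updates per second on average, which is the baseline a campus roaming rate would have to beat.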

However, there seems to be another problem: latency spikes following a roaming event.

I have no idea how someone could attribute latency spikes equivalent to ping times between Boston and Chicago to a MAC move event. Unless there’s some magic going on behind the scenes:

  • The end-user NIC disappears from point A, and the switch is unaware of that (not likely with WiFi).
  • The rest of the network remains clueless; traffic to the NIC MAC address is still sent to the original switch and dropped.
  • The EVPN MAC move procedure starts when the end-user NIC reappears at point B.
  • Once the network figures out the MAC address has moved, the traffic gets forwarded to the new attachment point.

Where’s latency in that? The only way to introduce latency in that process is to have traffic buffered at some point, but that’s not a problem you can solve with EVPN or LISP. All you can get with EVPN or LISP is the notification that the MAC address is now reachable via another egress switch.

OK, maybe the engineer writing about latency misspoke and meant the traffic is disrupted for 20 msec. In other words, the MAC move event takes 20 msec. Could LISP be better than EVPN in handling that? Of course, but it all comes down to the quality of implementation. In both cases:

  • A switch control plane has to notice its hardware discovered a new MAC address (forty years after the STP was invented, we’re still doing dynamic MAC learning at the fabric edge).
  • The new MAC address is announced to some central entity (route reflector), which propagates the update to all other edge devices.
  • The edge devices install the new MAC-to-next-hop mapping into the forwarding tables.

Barring implementation differences, there’s no fundamental reason why one control-plane protocol would do the above process better than another one.
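
The steps above can be sketched as a toy timeline. Every delay figure below is an assumption picked for illustration, not a measurement; the point is only that the loss interval is the sum of detection, propagation, and FIB installation, regardless of which control-plane protocol carries the update.

```python
# Toy timeline of a control-plane MAC move. Delay values are assumptions
# for illustration; real numbers depend entirely on the implementation.

steps_ms = {
    "edge switch notices the new MAC (hardware learning + CPU poll)": 5.0,
    "update sent to the route reflector / mapping system":            1.0,
    "route reflector propagates the update to other edge switches":   1.0,
    "edge switches install the new MAC-to-next-hop entry in the FIB": 3.0,
}

total = 0.0
for step, delay in steps_ms.items():
    total += delay
    print(f"t={total:5.1f} ms  {step}")

print(f"traffic loss interval: ~{total:.0f} ms (implementation-dependent)")
```

Swap LISP for BGP in step two and the timeline doesn’t change shape; only the per-step quality of implementation moves the numbers.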

But wait, there’s another gotcha: at least in some implementations, the control plane takes “forever” to notice a new MAC address. However, that’s a hardware-related quirk, and no control-plane protocol will fix that one. No wonder some people talk about dynamic MAC learning with EVPN.

Aside: If you care about fast MAC mobility, you might be better off doing dynamic MAC learning across the fabric. You don’t need EVPN or LISP to do that; VXLAN fabric with ingress replication or SPB will work just fine.




Before doing a summary, let me throw in a few more numbers:

  • We don’t know how fast modern switches can update their ASIC tables (thank you, ASIC vendors), but the rumors talk about 1000+ entries per second.
  • The behavior of open-source routing daemons and even commercial BGP stacks is well documented. Unfortunately, the raw data behind those measurements wasn’t published, but judging from the published graphs, good open-source daemons have no problem processing 10K prefixes in a second or two.

It seems like we’re at a point where (assuming optimal implementations) the BGP update processing rate on a decent CPU exceeds the FIB installation rate.
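
Plugging in the rough figures quoted above makes the bottleneck obvious. Both rates are assumptions taken from the text (the rumoured “1000+ entries per second” and the “10K prefixes in a second or two”), not vendor data.

```python
# With an assumed BGP processing rate of 10,000 updates/s and an assumed
# FIB installation rate of 1,000 entries/s, hardware table installation,
# not control-plane processing, dominates convergence time.

prefixes = 10_000
bgp_rate = 10_000      # updates/s a decent CPU can process (assumed)
fib_rate = 1_000       # entries/s the ASIC can install (rumoured "1000+")

bgp_time = prefixes / bgp_rate   # ~1 s of control-plane work
fib_time = prefixes / fib_rate   # ~10 s of hardware table updates

print(f"BGP processing: {bgp_time:.0f} s, FIB installation: {fib_time:.0f} s")
print("bottleneck:", "FIB" if fib_time > bgp_time else "BGP")
```

That ten-to-one gap is exactly why arguing about control-plane protocol scalability misses the point.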

Back to LISP versus EVPN. It should be evident by now that:

  • A campus network is probably not more dynamic than the global Internet;
  • BGP handles the churn in the global Internet just fine, and there’s no technological reason why it couldn’t do the same in an EVPN-based campus;
  • BGP implementations can handle at least as many updates as can be installed in the hardware FIB;
  • Regardless of the actual numbers, decent control-plane implementations and modern ASICs are fast enough to deal with highly dynamic environments;
  • Implementing control-plane-based MAC mobility with a minimum traffic loss interval is a complex undertaking that depends on more than just a control-plane protocol.

There might be a reason only a single business unit of a single vendor uses LISP in their fabric solution (hint: regardless of what the whitepapers say, it has little to do with technology).