
Tuesday 2 April 2024

Submarine Network Series- Part 3- Submarine Systems’ Performance Metrics



Submarine cable systems, vital lifelines of global connectivity, pose unique challenges compared to their terrestrial counterparts. As a result, measuring and determining service performance in submarine networks requires specialized approaches. Let's delve deeper into the distinctive metrics and considerations that govern the evaluation of submarine systems' performance.

Unique Challenges of Submarine Systems


Component Health Monitoring: In submarine networks, assessing service performance involves monitoring the health status of essential components such as branching units (BUs) and intermediate repeaters. This data is gathered by coherent transponders placed at the cable ends, a more complex monitoring scenario than in terrestrial networks.


Total Output Power Constraint: Unlike terrestrial systems, submarine systems face Total Output Power (TOP) constraints, altering the calculation of Signal-to-Noise Ratio (SNR). TOP constraints lead to signal depletion and noise accumulation, necessitating adjustments in performance evaluation methodologies.


Evolution of Coherent Modems: Advancements in coherent modems, particularly on D+ submarine optical cables, redefine system capacity parameters. Consequently, the ITU updated commissioning processes to accommodate these advancements, ensuring accurate performance assessments.

Performance Metrics for Submarine Systems


Optical Signal-to-Noise Ratio (OSNR): OSNR quantifies the ratio of service signal power to noise power within a specified bandwidth. However, comparing OSNR between systems with different baud rates requires careful consideration to ensure accurate performance evaluation.


Signal-to-Noise Ratio ASE (SNRASE): SNRASE, akin to OSNR, accounts for noise within the signal bandwidth, enabling comparisons across systems with varying baud rates.


Effective SNR (ESNR): ESNR offers a line rate-independent measurement by considering both linear and non-linear noises. Unlike traditional metrics like Q factor, ESNR remains consistent across different line rates, providing valuable insights into future system capacities.


Generalized OSNR (GOSNR): GOSNR combines linear and non-linear noise contributions, offering a comprehensive assessment of system performance. Its baud-independent version, GSNR, provides a holistic view of performance, accounting for effects like guided acoustic-optic wave Brillouin scattering (GAWBS) and signal droop.

Submarine cable systems are more challenging than terrestrial ones. As a consequence, there are major differences in how information about service performance can be determined and measured.

Some major differences are the following:


- In submarine networks, service performance can be determined from information describing the health status of basic network components (BUs, intermediate repeaters). This information is obtained by coherent transponders placed at the ends of a submarine cable. Their terrestrial counterparts are far easier to monitor: terrestrial networks can process much more data about each unit's contribution to the overall system's performance.


- The Total Output Power (TOP) constraint is another key difference between terrestrial and submarine systems, as it changes the way the total SNR is calculated. The TOP constraint in submarine amplifiers results in signal depletion while amplifier noise accumulates, because the total channel power (S+N) remains fixed with distance; the short sketch below illustrates this numerically.
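
To make the effect concrete, here is a minimal numerical sketch in Python. All values (per-channel power, ASE per span, span count) are illustrative assumptions, not figures from any specific system; the point is only that, under a fixed total output power, added noise displaces signal rather than simply accumulating on top of it.

```python
import numpy as np

# Illustrative sketch of the TOP constraint: at every repeater the amplifier
# restores the channel to a fixed total power (signal + accumulated noise),
# so the ASE it adds displaces signal power instead of adding on top of it.

TOP_PER_CHANNEL_MW = 1.0      # fixed channel output power (assumed value)
ASE_PER_SPAN_MW = 0.005       # ASE power added per repeater (assumed value)
N_SPANS = 100                 # number of amplified spans (assumed value)

signal = TOP_PER_CHANNEL_MW   # at launch, all of the channel power is signal
noise = 0.0

for span in range(N_SPANS):
    noise += ASE_PER_SPAN_MW              # repeater adds ASE
    scale = TOP_PER_CHANNEL_MW / (signal + noise)  # output clamped to TOP
    signal *= scale                       # signal is depleted...
    noise *= scale                        # ...while noise keeps its share

snr_db = 10 * np.log10(signal / noise)
print(f"After {N_SPANS} spans: signal={signal:.3f} mW, "
      f"noise={noise:.3f} mW, SNR={snr_db:.1f} dB")
```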

New coherent modems used on D+ submarine optical cables change the parameters that define total system capacity, so the ITU updated the commissioning process (i.e., the final tests before a system goes commercial) in recommendation G.977.1.

Optical signal-to-noise ratio (OSNR) is the ratio of service signal power to noise power in a reference bandwidth (0.1 nm, i.e., ~12.5 GHz at 1550 nm). OSNR is used to quantify the linear noise from amplified spontaneous emission (ASE). However, if two systems use different baud rates, the higher-baud system will report a higher OSNR even though the two systems may have the same noise level. A metric is therefore needed that does not depend on channel spacing or symbol rate.

Signal-to-noise ratio ASE (SNRASE) is similar to OSNR except that the noise bandwidth equals the signal bandwidth. This yields a measurement that does not depend on the symbol rate and that accounts for all noise seen at the receiver, making it easier to compare the SNR of signals with different baud rates.
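
As a rough sketch of the relationship (the OSNR values and baud rates below are assumed purely for illustration), SNRASE can be obtained from OSNR by rescaling the noise from the 12.5 GHz reference bandwidth to the signal bandwidth, here approximated by the symbol rate:

```python
import math

REF_BW_GHZ = 12.5  # 0.1 nm reference bandwidth at 1550 nm

def snr_ase_db(osnr_db, symbol_rate_gbaud):
    """Convert OSNR (0.1 nm reference) to SNRASE (noise in the signal bandwidth).

    Approximates the signal bandwidth by the symbol rate, a common
    simplification for Nyquist-shaped signals.
    """
    return osnr_db - 10 * math.log10(symbol_rate_gbaud / REF_BW_GHZ)

# Two channels with the same noise density but different baud rates:
# the higher-baud channel reports a 3 dB higher OSNR, yet the same SNRASE.
print(snr_ase_db(osnr_db=18.0, symbol_rate_gbaud=32.0))   # ~13.9 dB
print(snr_ase_db(osnr_db=21.0, symbol_rate_gbaud=64.0))   # ~13.9 dB
```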

Similar to the baud independence of SNR, a metric that provides a line rate-independent measurement of performance is useful for evaluating potential system capacities. Effective SNR (ESNR) measures linear and non-linear noise and reports their impact on signal performance. As a result, ESNR does not change for the same noise levels, regardless of the signal's line rate. This is an improvement over the Q factor, which varies for signals experiencing the same total noise at different line rates. ESNR can therefore be used as a predictor: measured at one line rate, it can be used to predict performance at higher transmission rates on the same cable system.
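
One hedged way to picture how a line rate-independent metric supports such predictions is a simple Shannon-style estimate; the formula and numbers below are an illustrative simplification, not the capacity model used in the cited work:

```python
import math

def capacity_estimate_gbps(esnr_db, symbol_rate_gbaud, n_polarisations=2):
    """Very rough upper-bound capacity per channel from an SNR-like metric,
    using the Shannon formula per polarisation (illustrative only)."""
    esnr_lin = 10 ** (esnr_db / 10)
    bits_per_symbol = n_polarisations * math.log2(1 + esnr_lin)
    return bits_per_symbol * symbol_rate_gbaud  # Gbaud * bits/symbol = Gb/s

# Because ESNR does not depend on the line rate, the same measured value can
# be plugged in when evaluating a higher-baud upgrade of the same wet plant.
print(capacity_estimate_gbps(esnr_db=12.0, symbol_rate_gbaud=32.0))  # ~260 Gb/s
print(capacity_estimate_gbps(esnr_db=12.0, symbol_rate_gbaud=64.0))  # ~520 Gb/s
```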

As the performance of upcoming systems also depends on optical non-linearity (captured by SNRNL), it is convenient to measure both linear and non-linear performance. Generalized OSNR (GOSNR) sums the non-linear and the linear noise of the wet-plant optical system; its updated, baud-independent version is GSNR. Additional effects it accounts for are guided acoustic-optic wave Brillouin scattering (GAWBS) and signal droop. GAWBS, caused by the interaction between light and the acoustic modes that occur in the optical fiber, introduces a penalty for a given wet-plant design.
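
A common way to combine the two contributions is to add the linear and non-linear noise terms, i.e., to sum the inverse SNRs on a linear scale; the sketch below uses assumed values purely for illustration:

```python
import math

def gsnr_db(snr_ase_db, snr_nl_db):
    """Combine linear (ASE) and non-linear SNR contributions into a single
    generalized SNR, assuming the two noise terms add on a linear power scale."""
    snr_ase = 10 ** (snr_ase_db / 10)
    snr_nl = 10 ** (snr_nl_db / 10)
    combined = 1.0 / (1.0 / snr_ase + 1.0 / snr_nl)
    return 10 * math.log10(combined)

# Example with assumed values: a 14 dB linear SNR limited by a 17 dB
# non-linear SNR yields a generalized SNR of roughly 12.2 dB.
print(f"GSNR = {gsnr_db(14.0, 17.0):.1f} dB")
```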

GSNR is evaluated directly through simulations or analytic models and indirectly through experiments. The figure below summarizes all existing and new metrics and indicates the exact point at which each of them is measured.




Figure: Summary of existing and new cable performance metrics, with an indication of the exact location where each one of them is measured.


Cited: Papapavlou, C., Paximadis, K., Uzunidis, D., & Tomkos, I. (2022). Toward SDM-Based Submarine Optical Networks: A Review of Their Evolution and Upcoming Trends. Telecom.


Future Perspectives and Conclusion


The performance of upcoming submarine systems hinges on the accurate measurement and evaluation of both linear and non-linear factors. As technology advances, metrics like ESNR and GSNR will play pivotal roles in predicting and optimizing system capacities.

In conclusion, submarine systems present unique challenges that demand specialized performance metrics and evaluation methodologies. By embracing advancements in coherent modems and refining performance metrics, we pave the way for more efficient, reliable, and high-capacity submarine networks, ensuring seamless global connectivity for generations to come.

Wednesday 6 March 2024

Submarine Network Series: Part 2- Past Evolution of Submarine Transmission Systems




The history of submarine cables can be traced back to the late 1840s, when a newly discovered natural polymer called gutta percha provided the basic means of insulating cables for undersea use. In 1850, a relatively short cable was submerged between the UK and France, but it lasted for only a few messages and was replaced by a more durable successor a year later. The first cable connecting Europe to North America became operational on August 16, 1858, but deploying cables across the Pacific Ocean proved more challenging, and transpacific telegraph cables only became operational in 1902.

It took approximately fifty more years for newer, more advanced cables supporting voice and data communications to be deployed. TAT-1, the first transatlantic telephone cable, became operational on September 25, 1956, connecting Scotland to Clarenville, Newfoundland, and initially supported just 36 telephone channels. The first coaxial cables capable of transmitting frequency-multiplexed voice signals became operational in the 1960s, eventually giving way to optical technology in 1986 and 1988, when optical fiber links were introduced for high-speed, high-capacity submarine communications.

Some significant milestones in submarine-cable history include the first submarine telephone cable (from San Francisco to Oakland) in 1884, the first submarine high-voltage direct-current cable, connecting the island of Gotland to mainland Sweden, in 1954, and the first deployment of repeaters in the 1940s, a technology that later boosted TAT-1, the first telephone cable to cross the Atlantic. The first transpacific submarine coaxial telephone cable, linking Japan, Hawaii, and the US mainland, was operational in 1964. The first submerged international fiber-optic cable, connecting Belgium to the United Kingdom, was laid in 1986, and the first submerged transoceanic fiber-optic cable, TAT-8, connecting the USA to the United Kingdom and France, followed in 1988.




Figure 1 depicts the historical evolution of submarine transmission systems leading to optical technology. In Figure 2, a transatlantic cable capacity comparison is illustrated from TAT-1 in 1956 to TAT-14 in 2001. Figure 3 shows the worldwide submarine cable map, which comprises 487 cables, stretching over 1.3 million km, and 1245 landing stations that are currently in service or under construction. These networks were deployed by consortia of major telecom network operators.




However, during the current century, major cloud service providers such as Google, Facebook, and Microsoft have become active investors in submarine networks. From 2016 to 2018, Google invested approximately $47 billion in capital expenditure (capex) to expand and upgrade Google Cloud infrastructure. Today, Google counts 134 points of presence (PoPs) and 14 major subsea cable investments globally, interconnecting six continents. These investments have played a significant role in enabling Google to provide reliable, high-speed cloud services to customers around the world.


Monday 5 February 2024

Submarine Network Series: Part 1- The Evolution and Future of Submarine Networks: A Focus on Space Division Multiplexing


The Submarine series of posts discusses the evolution of submarine networks alongside terrestrial ones over the past several decades. While there are similarities between these two network categories, such as the need to cover ultralong-haul distances and transport huge amounts of data, there are also important differences that have dictated their different evolutionary paths. The series focuses on space division multiplexing (SDM) as the ultimate solution to cover future capacity needs and overcome problems of both networks.

The introduction provides a brief history of submarine cables, which were first submerged to transmit telegraphy data approximately a century and a half ago. Today, submarine cable systems are a basic component of the global backbone network infrastructure, forming crowded yet physically isolated deep-water networks. The traffic generated by end users has been pushed beyond its average 50% annual growth rate by the COVID-19 pandemic, increasing the need for speed and more bandwidth.

We will review recent and future submarine technologies, focusing on all critical sectors: cable systems, amplifier technology, submarine network architectures, electrical power-feeding issues, economics, and security. The authors provide an overview of all recently announced SDM-based submarine cable systems, compare their performance (capacity-distance product), and analyze the reasons that led to the first SDM submarine deployment. They also report up-to-date experimental results from submarine transmission demonstrations and categorize them qualitatively based on their features.

Based on the latest advances and their findings, the authors try to predict the future of SDM submarine optical networks, mainly in the areas of fiber types, fiber counts per cable, fiber-coating variants, modulation formats, and the type and layout of optical amplifiers. The results show that SDM can offer higher capacities (on the order of Pb/s) than its counterparts, supported by novel network technologies: pump-farming amplification schemes, fiber counts of up to 50 parallel fiber pairs, thinner fiber-coating variants (200 µm), and optimal spectral efficiency (2-3 b/s/Hz).
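
For a sense of scale, here is a back-of-the-envelope calculation using the figures quoted above; the usable optical bandwidth per fibre is our own assumption (roughly a C-band-like window), not a number taken from the study:

```python
# Back-of-the-envelope SDM cable capacity, using the figures quoted above.
# The usable optical bandwidth per fibre is an assumed value; the fibre-pair
# count and spectral efficiency follow the numbers in the text.

fiber_pairs = 50                  # parallel fibre pairs per cable
spectral_efficiency = 2.5         # b/s/Hz, middle of the quoted 2-3 b/s/Hz range
usable_bandwidth_hz = 4.0e12      # ~4 THz usable per fibre (assumption)

capacity_bps = fiber_pairs * spectral_efficiency * usable_bandwidth_hz
print(f"Total cable capacity ≈ {capacity_bps / 1e15:.2f} Pb/s")
# -> roughly half a petabit per second, i.e. on the order of Pb/s
```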

We will conclude that trade-offs between capacity, implementation complexity, and cost will have to be carefully weighed for future deployments of submarine cable systems. SDM can provide a solution that covers future capacity needs and overcomes problems of both submarine and terrestrial networks.


Google Cloud Platform - Submarine Network

Wednesday 3 January 2024

Advanced Optical Series: Part 5 - A Deep Dive into Subsea Cabling: The Backbone of Global Computer Networking


Introduction:


Subsea cabling is an essential component of the global computer network. It is responsible for transmitting vast amounts of data between continents and connecting people and businesses worldwide. In this blog post, we will take a deep dive into the world of subsea cabling, exploring its history, technology, benefits, and challenges.

History of Subsea Cabling: 

The first subsea cable was laid across the English Channel in 1851. It was a copper cable covered in gutta-percha, a natural rubber, to protect it from seawater. Since then, subsea cabling has evolved dramatically, from simple copper cables to fiber optic cables capable of transmitting terabits of data per second.

Technology of Subsea Cabling: 


Subsea cables are made of several layers, each designed to protect the cable and ensure data is transmitted with minimal loss or disruption. The outer layer is typically made of polyethylene or other plastics to provide insulation and protect the cable from damage. The next layer is a steel armor designed to protect the cable from external pressure and damage from anchors and fishing nets. Inside the armor, the cable consists of several optical fibers, each capable of transmitting data at high speeds.

Benefits of Subsea Cabling: 


Subsea cabling offers several benefits for global computer networking. It is more reliable than satellite communication, as it is not affected by atmospheric conditions or solar flares. Subsea cabling also provides faster data transfer speeds and lower latency, making it ideal for real-time applications like video conferencing and online gaming.

Challenges of Subsea Cabling: 


Despite its benefits, subsea cabling also faces several challenges. The installation and maintenance of subsea cables are complex and expensive, requiring specialized ships and equipment. The cables are also susceptible to damage from natural disasters like earthquakes and storms, as well as human activities like fishing and anchoring.

Future of Subsea Cabling: 


The demand for subsea cabling is only expected to grow as more people and businesses connect to the internet. To meet this demand, researchers are exploring new technologies like self-healing cables, which can detect and repair damage automatically. There is also a growing interest in using subsea cables for renewable energy transmission, allowing offshore wind farms to transmit energy to the mainland.

Conclusion: 


Subsea cabling is the backbone of global computer networking, connecting people and businesses worldwide. It has come a long way since the first cable was laid across the English Channel, and it continues to evolve and improve. Despite its challenges, subsea cabling offers numerous benefits and is essential for the future of global communication and connectivity.

Friday 22 December 2023

Advanced Optical Series: Part 4 - Unlocking the Future of Data Transmission: Exploring Optical Switching Technologies

In the realm of data transmission and telecommunications, the demand for faster, more efficient methods of routing and switching data is ever-present. Optical switches, leveraging the power of light, have emerged as key enablers in meeting these demands. In this blog post, we'll explore two groundbreaking optical switch technologies: the O-E-O Optical Switch and the All-Optical Switch, shedding light on their mechanisms, applications, and the transformative impact they hold for the future of connectivity.

O-E-O Optical Switch: Bridging the Optical-Electrical Gap

The O-E-O Optical Switch represents a critical bridge between optical and electrical domains, seamlessly integrating both to facilitate efficient data routing and switching. Here's how it works:

  1. Optical-to-Electrical Conversion: Incoming optical signals are converted into electrical signals using photodetectors, allowing for easy processing and manipulation.

  2. Electrical Switching: The electrical signals are then routed through electronic switches or routers, where they can be processed, analyzed, and directed to their intended destinations.

  3. Electrical-to-Optical Conversion: Once the data has been processed, it is converted back into optical signals using lasers or light-emitting diodes (LEDs) for onward transmission through optical fibers.

(a) O-E-O Switch (b) Photonic Switch (c) All-Optical Switch


Key features and applications of O-E-O Optical Switches include:

  • Compatibility: O-E-O switches are compatible with existing electronic switching infrastructure, making them easy to integrate into existing networks.
  • Signal Regeneration: The conversion of optical signals to electrical and back to optical ensures signal regeneration, enhancing signal quality and reliability.
  • Telecommunications and Data Centers: O-E-O switches find applications in telecommunications networks and data centers, where they facilitate high-speed data routing and switching over long distances.
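
To make the three-stage path described above concrete, here is a toy sketch of an O-E-O switch in Python. The port names, the wavelength-based routing table, and the helper methods are invented for illustration; a real switch performs these steps in dedicated hardware at line rate.

```python
from dataclasses import dataclass

@dataclass
class OpticalSignal:
    wavelength_nm: float
    payload: bytes

class OEOSwitch:
    """Toy model of an O-E-O switch: detect, switch electronically, re-modulate."""

    def __init__(self, routing_table):
        # wavelength (nm) -> output port name; an invented routing policy
        self.routing_table = routing_table

    def forward(self, signal):
        data = self._photodetect(signal)                  # 1. optical -> electrical
        port = self.routing_table[signal.wavelength_nm]   # 2. electronic switching
        return port, self._remodulate(data, signal.wavelength_nm)  # 3. back to optical

    @staticmethod
    def _photodetect(signal):
        # Stands in for photodetection plus clock-and-data recovery.
        return signal.payload

    @staticmethod
    def _remodulate(data, wavelength_nm):
        # Stands in for driving a laser/modulator with the processed data.
        return OpticalSignal(wavelength_nm, data)

switch = OEOSwitch({1550.12: "east", 1550.92: "west"})
print(switch.forward(OpticalSignal(1550.12, b"hello")))
```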

All-Optical Switch: Pioneering Direct Optical Routing

In contrast to O-E-O switches, All-Optical switches operate entirely in the optical domain, without the need for optical-to-electrical conversion. Here's how they work:

  1. Photonic Switching: All-Optical switches use various mechanisms such as nonlinear optics, semiconductor optical amplifiers, or photonic crystals to manipulate and route optical signals directly.

  2. Wavelength or Time-Division Multiplexing: All-Optical switches can route multiple optical signals based on their wavelength or time-slot, enabling efficient utilization of the optical spectrum.

  3. Ultra-Fast Operation: By eliminating the need for optical-to-electrical conversion, All-Optical switches offer ultra-fast switching speeds, significantly reducing latency and improving network performance.

Key features and applications of All-Optical switches include:

  • High-Speed Networks: All-Optical switches are ideal for high-speed optical networks, such as long-haul telecommunications networks and backbone infrastructure.
  • Energy Efficiency: By operating entirely in the optical domain, All-Optical switches consume less power compared to O-E-O switches, making them more energy-efficient.
  • Future-Proofing: All-Optical switches are well-suited for future-proofing optical networks, as they offer scalability and compatibility with emerging optical technologies.
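
In the same toy style (wavelengths and port names again invented), an all-optical switch steers a signal purely by its carrier wavelength, never converting the payload to the electrical domain:

```python
# Toy all-optical (photonic) switch: the routing decision is made purely from
# the carrier wavelength; the payload is never converted to electrical form
# or inspected. Wavelengths and port names are invented for illustration.

class AllOpticalSwitch:
    def __init__(self, wavelength_plan):
        self.wavelength_plan = wavelength_plan   # wavelength (nm) -> output port

    def route(self, wavelength_nm, optical_payload):
        # The payload passes through untouched; only the carrier is steered.
        return self.wavelength_plan[wavelength_nm], optical_payload

switch = AllOpticalSwitch({1550.12: "port_1", 1551.72: "port_2"})
print(switch.route(1550.12, "opaque optical field"))   # -> ('port_1', ...)
```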

All-Optical Switch with 3 network ports and 3 local access ports


Shaping the Future of Connectivity

As the demand for high-speed, reliable data transmission continues to grow, optical switches are poised to play a central role in shaping the future of connectivity. Whether bridging the optical-electrical gap with O-E-O switches or pioneering direct optical routing with All-Optical switches, these technologies represent significant milestones in the evolution of optical networking.

Looking ahead, ongoing research and development in areas such as integrated photonics, quantum optics, and machine learning promise to further enhance the performance and efficiency of optical switches, unlocking new capabilities and applications. From telecommunications networks to data centers and beyond, optical switches are driving the transformation towards faster, more resilient, and energy-efficient communication infrastructures.

Wednesday 1 November 2023

Advanced Optical Series: Part 3 - Exploring Broadcast-and-Select and Wavelength-Selective Architectures: Advancing Optical Networks

In the fast-paced world of telecommunications, the quest for faster, more efficient data transmission methods is relentless. Enter Broadcast-and-Select and Wavelength-Selective architectures, two groundbreaking technologies at the forefront of optical network innovation. In this blog post, we'll delve into the intricacies of these architectures, their applications, and the transformative impact they are poised to have on the future of connectivity.

Understanding Broadcast-and-Select Architecture

Broadcast-and-Select (B&S) architecture represents a fundamental shift in the way optical networks are structured. At its core, B&S architecture relies on the concept of broadcasting optical signals to multiple destinations simultaneously, followed by selective routing to the intended recipient. This approach offers several advantages:

  1. Efficient Resource Utilization: By broadcasting signals, B&S architecture eliminates the need for point-to-point connections, leading to more efficient utilization of network resources and reduced complexity in routing.

  2. Scalability: B&S architecture scales gracefully with network size and bandwidth demands, making it well-suited for large-scale optical networks such as metropolitan and backbone networks.

  3. Low Latency: With minimal routing overhead, B&S architecture ensures low latency transmission, making it ideal for applications that require real-time data delivery, such as video streaming and online gaming.

Broadcast-And-Select OADM Architecture
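
To illustrate the broadcast-then-select principle described above, here is a small sketch; the wavelengths, payloads, and port names are invented, and a real node would of course do the copying with an optical splitter and the selection with tunable filters:

```python
def broadcast_and_select(channels, filter_tuning):
    """Toy broadcast-and-select node: every output port receives a copy of all
    channels, and a tunable filter picks out a single wavelength per port."""
    outputs = {}
    for port, selected_wavelength in filter_tuning.items():
        broadcast_copy = dict(channels)                      # splitter copies everything
        outputs[port] = broadcast_copy[selected_wavelength]  # filter selects one channel
    return outputs

channels = {1550.12: b"video", 1550.92: b"voice", 1551.72: b"data"}
print(broadcast_and_select(channels, {"drop_A": 1550.92, "drop_B": 1551.72}))
```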


Exploring Wavelength-Selective Architecture

Wavelength-Selective (WS) architecture leverages the unique properties of light to enable high-speed data transmission over optical fibers. Unlike traditional architectures where each optical signal is transmitted on a separate wavelength, WS architecture allows multiple signals to coexist on the same wavelength, with each signal encoded using a unique modulation format or code. Key features of WS architecture include:

  1. Wavelength Reuse: By multiplexing multiple signals onto the same wavelength, WS architecture maximizes spectral efficiency and enables efficient utilization of the optical spectrum.

  2. Flexibility: WS architecture offers flexibility in allocating wavelengths to different signals dynamically, allowing for adaptive resource allocation and optimized network performance.

  3. Interference Mitigation: Through advanced signal processing techniques, WS architecture mitigates crosstalk and signal interference, ensuring reliable data transmission even in dense wavelength-division multiplexing (DWDM) environments.


Wavelength-Selective OADM Architecture


Applications and Future Outlook

Both Broadcast-and-Select and Wavelength-Selective architectures find applications across a wide range of domains, including telecommunications, data centers, and high-performance computing. These architectures are instrumental in enabling high-speed data transmission, improving network scalability, and reducing operational costs.

Looking ahead, the future of optical networks is bright, with ongoing research and development aimed at further enhancing the performance and efficiency of B&S and WS architectures. Emerging technologies such as silicon photonics, coherent detection, and software-defined networking (SDN) are poised to unlock new capabilities and applications, driving the evolution of optical networks towards faster, more reliable, and energy-efficient communication infrastructures.

In conclusion, Broadcast-and-Select and Wavelength-Selective architectures represent significant milestones in the evolution of optical networking technology. By harnessing the power of light and innovative network designs, these architectures are poised to revolutionize the way we transmit and process data, paving the way for a more connected and digitally empowered future.

Tuesday 3 October 2023

Advanced Optical Series: Part 2 - Exploring the Optical-Electrical-Optical (O-E-O) Architecture: Powering the Future of Data Transmission


In the realm of data transmission and communication, the quest for faster, more efficient methods is ceaseless. Enter the Optical-Electrical-Optical (O-E-O) architecture, a technological marvel that promises to revolutionize the way we transmit and process data. In this blog post, we'll delve into the intricacies of O-E-O architecture, its applications, and the potential it holds for shaping the future of connectivity.

2-degree Node




Understanding O-E-O Architecture

At its core, the O-E-O architecture seamlessly integrates optical and electrical components to optimize data transmission. It comprises three main stages:


1. Optical Conversion: The journey begins with converting electrical signals into optical signals, typically achieved using a laser or light-emitting diode (LED). This step allows for the efficient transmission of data through optical fibers, which offer significantly higher bandwidth and lower latency compared to traditional electrical wires.


2. Electrical Processing: Once the data reaches its destination, it undergoes electrical processing, where it is decoded, analyzed, and manipulated as needed. This stage harnesses the computational power of electronic devices to perform tasks such as error correction, encryption, and protocol handling.


3. Optical Regeneration: Finally, the processed data is converted back into optical signals for onward transmission or storage. This optical regeneration ensures that the integrity and quality of the data are maintained, especially over long distances where signal attenuation may occur.


Three-degree Node




Applications of O-E-O Architecture


The versatility of O-E-O architecture makes it applicable across various domains, including telecommunications, data centers, and high-performance computing. Here are some key areas where O-E-O architecture shines:


1. Telecommunications: O-E-O architecture plays a pivotal role in long-haul and metropolitan optical networks, enabling high-speed data transmission over vast distances. Its ability to regenerate optical signals ensures reliable communication, making it indispensable for telecommunication providers worldwide.


2. Data Centers: In the era of big data and cloud computing, data centers are the backbone of digital infrastructure. O-E-O architecture enhances intra- and inter-data center connectivity, facilitating rapid data transfer between servers and storage systems. This accelerates data processing and improves overall system performance.


3. High-Performance Computing (HPC): O-E-O architecture is increasingly integrated into HPC clusters and supercomputers to meet the ever-growing demand for computational power. By leveraging optical interconnects, HPC systems can achieve higher bandwidth, lower latency, and reduced energy consumption, leading to significant performance gains in scientific simulations, AI training, and other compute-intensive tasks.


The Future Outlook


As we continue to push the boundaries of technology, O-E-O architecture is poised to play a pivotal role in shaping the future of data transmission and communication. Advancements in photonics, integrated circuitry, and signal processing algorithms will further enhance the performance and efficiency of O-E-O systems, paving the way for faster, more reliable, and energy-efficient networks.


Moreover, the integration of O-E-O architecture with emerging technologies such as quantum computing and 5G wireless networks holds immense promise for unlocking new capabilities and applications. From ultra-fast internet connectivity to real-time data analytics, the possibilities are limitless.


In conclusion, the Optical-Electrical-Optical (O-E-O) architecture stands as a testament to human ingenuity and innovation in the realm of data transmission. By seamlessly blending optical and electrical components, O-E-O architecture offers a glimpse into the future of connectivity, where speed, efficiency, and reliability converge to redefine the way we interact with and harness the power of data.

Tuesday 5 September 2023

Advanced Optical Series: Part 1 - Ultra-dense optical data transmission over standard fibre with a single chip source

Micro-combs - optical frequency combs generated by integrated micro-cavity resonators - offer the full potential of their bulk counterparts, but in an integrated footprint. They have enabled breakthroughs in many fields including spectroscopy, microwave photonics, frequency synthesis, optical ranging, quantum sources, metrology and ultrahigh-capacity data transmission. Here, by using a powerful class of micro-comb called soliton crystals, we achieve ultra-high data transmission over 75 km of standard optical fibre using a single integrated chip source. We demonstrate a line rate of 44.2 terabits per second (Tb/s) using the telecommunications C-band at 1550 nm with a spectral efficiency of 10.4 b/s/Hz. Soliton crystals exhibit robust and stable generation and operation as well as a high intrinsic efficiency that, together with an extremely low soliton micro-comb spacing of 48.9 GHz, enables the use of a very high-order coherent data modulation format (64-QAM, quadrature amplitude modulation). This work demonstrates the capability of optical micro-combs to perform in demanding and practical optical communications networks.


Introduction



The global optical fibre network currently carries hundreds of terabits per second at any instant, with capacity growing at ~25% annually. To dramatically increase bandwidth capacity, ultrahigh-capacity transmission links employ massively parallel wavelength-division multiplexing (WDM) with coherent modulation formats and, in recent lab-based research, spatial-division multiplexing (SDM) over multicore or multi-mode fibre. At the same time, there is a strong trend towards a greater number of shorter high-capacity links. Whereas core long-haul communications (spanning thousands of km) dominated global networks 10 years ago, the emphasis has now squarely shifted to metro-area networks (linking across tens to hundreds of km) and even data centres (<10 km). All of this is driving the need for increasingly compact, low-cost and energy-efficient solutions, with photonic integrated circuits emerging as the most viable approach. The optical source is central to every link and, as such, perhaps has the greatest need for integration. The ability to supply all wavelengths with a single, compact integrated chip, replacing many parallel lasers, offers the greatest benefit.

Micro-combs, optical frequency combs based on micro-cavity resonators, have shown significant promise in fulfilling this role. They offer the full potential of their bulk counterparts, but in an integrated footprint. The discovery of temporal soliton states (DKS—dissipative Kerr solitons) as a means of mode-locking micro-combs has enabled breakthroughs in many fields including spectroscopy, microwave photonics, frequency synthesis, optical ranging, quantum sources, metrology and more. One of their most-promising applications has been optical fibre communications, where they have enabled massively parallel ultrahigh capacity multiplexed data transmission.

The success of micro-combs has been enabled by the ability to phase-lock, or mode-lock, their comb lines. This, in turn, has resulted from exploring novel oscillation states such as temporal soliton states, including feedback-stabilised Kerr combs, dark solitons and DKS. DKS states, in particular, have enabled transmission rates of 30 Tb/s for a single device and 55 Tb/s by combining two devices, using the full C and L telecommunication bands. In particular, for practical systems, achieving a high spectral efficiency is critically important—it is a key parameter as it determines the fundamental limit of data-carrying capacity for a given optical communications bandwidth.

Recently, a powerful class of micro-comb termed soliton crystals was reported, and devices realised in a CMOS (complementary metal-oxide semiconductor) compatible platform have proven highly successful as the basis for microwave and RF photonic devices. Soliton crystals were so named because of their crystal-like profile in the angular domain of tightly packed, self-localised pulses within micro-ring resonators (MRRs). They form naturally in micro-cavities with appropriate mode-crossings, without the need for the complex dynamic pumping and stabilisation schemes required to generate self-localised DKS waves (described by the Lugiato-Lefever equation). The key to their stability lies in their intracavity power, which is very close to that of spatiotemporal chaotic states. Hence, when emerging from chaotic states there is very little change in intracavity power, and thus no thermal detuning or instability resulting from the 'soliton step' that otherwise makes resonant pumping more challenging. It is this combination of intrinsic stability (without the need for external aid), ease of generation and overall efficiency that makes them highly suited to demanding applications such as ultrahigh-capacity transmission beyond a terabit per second.

Here, we report ultrahigh-bandwidth optical data transmission across standard fibre with a single integrated chip source. We employ soliton crystals realised in a CMOS-compatible platform to achieve a data line rate of 44.2 Tb/s from a single source, along with a high spectral efficiency of 10.4 bits/s/Hz. We accomplish these results through the use of a high-order modulation format, 64-QAM (quadrature amplitude modulation), a low comb spacing (free spectral range, FSR) of 48.9 GHz, and by using only the telecommunications C-band. We demonstrate transmission over 75 km of fibre in the laboratory as well as in a field trial over an installed network in the greater metropolitan area of Melbourne, Australia. Our results stem from the soliton crystal's extremely robust and stable operation and generation as well as its much higher intrinsic efficiency, all of which are enabled by an integrated CMOS-compatible platform.
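
As a rough sanity check, the short calculation below relates the headline figures to one another: dividing the reported line rate by the reported spectral efficiency implies the occupied optical bandwidth, and dividing that by the comb spacing implies the number of modulated comb lines. The derived values are our own inferences for illustration, not numbers quoted from the paper.

```python
# Quick consistency check relating the reported figures to each other.
# Inputs come from the text above; the derived quantities are rough
# implications, not values stated in the paper.

line_rate_tbps = 44.2        # reported aggregate line rate (Tb/s)
spectral_eff = 10.4          # reported spectral efficiency (b/s/Hz)
comb_spacing_ghz = 48.9      # reported soliton-crystal comb spacing (GHz)

occupied_bw_thz = line_rate_tbps / spectral_eff          # (Tb/s)/(b/s/Hz) = THz
implied_channels = occupied_bw_thz * 1e3 / comb_spacing_ghz

print(f"Implied occupied bandwidth ≈ {occupied_bw_thz:.2f} THz")
print(f"Implied number of comb lines ≈ {implied_channels:.0f}")
# -> about 4.3 THz and roughly 87 comb lines, which fits comfortably inside
#    the telecommunications C-band (~4.4 THz wide).
```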