Detecting faint communications signals using quantum physics

The incoming signal (red, lower left) proceeds through a beam splitter to the photon detector, which has an attached time register (top right). The receiver sends the reference beam to the beam splitter to cancel the incoming pulse so that no light is detected. If even one photon is detected, it means that the receiver used an incorrect reference beam, which needs to be adjusted. The receiver uses the exact times of photon detection to arrive at the right adjustment with fewer guesses. The combination of recorded detection times and the history of reference beam frequencies is used to find the frequency of the incoming signal. Credit: NIST

Researchers at the National Institute of Standards and Technology (NIST) have demonstrated a system that could dramatically increase the performance of communications networks while enabling record-low error rates in detecting even the faintest of signals, potentially decreasing the total amount of energy required for state-of-the-art networks by a factor of 10 to 100.

The proof-of-principle system consists of a novel receiver and corresponding signal-processing technique that, unlike the methods used in today’s networks, are entirely based on the properties of quantum physics and thereby capable of handling even extremely weak signals with pulses that carry many bits of data.

The NIST team’s system can eliminate the need for amplifiers because it reliably processes even extremely feeble signal pulses: with the help of quantum measurement, even faint laser pulses can be used to communicate multiple bits of information.
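
To put "multiple bits of information" in perspective: a pulse that can be prepared in one of M distinguishable states carries log2(M) bits, no matter how few photons it contains. The specific encoding below (M possible pulse frequencies) is our illustrative assumption, not a detail taken from the paper.

```python
import math

# Illustrative arithmetic only; the M-ary frequency encoding here is an
# assumption, not taken from the NIST paper. A pulse prepared in one of
# M distinguishable states carries log2(M) bits, however faint it is.
for M in (2, 4, 16, 64):
    print(f"M = {M:2d} states per pulse -> {math.log2(M):.0f} bits/pulse")
```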

The researchers created an input signal of faint laser pulses comparable to a substantially attenuated conventional network signal, with the average number of photons per pulse ranging from 0.5 to 20 (though photons are whole particles, an average below one simply means that some pulses contain no photons). They took advantage of the signal’s wavelike properties, such as interference, until it finally hit the detector as photons.
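
Because laser light is well described by coherent states, the photon number in each pulse is Poisson-distributed, so an average of 0.5 photons per pulse implies that about 61% of pulses (a fraction e^-0.5) contain no photon at all. A minimal simulation of those statistics, assuming ideal Poissonian pulses (this code is illustrative, not the team’s):

```python
import numpy as np

# Minimal sketch: photon counts in coherent (laser) pulses are Poisson
# distributed, so a mean photon number below 1 just means many pulses
# are empty. Compare the simulated empty-pulse fraction with exp(-n).
rng = np.random.default_rng(seed=0)

for mean_photons in (0.5, 2.0, 20.0):
    counts = rng.poisson(lam=mean_photons, size=100_000)
    empty = np.mean(counts == 0)
    print(f"mean n = {mean_photons:4.1f}: P(no photon) ~ {empty:.3f} "
          f"(theory: {np.exp(-mean_photons):.3f})")
```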

Inside the receiver, the input signal’s pulse train combines (interferes) with a separate, adjustable reference laser beam, which controls the frequency and phase of the combined light stream. Because it is extremely difficult to read the different encoded states in such a faint signal, the NIST system is designed to measure the properties of the whole signal pulse by trying to match the properties of the reference laser to it exactly. The researchers achieve this through a series of successive measurements of the signal, each of which increases the probability of an accurate match.

That is done by adjusting the frequency and phase of the reference pulse so that it interferes destructively with the signal when the two are combined at the beam splitter, canceling the signal out completely so that no photons can be detected. In this scheme, shot noise is not a factor: total cancellation has no uncertainty.
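
To make the cancellation condition concrete, here is a minimal classical-field sketch (our model, not the NIST implementation): a signal and reference of equal amplitude cancel exactly when their frequencies match, so the expected number of photons leaking to the detector is strictly zero, while any detuning leaves residual light.

```python
import numpy as np

def expected_leaked_photons(n_bar, df, T=1.0, steps=2000):
    """Mean photon number passing the cancellation for a detuning df.

    n_bar: mean photons per pulse; df: frequency mismatch (units of 1/T);
    T: pulse duration. Signal and reference are modeled as classical
    fields of equal amplitude; only their frequencies differ.
    """
    t = np.linspace(0.0, T, steps, endpoint=False) + T / (2 * steps)
    amp = np.sqrt(n_bar / T)                  # |amp|**2 * T = n_bar
    residual = amp * (1.0 - np.exp(2j * np.pi * df * t))  # signal - reference
    return float(np.sum(np.abs(residual) ** 2) * (T / steps))

for df in (0.0, 0.1, 0.5, 1.0):               # detuning in units of 1/T
    print(f"detuning {df:.1f}/T -> mean leaked photons = "
          f"{expected_leaked_photons(5.0, df):.3f}")
```

With df = 0 the output is exactly 0.000 leaked photons, which is why a single detected photon is unambiguous evidence that the reference is wrong.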

Thus, counterintuitively, a perfectly accurate measurement results in no photon reaching the detector. If the reference pulse has the wrong frequency, a photon can reach the detector. The receiver uses the time of that photon detection to predict the most probable signal frequency and adjusts the frequency of the reference pulse accordingly. If that prediction is still incorrect, the detection time of the next photon results in a more accurate prediction based on both photon detection times, and so on. (Phys.org)
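
A rough sketch of that adaptive loop, with simplifying assumptions of our own (time-binned detection, a discrete grid of candidate frequencies, and a Bayesian posterior update; the published algorithm may differ in all of these details):

```python
import numpy as np

# Toy model of the adaptive receiver: aim the reference at the current
# best frequency guess, watch for leaked photons, and use the *times*
# of any clicks to update a posterior over candidate signal frequencies.
rng = np.random.default_rng(1)

T, BINS = 1.0, 200                        # pulse duration, time bins
t = (np.arange(BINS) + 0.5) * T / BINS    # bin-center times
dt = T / BINS
candidates = np.linspace(0.0, 4.0, 81)    # candidate frequencies (1/T)
true_f = 2.35                             # unknown signal frequency
n_bar = 5.0                               # mean photons per pulse

def leak_rate(delta_f):
    """Residual photon rate vs time for a frequency mismatch delta_f."""
    phase = 2.0 * np.pi * np.multiply.outer(delta_f, t)
    return (2.0 * n_bar / T) * (1.0 - np.cos(phase))

posterior = np.full(candidates.size, 1.0 / candidates.size)
for pulse in range(30):
    f_ref = candidates[np.argmax(posterior)]          # best current guess
    # Simulate clicks: a photon is likeliest where cancellation fails most.
    p_true = 1.0 - np.exp(-leak_rate(true_f - f_ref) * dt)
    clicks = rng.random(BINS) < p_true
    # Bayesian update: likelihood of this click pattern for each candidate.
    p_cand = 1.0 - np.exp(-leak_rate(candidates - f_ref) * dt)
    posterior *= np.where(clicks, p_cand, 1.0 - p_cand).prod(axis=1)
    posterior /= posterior.sum()

print(f"true frequency: {true_f}, "
      f"estimate: {candidates[np.argmax(posterior)]:.2f}")
```

Each click multiplies the posterior by how probable a detection at that instant would be under every candidate frequency, so the exact detection times, not just the photon count, drive the convergence.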

The paper has been published in PRX Quantum.
