## Demystifying the Data Science Behind 2022’s Physics Nobel Prize

Statistics is a core pillar of data science, yet its assumptions are not always fully tested. This is exacerbated by the rise of quantum computing, where even statistical axioms can be violated. In this article, we explore just how quantum physics breaks statistics, and uncover ways to understand it using data science analogies.

Let’s play a coin-toss game: toss three coins, and try to have all of them land differently. It is a seemingly impossible task, because no matter how rigged a coin is, it can only have two sides. There simply aren’t enough possibilities for all three tosses to land differently.

Yet, with the power of quantum physics, such an impossible feat can be achieved statistically: three coin tosses can all land differently. And the reward for winning? 2022’s Nobel Prize in Physics, which was awarded to Alain Aspect, John Clauser, and Anton Zeilinger on October 4, 2022.

According to nobelprize.org, their achievements were

“for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.”

This sentence is crammed with jargon: *entangled photons*, *Bell inequalities*, and *quantum information science*. We need a simpler, plain-English description of such an important feat. Here’s a translation:

Scientists showed that our statistical view of the world is flawed, by demonstrating that quantum physics can defy seemingly impossible odds.

The details of these impossible odds are captured by mathematical formulae called *Bell inequalities*. Instead of flipping coins, researchers demonstrated these impossible odds by playing with lasers (using beams of *entangled photons*).

How is this relevant to data science? Since our quantum mechanical world is the ultimate source of data, flaws in our statistical laws could disrupt the very foundation of data science. If statistics is indeed incomplete, we wouldn’t be able to trust conclusions derived from it.

Fortunately, in our Universe, these statistical flaws tend to be very tiny and negligible. Nevertheless, it is important to understand how classical statistics must be modified, as data science in the distant future may need to incorporate these flaws (e.g., in quantum computers).

Before answering how quantum physics defies the laws of statistics, we first need to understand how statistics works as an effective description of our world.

Flip a coin and you get heads or tails. Yet coins aren’t truly random: a robot with perfect control can seriously rig a coin toss.

What does a 50/50 probability mean? A coin’s orientation is very sensitive to the minute details of its surroundings. This makes it difficult to predict a coin’s landing orientation. So instead of solving very complicated equations to come up with a deterministic outcome, we opt for a nondeterministic one. How? We observe that typical coins are fairly symmetric with respect to heads/tails. In the absence of any particular bias, 50/50 odds are an excellent approximation (although studies have shown these odds can be altered, e.g., Clark MP et al.).
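As a quick illustration of treating a coin as a 50/50 approximation, here is a minimal Python simulation (a sketch of the idealized model, not tied to any real coin's physics):

```python
import random

def empirical_heads_rate(n_flips: int, seed: int = 0) -> float:
    """Estimate P(heads) for an idealized fair coin by simulation."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

rate = empirical_heads_rate(100_000)
print(round(rate, 2))  # close to 0.5
```

The complicated dynamics of a real toss are replaced by a single number, 0.5, and repeated trials converge to it.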

To summarize,

Probabilities are approximations that model the details of a complex system. Complicated physics is traded for uncertainty in order to simplify the math.

From weather patterns to economics and healthcare, uncertainties can be traced back to complex dynamics. Mathematicians have converted these approximations into rigorous theorems based on axioms, to help us manipulate and derive insights from unpredictable outcomes.

How does quantum physics break the laws of statistics? It violates the *Additivity Axiom*.

How does this axiom work? Let’s consider some common scenarios where we use statistics to make decisions:

- When it’s rainy 🌧 outside, we bring an umbrella ☔️.
- When we get sick, doctors prescribe medications 💊 to help us recover.

In the rainy scenario, while there could be trillions of ways raindrops could fall, the vast majority of these possibilities make us wet and cold, so we bring an umbrella.

In the doctor scenario, there are multiple possibilities given a diagnosis: different disease progressions, side effects, recovery rates, quality of life, and even misdiagnosis, etc. We pick the treatment that leads to the best overall outcome.

The Additivity Axiom is the formalized statement that we can break a probability down into possibilities: for mutually exclusive outcomes *A* and *B*,

P(*A* or *B*) = P(*A*) + P(*B*)

This axiom makes sense because statistics was created to quantify our ignorance of a system. Just as we assign 50/50 odds to a coin flip, we use the Additivity Axiom to derive properties of a system by averaging over all the possible trajectories of its constituents.
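A minimal sketch of the axiom in action, using exact fractions: the probability of an event is the sum of the probabilities of the disjoint outcomes that realize it.

```python
from itertools import product
from fractions import Fraction

# All 4 outcomes of two fair coin flips, each with probability 1/4.
outcomes = {o: Fraction(1, 4) for o in product("HT", repeat=2)}

# Additivity: P("at least one heads") is the sum over the
# mutually exclusive outcomes that contain an H.
p_event = sum(p for o, p in outcomes.items() if "H" in o)
print(p_event)  # 3/4
```

This is exactly the bookkeeping we do implicitly in the umbrella and doctor scenarios: enumerate the possibilities, add up their weights.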

While all this sounds intuitive, is it really how nature works? Through experiments, we can confirm that macroscopic objects work this way, but what happens when we zoom in on the microscopic? Is it the same as the macroscopic world, with subatomic actors moving from one scene to the next? Or is it more like a movie screen, where abstract pixels blink on and off, creating the illusion of a story?

It turns out the pixel analogy is more accurate. The distinct paths of possibilities become more ill-defined as we zoom in. As a consequence, the Additivity Axiom is violated.

What’s the replacement for our axiom? The laws of quantum physics.

While quantum physics is quite complicated, we can understand its gist through data science analogies. Quantum physics is based on linear algebra, and can thus be thought of as a special ML model.

Below are the key quantum axioms, linked to ML analogies:

- The world is described by a giant list of (complex) numbers, called a *quantum state*, analogous to the pixel values of an image, or the more abstract embedding vectors in ML.
- As time goes on, this quantum state changes. The update can be computed by passing our quantum state through a neural-network-like function, called an *operator* (technically, a unitary matrix):
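To make the analogy concrete, here is a toy sketch (the 2×2 Hadamard matrix is my illustrative choice of operator, not something from the prize-winning experiments): a quantum state is just a vector of complex amplitudes, and an operator updates it like a single linear network layer.

```python
import math

# A toy "one-coin" quantum state: two complex amplitudes.
state = [1 + 0j, 0 + 0j]  # definitely "heads"

# A unitary operator (here, the Hadamard matrix) plays the role
# of one linear network layer updating the state.
h = 1 / math.sqrt(2)
H = [[h, h],
     [h, -h]]

def apply(op, psi):
    """One 'layer': a matrix-vector multiply, as in a linear NN layer."""
    return [sum(op[i][j] * psi[j] for j in range(len(psi)))
            for i in range(len(op))]

new_state = apply(H, state)
norm = sum(abs(a) ** 2 for a in new_state)
print([round(abs(a) ** 2, 2) for a in new_state], round(norm, 2))
# the state now carries equal 50/50 weights; total probability stays 1
```

Unitarity is what keeps the total probability at 1 after every "layer", no matter how many updates are chained.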

Continuing our ML analogy, we can think of the Universe as a giant neural network. Each operator represents a (linear) network layer. Through this network, every interaction that has ever occurred has been imprinted onto the quantum state of our Universe. Without pause, this computation has been running continuously since the beginning of time. This is a profound way of viewing our world:

Our coherent reality emerges from isolated groupings in our quantum state.

Our macroscopic sense of an object’s existence emerges from the particular neural-network linkages of our operators.

This all sounds a bit abstract, so let’s consider an explicit example: how does quantum physics describe raindrops falling on our heads?

- The data of the air molecules, and of us standing in the open, are captured in a quantum state.
- As water molecules feel the Earth’s gravity, the quantum state gets updated by the corresponding operators.
- After going through many layers of this neural-network-like update, the quantum state picks up particular numerical values.
- The laws of physics dictate that these numbers tend to form clusters. Some of these clusters translate into a consistent existence for the raindrops, which ultimately links to our neurons feeling them.

In this modern viewpoint, there is no reason why the Additivity Axiom should hold. Because

Much like an ML black box, it is not always possible to trace all the physical properties of a quantum state. Therefore, a physical outcome doesn’t always come with a list of intermediate possibilities.

In the raindrop scenario, this means we can’t always find the specific numbers in the quantum state that lead to a particular water molecule falling. In fact, the quantum state generally contains data about the molecule in multiple locations (e.g., superpositions), and our perception of its physical location could be a complicated sum of all these data.

This may seem paradoxical, as we do not sense weird discrepancies and superpositions in our daily lives at all! The reason is that these discrepancies are tiny, and their tininess can be proved using the technical theory of decoherence, which is well beyond our scope (although here is one of my articles that may help shed some light).

Still, being tiny isn’t the same as being zero. Quantum effects can at times be significant, and they can lead to seemingly impossible statistics.

How? Let’s find out.

In order to invalidate the ordinary laws of statistics, we need to consider simple but impossible scenarios. The simplest involves 3 coins.

Imagine 3 robots performing 3 separate coin tosses. In classical statistics, we can use the Additivity Axiom to fully specify the statistics: by listing all 8 outcomes and their probabilities (note: the robots/coins could be rigged):

Experimentally, we can measure these probabilities by repeating the coin tosses.

Regardless of the choice of probabilities, there’s a sanity constraint: a coin only has 1+1 = 2 sides, so when we flip 3 coins, at least 2 of them are bound to land the same. So if we randomly (uniformly) pick one pair of coins to observe, we should expect at least a 1/3 probability of observing that they are equal.

Let’s check some examples, labeling the three coins *A*, *B*, *C*:

- If all 3 coins are fair and independent, then the chance that we pick an equal pair is 1/2.
- If *A* = *B* but *A* ≠ *C*: regardless of how *A* is tossed, there is only one equal pair. The chance of picking this pair is 1/3.

We see that the same-pair probability is always at least 1/3. This can be summarized into a *Bell inequality* (following this paper by L. Maccone):

P(*A* = *B*) + P(*A* = *C*) + P(*B* = *C*) ≥ 1
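We can sanity-check this bound numerically. In the sketch below, random probability assignments over the 8 outcomes stand in for arbitrarily rigged robots; for every one of them, the three same-pair probabilities sum to at least 1.

```python
import random
from itertools import product

def same_pair_sum(probs):
    """P(A=B) + P(A=C) + P(B=C) for a distribution over the 8 outcomes."""
    total = 0.0
    for (a, b, c), p in probs.items():
        total += p * ((a == b) + (a == c) + (b == c))
    return total

rng = random.Random(42)
outcomes = list(product("HT", repeat=3))
for _ in range(1000):
    # A random (possibly heavily rigged) classical distribution.
    weights = [rng.random() for _ in outcomes]
    s = sum(weights)
    probs = {o: w / s for o, w in zip(outcomes, weights)}
    # The Bell inequality holds for every classical assignment.
    assert same_pair_sum(probs) >= 1 - 1e-9
print("classical bound holds")
```

The reason is pure pigeonhole logic: every single outcome of 3 two-sided coins already contains at least one equal pair, so the weighted sum can never dip below 1.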

While it may seem ridiculous to test something so obvious, it turns out that this inequality can in fact be *violated*, a testament that it is not so obvious after all.

In order to observe a violation of the Bell inequality, physicists can’t just rely on conventional coins. Instead, they utilize quantum coins made from lasers, which have all the ingredients for coin tosses:

- Flipping a coin: sending a laser down a beam
- Observing heads/tails: getting a reading on one of two detectors*
- Randomness: readings are generally unpredictable unless manipulated

(*There could be faulty readings if no detector observes anything.)

Now, we can set up the lasers in different orientations to mimic 3 different coin tosses. So how exactly can quantum coins manage the impossible? If we observe the literal results of 3 coin tosses, seeing three different outcomes is logically impossible.

This is where our Bell inequality comes in: it breaks a logical statement about 3 coins down into a probability statement that involves only 2 coins per term. So if we toss 3 coins but only observe 2 at a time, then it is possible to violate statistical laws while preserving logic. In quantum physics, tossing a coin vs. observing a coin involves two distinct interactions:

**Quantum**: tossing a coin and observing it are governed by two different operators. A coin toss that hasn’t been observed yet doesn’t need to be assigned a definitive outcome*.

This is in contrast with classical statistics:

**Classical**: heads/tails are determined when the coins are tossed. This is guaranteed by the Additivity Axiom. It doesn’t matter whether we decide to observe it or not.

(*This is where “spooky action-at-a-distance” comes in, since at any moment anyone can activate a detector to observe the third coin and break our results.)

How do we perform our experiment then? We need to prepare our coins in a specific quantum state. Here, we cook up a system where the three coins’ quantum state can be denoted by three vectors on a plane, like the ones shown below*:

(*Technically the quantum state involves more complicated entangled photons, but we’ll skip the details for brevity.)

What’s the probability that two coin tosses yield the same result? The answer comes from physics, and is engineered to be the cosine similarity squared: P(same) = cos²(θ), where θ is the angle between the two coins’ vectors.

Now, if we randomly pick a pair of quantum coins to observe*, there is only a 1/4 probability that they will be the same; this is lower than the logical 1/3 guarantee!

(*The experiment must be set up so that this choice is made after the coins have been tossed, so that one can rule out spooky collusion between the particles and the apparatus.)
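We can verify the arithmetic with a short sketch, assuming the cos² rule and three settings 120° apart (the specific angles are an illustrative choice that reproduces the 1/4 figure):

```python
import math

# Three coin settings as unit vectors on a plane, 120 degrees apart.
angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]

def p_same(theta_i, theta_j):
    """Assumed quantum rule: P(equal) = cos^2 of the angle between settings."""
    return math.cos(theta_i - theta_j) ** 2

pairs = [(0, 1), (0, 2), (1, 2)]
p_values = [p_same(angles[i], angles[j]) for i, j in pairs]
print([round(p, 2) for p in p_values], round(sum(p_values), 2))
# each pair matches with probability 0.25; the sum 0.75 falls below
# the classical bound of 1
```

Since cos(120°) = −1/2, every pair matches with probability (−1/2)² = 1/4, and the three terms sum to 3/4.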

Rephrasing this in terms of our Bell inequality, we have

P(*A* = *B*) + P(*A* = *C*) + P(*B* = *C*) = 1/4 + 1/4 + 1/4 = 3/4 < 1

Our sanity check is violated! If we pretend that classical statistics still applies, this would imply that at least 1/4 of the time, all three coin tosses land differently!

Note that while our three-coin experiment is easy to understand, there are experimental difficulties and potential loopholes in its results. Thus, typical experiments tend to involve more coin tosses and more convoluted observations (e.g., the GHZ experiment by Jian-Wei Pan et al.).

So, we see that quantum probabilities sometimes lead to unexpected results. What’s the big deal, and why should we care?

First, let’s start with the practical. As technology pushes toward packing more computational power into a smaller size, quantum physics will become more important. Eventually, our computational paradigms will need to be overhauled in order to take full advantage of quantum devices. So while violations of Bell inequalities may be subtle, they signal that we need to think carefully when designing quantum algorithms.

Second, these violations expose a fundamental limit on conventional statistical reasoning. For instance, if someone wins the lottery, it’s perfectly reasonable to attribute the cause to the lottery balls coming out in a particular way. Nevertheless, we cannot zoom in and causally link the lottery win to the (quantum) state of all the molecules in the room. So our statistical theory of causal inference has a physical limit!

Lastly, quantum effects challenge us to rethink our Universe. While quantum physics has been validated repeatedly, it could still be just an approximation. In the future, we may yet discover its successor in even more abstract fundamental laws.

As a historical lesson, even Einstein was put off by quantum physics’s weirdness, so much so that he rejected it by proclaiming “God doesn’t play dice”. Yet quantum physics continued to triumph and was fundamental in advancing much of our modern technology and understanding of the world (see my article).

In summary, quantum physics rules the world, and 2022’s Physics Nobel highlights its deep connection to statistics and data science. While quantum physics isn’t commonly taught, we should all strive to understand and embrace its significance.