Welcome to xINvisionQ's blog!
Throughout human history there have been a few moments when our entire perspective on the very nature of reality changed. The Copernican revolution gave us a home, albeit one that put us on a speck orbiting the Sun, and not the other way around, in the vast cosmos. The Darwinian revolution gave us a family, telling us that our chimpanzee relatives and every living species on our planet share a common ancestor with us. One of the more recent, and more important, revolutionary paradigm shifts now celebrates its centenary, in what the United Nations has dubbed the International Year of Quantum Science and Technology.
While the quantum story starts in 1900, when Max Planck discovered that energy is released in discrete packets called quanta, the quantum "revolution" itself, if we were to pinpoint one place and time as the "true" inauguration of quantum mechanics, began in the wee hours of a June morning in 1925. Werner Heisenberg, an aspiring young physicist, relatively unknown and only 23 at the time, climbed, as the story goes, atop a boulder on the tiny 1 km² island of Heligoland off the northern coast of Germany and glimpsed what we now know as matrix mechanics: the first formal mathematical formulation of a physics that had started off messy and taken a quarter century to mature, one that completely disrupted our view of the world and brought to light how Mother Nature seems to play dice with the universe as we know it.
With Heisenberg's formulation of matrix mechanics, a whole spectrum of unresolved "questions" was laid bare. A cat that can be dead and alive at the same time. Is something a wave, or is it a particle? How can it be both? A "particle" (an electron) that can go through two slits at once. Two "particles" a universe apart that somehow "know" each other's state, leading to "spooky action at a distance." "God plays dice with the universe." It is we the observers, or more precisely how we measure something, that causes a collapse to one state or another. An electron that can tunnel through walls. And if you want to know the precise location of an electron, you won't be able to know its momentum.
Our grasp of quantum mechanics has given rise to unimaginable technologies: the lasers that store our precious memories on DVDs and blaze information through the fiber-optic cables of the Internet, the microchips that power our smartphones and laptops, the MRI machines that generate non-invasive images of the human body, the clocks behind our GPS signals, and the emerging vision of quantum computing. We indeed owe a lot to quantum mechanics. But we are still quite some way from the ultimate pinnacle on the quest to truly understand its scientific and philosophical implications. It isn't rocket science; it's quantum mechanics.
Ever since Isaac Newton formulated classical mechanics, and right up until quantum mechanics arrived on the scene, classical mechanics did the job quite well, if not perfectly. Newton told us that the planets orbit the Sun in the same way an apple falls from a tree: given the initial conditions of something, we can calculate with essentially perfect accuracy where it will end up. But in quantum mechanics, things aren't quite that simple. In classical mechanics, if you're sloppy, or if you have a broken-down ruler, you might get slightly different readings once or twice, but that doesn't change the length or height of whatever you're measuring. When you measure your height you might get 6'1", your uncle Joe might only get a reading of 5'10", and when you go to your doctor for a physical they might tell you that you're in fact 5'11", but none of that physically changes how tall you are; you're 5'11" no matter what. According to quantum mechanics, though, as a metaphor, you could be 6'3" one time, 5'2" another, and maybe only 1 inch tall some other time, because it is the act of measuring that causes your "height" to be determined. This is the biggest difference between deterministic, elegant classical mechanics and mind-boggling, chaotic quantum mechanics. The root of it is that, as of now, all measuring devices are macroscopic in size and follow the laws of classical mechanics. Classical objects being measured are on the same scale as the measuring instrument, so the instrument doesn't "disturb" the object being measured. But for objects at the quantum scale, the measuring instrument does "disturb" the object being measured, which makes the result uncertain. And this is precisely the problem with quantum mechanics: where is the boundary between classical and quantum? When does something act in accordance with the classical mechanics Newton formulated at the macro level, and when does it start to be governed by the microscopic quantum laws? This question has puzzled physicists for the past 100 years, and it will very likely continue to do so.
Over the course of the first century of quantum mechanics, many have tended to overlook Richard Feynman's remark: "I think I can safely say that nobody understands quantum mechanics." Well, it's safe to say that after 100 years, still no one really understands quantum mechanics. And many have probably forgotten the heated debates between Bohr and Einstein over the fabric of reality, where Einstein stubbornly insisted that "God" does not play dice with the universe and Bohr retorted that we shouldn't tell "God" what to do and should simply "give in" to nature and its complementary character. The main reason the debate over what quantum mechanics fundamentally is has remained so heated to this very day is that quantum mechanics is essentially counterintuitive; and it is precisely this bizarreness that has given rise to many different, if not even more head-scratching, interpretations, with three major ones: Copenhagen, many-worlds, and QBism.
The Copenhagen interpretation, pioneered by Niels Bohr and Werner Heisenberg, is the major interpretation of quantum mechanics still widely taught in textbooks. Essentially, Copenhagen is synonymous with uncertainty, pairing Bohr's correspondence principle with Heisenberg's uncertainty principle. Copenhagen's major insistence is that something is only definite once it's measured; before it's measured, even though it can be perfectly described by the Schrödinger equation, it exists in all of its possible states at once, and no one can be sure exactly which one it is "really" in; only after it is measured does it take on one definite state. With Bohr and Heisenberg firmly in the Copenhagen camp, the interpretation tells us that we should not be discussing whether something actually exists, but how to describe it once we can observe it. As for Schrödinger's cat, a thought experiment Schrödinger originally formulated to show that quantum mechanics is incomplete and as a rebuttal to Copenhagen, the Copenhagen view nonetheless says the cat is "dead and alive simultaneously" until someone opens the box and looks.
The many-worlds interpretation is the most sci-fi-friendly interpretation and has been the subject of many movies and TV shows. Many-worlds, or parallel worlds, leaves more to the imagination: you're hard at work here in this world while you might be relaxing on a beach in another. Its original proposer, Hugh Everett, did not intend for it to be a silver-screen hit, but the fundamental idea is that with every measurement the universe "splits" into parallel coexisting realities. Essentially, all the possible states of something exist across parallel worlds; when we observe something, we only see one possibility, while all the other possibilities cease to exist here yet continue in some form in a parallel time and space, one with which we cannot "communicate." Along the lines of this logic, Schrödinger's poor cat is alive here in this world and has unfortunately met its demise in another.
Lastly, QBism, a radical interpretation in its own right, says that it is our subjective beliefs, prior and posterior, that help shape the very reality of our world. QBism states that quantum mysteriousness is just a reflection of our subjective understanding of, and interactions with, the subatomic quantum world, and it attempts to put the observer, the scientist, back into science. It is the beliefs we formulate, and the new information we constantly obtain to update those beliefs, that lead to whatever is being observed ending up in the state it is in, just as our feline friend locked in the box is either alive or dead.
Heisenberg is also known for his uncertainty principle, which tells us that the more precisely you know where a particle is, the less precisely you can know its momentum, and vice versa. From an information perspective, if we want to know more about one thing we have to "sacrifice" knowing about another, and though it is frustrating, nature tends never to give us complete information. As Heisenberg said, "What we observe is not nature itself, but nature exposed to our method of questioning." In the 100 years since, we have seen that Mother Nature does not seem to play "her" cards in conventional ways, and "God" does seem to play dice with the universe.
Now here's to the next 100 years: whether our descendants will still be asking if "God" really plays dice with the universe, or will have found an answer to whether we can play dice with "God." Maybe when we're long gone, the Einsteins, Bohrs, and Heisenbergs of that day will look back at us and ask: how in the world did our ancestors believe in quantum mechanics? But maybe, just maybe, nature itself doesn't "adapt" to anyone or anything; it is constantly evolving and has never been set in stone for anyone to find its deepest secrets, and in 100 years, 200 years, or 1000 years, it will still be humankind adapting to nature and not the other way around.
What do pollen grains floating randomly in a petri dish and stocks on the stock market have in common? At first glance the two may seem completely unrelated, and it might seem crazy to equate traders in suits crammed on the trading floor yelling about which stocks to buy with scientists in lab coats peering at pollen through microscopes. On a deeper look, however, the one thing the two share is their randomness, which is exactly what struck two of the greatest minds in science as they sought a way to describe the random motion of pollen in a dish and of stock prices on the market.
This phenomenon of randomness has come to be known as Brownian motion, named after the Scottish botanist Robert Brown, who in 1827 described the way pollen grains jiggle about randomly when viewed under a microscope. He did not, however, explain what causes this random movement. It wouldn't be until around 80 years later that Brown's "accidental" discovery would lead to the formulation of our understanding of both the atomic world and the stock market; one could even call Brownian motion the "first" hypothesis to unify the natural and social sciences. The credit goes to two then-young Europeans, Louis Bachelier and Albert Einstein.
Einstein's original mission was to "prove" that there are molecules and atoms, far too small to see, that continually bump and crash into the pollen grains and cause them to drift randomly through the liquid, and in doing so to overturn the long-held belief that nothing exists beyond what we can see. Bachelier, on the other hand, saw the randomness and immediately drew parallels with stocks fluctuating on the stock market: he used the concept of Brownian motion to describe how prices on the Paris Stock Exchange fluctuate randomly, just like pollen in a dish, in his dissertation The Theory of Speculation, written in 1900, five years before Einstein's seminal paper. That is exactly why any social scientist will humbly point out that Bachelier was the first to describe Brownian motion, yet most only remember Einstein: partly because he put forth a more systematic mathematical treatment, but more so because his name is synonymous with genius.
Regardless of which mission Brownian motion was originally applied to, in the simplest terms it states that for any particle immersed in a sea of surrounding molecules, those molecules will collide with the particle in question (the pollen, as originally observed), and the precise location where the particle will end up cannot be calculated exactly; only a probability of where it might be can. Brownian motion tells us that we cannot calculate the exact trajectory along which the pollen will diffuse, unlike the way we can for a free-falling apple.
Thus, this is where Einstein came in and gave a radical new interpretation by obtaining a new partial differential equation, the diffusion equation:

∂f/∂t = D ∂²f/∂x²

where D is the diffusion coefficient and f is the density of particles at a given position and time.
Using this diffusion equation, we can calculate the probability of where a pollen particle will end up after a given time. The reason we cannot calculate a precise trajectory is that there are so many molecules surrounding the pollen, bumping into it from all directions, that it is practically impossible to work out which molecule will hit it from where; more importantly, with so many molecules we cannot know the initial conditions of each and every one. Theoretically, if someone one day built a supercomputer that could track the exact position and velocity of every molecule around the pollen, you could find exactly where the pollen will end up. But that is not the main point here. The point is that "the displacement of the suspended particles can then be described by a probability distribution that determines the number of particles displaced by a certain distance in each time interval" (Ann. Phys. (Leipzig) 14, Supplement, 23–37 (2005)), given that the motions of the particles are mutually independent of each other.
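To make this concrete, here is a minimal simulation sketch (not from the original papers; the diffusion coefficient, step size, and particle count are arbitrary choices): many independently jostled particles spread out so that the variance of their displacement grows like 2Dt, which is exactly what the diffusion equation predicts.

```python
import numpy as np

# Minimal Brownian-motion sketch: many independent particles take random kicks;
# the spread of their displacements matches the diffusion equation's Gaussian
# solution, whose variance grows as 2*D*t. All parameters are illustrative.
rng = np.random.default_rng(0)

D = 0.5            # assumed diffusion coefficient
dt = 0.01          # time step
steps = 200        # total time t = steps * dt
particles = 50_000

# Each step is a zero-mean Gaussian kick with variance 2*D*dt.
kicks = rng.normal(0.0, np.sqrt(2 * D * dt), size=(particles, steps))
displacement = kicks.sum(axis=1)

t = steps * dt
print(f"empirical variance of displacement: {displacement.var():.4f}")
print(f"diffusion-equation prediction 2*D*t: {2 * D * t:.4f}")
```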
Like Einstein, Louis Bachelier treated Brownian motion as a stochastic process, one that is perfect for describing random fluctuations. Unlike Einstein, however, Bachelier didn't much care about proving that atoms were "bumping" the pollen around; instead, he saw the parallel between randomly drifting pollen and stocks "floating" up and down on the market, with the traders playing the role of the surrounding molecules that "bump" prices up and down through their buying and selling. Bachelier took this metaphor and formulated a stochastic model of how stocks fluctuate on the market just as pollen does in a dish. The partial differential equation does not let us find the absolute future price of a stock, but it does give a probability of what that price might be, or of whether it will go up or down. In doing so, Bachelier showed that a physical process such as Brownian motion can describe the stock market, and his formulation would lead to the best-known theory of the market today.
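As an illustration of that stochastic picture, here is a sketch in the spirit of Bachelier's model (not his original formulation; the starting price and volatility are invented): an arithmetic random walk for a stock price, where the most we can state about the future is a probability distribution.

```python
import numpy as np

# Bachelier-style sketch: model a stock price as an arithmetic random walk,
# so each day's price is the previous price plus a random "bump" from the
# crowd of traders. Starting price and volatility are made-up numbers.
rng = np.random.default_rng(1)

s0 = 100.0        # assumed starting price
sigma = 1.0       # assumed daily volatility of the bumps
days = 250
paths = 10_000

bumps = rng.normal(0.0, sigma, size=(paths, days))
prices = s0 + bumps.cumsum(axis=1)

# We can't say where any single path ends, only the probability distribution.
final = prices[:, -1]
print(f"mean final price: {final.mean():.2f}")
print(f"std of final price (~sigma*sqrt(days)): {final.std():.2f}")
print(f"P(price ends above {s0}): {(final > s0).mean():.2%}")
```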
Building on the idea that the market follows a random walk, Eugene Fama put forth the Efficient Market Hypothesis (EMH), which states that markets are informationally efficient and that all the information about a stock is already reflected in its current price, drawing on the implications of the Brownian-motion random-walk metaphor (assuming the market truly is in a random walk). Since the EMH presumes the market is completely random, it comes with the acknowledgement that no one can predict the market, because it is already "informationally efficient," and that stocks and traders are mutually independent of one another, just like the motions of the pollen grains. This invites some immediate counterpunches: first, the market is uncertain because of the traders involved; it is traders buying and selling that causes stocks to fluctuate. Second, these participating traders are living human beings, with emotions and often irrational impulses when trading. Two of the major opponents of the EMH are Robert Shiller, who argues from a mathematical and statistical standpoint that the market is not always completely random, and Daniel Kahneman, who argues from the traders' standpoint that traders tend to exhibit behaviors the efficient-market picture cannot explain.
Thus, for the time being, Brownian motion is the go-to explanation for anything random, be it pollen drifting in a dish, stocks fluctuating because of traders buying and selling, or a drunk person weaving home after a long night at the bar as the "beer" molecules they indulged in collide with the neurons in their brain. But Brownian motion also shows us that pure randomness does not equate to uncertainty, something we attempt to capture with our formulation of an uncertain market hypothesis: one that agrees with the Efficient Market Hypothesis and Brownian motion that the market is indeed random, but holds that it is, more importantly, uncertain, because of the "interactions" between the "entangled" traders and market.
By now it would be downright wrong to say that the market is not chaotic and random. Current economic theory agrees that the market is random from every angle, whether viewed from the market's or the traders' standpoint, with both viewpoints seeking the "invisible hand" that "guides" the market's randomness. Under the EMH, and the current economic framework as its extension, the stocks on the market are treated as the pollen floating randomly in the petri dish, while the traders are the molecules around the pollen, constantly colliding with it and causing it to drift without any set motion; if the market is in a random walk, the conclusion is that it is unpredictable. However, as critics of the EMH have pointed out, the market is not truly random. Robert Shiller has argued from a statistical standpoint that stocks don't fluctuate truly randomly, as can be seen in the "shocks" around the opening and closing bells and around breaking news. Daniel Kahneman argues from a cognitive-psychology standpoint that traders are not emotionless "molecules"; they are irrational and impulsive, and each trader's behavior is completely different.
However, regardless of the angle from which current economic theories analyze the market, they all run into two common problems. First, though they agree that the market and traders are inseparable rather than independent entities, they cannot find a way to describe both under one unified, formalized framework. Second, because they all depend on the Law of Large Numbers, they need repeated observations of events in order to produce an average, which in turn makes it impossible for them to produce single-event forecasts.
Thus, this naturally raises two questions:
1) Could we model both the equities and traders all together as a collective whole?
2) Could we produce single-event forecasts of the market?
We will attempt to answer the first in this part; in the next and final part we will address the second question.
Unlike the above-mentioned modern economic framework, where the market and traders are treated as randomly fluctuating entities that can only be modeled separately, with stock prices acting like pollen, traders like the molecules around it, and elegant differential and diffusion equations modeling their behavior, the elephant in the room is that the market and the traders are inseparably intertwined, or as we say, "entangled": it is the interactions between them that ultimately "determine" the randomness, chaos, and uncertainty symbolically known as the random walk. This is a dual uncertainty: the market's unpredictability "confuses" the traders, and the traders' "collective" actions cause the market to fluctuate. The question is how to model both the traders and the market together under one formalized framework. Well, we've come up with a rather subtle, shall we say ingenious, way to do so: by calling on the principles of quantum mechanics, not the actual physical sub-atomic formulation of it, but its mathematical linear algebra.
To do this we'll be using the first three of the five axioms of quantum mechanics. If you want, you could go pick up a textbook and try to decipher what they mean, but without a degree in quantum physics you might have a tough time. I'll save you the hassle and confusion and briefly go over them here.
The first three axioms are:
1. To every system corresponds a Hilbert Space H, whose vectors (state vectors, wave functions) completely describe the states of the system.
2. To every observable there corresponds a unique self-adjoint operator A acting in H.
3. If a system is in a state |Ψ⟩, expanded in the eigenstates of A as |Ψ⟩ = Σ_k c_k |a_k⟩, then a measurement of A yields the eigenvalue a_k with probability p(a_k) = |c_k|^2, and immediately after the measurement the state "collapses" to the corresponding eigenstate |a_k⟩.
What the three axioms mean in plain English is actually quite simple; it's not rocket science. The first axiom says that anything in the quantum world can be described as a vector in a Hilbert space, an abstract space that doesn't exist in our 3D world; it is purely mathematical, something we can't see or feel. This state, together with all the potential states it can be in, is denoted by the Greek letter Psi wrapped in Dirac notation, |Ψ⟩, which signals that this is a quantum state. The second axiom says that every observable quantity corresponds to an operator acting on these state vectors, and this is precisely one of the things that differentiates quantum mechanics from classical mechanics: in classical mechanics you can define the state of something and observe it directly, but in quantum mechanics the quantum state defined by axiom one cannot be observed directly; it can only be observed through its corresponding operator. This operator is Hermitian (self-adjoint), and its eigenvalues are the possible values of the corresponding physical property. The third axiom is the axiom of superposition and measurement: the uncertain state of a system can be expanded over the eigenstates it could be in, each with a certain chance, so all the possibilities are superposed; you have to measure it to know which exact state it's in, and once it is measured, the uncertain state "collapses" to one of the possible eigenstates, with the probability of landing in that state given by p(a_k) = |c_k|^2 (the absolute value of the coefficient, squared).
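A toy numerical illustration of the three axioms, with made-up numbers, in a two-dimensional Hilbert space:

```python
import numpy as np

# Axiom 1: the state is a (normalized) vector in a Hilbert space.
psi = np.array([0.6, 0.8])               # |Psi> = 0.6|e1> + 0.8|e2>

# Axiom 2: an observable is a Hermitian (self-adjoint) operator.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])              # eigenvalues +1 and -1 are the possible outcomes
assert np.allclose(A, A.conj().T)        # Hermitian check

# Axiom 3: expand |Psi> in A's eigenstates; each outcome occurs with |c_k|^2.
eigvals, eigvecs = np.linalg.eigh(A)
coeffs = eigvecs.conj().T @ psi
probs = np.abs(coeffs) ** 2
for val, p in zip(eigvals, probs):
    # prints: outcome -1 with probability 0.64, outcome +1 with probability 0.36
    print(f"outcome {val:+.0f} with probability {p:.2f}")
```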
With the basics of the axioms laid out, we can use them to model both the market, or more precisely the trend of an equity's price (up or down), and the actions the traders can take (buy or sell). Before we do so, it's important to note that we are not actually taking the physical closing prices of equities and the buy/sell decisions in traders' minds and "quantizing" them; we are merely utilizing the very nicely laid-out linear-algebraic formulation of quantum mechanics to model the many, effectively infinite, possibilities of the market and traders.
The price of an equity (stocks, futures, bonds, treasuries, currencies, etc.) at any given time will either increase or decrease (it will trend upwards or downwards), and at any given time a trader can choose to buy or sell; very conveniently, we can superpose the trend of the equity's price (increase or decrease) and the actions a trader can take (buy or sell), giving us:

|Q⟩ = c_1 |q_1⟩ + c_2 |q_2⟩
|A⟩ = μ_1 |a_1⟩ + μ_2 |a_2⟩

where |Q⟩ is the superposed state of the equity's price trend and |A⟩ is the superposed state of the actions that can be taken by the traders. This superposition of states is very naturally convenient: an equity's price can only trend upwards or downwards, while a trader can only take the two actions of buying or selling, and these states are orthogonal, meaning they are mutually exclusive and only one can happen at a time. The trend of the equity's price is represented by |Q⟩, where |q_1⟩ is an increase and |q_2⟩ a decrease; c_1 and c_2 are the coefficients (the actual probability is the absolute value of the coefficient, squared). All the possible actions the traders can take are represented by |A⟩, where |a_1⟩ means the trader buys and |a_2⟩ means the trader sells, and μ_1 and μ_2 are the coefficients (again, the actual probability is the absolute value squared).
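A tiny numerical sketch of the two superposed states (the coefficients below are invented purely for illustration):

```python
import numpy as np

# |Q> = c1|q1> + c2|q2>   (price trends up / down)
# |A> = mu1|a1> + mu2|a2> (trader buys / sells)
q1, q2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # orthogonal basis states
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

c1, c2 = 0.8, -0.6            # amplitudes may be negative; probabilities are |c|^2
mu1, mu2 = 0.5, np.sqrt(0.75)

Q = c1 * q1 + c2 * q2
A = mu1 * a1 + mu2 * a2

print(f"P(up) = {abs(c1)**2:.2f}, P(down) = {abs(c2)**2:.2f}")      # 0.64, 0.36
print(f"P(buy) = {abs(mu1)**2:.2f}, P(sell) = {abs(mu2)**2:.2f}")   # 0.25, 0.75
```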
Superposing all the possibilities of the equity's price trend and the traders' actions is a very nice way to express the inherent uncertainty: at any given time no one knows whether the price will trend upwards or downwards, and traders are on the fence over whether to actually buy or sell. However, just describing the superposed state of an equity and of a trader is not enough when trying to describe both as a collective whole, because they are "entangled." This "entangled" state, the complex system of all the superposed states, is formulated as:

|Ψ⟩ = c_1 |q_1⟩ ⊗ |A_1⟩ + c_2 |q_2⟩ ⊗ |A_2⟩
This complex system is where modeling both the equity and the traders under one framework is fully highlighted. Here |Ψ⟩ represents the entire state of the equity and all the traders: the increase, represented by |q_1⟩, is paired with |A_1⟩, the state of all the traders that decide to buy, and the decrease, represented by |q_2⟩, is paired with |A_2⟩, the state of all the traders that decide to sell, with ⊗ being the tensor product multiplying the two together. This is exactly what lets us superpose all the possibilities of increase and decrease as well as buy and sell. The coefficients could encode any chance that the price increases, 0.1, 0.5, 0.9, and they could even be negative, which is why it is the absolute value squared that gives the final probability, or frequency, of whether the price increases or decreases. |A_1⟩ then collects all the traders that buy while this equity's price is increasing (the complex system describes only one equity on the market, not the entire market), together with the subjective probability (degree of belief) each of them holds when buying; because everyone has a different degree of confidence when they buy, this results in the "infinite" possibilities of how many traders buy when this equity's price increases. Likewise, |A_2⟩ collects all the traders, and the subjective probabilities (degrees of belief) they hold, when they sell.
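Here is a small numpy sketch of that entangled structure; the collective trader states |A_1⟩ and |A_2⟩ are shrunk to toy two-dimensional vectors and all amplitudes are invented, just to show the tensor-product construction:

```python
import numpy as np

# |Psi> = c1 |q1> (x) |A_1> + c2 |q2> (x) |A_2>, built with a Kronecker product.
q1, q2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A_buy  = np.array([np.sqrt(0.9), np.sqrt(0.1)])   # assumed collective "buy" state
A_sell = np.array([np.sqrt(0.2), np.sqrt(0.8)])   # assumed collective "sell" state

c1, c2 = np.sqrt(0.6), np.sqrt(0.4)               # amplitudes for up / down

Psi = c1 * np.kron(q1, A_buy) + c2 * np.kron(q2, A_sell)

print("norm of |Psi> :", np.linalg.norm(Psi))     # ~1, a valid state
print("P(price up)   :", np.sum(Psi[:2] ** 2))    # 0.6
print("P(price down) :", np.sum(Psi[2:] ** 2))    # 0.4
```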
Now we have this elaborate way of describing both the possibilities of the equity's price trend and the traders' actions, but eventually we need to validate it against the actually observed data. However, this complex system is not directly observable; it is only the vector describing the superposed state of the equity and the traders. In order to observe it, we need to define a density operator according to the second axiom. This density operator is:

ρ = |Ψ⟩⟨Ψ| = |c_1|^2 |q_1⟩⟨q_1| ⊗ |A_1⟩⟨A_1| + |c_2|^2 |q_2⟩⟨q_2| ⊗ |A_2⟩⟨A_2| + (c_1 c_2^* |q_1⟩⟨q_2| ⊗ |A_1⟩⟨A_2| + H.C.)

where |Ψ⟩⟨Ψ| indicates that this is a density operator, which is also a matrix. The first two terms, before the bracketed ones, are just the two known states of the stock's price (increase or decrease), and the terms in the brackets are the "quantum interference" terms, where we don't know whether the price will increase or decrease and whether all the traders will buy or sell, with H.C. being the Hermitian conjugate.
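Continuing the same toy sketch with the same invented amplitudes, building ρ = |Ψ⟩⟨Ψ| numerically makes the block structure explicit, with the off-diagonal blocks playing the role of the interference terms:

```python
import numpy as np

# Density operator of the toy entangled state; the off-diagonal blocks couple
# the "price up / traders buy" branch to the "price down / traders sell" branch.
q1, q2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A_buy  = np.array([np.sqrt(0.9), np.sqrt(0.1)])
A_sell = np.array([np.sqrt(0.2), np.sqrt(0.8)])
c1, c2 = np.sqrt(0.6), np.sqrt(0.4)

Psi = c1 * np.kron(q1, A_buy) + c2 * np.kron(q2, A_sell)
rho = np.outer(Psi, Psi.conj())

print("up-up block (price increases):\n", rho[:2, :2])
print("down-down block (price decreases):\n", rho[2:, 2:])
print("interference block (up vs down):\n", rho[:2, 2:])   # vanishes for a mixed state
```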
These "interference" terms are precisely what illustrate the dual uncertainty of the equity and the traders, and how they "influence" each other through their "interactions": q_1 and q_2 are the price of the equity trending upwards and downwards respectively, and a_1 and a_2 are the traders' "collective" actions of buying and selling. The dual uncertainty arises like this: the price of an equity is inherently uncertain, and since we don't know whether it will trend upwards or downwards in the next moment, this "clouds" the traders' decision-making when they are deciding whether to buy or sell; the first uncertainty, that of the equity, influences the traders' decisions, and the second uncertainty, the traders' "collective effort," in turn influences the equity's price. The traders don't know what the equity will "do," and the equity doesn't "know" what all the traders will eventually decide to do. But we can map the two extremes: when all the traders decide to buy, or all decide to sell, at the same time, which is highly unlikely, the price of the equity will surely go up or down respectively; and when there is a near-infinite number of traders all buying and selling "randomly" (many buy and many sell), the price of that equity will indeed "act" as the EMH says, in a 50-50 random walk.
When there is an effectively infinite number of traders, the price of the equity starts to behave as stated by the EMH, but under the umbrella of our uncertain market hypothesis this is just one special case: most of the time the market is indeed in a random walk, but just like in the real world, the "I told you so" only works on average, not every single time. With a great many traders trading an equity, its "behavior" starts to look something like this:

ρ → ω_1 |q_1⟩⟨q_1| + ω_2 |q_2⟩⟨q_2|

because the "interference" terms from above are now "cleared": as N→∞ they go to 0. Here ω_1 is the objective frequency (probability) that the price goes up and ω_2 the probability that it goes down.
Most of the time ω_1 and ω_2 will both be equal to 0.5, a 50-50 split, which shows the market in a random walk, but in the real world this frequency won't always be 50-50. The 50-50 case happens because when so many traders are buying and selling at once they "cancel" each other out: around a million traders buy, causing a slight uptick in the equity's price, but then around a million traders sell, pulling the price back down, so the net effect is that the price hovers around 50-50. On some occasions, however, of the many traders actively trading, more are buying or more are selling, which can push ω_1 and ω_2 to 0.3 and 0.7, 0.55 and 0.45, or even 0.01 and 0.99, so the equity's price no longer fluctuates so evenly, alluding to what was said earlier about the opening and closing bells and other external news "shocks." This shows that the market is random but, more importantly, uncertain: once an equity is listed on the market it only "comes alive" when traders start trading it; once that happens its price starts to fluctuate; when it does, traders aren't 100% sure whether to buy or sell; and once they do act, it is the "collective effort" of all the participating traders and their actions that "determines" the closing prices of equities on the market, in a never-ending cycle of dual uncertainty. We shouldn't just naively and stubbornly equate the prices of equities and the actions of traders to pollen and molecules drifting randomly in a petri dish, describable by a diffusion equation (even if that diffusion equation was formulated by someone whose name is synonymous with genius).
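As a toy check on this limit (trader counts and probabilities are invented; this is an illustration, not the actual model), a balanced crowd of independent traders produces up/down frequencies hovering around 50-50, while even a tiny collective tilt pushes ω_1 and ω_2 away from it:

```python
import numpy as np

# Each day a crowd of traders independently buys or sells; the price ticks up
# if buyers outnumber sellers. A balanced crowd gives ~50-50 up/down days;
# a slight bias in the buy probability shifts the frequencies noticeably.
rng = np.random.default_rng(7)

def up_frequency(n_traders, p_buy, days=10_000):
    buys = rng.binomial(n_traders, p_buy, size=days)
    ups = buys > n_traders / 2          # more buyers than sellers -> price up
    return ups.mean()

print("balanced crowd   (p_buy=0.5000):", up_frequency(1_000_000, 0.5000))  # ~0.5
print("slightly bullish (p_buy=0.5005):", up_frequency(1_000_000, 0.5005))  # well above 0.5
```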
Now that we've shown how to model both the market (equities) and the traders as a collective whole under one framework, and that the random walk is in fact just a special case of our uncertain market hypothesis, we come to the million-dollar question alluded to earlier: how do we make single-event forecasts of the market?
What a roller coaster ride it's been. If you've stuck with us till now, hopefully you've learned at least something: Brownian motion, the axioms of quantum mechanics, and superposition. In this part we're going to address the second question: how do we make single-event forecasts of the market? To try to do so, we'll need the help of another great scientific theory, Charles Darwin's theory of evolution (yes, evolution).
When answering the second question, keep in mind again that traditional economic theory says, "Hey, the market is unpredictable, no one can predict it, so don't even bother trying." They say that because it is the perfect cover for statistical approaches that cannot produce single-event forecasts. To give you any kind of prediction, they first need a huge sample of data, and then they will tell you, one, that over the past 100 years the stock market has evened out to a 50-50 random walk (not very useful for predicting tomorrow's closing price), and two, that 9,999 times out of 10,000 the market should "act" this way, while what actually happens in reality is the one outcome that technically shouldn't have happened (this is why economic theory says recessions should only come about once in a blue moon, yet in the real world they arrive every couple of years). Instead of entirely equating the randomness of the market to Brownian motion, we've shown that the market is indeed uncertain, that it is the "interactions" between the traders and the market that cause this dual uncertainty, and that the informational efficiency of a 50-50 market is just a special case of the uncertain market hypothesis.
What we've done to attempt to predict the market is, first, use a small sample of data to produce a short-horizon forecast (does a stock's price 10 years ago really influence its price tomorrow?), and second, "transform" the forecasting process into a decision-making one. Forecasting is essentially decision-making: we don't know what will happen in the future, so we make our best educated guess and try to guess right. For the stock market that means buying when the price increases and selling when it decreases, and we've formulated an AI agent to make the best "educated guess," but not blindly.
First, we define our agent and what it's supposed to do: basically, buy when the price of the equity trends upwards and sell when it trends downwards. Since there are only two possible actions (buy or sell, and they're orthogonal), each with a different subjective probability (degree of belief), we can, just as we superposed the equity's up and down price states, also "superpose" the actions the agent can take. This superposed "state of mind" of simultaneously buying and selling is not a state in which the agent actually buys and sells at the same time; it is simply undecided about which specific action to take, just as, before you decide whether to buy something, both options are "superposed" in your mind. This observable operator (remember that under our formalized "quantum-like" framework any observable must be represented by a density operator) can be stated in terms of the Hilbert space as:

ρ_A = p_1 |a_1⟩⟨a_1| + p_2 |a_2⟩⟨a_2| + μ_1 μ_2^* |a_1⟩⟨a_2| + μ_2 μ_1^* |a_2⟩⟨a_1|

where |a_1⟩⟨a_1| is buy and |a_2⟩⟨a_2| is sell, with p_1 and p_2 being the subjective degrees of belief the agent has when it "decides" to buy or sell respectively. The third and fourth terms are a bit more interesting; they are the "quantum interference" terms, which indicate that the agent doesn't yet know whether to buy or sell, and it is all of this superposed together that creates "all the infinite possibilities of buy and sell simultaneously."
This density operator is directly observable; it's just that before the agent "makes its decision," the superposed state of buy and sell still holds. Making the decision is equivalent to a measurement, which "collapses" the state to only one action, buy or sell, with its corresponding degree of belief.
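A small sketch of this decision step, with invented belief values: the pure state carries interference terms, and "deciding" samples one action and leaves behind a mixed state with those terms gone:

```python
import numpy as np

# The agent's "state of mind" as a pure state over buy/sell with made-up
# degrees of belief; "deciding" is modeled as a measurement that picks one
# action with probability p_k and removes the interference terms.
rng = np.random.default_rng(42)

mu = np.array([np.sqrt(0.7), np.sqrt(0.3)])      # amplitudes for buy, sell
rho_pure = np.outer(mu, mu.conj())               # p1, p2 on the diagonal, interference off it
print("pure state (before deciding):\n", rho_pure)

p = np.real(np.diag(rho_pure))                   # degrees of belief p1 = 0.7, p2 = 0.3
action = rng.choice(["buy", "sell"], p=p)        # the "collapse" to one action
rho_mixed = np.diag(p)                           # mixed state: interference terms gone
print("decision:", action)
print("mixed state (after deciding):\n", rho_mixed)
```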
So now we've defined the agent's "state of mind" before it makes a decision, the superposition of all the different possibilities it could take. When it actually does "make up its mind" and "decides" to either buy or sell, the superposed state "jumps" to one action with its corresponding degree of belief only, shown as:

ρ_A → p_1 |a_1⟩⟨a_1| + p_2 |a_2⟩⟨a_2|
Here the "decision" can be seen as a "measurement," a "collapse" to one of the states. This is essentially just a projection, a transformation from a pure state to a mixed state (the pure state is the quantum superposed state, the mixed state is the "collapsed" classical one; don't get them mixed up), and it is this decision process where the evolutionary algorithm does its "magic": each "decision" the agent makes is the algorithm pitting the possible candidates against one another (survival of the fittest), with the potential increase or decrease of the stock's price as the environment.
Now that we have the agent defined, we need to know what the evolutionary algorithm does. An evolutionary algorithm is simply an algorithm based on Darwinian evolution in the real world: it attempts to find the best way of doing something through generations of mutation, crossover, and selection, just as the fittest individuals of a population are the ones that survive; the same applies here.
In our case, the evolutionary algorithm first randomly generates a population of possibilities (buy or sell, with different degrees of belief), let's say 300. Each individual possibility is represented by a pure state, which in turn is just a 2x2 matrix, and because the agent's density operator also happens to be a 2x2 matrix, we can construct these operators from the 8 most basic quantum gates, which turns each possibility into a "matrix tree." Each matrix tree represents an action of buy or sell, with a certain degree of belief, that the agent could eventually use, and each decision made essentially evaluates how effective or reliable each tree is: if it is fit it survives, if not it is discarded. Now, how do we measure how effective each one is? We need an evaluation metric, and the four potential pairings of what the price does with what the agent does are the perfect way to evaluate fitness. Since the price of a stock can only increase or decrease at any given time, and the agent can choose to buy or sell, the four outcomes are: the price increases and the agent buys, the price increases and the agent sells, the price decreases and the agent buys, and the price decreases and the agent sells, resulting in this:

ω_1 p_1 (increase, buy), ω_1 p_2 (increase, sell), ω_2 p_1 (decrease, buy), ω_2 p_2 (decrease, sell)
The first and fourth terms are the cases in which the agent makes the right prediction, and the middle two are the wrong ones. We can therefore naturally establish a reward-punishment system as the evaluation metric: if the agent guesses correctly where the price actually goes, it is rewarded; if it is wrong, a loss is incurred. This way, when each possibility is evaluated, the ones that are right are kept and the ones that are wrong are eliminated, ensuring that the fittest survive. These four outcomes give just the expected value of a single decision; the final fitness function is the total (sum) of the expected values of every decision made, which is:

F = Σ_t E_t, with E_t = r(ω_1 p_1 + ω_2 p_2) − l(ω_1 p_2 + ω_2 p_1) for decision t, where r is the reward for a correct call and l the loss for a wrong one.
With the fitness function in hand, the algorithm takes each generated possibility and faces them off against one another, the stronger knocking out the weaker, so the one with the maximum fitness is the one the agent uses when making its final call on whether the price will increase or decrease.
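Below is a highly simplified sketch of this evolutionary loop. The actual formulation described above builds candidate 2x2 operators out of quantum gates ("matrix trees"); here, purely for illustration, each individual is just a buy-belief p_1 in [0, 1] (so p_2 = 1 - p_1), the price history is invented, and fitness is the summed expected reward (+1 for a correct call, -1 for a wrong one):

```python
import numpy as np

rng = np.random.default_rng(3)

POP, GENS = 300, 80
days_up = rng.random(500) < 0.58          # assumed history: price rose on ~58% of days

def fitness(p1, ups):
    # expected reward per day: belief p1 in "buy" is rewarded when the price rises
    reward = np.where(ups, p1 - (1 - p1), (1 - p1) - p1)
    return reward.sum()

population = rng.random(POP)              # initial random beliefs
for _ in range(GENS):
    scores = np.array([fitness(p, days_up) for p in population])
    # selection: keep the fittest half
    survivors = population[np.argsort(scores)[-POP // 2:]]
    # crossover: average random pairs of survivors
    parents = rng.choice(survivors, size=(POP // 2, 2))
    children = parents.mean(axis=1)
    # mutation: small random nudges, clipped back into [0, 1]
    children = np.clip(children + rng.normal(0, 0.05, POP // 2), 0, 1)
    population = np.concatenate([survivors, children])

best = population[np.argmax([fitness(p, days_up) for p in population])]
print(f"evolved buy-belief p1 ~ {best:.2f} (pushed toward 1 when the trend is up)")
```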
After generations of evolution, let's say 80, the algorithm does its "magic" by putting every single possibility to the test, and by the wonders of natural selection the fittest one survives and is used by the agent to make the final decision. The goal of the agent is to buy when the price is increasing and sell when it is decreasing, thereby maximizing profit, and not just to buy or sell at the right times but to do so with the greatest degree of belief. Ideally the agent would buy with a 100% degree of belief when the price is increasing and sell with a 100% degree of belief when it is decreasing, though that is obviously much harder to achieve. Theoretically, with the fittest possible actions, evolved by the algorithm and having stood the test of natural selection, it might be possible: if ω is 0.9, p is 100%, q is an increase, and a is a buy, a strategy like that is what the agent is aiming for. Most of the time this will not happen, so the next best thing is to attempt to predict the trend of the price, which we've illustrated in the results of our paper Is the Market Truly in a Random Walk? If we wanted to go further, we could take a shot at predicting the absolute closing price, because once we have the trend right we could potentially take a jab at the exact price. Well, that's another blog post for another day. Meanwhile, maybe you could try blindfolding your cat and having it throw its toys at the stocks you want to buy.