One of the main reasons for introducing the density matrix is that it lets us describe statistical mixtures as opposed to merely pure states. We should therefore comment on how we generally extract information about a system in quantum mechanics. We know from introductory quantum mechanics that the wave function contains all the information we could want, or at least all that quantum mechanics can provide.
Let's for a moment talk more generally about what is knowable in quantum mechanics. We have all heard of Heisenberg's uncertainty principle, namely that we can't know position and momentum exactly at the same time. One often hears this version in public discussions, but a more precise, and in fact more correct, statement is that we can't know the position along a given direction and the momentum along that same direction simultaneously to arbitrary accuracy. The mathematical version of this statement is \([\hat{q}_i, \hat{p}_j] = i\hbar\,\delta_{ij}\). Here \(q_i\) and \(p_i\) are generalized coordinates and generalized momenta respectively. The more familiar version would be \([\hat{x}, \hat{p}_x] = i\hbar\).
The underlying reason is that the position operator along \(x\) does not commute with the momentum operator along that same direction. It also implies that I can indeed know, for example, the position exactly along \(y\) and the momentum exactly along \(x\) at the same time.
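As a quick sanity check, here is a minimal sympy sketch, assuming the position representation where \( \hat{p} = -i\hbar \, d/dx \); applying the commutator to an arbitrary test function recovers the factor \( i\hbar \):

import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x)  # arbitrary test wave function

# In the position representation the momentum operator acts as p = -i*hbar*d/dx
p = lambda g: -sp.I * hbar * sp.diff(g, x)

# Apply the commutator [x, p] to f: x*(p f) - p(x f)
commutator = x * p(f) - p(x * f)
print(sp.simplify(commutator))  # prints I*hbar*f(x), i.e. [x, p] = i*hbar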
Now, to jump to more general statements, we may proceed as follows. If we have a set T = \((Q_1, Q_2, Q_3, \dots, Q_n)\) of operators, any two of which commute with each other, and a corresponding set U = \((q_1, q_2, \dots, q_n)\) of eigenvalues, one for each operator (we assume these sets are as large as they can possibly be, so that the ket \(|q_1, q_2, q_3, \dots, q_n\rangle\) completely describes our system), then these two sets represent the "maximum knowledge" of the system that we can have. These states of "maximum knowledge" are what we called the pure states earlier.
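To see what such simultaneous eigenkets look like concretely, here is a small numpy sketch (my own illustrative example) with two commuting observables for a pair of spins, whose common eigenket can be labelled \(|q_1, q_2\rangle\):

import numpy as np

# Two spins: the observables sigma_z (x) I and I (x) sigma_z commute,
# so we can label states by both eigenvalues at once: |q1, q2>
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Q1 = np.kron(sz, I2)
Q2 = np.kron(I2, sz)

print(np.allclose(Q1 @ Q2, Q2 @ Q1))   # True: [Q1, Q2] = 0

# |up, down> is a simultaneous eigenket with q1 = +1, q2 = -1
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
state = np.kron(up, down)
print(Q1 @ state)   # +1 * state
print(Q2 @ state)   # -1 * state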
One must keep in mind that the sets T and U above may not be unique, and such pure states are in fact rarely what we produce in experiments. Instead, what we usually have is that we know only with a certain probability \( W_n \) that our system is in a pure state \( \psi_n \). We therefore deal with statistical mixtures more often than we do with pure states.
NOTE: Do not confuse a superposition of states with a statistical mixture. In a superposition there is a definite phase relationship between the quantum states, so we can write down a single pure-state wave function describing the superposition. In a statistical mixture, the states \( \psi_n \) have no phase relationship with one another. To put it another way: if I have a single state and I cannot tell, even in principle, whether it is spin up or spin down until I measure it, then I have a superposition; but if I have an ensemble of states, each of which is spin up, spin down, or some superposition of the two (so there is a classical probability of my picking out any one of them), then I have a mixed state.
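To make the distinction concrete, here is a minimal numpy sketch for a spin-1/2 system; it peeks ahead at the density matrix defined below, and uses the fact that \( \mathrm{tr}(\rho^2) = 1 \) holds only for pure states:

import numpy as np

up   = np.array([1, 0], dtype=complex)   # spin up
down = np.array([0, 1], dtype=complex)   # spin down

# Pure state: an equal superposition with a definite relative phase
plus = (up + down) / np.sqrt(2)
rho_sup = np.outer(plus, plus.conj())

# Statistical mixture: spin up or spin down with probability 1/2 each
rho_mix = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())

print(rho_sup)  # off-diagonal terms (coherences) encode the phase relationship
print(rho_mix)  # diagonal only: no phase relationship
print(np.trace(rho_sup @ rho_sup).real)  # 1.0 -> pure
print(np.trace(rho_mix @ rho_mix).real)  # 0.5 -> mixed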
In quantum mechanics we connect with the "real world" by calculating expectation values of some operator. So, for example, we repeatedly prepare a state in some specific configuration and keep measuring its energy. At the end we shall have a probability of finding the system with energy \(E_n\), i.e. in the corresponding eigenfunction \( \psi_n\). We can calculate the expectation value for a pure state (in bra-ket notation) like this:
\begin{equation}
\langle O \rangle = \langle \psi| O | \psi \rangle
\end{equation}
and for mixed states
\begin{equation}
\langle O \rangle = \sum_n W_n \langle \psi_n| O | \psi_n \rangle
\end{equation}
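As an illustration, here is a short numpy sketch with \( O = \sigma_z \) and made-up weights \( W_1 = 0.7 \), \( W_2 = 0.3 \):

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)  # the observable O

up   = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Pure state: <O> = <psi|O|psi>
plus = (up + down) / np.sqrt(2)
print((plus.conj() @ sigma_z @ plus).real)  # 0.0 for the equal superposition

# Mixed state: <O> = sum_n W_n <psi_n|O|psi_n>
W = [0.7, 0.3]
states = [up, down]
print(sum(w * (psi.conj() @ sigma_z @ psi) for w, psi in zip(W, states)).real)  # 0.4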
Density Matrix
We can now define the density matrix of our statistical mixture in the following manner:
\begin{equation}
\rho = \sum_n W_n |\psi_n \rangle \langle \psi_n|
\end{equation}
Here \( W_n \) are the statistical weights and \( |\psi_n \rangle \) are the independently prepared states. These independently prepared states are not necessarily orthonormal, so we can re-write them in terms of a set of orthonormal states \( \phi_m \):
\begin{equation}
\psi_n = \sum_m a_m^n \phi_m
\end{equation}
We are now in a position to re-write our density matrix in the following manner:
\begin{equation}
\rho = \sum_{nmk} W_n a_m^{n} a_k^{n*} |\phi_m \rangle \langle \phi_k|
\end{equation}
Thus we can put the "matrix" in "density matrix" by working out its matrix elements. Using the orthonormality of the \( \phi_m \) states we observe that:
\begin{equation}
\rho_{ij} = \langle \phi_i |\rho | \phi_j \rangle = \sum_n W_n a_i^{n} a_j^{n*}
\end{equation}
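We can verify this relation numerically; the following numpy sketch uses two illustrative, non-orthogonal prepared states with equal weights, taking the standard basis as the orthonormal \( \phi_m \):

import numpy as np

# Orthonormal basis phi_m: here just the standard basis vectors
phi = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

# Two non-orthogonal prepared states and illustrative weights
psi_1 = phi[0]
psi_2 = (phi[0] + phi[1]) / np.sqrt(2)
W = [0.5, 0.5]
states = [psi_1, psi_2]

# Density matrix rho = sum_n W_n |psi_n><psi_n|
rho = sum(w * np.outer(psi, psi.conj()) for w, psi in zip(W, states))

# Expansion coefficients a_m^n = <phi_m|psi_n>, stored as a[n, m]
a = np.array([[ph.conj() @ psi for ph in phi] for psi in states])

# Matrix element rho_ij two ways: <phi_i|rho|phi_j> vs sum_n W_n a_i^n a_j^n*
i, j = 0, 1
print(phi[i].conj() @ rho @ phi[j])                                # (0.25+0j)
print(sum(w * a[n, i] * a[n, j].conj() for n, w in enumerate(W)))  # (0.25+0j)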
Introducing these orthonormal states allows us to arrive at a simple expression for the expectation value of an operator in a mixed state. We proceed as follows:
\begin{align}
\langle O \rangle &= \sum_n W_n \langle \psi_n| O | \psi_n \rangle \\
&= \sum_{mk} \sum_{n} W_n a_m^{n} a_k^{n*} \langle \phi_k |O | \phi_m \rangle \\
&= \sum_{mk} \langle \phi_m|\rho | \phi_k \rangle \langle \phi_k |O | \phi_m \rangle \\
&= \sum_{m} \langle \phi_m|\rho O | \phi_m \rangle \\
&= \mathrm{tr} (\rho O)
\end{align}
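Here is a quick numerical check that the trace formula agrees with the weighted sum (again a numpy sketch with illustrative states and weights):

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

up   = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
states = [up, (up + down) / np.sqrt(2)]   # the second state is a superposition
W = [0.7, 0.3]

rho = sum(w * np.outer(psi, psi.conj()) for w, psi in zip(W, states))

# Expectation value two ways: the weighted sum and tr(rho O)
direct = sum(w * (psi.conj() @ sigma_z @ psi) for w, psi in zip(W, states))
print(direct.real)                    # 0.7*1 + 0.3*0 = 0.7
print(np.trace(rho @ sigma_z).real)   # 0.7 as well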
Look at the power of the density matrix. If I know the density matrix (remember, something I can define for pure states and statistical mixtures alike), I can arrive at the very thing I need to connect with the "real" world, namely the expectation value of the operator or observable in question. This is something I can't easily get if I stick with the wave function and have a statistical mixture in my lab.