Monday, December 29, 2014

Second Law of Thermodynamics and Entropy

       One concept that is often talked about and rarely understood well is the notion of entropy and the second law of thermodynamics. It is commonplace to say that the second law states that entropy always increases or remains constant, paired with the statement that entropy is a measure of disorder. This is in fact correct, but the problem is that it is stated and talked about carelessly before students have a firm grounding in Statistical Mechanics, let alone Quantum Statistical Mechanics. This is a problem not because anything wrong is being taught, but because (as is often the case) very subtle and hard concepts are introduced before one has the right intellectual machinery to fully wrap one's mind around them; as a result teachers and professors often resort to very vague statements. Confusion is compounded when students are then given concrete problems from introductory textbooks in which these highfalutin statements can't be readily applied and in fact never seem to be relevant.
     The sad truth is that, until these concepts can be learned properly, it would be better for entropy to remain mysterious and to be treated merely as a state function. The interpretation of entropy as a measure of disorder requires a proper discussion of statistical mechanics, precisely the thing that is not done. In fact, the statement that entropy never decreases can be derived by thinking carefully about Carnot cycles and engines. Recall that Clausius coined the term "entropy" from the Greek word "entropia", which according to Wiktionary means "a turning towards".
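Concretely, the state-function definition alluded to here is Clausius' own: along a reversible process

\begin{equation}
dS = \frac{\delta Q_{\text{rev}}}{T}
\end{equation}

and the Carnot-cycle arguments show that the integral of \( \delta Q_{\text{rev}}/T \) around any closed cycle vanishes, which is precisely what makes \( S \) a well-defined state function, with no mention of disorder anywhere.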
     The reader should be confused at this point if my point has been made. Why? Well, if entropy is a measure of disorder, why did Clausius choose this word when he defined entropy? It seems to have nothing to do with disorder. The answer is this: our formulation of the second law only makes sense after accepting Boltzmann's work, which came after Clausius' work. This implies there is a formulation of the second law that does not require the definition of entropy at all. It is this understanding that takes the back seat in these vague discussions, yet it plays the important role when many introductory thermal physics concepts and problems are being taught.
    It must be stressed that I am not trying to imply that something wrong is being taught; rather, I am making a pedagogical point about how we should be learning the material. It will be the goal of the next few posts to introduce another way of talking about the second law of thermodynamics, one that makes no specific reference to entropy as a measure of disorder.

Thursday, December 25, 2014

Analog of the Schrödinger Equation for Density Matrices

A basic question we can ask is why bother with density matrices; don't wave functions work just fine? The answer is that wave functions work just fine as long as we deal with closed systems. The beauty of density matrices is that they are a better way to understand the dynamics of open systems. This point can be dramatized by noting that there are density matrices that have no wave function analog. An example is the completely mixed state, i.e. a scalar multiple of the identity matrix, zero everywhere except on the diagonal. First we would like to derive the analog of Schrödinger's equation for density matrices.
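As a quick aside, a minimal NumPy check (my own illustration, not part of the original argument) makes the "no wave function analog" claim concrete: every pure state \( \rho = | \psi \rangle \langle \psi | \) has purity \( \text{Tr}(\rho^2) = 1 \), while the completely mixed qubit state \( I/2 \) has purity \( 1/2 \), so no wave function can reproduce it.

```python
import numpy as np

# A normalized pure state |psi> and its density matrix rho = |psi><psi|.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# The completely mixed qubit state: a scalar times the identity.
rho_mixed = np.eye(2) / 2

# Purity Tr(rho^2) equals 1 exactly when rho comes from a wave function.
for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    purity = np.trace(rho @ rho).real
    print(f"{name}: Tr(rho^2) = {purity}")
# pure:  Tr(rho^2) = 1.0  -> has a wave function description
# mixed: Tr(rho^2) = 0.5  -> no |psi> reproduces it
```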

We start with what we know, namely Schrödinger's equation

\begin{equation}
 i \hbar \frac{\partial |\psi (t)\rangle}{\partial t} = H | \psi (t) \rangle  \hspace{10mm} \text {eq.1}
\end{equation}

We can now imagine the wave function at t=0 and posit the existence of an operator whose sole job is to evolve the wave function from one time to another. We shall call this operator U. What this means is that

\begin{equation}
U(dt) | \psi (t) \rangle = | \psi (t +dt) \rangle \hspace{10mm} \text{eq.2}
\end{equation}

Thus \( | \psi (t) \rangle = U(t) | \psi (0) \rangle \), and substituting this into eq.1 we get

\begin{equation} i \hbar \frac{\partial U(t)}{\partial t} | \psi (0) \rangle = H U(t) | \psi (0) \rangle  \hspace{10mm} \text{eq.3}
\end{equation}

The above equation holds for any initial wave function and at any time, so we simply have

\begin{equation}
 i \hbar \frac{\partial U(t)}{\partial t}  = H U(t)  \hspace{10mm} \text {eq.4}
\end{equation}
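For a time-independent Hamiltonian this operator equation integrates immediately (a standard step worth making explicit), with the initial condition \( U(0) = I \):

\begin{equation}
U(t) = e^{-iHt/\hbar}
\end{equation}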

An analogous equation can be obtained for the adjoint of U by taking the Hermitian conjugate of eq.4. The operator U turns out to be unitary, i.e. \( U U^{\dagger} = U^{\dagger} U = I \).
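Explicitly, taking the Hermitian conjugate of eq.4 and using \( H = H^{\dagger} \) gives \( -i \hbar \, \partial U^{\dagger}/\partial t = U^{\dagger} H \). Combining the two equations then shows

\begin{equation}
i \hbar \frac{\partial}{\partial t} \left( U^{\dagger} U \right) = U^{\dagger} H U - U^{\dagger} H U = 0
\end{equation}

so \( U^{\dagger} U \) is constant in time; since \( U(0) = I \), the evolution operator is unitary at all times.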
Now recall that we introduced the density operator as the outer product of the wave function \( | \psi \rangle \), so \( \rho = |\psi \rangle \langle \psi | \). We can therefore write

\begin{equation}
\rho(t) = U(t) \rho_0 U^{\dagger}(t) \hspace{10mm} \text{eq.5}
\end{equation}

where \( \rho_0 \) is the density matrix at the initial time.

We can now take the derivative with respect to time on both sides of eq.5, multiply both sides by \( i \hbar \), and use eq.4 together with its adjoint; we shall arrive at

\begin{equation}
\frac{d\rho(t)}{dt} = \frac{1}{i \hbar} [H(t), \rho(t)] \hspace{10mm} \text{eq.6}
\end{equation}
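Filling in the product-rule step explicitly (using eq.4 and its adjoint):

\begin{equation}
i \hbar \frac{d\rho}{dt} = \left( i \hbar \frac{\partial U}{\partial t} \right) \rho_0 U^{\dagger} + U \rho_0 \left( i \hbar \frac{\partial U^{\dagger}}{\partial t} \right) = H U \rho_0 U^{\dagger} - U \rho_0 U^{\dagger} H = [H, \rho(t)]
\end{equation}

Dividing both sides by \( i \hbar \) gives eq.6.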

When we generalize to an open quantum system, an additional term appears in eq.6, usually called the Lindblad term

\begin{equation}
\frac{d\rho(t)}{dt} = \frac{1}{i \hbar} [H(t), \rho(t)] + \mathcal{L}(\rho) \hspace{10mm} \text{eq.7}
\end{equation}
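As an illustration of how such a term acts, here is a minimal numerical sketch (my own, in Python/NumPy; the GKSL dissipator with a single jump operator is standard, but the specific choices below, a \( \sigma_z \) Hamiltonian, a \( \sigma_- \) jump operator and the rate gamma, are assumptions made only for this example):

```python
import numpy as np

hbar = 1.0  # work in natural units

# Illustrative choices: sigma_z Hamiltonian, sigma_minus jump operator
# (amplitude damping) with decay rate gamma. All three are assumptions.
H = np.array([[1, 0], [0, -1]], dtype=complex)   # ~ sigma_z
L = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma_minus: |0><1|
gamma = 0.2

def lindblad_rhs(rho):
    """Right-hand side of the master equation:
    drho/dt = (1/(i hbar))[H, rho] + gamma (L rho L+ - {L+ L, rho}/2)."""
    comm = H @ rho - rho @ H
    LdL = L.conj().T @ L
    dissipator = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return comm / (1j * hbar) + gamma * dissipator

# Start in the excited state |1><1| and integrate with a crude Euler step.
rho = np.array([[0, 0], [0, 1]], dtype=complex)
dt = 0.001
for _ in range(5000):
    rho = rho + dt * lindblad_rhs(rho)

print("excited-state population:", rho[1, 1].real)  # decays toward 0
print("trace of rho:", np.trace(rho).real)          # stays ~1
```

With these choices the excited-state population decays exponentially while the trace of \( \rho \) stays equal to one, which is exactly the bookkeeping that a closed-system Schrödinger evolution cannot do.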

The additional term will look different depending on the system under consideration.




A note on the Hilbert Space


This blog post ties off loose ends.
As has been laid out, there is a direct correspondence between the usual picture of wave functions and Schrödinger's equation and the kets and bras introduced by Paul Dirac.
We have already seen that we interpret what happens in Quantum Mechanics as simply being a collection of Hermitian Operators acting on some vectors. For any experiment, what we measure are the eigenvalues, and the statistics we collect over many runs are simply the expectation values of these operators. So a question we can ask is: where do these vectors "live"? From physical intuition we know we need some notion of "dot product" (we are incidentally using our intuition from ordinary Euclidean space). For a real space, the order of the vectors in the dot product does not matter, i.e. \( \vec{v}.\vec{w} = \vec{w}.\vec{v} \); for the complex spaces of quantum mechanics this becomes conjugate symmetry, \( \langle \vec{v}, \vec{w} \rangle = \overline{ \langle \vec{w}, \vec{v} \rangle } \). For the moment we shall not worry about notation. We also require that \( \vec{v}.\vec{v} \) be positive definite and zero only when \( \vec{v} = 0 \). Lastly, we require linearity in one argument, namely \( \vec{u}.( \vec{v} + \vec{w} ) = \vec{u}.\vec{v} + \vec{u}.\vec{w} \).
In all this we are assuming the usual properties that hold for distance functions and norms. The vectors obviously constitute a vector space, so what we really have is an inner product space. Lastly, we also require that the resulting metric space be complete, i.e. that all Cauchy sequences converge within it.

Note: A metric space is a set with a distance function defined on it; the metric in turn induces a topology. At some point I shall have more to say about this.

Thus we can now say what we mean by a Hilbert space: it is an inner product space that is also a complete metric space. Note that this definition does not care what our actual vectors look like. In fact, if one goes through the desired properties of our dot product, one can easily see that they are satisfied by wave functions with the dot product provided by an integral, or by what we usually mean by vectors, namely column and row vectors. In fact the usual n-dimensional real space is also a Hilbert space. In other words, there are different kinds of Hilbert spaces; the wave functions obeying a differential equation live in a specific kind of Hilbert space called \( L^2 \) space. There is another incarnation of Hilbert space provided by bras and kets; when the basis is countable this is the square-summable sequence space, usually written \( \ell^2 \).
These two kinds of Hilbert spaces can roughly be thought of as choosing what sort of basis you want. If you choose a continuous basis you arrive at an \( L^2 \) space; generally speaking, when we use bras and kets we are working in a discrete basis. This connection shall be explored in greater detail once Representation Theory is discussed.
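A small numerical illustration of the two incarnations (my own sketch; the grid and test functions are arbitrary choices): the \( L^2 \) inner product \( \int f^{*} g \, dx \), once discretized on a grid, is literally a weighted dot product, and both incarnations obey the same conjugate-symmetry axiom.

```python
import numpy as np

# Continuous incarnation: <f, g> = integral of conj(f) * g dx.
# Discretizing the integral on a grid turns it into a weighted dot product.
x, dx = np.linspace(-np.pi, np.pi, 2000, retstep=True)
f = np.exp(1j * x)                  # arbitrary test "wave functions"
g = np.cos(x) + 1j * np.sin(2 * x)
l2_inner = np.sum(f.conj() * g) * dx

# Discrete incarnation: the ordinary dot product of column vectors.
v = np.array([1.0, 1j, -2.0])
w = np.array([0.5, 1.0, 1j])
vec_inner = np.vdot(v, w)           # np.vdot conjugates its first argument

# Both satisfy conjugate symmetry: <f, g> = conj(<g, f>).
print(np.allclose(l2_inner, np.conj(np.sum(g.conj() * f) * dx)))  # True
print(np.allclose(vec_inner, np.conj(np.vdot(w, v))))             # True
```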

Notation: Dirac introduced \( | v \rangle \) and \( \langle w | \) to represent the vectors in our Hilbert spaces. For a finite dimensional Hilbert space one can think of these as column and row vectors, although technically \( | v \rangle \) is an element of a vector space and \( \langle w | \) an element of the dual vector space (a space of linear functionals). Both of these vector spaces in our case have the same dimension, although in general they need not.
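A quick finite dimensional sketch (again my own illustration, with arbitrary example vectors): a ket as a column vector, its bra as the conjugate transpose, the inner product \( \langle w | v \rangle \) as a matrix product, and the outer product \( | v \rangle \langle v | \) recovering the density matrix of the previous post.

```python
import numpy as np

# Kets as column vectors in C^2; bras as their conjugate transposes.
v = np.array([[1.0], [1j]]) / np.sqrt(2)   # |v>
w = np.array([[1.0], [0.0]])               # |w>
bra_w = w.conj().T                          # <w|

inner = (bra_w @ v)[0, 0]   # <w|v>: a complex number
outer = v @ v.conj().T      # |v><v|: density matrix of a pure state

print("inner product <w|v> =", inner)
print("rho = |v><v| =\n", outer)
print("Tr(rho) =", np.trace(outer).real)   # normalization: equals 1
```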

A stream of posts coming

Over the past year since my last post, a lot of things in a number of different fields have crystallized in my mind: Group Theory, Lie Algebras, Representation Theory, General Relativity, Thermal Physics and, more recently, Quantum many body physics. These might be trivial insights to most but they are something for me. There are two obvious personal projects:

1. A big project I would like to finish is a non-trivial document on Differential Geometry and General Relativity. I already have a huge document I prepared, but the task of writing everything out in LaTeX scares me.

2. Lie Algebras and Group Theory: there are wonderful similarities between Lie Algebras and Group Theory, and fleshing them out explicitly is something I want to do.

 My mental state is not stable, with wild swings between periods of huge mental activity, euphoria and intellectual vitality, and other times when it seems my brain is sucked into a black hole, accompanied by physical lethargy and utter despair. The goal for this coming year is consistency, although I am in no position to make this promise to myself.