It is amazing that solving ordinary differential equations with Laplace transforms is taught to physics students so rarely, and yet these beasts reduce the problem to simple algebra, trivializing what could otherwise be a nasty problem. Let us begin with the tools we will need, namely the action of the Laplace transform on derivatives. Define $$ \int_0^\infty e^{-st} x(t)\, dt = x(s) = \mathbf{L[x(t)]}$$
Integrating by parts then gives:
\begin{eqnarray}
\mathbf{L[\dot{x}]} &=& [x(t)e^{-st}]_0^{\infty} + \int_0^{\infty}se^{-st}x(t)\, dt = -x(0) + sx(s) \hspace{20mm} (1)\\
\mathbf{L[\ddot{x}]} &=& [\dot{x}e^{-st}]_0^{\infty} + \int_0^{\infty}se^{-st}\dot{x}\, dt \\
\mathbf{L[\ddot{x}]} &=& -\dot{x}(0) + [sx(t)e^{-st}]_0^\infty + \int_0^{\infty}s^2e^{-st}x(t)\, dt\\
\mathbf{L[\ddot{x}]} &=& s^2x(s) - sx(0) - \dot{x}(0) \hspace{20mm} (2)
\end{eqnarray}
I shall now refer to \(x(0)\) and \(y(0)\) as \(x_o\) and \(y_o\) respectively.
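If you want to sanity-check (1) and (2), here is a minimal SymPy sketch; the test signal \(e^{-t}\cos 2t\) is an arbitrary choice of mine, not anything from the problem:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Arbitrary well-behaved test signal, used only to check the identities.
x = sp.exp(-t) * sp.cos(2 * t)

X = sp.laplace_transform(x, t, s, noconds=True)                  # x(s)
L_xdot = sp.laplace_transform(sp.diff(x, t), t, s, noconds=True)
L_xddot = sp.laplace_transform(sp.diff(x, t, 2), t, s, noconds=True)

# (1): L[x'] = s x(s) - x(0)
print(sp.simplify(L_xdot - (s * X - x.subs(t, 0))))              # expect 0
# (2): L[x''] = s^2 x(s) - s x(0) - x'(0)
print(sp.simplify(L_xddot - (s**2 * X - s * x.subs(t, 0)
                             - sp.diff(x, t).subs(t, 0))))       # expect 0
```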
There is one more powerful result, relating Laplace transforms to convolutions, that would not hurt to prove. Remember, we are going to take the Laplace transform of our differential equations, and at the end we will need to take the inverse Laplace transform; the result we are about to prove will save us from doing nasty partial fractions later. Let \(f \star g\) represent the integral \(\int_0^t f(\tau)g(t-\tau)\, d\tau\), and let \(\bar{f}\) and \(\bar{g}\) denote the Laplace transforms of \(f\) and \(g\). Acting with the Laplace transform on the convolution gives \(\mathbf{L[f\star g]} = \int_0^{\infty}e^{-st}\int_0^t f(\tau)g(t-\tau)\, d\tau\, dt\).
The above integral runs vertically in the \((\tau,t)\) plane; we can instead integrate horizontally by swapping the order of integration:
$$
\mathbf{L[f\star g]} = \int_0^{\infty}\int_{\tau}^{\infty} e^{-st}f(\tau)g(t-\tau)\, dt\, d\tau\\
\mathbf{L[f\star g]} = \int_0^{\infty}f(\tau)\left[\int_{\tau}^{\infty} e^{-st}g(t-\tau)\, dt \right]d\tau
$$
Changing variables to \( u= t-\tau\) and doing the integral in the brackets in the second equation gives us the following
\begin{eqnarray}
\mathbf{L[f \star g]} &=& \int_0^{\infty}f(\tau)e^{-s\tau}\bar{g}(s)\, d\tau \\
\mathbf{L[f \star g]} &=& \bar{g}(s)\bar{f}(s)\\
\mathbf{L[f \star g]} &=& \bar{f}(s)\bar{g}(s) \hspace{30mm} (3)
\end{eqnarray}
From (3) we arrive at the powerful result, $$f(t) \star g(t) = \mathbf{L^{-1}[\bar{f}\bar{g}]} \hspace{30mm} (4) $$
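Here is a quick SymPy check of (3) and (4) with a concrete pair of functions (the same cosine and sine that will show up later); the particular choice of \(f\) and \(g\) is mine, and the printed difference should simplify to zero:

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)
w = sp.symbols('omega', positive=True)

f = sp.cos(sp.sqrt(3) * w * tau)            # f(tau)
g = sp.sin(w * (t - tau))                   # g(t - tau)

conv = sp.integrate(f * g, (tau, 0, t))     # (f * g)(t), the convolution
lhs = sp.laplace_transform(conv, t, s, noconds=True)

fbar = sp.laplace_transform(sp.cos(sp.sqrt(3) * w * t), t, s, noconds=True)
gbar = sp.laplace_transform(sp.sin(w * t), t, s, noconds=True)

print(sp.simplify(lhs - fbar * gbar))       # expect 0, i.e. L[f*g] = fbar*gbar
```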
With (1), (2) and (4) in our bag, let's look back at the coupled ordinary differential equations we originally derived from Newton's second law. They are reproduced here:
\begin{eqnarray}
m_1\frac{d^2x_1}{dt^2}= -k_1x_1 - k_{12}x_1 + k_{12} x_2 \\
m_2\frac{d^2x_2}{dt^2}= -k_2x_2 - k_{12}x_2 + k_{12} x_1
\end{eqnarray}
Let \(x_1 = x\) and \(x_2 = y\) for ease of writing and clarity. Rewriting both equations, we get the following:
\begin{eqnarray}
m_1 \ddot{x}= -k_1x - k_{12}(x-y)\\
m_2\ddot{y} = -k_{2}y + k_{12}(x-y)
\end{eqnarray} Next, apply the Laplace transform and assume the oscillators begin from rest to get:
\begin{eqnarray}
(m_1s^2 + k_1 + k_{12})x - k_{12}y = m_1sx_o\\
-k_{12}x + (m_2s^2 + k_2 + k_{12})y = m_2sy_o
\end{eqnarray}
Hence we get the following matrix equation:
\begin{align}
\begin{pmatrix}
m_1s^2 + k_1 + k_{12} & -k_{12}\\
-k_{12} & m_2s^2 + k_2 + k_{12}
\end{pmatrix}\begin{pmatrix}
x\\ y
\end{pmatrix} =
\begin{pmatrix}
m_1sx_o\\
m_2sy_o
\end{pmatrix}
\end{align}
Notice that if all the \(m\)'s are the same and all the \(k\)'s are the same, then the matrix on the left becomes symmetric under exchanging the two oscillators. Thus one of the reasons symmetric matrices show up is that there is an exchange invariance in the problem: one can't tell which is the right or the left oscillator. Pick one as your left oscillator; the moment you turn around, I shall turn the whole system around, and now your left oscillator will be on your right and you will not know the difference. What I shall do next is solve for \(x\) using Cramer's rule (it turns out that Cramer's rule can be especially useful when the coefficients are functions). So \(x(s) = \)
\begin{equation} \frac{m_1s x_o[m_2s^2+k_2+ k_{12}] - k_{12}m_2sy_o}{(m_1s^2+k_1+k_{12})(m_2s^2+k_2+k_{12}) - k_{12}^2}
\end{equation}
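If you would rather let the computer grind through the determinants, here is a SymPy sketch of the same Cramer's rule step (the symbol names are mine):

```python
import sympy as sp

s = sp.symbols('s')
m1, m2, k1, k2, k12, xo, yo = sp.symbols('m_1 m_2 k_1 k_2 k_12 x_o y_o',
                                         positive=True)

A = sp.Matrix([[m1 * s**2 + k1 + k12, -k12],
               [-k12,                 m2 * s**2 + k2 + k12]])
b = sp.Matrix([m1 * s * xo, m2 * s * yo])

# Cramer's rule: replace the first column of A by b to get x(s).
Ax = A.copy()
Ax[:, 0] = b
x_s = Ax.det() / A.det()
print(sp.simplify(x_s))
```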
We shall now make simplifying assumptions, namely that \(m_1 = m_2 = m\), \(k_1 = k_{12} = k_2 = k\), and lastly \(\frac{k}{m} = \omega^2\). With that in mind, \(x(s)\) becomes
\begin{eqnarray}
x(s) &=& \frac{m^2s^3x_o - kmsy_o - 2msk}{(ms^2+3k)(ms^2+k)}\\
&=& \frac{3sx_o}{2(s^2+ 3\omega^2)}-\frac{sx_o}{2(s^2+\omega^2)} - \frac{\omega^2sy_o}{(s^2+3\omega^2)(s^2+\omega^2)} - \frac{2s\omega}{(s^2+ 3\omega^2)(s^2+ \omega^2)}
\end{eqnarray}
We are now in a position to use result (4) derived earlier. Remember, it says that if you recognize a product of Laplace transforms \(\bar{f}\bar{g}\), then the integral \(\int_0^t f(\tau)g(t-\tau)\, d\tau\) gives you \(\mathbf{L^{-1}[\bar{f}\bar{g}]}\). Since I recognize sines and cosines in the equation, I can write the following convolutions:
\begin{equation}
x(t) = x_o\left(\frac{3 \cos(\sqrt{3}\omega t)}{2} - \frac{\cos(\omega t)}{2}\right) - \omega y_o \left(\cos(\sqrt{3}\omega t) \star \sin(\omega t) \right) - 2 \left(\cos(\sqrt{3}\omega t) \star \sin(\omega t) \right)
\end{equation}
Then, evaluating the actual convolution integrals with Wolfram Alpha, one gets:
\begin{equation}
x(t) = x_o\left(\frac{3 \cos(\sqrt{3}\omega t)}{2} - \frac{\cos(\omega t)}{2}\right) - \frac{y_o\left(\cos(\omega t) - \cos(\sqrt{3}\omega t)\right)}{2} - \frac{\cos(\omega t) - \cos(\sqrt{3}\omega t)}{\omega}
\end{equation}
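If you prefer not to lean on Wolfram Alpha, the single convolution that appears twice above can be done in SymPy; this is only a sketch, and the answer may come back in an equivalent trigonometric form:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
w = sp.symbols('omega', positive=True)

# cos(sqrt(3) w t) convolved with sin(w t), as used in the y_o and last terms.
conv = sp.integrate(sp.cos(sp.sqrt(3) * w * tau) * sp.sin(w * (t - tau)),
                    (tau, 0, t))
print(sp.simplify(conv))  # (cos(w t) - cos(sqrt(3) w t)) / (2 w), up to rewriting
```

Multiplying this result by \(-\omega y_o\) and by \(-2\) reproduces the last two terms of \(x(t)\) above.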
Let's assume that both oscillators began from equilibrium, so \(x_o\) and \(y_o\) are zero. To get \(y(t)\) it is easier to go back to the first differential equation and solve for \(y\), getting \(y = \frac{\ddot{x}}{\omega^2} + 2x\); plugging in the appropriate expressions for \(\ddot{x}\) and \(x\), one gets the following:
\begin{equation}
y(t) = \frac{-\cos(\omega t) - \cos(\sqrt{3}t \omega)}{\omega}
\end{equation}
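As a final sanity check, one can plug \(x(t)\) and \(y(t)\) back into the coupled equations numerically; the value of \(\omega\) below is an arbitrary test value of mine:

```python
import numpy as np

w = 1.3                                    # arbitrary test value for omega
ts = np.linspace(0.0, 10.0, 2001)

x = (np.cos(np.sqrt(3) * w * ts) - np.cos(w * ts)) / w
y = -(np.cos(w * ts) + np.cos(np.sqrt(3) * w * ts)) / w

# Second derivatives taken from the closed forms above.
xdd = w * np.cos(w * ts) - 3 * w * np.cos(np.sqrt(3) * w * ts)
ydd = w * np.cos(w * ts) + 3 * w * np.cos(np.sqrt(3) * w * ts)

# With m1 = m2 = m and k1 = k2 = k12 = k (so k/m = w^2), the equations read
#   x'' = -2 w^2 x + w^2 y   and   y'' = -2 w^2 y + w^2 x.
print(np.max(np.abs(xdd - (-2 * w**2 * x + w**2 * y))))   # ~ machine precision
print(np.max(np.abs(ydd - (-2 * w**2 * y + w**2 * x))))   # ~ machine precision
```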