

A.15 Quantum Field Theory in a Nanoshell

The classical quantum theory discussed in this book runs into major difficulties with truly relativistic effects. In particular, relativity allows particles to be created or destroyed. For example, a very energetic photon near a heavy nucleus might create an electron and a positron. Einstein's $E = mc^2$ implies that this is possible because mass is equivalent to energy. The photon energy is converted into the electron and positron masses. Similarly, an electron and positron can annihilate each other, releasing their energy as photons. The quantum formalism in this book cannot deal with particles that appear out of nothing or disappear. A modified formulation called quantum field theory is needed.

And quantum field theory is not just for esoteric conditions like electron-positron pair creation. The photons of light are routinely created and destroyed under normal conditions. Still more basic to an engineer, so are their equivalents in solids, the phonons of crystal vibrations. Then there is the band theory of semiconductors: electrons are created within the conduction band, if they pick up enough energy, or annihilated when they lose it. And the same happens for the real-life equivalent of positrons, holes in the valence band.

Such phenomena are routinely described within the framework of quantum field theory. Almost unavoidably you will run into it in literature [18,29]. Electron-phonon interactions are particularly important for engineering applications, leading to electrical resistance (along with crystal defects and impurities), and to the combination of electrons into Cooper pairs that act as bosons and so give rise to superconductivity.

This addendum explains some of the basic ideas of quantum field theory. It should allow you to recognize it when you see it. Addendum {A.23} uses the ideas to explain the quantization of the electromagnetic field. That then allows the quantum description of spontaneous emission of radiation by excited atoms or nuclei in {A.24}. Here a photon is created.

Unfortunately a full discussion of quantum field theory is far outside the scope of this book. Especially the fully relativistic theory is very involved. To explain quantum field theory in a nutshell takes Zee 500 pages, [53]. Tong [17] writes: “This is [a] charming book, where emphasis is placed on physical understanding and the author isn't afraid to hide the ugly truth when necessary. It contains many gems.” But first you need to learn linear algebra and, at the minimum, read all of chapter 1 on relativity, chapter 1.2.5 and {A.4} on index notation, chapter 12.12 and {A.36} on the Dirac equation, addendum {A.14} on the Klein-Gordon equation, {A.1} on Lagrangian mechanics, {A.12} on the Heisenberg interpretation, and pick up enough group theory. Learning something about the path integral approach to quantum mechanics, like from [22], cannot hurt either. In the absence of 1,000 pages and a willing author, the following discussion will truly be quantum field theory in a nanoshell.

If you want to get a start on a more advanced treatment of quantum field theory of elementary particles at a relatively low level of mathematics, Griffiths [24] is recommended.

And if you are just interested in relativistic quantum mechanics from an intellectual point of view, there is good news. Feynman gave a set of lectures on “quantum electrodynamics” for a general audience around 1983, and the text is readily available at low cost. Without doubt, this is the best exposition of the fundamentals of quantum mechanics that has ever been written, or ever will be. The subject is reduced to its bare abstract axioms, and no more can be said. If the human race is still around a millennium or so from now, artificial intelligence may take care of the needed details of quantum mechanics. But those who need or want to understand what it means will still reach for Feynman. The 2006 edition, [19], has a foreword by Zee that gives a few hints on how to relate the basic concepts in the discussion to more conventional mathematics like the complex numbers found in this book. It will not be much help applying quantum field theory to engineering problems, however.


A.15.1 Occupation numbers

The first concept that must be understood in quantum field theory is occupation numbers. They will be the new way to represent quantum wave functions.

Recall first the form of wave functions in classical quantum mechanics, as normally covered in this book. Assume a system of independent, or maybe weakly interacting particles. The energy eigenfunctions of such a system can be written in terms of whatever are the single-particle energy eigenfunctions

\begin{displaymath}
\pp1/{\skew0\vec r}//z/,\pp2/{\skew0\vec r}//z/,\pp3/{\skew0\vec r}//z/,\ldots
\end{displaymath}

For each single-particle eigenfunction, ${\skew0\vec r}$ indicates the position of the particle and $S_z$ its spin angular momentum in the chosen $z$-direction.

Now consider a system of, say, 36 particles. A completely arbitrary example of an energy eigenfunction for such a system would be:

\begin{displaymath}
\begin{array}{l}
\psi^{\rm S}({\skew0\vec r}_1,S_{z1},{\skew0\vec r}_2,S_{z2},\ldots,{\skew0\vec r}_{36},S_{z36}) = \\
\qquad
\pp24/{\skew0\vec r}_1//z1/ \pp4/{\skew0\vec r}_2//z2/ \ldots
\pp6/{\skew0\vec r}_5//z5/ \ldots \pp54/{\skew0\vec r}_{36}//z36/
\end{array} %
\end{displaymath} (A.46)

This system eigenfunction has particle 1 in the single-particle state $\pp24////$, particle 2 in $\pp4////$, etcetera. The system energy is the sum of the separate energies of the 36 single-particle states involved:

\begin{displaymath}
{\vphantom' E}^{\rm S}= {\vphantom' E}^{\rm p}_{\psi_{24}}
+ {\vphantom' E}^{\rm p}_{\psi_4} + \ldots
+ {\vphantom' E}^{\rm p}_{\psi_6} + \ldots + {\vphantom' E}^{\rm p}_{\psi_{54}}
\end{displaymath}

Figure A.2: Graphical depiction of an arbitrary system energy eigenfunction for 36 distinguishable particles.

Instead of writing out the example eigenfunction mathematically as done in (A.46) above, it can be graphically depicted as in figure A.2. In the figure the single-particle states are shown as boxes, and the particles that are in those particular single-particle states are shown inside the boxes. In the example, particle 1 is inside the $\pp24////$ box, particle 2 is inside the $\pp4////$ one, etcetera. It is just the reverse of the mathematical expression (A.46): the mathematical expression shows for each particle in turn what the single-particle eigenstate of that particle is. The figure shows for each single-particle eigenstate in turn what particles are in that eigenstate.

Figure A.3: Graphical depiction of an arbitrary system energy eigenfunction for 36 identical bosons.

However, if the 36 particles are identical bosons (like photons or phonons), the example mathematical eigenfunction (A.46) and the corresponding depiction in figure A.2 are unacceptable. As chapter 5.7 explained, wave functions for bosons must be unchanged if two particles are swapped. But if, for example, particles 2 and 5 in eigenfunction (A.46) above are exchanged, it puts 2 in state 6 and 5 in state 4:

\begin{displaymath}
\begin{array}{l}
\psi^{\rm S}_{2\leftrightarrow5}({\skew0\vec r}_1,S_{z1},{\skew0\vec r}_2,S_{z2},\ldots,{\skew0\vec r}_{36},S_{z36}) = \\
\qquad
\pp24/{\skew0\vec r}_1//z1/ \pp6/{\skew0\vec r}_2//z2/ \ldots
\pp4/{\skew0\vec r}_5//z5/ \ldots \pp54/{\skew0\vec r}_{36}//z36/
\end{array}\end{displaymath}

That is simply a different energy eigenfunction. So neither (A.46) nor this swapped form is acceptable by itself. To fix up the problem, eigenfunctions must be combined. To get a valid energy eigenfunction for bosons out of (A.46), all the different eigenfunctions that can be formed by swapping the 36 particles must be summed together. The normalized sum gives the correct eigenfunction for bosons. But note that there is a humongous number of different eigenfunctions that can be obtained by swapping the particles. Over $10^{37}$, if you care to count them. As a result, there is no way that the gigantic expression for the resulting 36-boson energy eigenfunction could ever be written out here.
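That count is easy to check numerically. The following Python sketch counts the distinct product eigenfunctions as the multinomial coefficient $36!/(n_1!\,n_2!\cdots)$, using the occupation numbers of the example boson eigenfunction listed later in this addendum; since the figure itself shows more states than the list, the tail of the list here is a hypothetical completion to 36 particles.

```python
from math import factorial

# Occupation numbers of the example boson eigenfunction.  The leading
# entries are those listed later in this addendum; the final five 1's are
# a made-up completion that brings the total to 36 particles.
occupations = [3, 4, 1, 3, 2, 2, 2, 1, 1, 0, 0, 2, 1, 2,
               0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1,
               1, 1, 1, 1, 1]  # hypothetical tail
total = sum(occupations)  # 36 particles

# Number of distinct product eigenfunctions obtainable by swapping the
# particles: the multinomial coefficient 36! / (n_1! n_2! ...).
count = factorial(total)
for n in occupations:
    count //= factorial(n)

print(count)  # exceeds 10**37, as the text claims
```

The denominator divides out the swaps that merely permute particles within the same single-particle state, which produce the identical product eigenfunction.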

It is much easier in terms of the graphical depiction of figure A.2: graphically, all these countless system eigenfunctions differ only with respect to the numbers in the particles. And since in the final eigenfunction every particle appears in every occupied single-particle state in exactly the same way, the numbers no longer add distinguishing information and can be left out. That makes the graphical depiction of the example eigenfunction for a system of identical bosons as in figure A.3. It illustrates why identical particles are commonly called “indistinguishable.”

Figure A.4: Graphical depiction of an arbitrary system energy eigenfunction for 33 identical fermions.

For a system of identical fermions (like electrons or quarks), the eigenfunctions must change sign if two particles are swapped. As chapter 5.7 showed, that is very restrictive. It means that you cannot create an eigenfunction for a system of 36 fermions from the example eigenfunction (A.46) and the swapped versions of it. Various single-particle eigenfunctions appear multiple times in (A.46), like $\pp4////$, which is occupied by particles 2, 31, and 33. That cannot happen for fermions. A system eigenfunction for 36 identical fermions requires 36 different single-particle eigenfunctions.

It is the same graphically. The example figure A.3 for bosons is impossible for a system of identical fermions; there cannot be more than one fermion in a single-particle state. A depiction of an arbitrary energy eigenfunction that is acceptable for a system of 33 identical fermions is in figure A.4.

As explained in chapter 5.7, a neat way of writing down the system energy eigenfunction of the pictured example is to form a Slater determinant from the occupied states

\begin{displaymath}
\pp1////,
\pp2////,
\pp3////,
\ldots,
\pp{43}////,
\pp{45}////,
\pp{56}////.
\end{displaymath}

It is good to meet old friends again, isn’t it?

Now consider what happens in relativistic quantum mechanics. For example, suppose that an electron and positron annihilate each other. What are you going to do, leave holes in the parameter list of your wave function, where the electron and positron used to be? Like

\begin{displaymath}
\Psi({\skew0\vec r}_1,S_{z1},\mbox{[gone]},{\skew0\vec r}_3,S_{z3},\mbox{[gone]},
{\skew0\vec r}_5,S_{z5},\ldots,{\skew0\vec r}_{36},S_{z36};t)
\end{displaymath}

say? Or worse, what if a photon with very high energy hits a heavy nucleus and creates an electron-positron pair in the collision from scratch? Are you going to scribble a set of additional parameters for the new particles into your parameter list? Scribble another row and column into the Slater determinant for your electrons? That is voodoo mathematics. The classical way of writing wave functions fails.

And if positrons are too weird for you, consider photons, the particles of electromagnetic radiation, like ordinary light. As chapters 6.8 and 7.8 showed, the electrons in hot surfaces create and destroy photons readily when the thermal equilibrium shifts. Moving at the speed of light, with zero rest mass, photons are as relativistic as they come. Good luck scribbling trillions of new states for the photons into your wave function when your black box heats up. Then there are solids; as chapter 11.14.6 shows, the phonons of crystal vibrational waves are the equivalent of the photons of electromagnetic waves.

One of the key insights of quantum field theory is to do away with classical mathematical forms of the wave function such as (A.46) and the Slater determinants. Instead, the graphical depictions, such as the examples in figures A.3 and A.4, are captured in terms of mathematics. How do you do that? By listing how many particles are in each type of single-particle state. In other words, you do it by listing the single-state “occupation numbers.”

Consider the example bosonic eigenfunction of figure A.3. The occupation numbers for that state would be

\begin{displaymath}
\left\vert 3,4, 1,3,2,2,2, 1,1,0,0,2,1,2, 0,1,1,1,1,1,0,0,0,
1,0,0,0,0,0,0,1,\ldots\right\rangle
\end{displaymath}

indicating that there are 3 bosons in single-particle state $\pp1////$, 4 in $\pp2////$, 1 in $\pp3////$, etcetera. Knowing those numbers is completely equivalent to knowing the classical system energy eigenfunction; it could be reconstructed from them. Similarly, the occupation numbers for the example fermionic eigenfunction of figure A.4 would be

\begin{displaymath}
\left\vert 1,1, 1,1,1,1,1, 1,1,1,1,1,1,1, 0,1,0,1,1,1,1,1,1,
1,0,1,0,0,1,1,1, \ldots\right\rangle
\end{displaymath}

Such sets of occupation numbers are called “Fock basis states.” Each describes one system energy eigenfunction.

General wave functions can be described by taking linear combinations of these basis states. The most general Fock wave function for a classical set of exactly $I$ particles is a linear combination of all the basis states whose occupation numbers add up to $I$. But Fock states also make it possible to describe systems like photons in a box with varying numbers of particles. Then the most general wave function is a linear combination of all the Fock basis states, regardless of the total number of particles. The set of all possible wave functions that can be formed from linear combinations of the Fock basis states is called the Fock space.
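To make the bookkeeping concrete, here is a minimal Python sketch of Fock basis states and wave functions. The representation (tuples of occupation numbers, a dictionary of coefficients) and all the names are illustrative choices for this addendum, not anything standard.

```python
# A Fock basis state is a tuple of occupation numbers, one entry per
# single-particle state (finite-size, for illustration only).
def is_valid(ket, fermions=False):
    """Occupation numbers are nonnegative; at most 1 each for fermions."""
    return all(n >= 0 and (n <= 1 or not fermions) for n in ket)

def inner(bra, ket):
    """Fock basis kets are orthonormal: 1 if all numbers match, else 0."""
    return 1 if bra == ket else 0

def inner_wf(phi, psi):
    """Inner product of two general wave functions (dicts of coefficients)."""
    return sum(c.conjugate() * psi.get(k, 0) for k, c in phi.items())

# A general wave function: a linear combination of basis states, stored
# as {basis ket: complex coefficient}.  The occupation numbers of the two
# kets add up to different totals, which is fine in Fock space.
psi = {
    (0, 1, 0): 0.6,   # one particle, in single-particle state 2
    (1, 0, 1): 0.8j,  # two particles, in states 1 and 3
}

assert is_valid((3, 4, 1))                    # fine for bosons
assert not is_valid((3, 4, 1), fermions=True)  # too many fermions per state
assert abs(inner_wf(psi, psi) - 1.0) < 1e-12   # normalized: 0.36 + 0.64
```

The last assertion illustrates that orthonormality of the basis kets reduces the norm of a general wave function to the sum of the squared coefficient magnitudes.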

How about the case of distinguishable particles as in figure A.2? In that case the numbers inside the particles also make a difference, so where do they go? The answer of quantum field theory is to deny the existence of generic particles that take numbers. There are no generic particles in quantum field theory. There is a field of electrons, there is a field of protons (or quarks, actually), there is a field of photons, etcetera, and each of these fields is granted its own set of occupation numbers. There is no way to describe a generic numbered particle. For example, if there is an electron in a single-particle state, in quantum field theory it means that the electron field has a particle in that energy state. The particle has no number.

Some physicists feel that this is a strong point in favor of believing that quantum field theory is the way nature really works. In the classical formulation of quantum mechanics, the (anti)symmetrization requirements under particle exchange are an additional ingredient, added to explain the data. In quantum field theory, it comes naturally: particles that are distinguishable simply cannot be described by the formalism. Still, our convenience in describing it is an uncertain motivator for nature.

The successful analysis of the blackbody spectrum in chapter 6.8 already testified to the usefulness of the Fock space. If you check the derivations in chapter 11 leading to it, they were all conducted based on occupation numbers. A classical wave function for the system of photons was never written down; that simply cannot be done.

Figure A.5: Example wave functions for a system with just one type of single-particle state. Left: identical bosons; right: identical fermions.

There is a lot more involved in quantum field theory than just the blackbody spectrum, of course. To explain some of the basic ideas, simple examples can be helpful. The simplest example that can be studied involves just one single-particle state, say just a single-particle ground state. The graphical depiction of an arbitrary example wave function is then as in figure A.5. There is just one single-particle box. In nonrelativistic quantum mechanics, this would be a completely trivial quantum system. In the case of $I$ identical bosons, shown to the left in the figure, all of them would have to go into the only state there is. In the case of identical fermions, shown to the right, there can only be one fermion, and it has to go into the only state there is.

But when particles can be created or destroyed, things get more interesting. When there is no given number of particles, there can be any number of identical bosons within that single-particle state. That allows $\vert\rangle$ (no particles), $\vert 1\rangle$ (1 particle), $\vert 2\rangle$ (2 particles), etcetera. And the general wave function can be a linear combination of those possibilities. It is the same for identical fermions, except that there are now only the states $\vert\rangle$ (no particles) and $\vert 1\rangle$ (1 particle). The wave function can still be a combination of these two possibilities.

A relativistic system with just one type of single-particle state does seem very artificial. It raises the question how esoteric such an example is. But there are in fact two very well established classical systems that behave just like this:

1.
The one-dimensional harmonic oscillator of chapter 4.1 has energy levels that happen to be exactly equally spaced. It can pick up an energy above the ground state that is any whole multiple of $\hbar\omega$, where $\omega$ is its natural frequency. If you are willing to accept the particles to be quanta of energy of size $\hbar\omega$, then it provides a model of a bosonic system with just one single-particle state. The ground state, $h_0$ in the notations of chapter 4.1, is the state ${\left\vert\right\rangle}$. The first excited state $h_1$ is ${\left\vert 1\right\rangle}$; it has one additional energy quantum $\hbar\omega$. The second excited state $h_2$ is ${\left\vert 2\right\rangle}$, with two quanta more than the ground state, etcetera.

Recall from chapter 4.1 that there is an additional ground state energy of half a $\hbar\omega$ quantum. In a quantum field theory, this additional energy that exists even when there are no particles is called the vacuum energy.

The general wave function of a harmonic oscillator is a linear combination of the energy states. In terms of chapter 4.1, that expresses an uncertainty in energy. In the present context, it expresses an uncertainty in the number of these energy particles!

2.
A single electron has exactly two spin states. It can pick up exactly one unit $\hbar$ of angular momentum in the $z$-direction above the spin-down state. If you accept the particles to be single quanta of $z$-angular momentum of size $\hbar$, then it provides an example of a fermionic system with just one single-particle state. There can be either 0 or 1 quantum $\hbar$ of angular momentum in that single-particle state. The general wave function is a linear combination of the state with one angular momentum particle and the state with no angular momentum particle.

This example is less intuitive, since normally when you talk about a particle, you talk about an amount of energy, like in Einstein's mass-energy relation. If it bothers you, think of the electron as being confined inside a magnetic field; then the spin-up state is associated with a corresponding increase in energy.

While the above two examples of relativistic systems with only one single-particle state are obviously made up, they do provide a very valuable sanity check on any relativistic analysis.

Not only that, the two examples are also very useful to understand the difference between a zero wave function and the so-called “vacuum state”

\begin{displaymath}
\fbox{$\displaystyle
\vert\vec 0\,\rangle \equiv \vert 0,0,0,\ldots\rangle
$} %
\end{displaymath} (A.47)

in which all occupation numbers are zero. The vacuum state is a normalized, nonzero, wave function just like the other possible sets of occupation numbers. It describes with certainty that there are no particles. You can see it from the two examples above. For the harmonic oscillator, the state $\vert\rangle$ is the ground state $h_0$ of the oscillator. For the electron-spin example, it is the spin-down state of the electron. These are completely normal eigenstates that the system can be in. They are not zero wave functions, which would be unable to describe a system.

Fock basis kets are taken to be orthonormal; an inner product between kets is zero unless all occupation numbers are equal. If they are all equal, the inner product is 1. In short:

\begin{displaymath}
\fbox{$\displaystyle
{\left\langle\ldots,{\underline i}_3,{\underline i}_2,{\underline i}_1\right\vert}
{\left\vert\ldots,i_3,i_2,i_1\right\rangle}
= \left\{
\begin{array}{l}
1 \mbox{ if } {\underline i}_1=i_1,\ {\underline i}_2=i_2,\ {\underline i}_3=i_3,\ \ldots \\
\vphantom{\strut_1^1} 0 \mbox{ otherwise}
\end{array} \right.
$} %
\end{displaymath} (A.48)

If the two kets have the same total number of particles, this orthonormality is required because the corresponding classical wave functions are orthonormal. Inner products between classical eigenfunctions that have even a single particle in a different state are zero. That is easily verified if the wave functions are simple products of single-particle ones. But then it also holds for sums of such eigenfunctions, as you have for bosons and fermions.

If the two kets have different total numbers of particles, the inner product between the classical wave functions does not exist. But basis kets are still orthonormal. To see that, take the two simple examples given above. For the harmonic oscillator example, different occupation numbers for the particles correspond to different energy eigenfunctions of the actual harmonic oscillator. These are orthonormal. It is similar for the spin example. The state of 0 particles is the spin-down state of the electron. The state of 1 particle is the spin-up state. These spin states are orthonormal states of the actual electron.


A.15.2 Creation and annihilation operators

The key to relativistic quantum mechanics is that particles can be created and annihilated. So it may not be surprising that it is very helpful to define operators that create and annihilate particles.

To keep the notations relatively simple, it will initially be assumed that there is just one type of single-particle state. Graphically that means that there is just one single-particle state box, like in figure A.5. However, there can be an arbitrary number of particles in that box.

The desired actions of the creation and annihilation operators are sketched in figure A.6. An annihilation operator $\widehat a$ turns a state ${\left\vert i\right\rangle}$ with $i$ particles into a state ${\left\vert i{-}1\right\rangle}$ with $i-1$ particles. A creation operator $\widehat a^\dagger $ turns a state ${\left\vert i\right\rangle}$ with $i$ particles into a state ${\left\vert i{+}1\right\rangle}$ with $i+1$ particles.

Figure A.6: Creation and annihilation operators for a system with just one type of single-particle state. Left: identical bosons; right: identical fermions.

The operators are therefore defined by the relations

\begin{displaymath}
\widehat a{\left\vert i\right\rangle} = \alpha_i {\left\vert i{-}1\right\rangle}
\qquad
\widehat a^\dagger {\left\vert i\right\rangle} = \alpha^\dagger_i {\left\vert i{+}1\right\rangle}
\qquad
\widehat a{\left\vert\right\rangle} = 0
\qquad
\widehat a^\dagger {\left\vert 1\right\rangle} = 0 \mbox{ for fermions} %
\end{displaymath} (A.49)

Here the $\alpha_i$ and $\alpha^\dagger_i$ are numerical constants still to be chosen.

Note that the above relations only specify what the operators $\widehat a$ and $\widehat a^\dagger $ do to basis kets. But that is enough information to define them. To figure out what these operators do to linear combinations of basis kets, just apply them to each term in the combination separately.

Mathematically you can always define whatever operators you want. But you must hope that they will turn out to be operators that are physically helpful. To help achieve that, you want to choose the numerical constants $\alpha_i$ and $\alpha^\dagger_i$ appropriately. Consider what happens if the operators are applied in sequence:

\begin{displaymath}
\widehat a^\dagger \widehat a{\left\vert i\right\rangle} =
\widehat a^\dagger \alpha_i {\left\vert i{-}1\right\rangle} =
\alpha^\dagger_{i-1}\alpha_i {\left\vert i\right\rangle}
\end{displaymath}

Reading from right to left, which is the order in which the operators act on the state, first $\widehat a$ destroys a particle, then $\widehat a^\dagger $ restores it again. It gives the same state back, except for the numerical factor $\alpha^\dagger_{i-1}\alpha_i$. That makes every state ${\left\vert i\right\rangle}$ an eigenvector of the operator $\widehat a^\dagger \widehat a$ with eigenvalue $\alpha^\dagger_{i-1}\alpha_i$.

If the constants $\alpha^\dagger_{i-1}$ and $\alpha_i$ are chosen to make the eigenvalue a real number, then the operator $\widehat a^\dagger \widehat a$ will be Hermitian. More specifically, if they are chosen to make the eigenvalue equal to $i$, then $\widehat a^\dagger \widehat a$ will be the “particle number operator” whose eigenvalues are the number of particles in the single-particle state. The most logical choice for the constants to achieve that is clearly

\begin{displaymath}
\alpha _i=\sqrt{i}
\qquad
\alpha^\dagger _{i-1}=\sqrt{i}
\quad\Longrightarrow\quad
\alpha^\dagger _i=\sqrt{i+1}
\end{displaymath}
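With this choice the operators can be checked numerically. The sketch below, a minimal illustration rather than anything from the text, builds $\widehat a$ and $\widehat a^\dagger $ as matrices on a truncated boson Fock space (the truncation size $N$ is an arbitrary choice needed to put the infinite ladder of states on a computer) and verifies that $\widehat a^\dagger \widehat a$ is indeed the particle number operator.

```python
import numpy as np

N = 8  # truncate the boson Fock space at N-1 particles (computer artifact)

# Annihilation operator: a|i> = sqrt(i)|i-1>, so the only nonzero matrix
# elements are <i-1| a |i> = sqrt(i), one place above the diagonal.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T  # the creation operator is the Hermitian conjugate

# a†a is the particle number operator: eigenvalues 0, 1, 2, ...
number = adag @ a
assert np.allclose(number, np.diag(np.arange(N)))

# Check <i-1| a |i> = sqrt(i) explicitly, here for i = 5.
i = 5
ket_i = np.eye(N)[:, i]       # basis ket |5>
bra = np.eye(N)[:, i - 1]     # basis bra <4|
assert np.isclose(bra @ (a @ ket_i), np.sqrt(i))
```

Because the operators are defined on basis kets and extended by linearity, matrix-vector multiplication automatically handles arbitrary linear combinations of kets as well.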

The full definition of the annihilation and creation operators can now be written in a nice symmetric way as

\begin{displaymath}
\fbox{$\displaystyle
\widehat a{\left\vert i\right\rangle} = \sqrt{i}\;{\left\vert i{-}1\right\rangle}
\qquad
\widehat a^\dagger {\left\vert i{-}1\right\rangle} = \sqrt{i}\;{\left\vert i\right\rangle}
\qquad
\widehat a^\dagger {\left\vert 1\right\rangle} = 0\mbox{ for fermions}
$} %
\end{displaymath} (A.50)

In words, the annihilation operator $\widehat a$ kills off one particle and adds a factor $\sqrt{i}$. The operator $\widehat a^\dagger $ puts the particle back in and adds another factor $\sqrt{i}$.

These operators are particularly convenient since they are Hermitian conjugates. That means that if you take them to the other side in an inner product, they turn into each other. In particular, for inner products between basis kets,

\begin{displaymath}
\Big\langle {\left\vert{\underline i}\right\rangle} \Big\vert
\widehat a{\left\vert i\right\rangle} \Big\rangle
=
\Big\langle \widehat a^\dagger {\left\vert{\underline i}\right\rangle}
\Big\vert {\left\vert i\right\rangle} \Big\rangle
\qquad
\Big\langle {\left\vert i\right\rangle} \Big\vert
\widehat a^\dagger {\left\vert{\underline i}\right\rangle} \Big\rangle
=
\Big\langle \widehat a{\left\vert i\right\rangle}
\Big\vert {\left\vert{\underline i}\right\rangle} \Big\rangle
\end{displaymath}

Note that if such relations apply for basis kets, they also apply for all linear combinations of basis kets.

To verify that the above relations apply, recall from the previous subsection that kets are orthonormal. In the equalities above, the inner products are only nonzero if ${\underline i} = i-1$: after lowering the particle number with $\widehat a$, or raising it with $\widehat a^\dagger $, the particle numbers must be the same at both sides of the inner product. And when ${\underline i} = i-1$, according to the definitions (A.50) of $\widehat a$ and $\widehat a^\dagger $ all inner products above equal $\sqrt{i}$, so the equalities still apply.

It remains true for fermions that $\widehat a$ and $\widehat a^\dagger $ are Hermitian conjugates, even though $\widehat a^\dagger {\left\vert 1\right\rangle} = 0$ instead of $\sqrt{2}\,{\left\vert 2\right\rangle}$. The reason is that the latter would only make a difference if there was a ${\left\vert 2\right\rangle}$ state in the other side of the inner product, and such a state does not exist.

The inner products are usually written in the more esthetic form

\begin{displaymath}
{\left\langle{\underline i}\hspace{0.3pt}\right\vert}\widehat a{\left\vert i\right\rangle}
= \Big({\left\langle{\underline i}\hspace{0.3pt}\right\vert}\widehat a\Big){\left\vert i\right\rangle}
\qquad
{\left\langle i\hspace{0.3pt}\right\vert}\widehat a^\dagger {\left\vert{\underline i}\right\rangle}
= \Big({\left\langle i\hspace{0.3pt}\right\vert}\widehat a^\dagger \Big){\left\vert{\underline i}\right\rangle}
\end{displaymath}

Here it is to be understood that, say, ${\left\langle{\underline i}\hspace{0.3pt}\right\vert}\widehat a$ stands for $\widehat a^\dagger {\left\vert{\underline i}\right\rangle}$ pushed into the left hand side of an inner product, chapter 2.7.1.

You may well wonder why $\widehat a^\dagger \widehat a$ is the particle count operator; why not $\widehat a\widehat a^\dagger $? The reason is that $\widehat a\widehat a^\dagger $ would not work for the state ${\left\vert\right\rangle}$ unless you took $\widehat a^\dagger {\left\vert\right\rangle}$ to be zero or $\widehat a{\left\vert 1\right\rangle}$ to be zero, and then they could no longer create or annihilate ${\left\vert 1\right\rangle}$.

Still, it is interesting to see what the effect of $\widehat a\widehat a^\dagger $ is. It turns out that this depends on the type of particle. For bosons, using (A.50),

\begin{displaymath}
\widehat a_{\rm{b}}\widehat a^\dagger_{\rm{b}} {\left\vert i\right\rangle} =
\widehat a_{\rm{b}} \sqrt{i+1}\;{\left\vert i{+}1\right\rangle} =
(i+1) {\left\vert i\right\rangle}
\end{displaymath}

So the operator $\widehat a_{\rm {b}}\widehat a^\dagger _{\rm {b}}$ has eigenvalues one greater than the number of particles. That means that if you subtract $\widehat a_{\rm {b}}\widehat a^\dagger _{\rm {b}}$ and $\widehat a^\dagger _{\rm {b}}\widehat a_{\rm {b}}$, you get the unit operator that leaves all states unchanged. And the difference between $\widehat a_{\rm {b}}\widehat a^\dagger _{\rm {b}}$ and $\widehat a^\dagger _{\rm {b}}\widehat a_{\rm {b}}$ is by definition the commutator of $\widehat a_{\rm {b}}$ and $\widehat a^\dagger _{\rm {b}}$, indicated by square brackets:
\begin{displaymath}
\fbox{$\displaystyle
[\widehat a_{\rm{b}},\widehat a^\dagger_{\rm{b}}] \equiv
\widehat a_{\rm{b}} \widehat a^\dagger_{\rm{b}} - \widehat a^\dagger_{\rm{b}} \widehat a_{\rm{b}} = 1
$} %
\end{displaymath} (A.51)

Isn't that cute! Of course, $[\widehat a_{\rm {b}},\widehat a_{\rm {b}}]$ and $[\widehat a^\dagger _{\rm {b}},\widehat a^\dagger _{\rm {b}}]$ are zero since everything commutes with itself. It turns out that you can learn a lot from these commutators, as seen in later subsections.
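The commutator can be verified with truncated matrices for $\widehat a_{\rm{b}}$ and $\widehat a^\dagger_{\rm{b}}$ (a minimal numerical sketch, with an arbitrary cutoff $N$); the truncation corrupts only the highest occupation number, so the check is restricted to the interior block.

```python
import numpy as np

N = 8
ab = np.diag(np.sqrt(np.arange(1, N)), k=1)  # boson annihilation operator
abdag = ab.conj().T                          # boson creation operator

comm = ab @ abdag - abdag @ ab
# In the full (infinite) Fock space the commutator is exactly 1.  The
# cutoff at N-1 particles spoils only the highest state, whose creation
# operator result falls outside the truncated space, so check the rest:
assert np.allclose(comm[:N-1, :N-1], np.eye(N-1))
```

The corrupted corner entry comes out as $-(N-1)$ instead of 1, a standard artifact of truncating the boson ladder rather than anything physical.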

The same commutator does not apply to fermions, because if you apply $\widehat a_{\rm {f}}\widehat a^\dagger _{\rm {f}}$ to ${\left\vert 1\right\rangle}$, you get zero instead of $2{\left\vert 1\right\rangle}$. But for fermions, the only state for which $\widehat a_{\rm {f}}\widehat a^\dagger _{\rm {f}}$ produces something nonzero is ${\left\vert\right\rangle}$ and then it leaves the state unchanged. Similarly, the only state for which $\widehat a^\dagger _{\rm {f}}\widehat a_{\rm {f}}$ produces something nonzero is ${\left\vert 1\right\rangle}$ and then it leaves that state unchanged. That means that if you add $\widehat a_{\rm {f}}\widehat a^\dagger _{\rm {f}}$ and $\widehat a^\dagger _{\rm {f}}\widehat a_{\rm {f}}$ together, instead of subtract them, it reproduces the same state whether it is ${\left\vert\right\rangle}$ or ${\left\vert 1\right\rangle}$ (or any combination of them). The sum of $\widehat a_{\rm {f}}\widehat a^\dagger _{\rm {f}}$ and $\widehat a^\dagger _{\rm {f}}\widehat a_{\rm {f}}$ is called the “anticommutator” of $\widehat a_{\rm {f}}$ and $\widehat a^\dagger _{\rm {f}}$; it is indicated by curly brackets:

\begin{displaymath}
\fbox{$\displaystyle
\{\widehat a_{\rm{f}},\widehat a^\dagger _{\rm{f}}\} \equiv
\widehat a_{\rm{f}} \widehat a^\dagger _{\rm{f}} + \widehat a^\dagger _{\rm{f}} \widehat a_{\rm{f}} = 1
$} %
\end{displaymath} (A.52)

Isn’t that neat? Note also that $\{\widehat a_{\rm {f}},\widehat a_{\rm {f}}\}$ and $\{\widehat a^\dagger _{\rm {f}},\widehat a^\dagger _{\rm {f}}\}$ are zero, since applying either operator twice ends up in a nonexistent state.
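For fermions the matrices are just 2 by 2, since there are only the two states ${\left\vert\right\rangle}$ and ${\left\vert 1\right\rangle}$, and the anticommutation relations can be verified directly. A minimal sketch:

```python
import numpy as np

# Fermionic operators on the two states |> and |1>, basis ordered (|>, |1>).
a = np.array([[0., 1.],
              [0., 0.]])   # a_f |1> = |>,  a_f |> = 0
ad = a.T                   # a_f^dagger |> = |1>,  a_f^dagger |1> = 0

assert np.allclose(a @ ad + ad @ a, np.eye(2))  # {a, a^dagger} = 1  (A.52)
assert np.allclose(a @ a, 0)                    # applying a twice gives 0
assert np.allclose(ad @ ad, 0)                  # no two fermions in a state
```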

How about the Hamil­ton­ian for the en­ergy of the sys­tem of par­ti­cles? Well, for non­in­ter­act­ing par­ti­cles the en­ergy of $i$ par­ti­cles is $i$ times the sin­gle par­ti­cle en­ergy ${\vphantom' E}^{\rm p}$. And since the op­er­a­tor that gives the num­ber of par­ti­cles is $\widehat a^\dagger \widehat a$, that is ${\vphantom' E}^{\rm p}\widehat a^\dagger \widehat a$. The to­tal Hamil­ton­ian for non­in­ter­act­ing par­ti­cles be­comes there­fore:

\begin{displaymath}
\fbox{$\displaystyle
H = {\vphantom' E}^{\rm p}\widehat a^\dagger \widehat a+ E_{\rm{ve}}
$} %
\end{displaymath} (A.53)

Here $E_{\rm {ve}}$ stands for any ad­di­tional “vac­uum en­ergy” that ex­ists even if there are no par­ti­cles. That is the ground state en­ergy of the sys­tem. The above Hamil­ton­ian al­lows the Schrö­din­ger equa­tion to be writ­ten in terms of oc­cu­pa­tion num­bers and cre­ation and an­ni­hi­la­tion op­er­a­tors.
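As a concrete illustration of the Hamiltonian above, the sketch below builds it as a matrix in a truncated bosonic occupation-number basis; the values of ${\vphantom' E}^{\rm p}$ and $E_{\rm {ve}}$ are arbitrary example numbers.

```python
import numpy as np

# Sketch of (A.53): H = E_p a^dagger a + E_ve is diagonal in the
# occupation-number basis (E_p, E_ve, N are arbitrary example values).
Ep, Eve, N = 2.0, 0.5, 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a |i> = sqrt(i) |i-1>
H = Ep * (a.T @ a) + Eve * np.eye(N)

# H is diagonal: the state |i> has energy i*E_p + E_ve.
print(np.diag(H))  # [ 0.5  2.5  4.5  6.5  8.5 10.5]
```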


A.15.3 The ca­Her­mi­tians

It is im­por­tant to note that the cre­ation and an­ni­hi­la­tion op­er­a­tors $\widehat a^\dagger $ and $\widehat a$ are not Her­mit­ian. They can­not be taken un­changed to the other side of an in­ner prod­uct. And their eigen­val­ues are not real. There­fore they can­not cor­re­spond to phys­i­cally ob­serv­able quan­ti­ties. But since they are Her­mit­ian con­ju­gates, it is easy to form op­er­a­tors from them that are Her­mit­ian. For ex­am­ple, their prod­ucts $\widehat a^\dagger \widehat a$ and $\widehat a\widehat a^\dagger $ are Her­mit­ian. The Hamil­ton­ian for non­in­ter­act­ing par­ti­cles (A.53) given in the pre­vi­ous sub­sec­tion il­lus­trates that.

Her­mit­ian op­er­a­tors can also be formed from lin­ear com­bi­na­tions of the cre­ation and an­ni­hi­la­tion op­er­a­tors. Two com­bi­na­tions that are of­ten phys­i­cally rel­e­vant are

\begin{displaymath}
\widehat P \equiv {\textstyle\frac{1}{2}}(\widehat a+ \widehat a^\dagger )
\qquad
\widehat Q \equiv {\textstyle\frac{1}{2}} {\rm i}(\widehat a- \widehat a^\dagger )
\end{displaymath}

For lack of a better name, this book will call $\widehat{P}$ and $\widehat{Q}$ the caHermitians.

Con­versely, the an­ni­hi­la­tion and cre­ation op­er­a­tors can be writ­ten in terms of the ca­Her­mi­tians as

\begin{displaymath}
\fbox{$\displaystyle
\widehat a= \widehat P - {\rm i}\widehat Q
\qquad \widehat a^\dagger = \widehat P + {\rm i}\widehat Q
$} %
\end{displaymath} (A.54)

This can be ver­i­fied by sub­sti­tut­ing in the de­f­i­n­i­tions of $\widehat{P}$ and $\widehat{Q}$.
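The substitution can also be checked numerically. The sketch below forms $\widehat{P}$ and $\widehat{Q}$ from truncated bosonic matrices and verifies that they are Hermitian and that they reproduce $\widehat a$ and $\widehat a^\dagger $; the commutator check in the last lines is only valid away from the artificial truncation boundary.

```python
import numpy as np

# caHermitians P = (a + a^dagger)/2, Q = i(a - a^dagger)/2 in a truncated
# bosonic basis.  The truncation to N states is an artifact of the sketch.
N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

P = (a + ad) / 2
Q = 1j * (a - ad) / 2

assert np.allclose(P, P.conj().T) and np.allclose(Q, Q.conj().T)  # Hermitian
assert np.allclose(a, P - 1j*Q) and np.allclose(ad, P + 1j*Q)     # (A.54)
# Away from the truncation boundary, the commutator [P, Q] equals -i/2:
comm = P @ Q - Q @ P
assert np.allclose(comm[:N-1, :N-1], -0.5j * np.eye(N-1))
```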

The Hamil­ton­ian (A.53) for non­in­ter­act­ing par­ti­cles can be writ­ten in terms of $\widehat{P}$ and $\widehat{Q}$ as

\begin{displaymath}
H = {\vphantom' E}^{\rm p}\left(\widehat P^2 + \widehat Q^2
- {\rm i}[\widehat P,\widehat Q]\right) + E_{\rm {ve}}
\end{displaymath}

Here ${\vphantom' E}^{\rm p}$ is again the sin­gle-par­ti­cle en­ergy and $E_{\rm {ve}}$ the vac­uum en­ergy. The square brack­ets in­di­cate again the com­mu­ta­tor of the en­closed op­er­a­tors.

What this Hamil­ton­ian means de­pends on whether the par­ti­cles be­ing de­scribed are bosons or fermi­ons. They have dif­fer­ent com­mu­ta­tors $[\widehat{P},\widehat{Q}]$.

Con­sider first the case that the par­ti­cles are bosons. The pre­vi­ous sub­sec­tion showed that the com­mu­ta­tor $[\widehat a_{\rm {b}},\widehat a^\dagger _{\rm {b}}]$ is 1. From that the com­mu­ta­tor of $P_{\rm {b}}$ and $Q_{\rm {b}}$ is read­ily found us­ing the rules of chap­ter 4.5.4. It is:

\begin{displaymath}
\fbox{$\displaystyle
[\widehat P_{\rm{b}},\widehat Q_{\rm{b}}] = - {\textstyle\frac{1}{2}}{\rm i}
$} %
\end{displaymath} (A.55)

So the com­mu­ta­tor is an imag­i­nary con­stant. That is very much like Heisen­berg’s canon­i­cal com­mu­ta­tor be­tween po­si­tion and lin­ear mo­men­tum in clas­si­cal quan­tum me­chan­ics. It im­plies a sim­i­lar un­cer­tainty prin­ci­ple, chap­ter 4.5.2 (4.46). In par­tic­u­lar, $P_{\rm {b}}$ and $Q_{\rm {b}}$ can­not have def­i­nite val­ues at the same time. Their val­ues have un­cer­tain­ties $\sigma_{P_{\rm {b}}}$ and $\sigma_{Q_{\rm {b}}}$ that are at least so big that

\begin{displaymath}
\sigma_{P_{\rm {b}}} \sigma_{Q_{\rm {b}}} \mathrel{\raisebox{-1pt}{$\geqslant$}}{\textstyle\frac{1}{4}}
\end{displaymath}

The Hamil­ton­ian for bosons be­comes, us­ing the com­mu­ta­tor above,

\begin{displaymath}
H_{\rm {b}}
= {\vphantom' E}^{\rm p}\left(\widehat P_{\rm {b}}^2 + \widehat Q_{\rm {b}}^2\right)
+ E_{\rm {ve}} - {\textstyle\frac{1}{2}} {\vphantom' E}^{\rm p} %
\end{displaymath} (A.56)

Often, the Hamiltonian is simply the first term in the right hand side. In that case, the vacuum energy is half a quantum of energy, ${\textstyle\frac{1}{2}}{\vphantom' E}^{\rm p}$.

For fermi­ons, the fol­low­ing use­ful re­la­tions fol­low from the an­ti­com­mu­ta­tors for the cre­ation and an­ni­hi­la­tion op­er­a­tors given in the pre­vi­ous sub­sec­tion:

\begin{displaymath}
\widehat P_{\rm {f}}^2 = {\textstyle\frac{1}{4}} \qquad
\widehat Q_{\rm {f}}^2 = {\textstyle\frac{1}{4}} %
\end{displaymath} (A.57)
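These relations are easy to check with the 2 by 2 fermion matrices; the sketch below also confirms that the expression ${\textstyle\frac{1}{2}}-{\rm i}[\widehat P_{\rm {f}},\widehat Q_{\rm {f}}]$ reduces to the particle count operator $\widehat a^\dagger _{\rm {f}}\widehat a_{\rm {f}}$.

```python
import numpy as np

# The fermionic caHermitians as 2x2 matrices, basis ordered (|>, |1>).
a = np.array([[0., 1.], [0., 0.]])
ad = a.T
P = (a + ad) / 2
Q = 1j * (a - ad) / 2

assert np.allclose(P @ P, np.eye(2)/4)   # P_f^2 = 1/4   (A.57)
assert np.allclose(Q @ Q, np.eye(2)/4)   # Q_f^2 = 1/4
# The operator 1/2 - i [P, Q] appearing in (A.58) counts the particles:
assert np.allclose(np.eye(2)/2 - 1j*(P @ Q - Q @ P), ad @ a)
```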

The Hamil­ton­ian then be­comes
\begin{displaymath}
H = {\vphantom' E}^{\rm p}\left({\textstyle\frac{1}{2}}
- {\rm i}[\widehat P_{\rm {f}},\widehat Q_{\rm {f}}]\right)
+ E_{\rm {ve}} %
\end{displaymath} (A.58)


A.15.4 Re­cast­ing a Hamil­ton­ian as a quan­tum field one

The ar­gu­ments of the pre­vi­ous sub­sec­tion can be re­versed. Given a suit­able Hamil­ton­ian, it can be re­cast in terms of an­ni­hi­la­tion and cre­ation op­er­a­tors. This is of­ten use­ful. It pro­vides a way to quan­tize sys­tems such as a har­monic os­cil­la­tor or elec­tro­mag­netic ra­di­a­tion.

As­sume that some sys­tem has a Hamil­ton­ian with the fol­low­ing prop­er­ties:

\begin{displaymath}
\fbox{$\displaystyle
H = {\vphantom' E}^{\rm p}\left(\widehat P^2 + \widehat Q^2\right) + E_{\rm{ref}}
\qquad
\left[\widehat P,\widehat Q\right] = - {\textstyle\frac{1}{2}} {\rm i}
$}
\end{displaymath} (A.59)

Here $\widehat{P}$ and $\widehat{Q}$ must be Her­mit­ian op­er­a­tors and ${\vphantom' E}^{\rm p}$ and $E_{\rm {ref}}$ must be con­stants with units of en­ergy.

It may be noted that typ­i­cally $E_{\rm {ref}}$ is zero. It may also be noted that it suf­fices that the com­mu­ta­tor is an imag­i­nary con­stant. A dif­fer­ent mag­ni­tude of the con­stant can be ac­com­mo­dated by rescal­ing $\widehat{P}$ and $\widehat{Q}$, and ab­sorb­ing the scal­ing fac­tor in ${\vphantom' E}^{\rm p}$. A sign change can be ac­com­mo­dated by swap­ping $\widehat{P}$ and $\widehat{Q}$.

From the given ap­par­ently lim­ited amount of in­for­ma­tion, all of the fol­low­ing con­clu­sions fol­low:

1.
The ob­serv­able quan­ti­ties $P$ and $Q$ cor­re­spond­ing to the Her­mit­ian op­er­a­tors are al­ways un­cer­tain. As ex­plained in chap­ter 4.4, if you mea­sure an un­cer­tain quan­tity, say $P$, for a lot of iden­ti­cal sys­tems, you do get some av­er­age value. That av­er­age value is called the ex­pec­ta­tion value $\left\langle{P}\right\rangle $. How­ever, the in­di­vid­ual mea­sured val­ues will de­vi­ate from that ex­pec­ta­tion value. The av­er­age de­vi­a­tion is called the stan­dard de­vi­a­tion or un­cer­tainty $\sigma_P$. For the sys­tem above, the un­cer­tain­ties in $P$ and $Q$ must sat­isfy the re­la­tion

\begin{displaymath}
\sigma_P \sigma_Q \mathrel{\raisebox{-1pt}{$\geqslant$}}{\textstyle\frac{1}{4}}
\end{displaymath}

Nei­ther un­cer­tainty can be zero, be­cause that would make the other un­cer­tainty in­fi­nite.
2.
The ex­pec­ta­tion val­ues of the ob­serv­ables $P$ and $Q$ sat­isfy the equa­tions

\begin{displaymath}
\frac{{\rm d}\left\langle{P}\right\rangle }{{\rm d}t} = - \omega \left\langle{Q}\right\rangle
\qquad
\frac{{\rm d}\left\langle{Q}\right\rangle }{{\rm d}t} = \omega \left\langle{P}\right\rangle
\qquad
\mbox{where } \omega \equiv \frac{{\vphantom' E}^{\rm p}}{\hbar}
\end{displaymath}

That means that the ex­pec­ta­tion val­ues vary har­mon­i­cally with time,

\begin{displaymath}
\left\langle{P}\right\rangle = A \cos(\omega t + \alpha)
\qquad
\left\langle{Q}\right\rangle = A \sin(\omega t + \alpha)
\end{displaymath}

Here the am­pli­tude $A$ and the “phase an­gle” $\alpha$ are ar­bi­trary con­stants.
3.
In en­ergy eigen­states, the ex­pec­ta­tion val­ues $\left\langle{P}\right\rangle $ and $\left\langle{Q}\right\rangle $ are al­ways zero.
4.
The ground state en­ergy of the sys­tem is

\begin{displaymath}
E_0 = {\textstyle\frac{1}{2}}{\vphantom' E}^{\rm p}+E_{\rm {ref}}
\end{displaymath}

For now it will be as­sumed that the ground state is unique. It will be in­di­cated as ${\left\vert\right\rangle}$. It is of­ten called the vac­uum state.
5.
The higher en­ergy states will be in­di­cated by ${\left\vert 1\right\rangle}$, ${\left\vert 2\right\rangle}$, ...in or­der of in­creas­ing en­ergy $E_1$, $E_2$, .... The states are unique and their en­ergy is

\begin{displaymath}
\mbox{wave function: } {\left\vert i\right\rangle}
\qquad
\mbox{energy: } E_i = (i + {\textstyle\frac{1}{2}}) {\vphantom' E}^{\rm p} + E_{\rm {ref}}
\end{displaymath}

So a state ${\left\vert i\right\rangle}$ has $i$ ad­di­tional quanta of en­ergy ${\vphantom' E}^{\rm p}$ more than the vac­uum state. In par­tic­u­lar that means that the en­ergy lev­els are equally spaced. There is no max­i­mum en­ergy.
6.
In en­ergy eigen­states,

\begin{displaymath}
\left\langle{{\vphantom' E}^{\rm p}P^2}\right\rangle
= \left\langle{{\vphantom' E}^{\rm p}Q^2}\right\rangle
= {\textstyle\frac{1}{2}}(i + {\textstyle\frac{1}{2}}) {\vphantom' E}^{\rm p}
\end{displaymath}

So the ex­pec­ta­tion val­ues of these two terms in the Hamil­ton­ian are equal. Each con­tributes half to the en­ergy of the quanta.
7.
In the ground state, the two ex­pec­ta­tion en­er­gies above are the ab­solute min­i­mum al­lowed by the un­cer­tainty re­la­tion. Each ex­pec­ta­tion en­ergy is then ${\textstyle\frac{1}{4}}{\vphantom' E}^{\rm p}$.
8.
An­ni­hi­la­tion and cre­ation op­er­a­tors can be de­fined as

\begin{displaymath}
\widehat a\equiv \widehat P - {\rm i}\widehat Q
\qquad
\widehat a^\dagger \equiv \widehat P + {\rm i}\widehat Q
\end{displaymath}

These have the fol­low­ing ef­fects on the en­ergy states:

\begin{displaymath}
\widehat a{\left\vert i\right\rangle} = \sqrt{i}\; {\left\vert i{-}1\right\rangle}
\qquad
\widehat a^\dagger {\left\vert i{-}1\right\rangle} = \sqrt{i}\; {\left\vert i\right\rangle}
\end{displaymath}

(This does as­sume that the nor­mal­iza­tion fac­tors in the en­ergy eigen­states are cho­sen con­sis­tently. Oth­er­wise there might be ad­di­tional fac­tors of mag­ni­tude 1.) The com­mu­ta­tor $[\widehat a,\widehat a^\dagger ]$ is 1.
9.
The Hamil­ton­ian can be rewrit­ten as

\begin{displaymath}
H = {\vphantom' E}^{\rm p}\widehat a^\dagger \widehat a
+ {\textstyle\frac{1}{2}} {\vphantom' E}^{\rm p} + E_{\rm {ref}}
\end{displaymath}

Here the op­er­a­tor $\widehat a^\dagger \widehat a$ gives the num­ber of en­ergy quanta of the state it acts on.
10.
If the ground state is not unique, each in­de­pen­dent ground state gives rise to its own set of en­ergy eigen­func­tions, with the above prop­er­ties. Con­sider the ex­am­ple that the sys­tem de­scribes an elec­tron, and that the en­ergy does not de­pend on the spin. In that case, there will be a spin-up and a spin-down ver­sion of the ground state, ${\left\vert\right\rangle}{\uparrow}$ and ${\left\vert\right\rangle}{\downarrow}$. These will give rise to two fam­i­lies of en­ergy states ${\left\vert i\right\rangle}{\uparrow}$ re­spec­tively ${\left\vert i\right\rangle}{\downarrow}$. Each fam­ily will have the prop­er­ties de­scribed above.

The de­riva­tion of the above prop­er­ties is re­ally quite sim­ple and el­e­gant. It can be found in {D.33}.

Note that var­i­ous prop­er­ties above are ex­actly the same as found in the analy­sis of bosons start­ing with the an­ni­hi­la­tion and cre­ation op­er­a­tors. The dif­fer­ence in this sub­sec­tion is that the start­ing point was a Hamil­ton­ian in terms of two square Her­mit­ian op­er­a­tors; and those merely needed to have a purely imag­i­nary com­mu­ta­tor.


A.15.5 The har­monic os­cil­la­tor as a bo­son sys­tem

This sub­sec­tion will il­lus­trate the power of the in­tro­duced quan­tum field ideas by ex­am­ple. The ob­jec­tive is to use these ideas to red­erive the one-di­men­sion­al har­monic os­cil­la­tor from scratch. The de­riva­tion will be much cleaner than the elab­o­rate al­ge­braic de­riva­tion of chap­ter 4.1, and in par­tic­u­lar {D.12}.

The Hamil­ton­ian of a har­monic os­cil­la­tor in clas­si­cal quan­tum me­chan­ics is, chap­ter 4.1,

\begin{displaymath}
H = \frac{1}{2m}{\widehat p}_x^2 + \frac{m}{2}\omega^2 x^2
\end{displaymath}

Here the first term is the ki­netic en­ergy and the sec­ond the po­ten­tial en­ergy.

According to the previous subsection, a system like this can be solved immediately if the commutator of ${\widehat p}_x$ and $x$ is an imaginary constant. It is; this is the famous “canonical commutator” of Heisenberg:

\begin{displaymath}[x,{\widehat p}_x]= {\rm i}\hbar
\end{displaymath}

To use the re­sults of the pre­vi­ous sub­sec­tion, first the Hamil­ton­ian must be rewrit­ten in the form

\begin{displaymath}
H = {\vphantom' E}^{\rm p}\left(\widehat P^2 + \widehat Q^2\right)
\end{displaymath}

where $\widehat{P}$ and $\widehat{Q}$ sat­isfy the com­mu­ta­tion re­la­tion­ship for bosonic ca­Her­mi­tians:

\begin{displaymath}
\left[\widehat P,\widehat Q\right] = - {\textstyle\frac{1}{2}} {\rm i}
\end{displaymath}

That re­quires that you de­fine

\begin{displaymath}
{\vphantom' E}^{\rm p}= \hbar\omega \qquad
\widehat P = \sqrt{\frac{1}{2m\hbar\omega}}\,{\widehat p}_x \qquad
\widehat Q = \sqrt{\frac{m\omega}{2\hbar}}\,x
\end{displaymath}

Ac­cord­ing to the pre­vi­ous sub­sec­tion, the en­ergy eigen­val­ues are

\begin{displaymath}
E_i = (i + {\textstyle\frac{1}{2}}) \hbar\omega
\end{displaymath}

So the spec­trum has al­ready been found.
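The spectrum can also be confirmed independently by brute force. The sketch below discretizes the position-space Hamiltonian on a grid (a standard finite-difference check, with $\hbar$ $\vphantom0\raisebox{1.5pt}{$=$}$ $m$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\omega$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1; the grid parameters are arbitrary illustrative choices) and diagonalizes it numerically:

```python
import numpy as np

# Finite-difference check of the spectrum E_i = (i + 1/2) hbar omega,
# in units hbar = m = omega = 1.  Grid size and extent are arbitrary
# illustrative choices; accuracy improves as the grid is refined.
n, L = 1500, 18.0
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 by the three-point stencil, plus the
# potential energy (1/2) x^2 on the diagonal:
T = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2*dx**2)
V = np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(T + V)
print(E[:5])  # close to [0.5, 1.5, 2.5, 3.5, 4.5]
```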

And var­i­ous other in­ter­est­ing prop­er­ties of the so­lu­tion may also be found in the pre­vi­ous sub­sec­tion. Like the fact that there is half a quan­tum of en­ergy left in the ground state. True, the zero level of en­ergy is not im­por­tant for the dy­nam­ics. But this half quan­tum does have a phys­i­cal mean­ing. As­sume that you have a lot of iden­ti­cal har­monic os­cil­la­tors in the ground state, and that you do a mea­sure­ment of the ki­netic en­ergy for each. You will not get zero ki­netic en­ergy. In fact, the av­er­age ki­netic en­ergy mea­sured will be a quar­ter quan­tum, half of the to­tal en­ergy. The other quar­ter quan­tum is what you get on av­er­age if you do po­ten­tial en­ergy mea­sure­ments.

An­other ob­ser­va­tion of the pre­vi­ous sub­sec­tion is that the ex­pec­ta­tion po­si­tion of the par­ti­cle will vary har­mon­i­cally with time. It is a har­monic os­cil­la­tor, af­ter all.

The en­ergy eigen­func­tions will be in­di­cated by $h_i$, rather than ${\left\vert i\right\rangle}$. What has not yet been found are spe­cific ex­pres­sions for these eigen­func­tions. How­ever, as fig­ure A.6 shows, if you ap­ply the an­ni­hi­la­tion op­er­a­tor $\widehat a$ on the ground state $h_0$, you get zero:

\begin{displaymath}
\widehat ah_0 = 0
\end{displaymath}

And also ac­cord­ing to the pre­vi­ous sub­sec­tion

\begin{displaymath}
\widehat a= \widehat P - {\rm i}\widehat Q
\end{displaymath}

Putting in the ex­pres­sions for $\widehat{P}$ and $\widehat{Q}$ above, with ${\widehat p}_x$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\hbar\partial$$\raisebox{.5pt}{$/$}$${\rm i}\partial{x}$, and re­ar­rang­ing gives

\begin{displaymath}
\frac{1}{h_0} \frac{\partial h_0}{\partial x}
= - \frac{m\omega}{\hbar} x
\end{displaymath}

This can be sim­pli­fied by defin­ing a scaled $x$ co­or­di­nate:

\begin{displaymath}
\frac{1}{h_0} \frac{\partial h_0}{\partial \xi}
= - \xi
\qquad \xi \equiv \frac{x}{\ell}
\qquad \ell \equiv \sqrt{\frac{\hbar}{m\omega}}
\end{displaymath}

In­te­grat­ing both sides with re­spect to $\xi$ and clean­ing up by tak­ing an ex­po­nen­tial gives the ground state as

\begin{displaymath}
h_0 = C e^{-\xi^2/2}
\end{displaymath}

The in­te­gra­tion con­stant $C$ can be found from nor­mal­iz­ing the wave func­tion. The needed in­te­gral can be found un­der ! in the no­ta­tions sec­tion. That gives the fi­nal ground state as

\begin{displaymath}
h_0 = \frac{1}{(\pi\ell^2)^{1/4}} e^{-\xi^2/2}
\end{displaymath}

To get the other eigen­func­tions $h_i$ for $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, ..., ap­ply the cre­ation op­er­a­tor $\widehat a^\dagger $ re­peat­edly:

\begin{displaymath}
h_i = \frac{1}{\sqrt{i}} \widehat a^\dagger h_{i-1}
\end{displaymath}

Ac­cord­ing to the pre­vi­ous sub­sec­tion, the cre­ation op­er­a­tor is

\begin{displaymath}
\widehat a^\dagger = \widehat P + {\rm i}\widehat Q =
\sqrt{\frac{1}{2m\hbar\omega}}\,{\widehat p}_x + {\rm i}\sqrt{\frac{m\omega}{2\hbar}}\,x
= \frac{{\rm i}}{\sqrt{2}} \left(\xi - \frac{\partial}{\partial \xi}\right)
\end{displaymath}

So the en­tire process in­volves lit­tle more than a sin­gle dif­fer­en­ti­a­tion for each en­ergy eigen­func­tion found. In par­tic­u­lar, un­like in {D.12}, no ta­ble books are needed. Note that fac­tors ${\rm i}$ do not make a dif­fer­ence in eigen­func­tions. So the ${\rm i}$ in the fi­nal ex­pres­sion for $\widehat a^\dagger $ may be left out to get real eigen­func­tions. That gives ta­ble 4.1.
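The repeated differentiations are easily automated. The sketch below uses Python’s sympy to generate the first few eigenfunctions from $h_0$, dropping the factor ${\rm i}$ as discussed; the scaled coordinate $\xi$ is used throughout, which amounts to setting $\ell$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1.

```python
import sympy as sp

# Generate h_i = a^dagger h_{i-1} / sqrt(i), using the real form
# a^dagger = (xi - d/dxi)/sqrt(2) (the factor i is dropped, as discussed).
xi = sp.symbols('xi', real=True)

h = sp.exp(-xi**2/2) / sp.pi**sp.Rational(1, 4)  # normalized ground state
states = [h]
for i in range(1, 4):
    h = (xi*h - sp.diff(h, xi)) / sp.sqrt(2*i)
    states.append(sp.simplify(h))

print(states[1])  # proportional to xi*exp(-xi**2/2): one quantum of energy
```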

That was easy, wasn’t it?


A.15.6 Canon­i­cal (sec­ond) quan­ti­za­tion

“Canonical quantization” is a procedure to turn a classical system into the proper quantum one. If it is applied to a field, like the electromagnetic field, it is often called “second quantization.”

Re­call the quan­tum analy­sis of the har­monic os­cil­la­tor in the pre­vi­ous sub­sec­tion. The key to the cor­rect so­lu­tion was the canon­i­cal com­mu­ta­tor be­tween po­si­tion and mo­men­tum. Ap­par­ently, if you get the com­mu­ta­tors right in quan­tum me­chan­ics, you get the quan­tum me­chan­ics right. That is the idea be­hind canon­i­cal quan­ti­za­tion.

The ba­sic idea can eas­ily be il­lus­trated for the har­monic os­cil­la­tor. The stan­dard har­monic os­cil­la­tor in clas­si­cal physics is a sim­ple spring-mass sys­tem. The clas­si­cal gov­ern­ing equa­tions are:

\begin{displaymath}
\frac{{\rm d}x}{{\rm d}t} = v_x
\qquad
m \frac{{\rm d}v_x}{{\rm d}t} = - k x
\end{displaymath}

Here $x$ is the po­si­tion of the os­cil­lat­ing mass $m$ and $k$ is the spring con­stant. The first of these equa­tions is merely the de­f­i­n­i­tion of ve­loc­ity. The sec­ond is New­ton’s sec­ond law.

As you can read­ily check by sub­sti­tu­tion, the most gen­eral so­lu­tion is

\begin{displaymath}
x = A \sin(\omega t +\alpha) \qquad v_x = A\omega \cos(\omega t +\alpha)
\qquad \omega \equiv \sqrt{\frac{k}{m}}
\end{displaymath}

Here the am­pli­tude $A$ and the “phase an­gle” $\alpha$ are ar­bi­trary con­stants. The fre­quency $\omega$ is given in terms of the known spring con­stant and mass.

This sys­tem is now to be quan­tized us­ing canon­i­cal quan­ti­za­tion. The process is some­what round-about. First a “canon­i­cal mo­men­tum,” or “con­ju­gate mo­men­tum,” or “gen­er­al­ized mo­men­tum,” $p_x$ is de­fined by tak­ing the de­riv­a­tive of the ki­netic en­ergy, $\frac12mv_x^2$, (or more gen­er­ally, of the La­grangian {A.1}), with re­spect to the time de­riv­a­tive of $x$. Since the time de­riv­a­tive of $x$ is $v_x$, the mo­men­tum is $mv_x$. That is the usual lin­ear mo­men­tum.

Next a clas­si­cal Hamil­ton­ian is de­fined. It is the to­tal en­ergy of the sys­tem ex­pressed in terms of po­si­tion and mo­men­tum:

\begin{displaymath}
H_{\rm cl} = \frac{p_x^2}{2 m} + \frac{m}{2} \omega^2 x^2
\end{displaymath}

Here the first term is the ki­netic en­ergy, with $v_x$ rewrit­ten in terms of the mo­men­tum. The sec­ond term is the po­ten­tial en­ergy in the spring. The spring con­stant in it was rewrit­ten as $m\omega^2$ be­cause $m$ and $\omega$ are phys­i­cally more im­por­tant vari­ables, and the sym­bol $k$ is al­ready greatly over­worked in quan­tum me­chan­ics as it is. See {A.1} for more on clas­si­cal Hamil­to­ni­ans.

To quan­tize the sys­tem, the mo­men­tum and po­si­tion in the Hamil­ton­ian must be turned into op­er­a­tors. Ac­tual val­ues of mo­men­tum and po­si­tion are then the eigen­val­ues of these op­er­a­tors. Ba­si­cally, you just put a hat on the mo­men­tum and po­si­tion in the Hamil­ton­ian:

\begin{displaymath}
H = \frac{{\widehat p}_x^{\,2}}{2 m} + \frac{m}{2} \omega^2 {\widehat x}^{\,2}
\end{displaymath}

Note that the hat on $x$ is usu­ally omit­ted. How­ever, it is still an op­er­a­tor in the sense that it is sup­posed to mul­ti­ply wave func­tions now. Now all you need is the right com­mu­ta­tor be­tween ${\widehat p}_x$ and ${\widehat x}$.

In gen­eral, you iden­tify com­mu­ta­tors in quan­tum me­chan­ics with so-called Pois­son brack­ets in clas­si­cal me­chan­ics. As­sume that $A$ and $B$ are any two quan­ti­ties that de­pend on $x$ and $p_x$. Then their Pois­son bracket is de­fined as, {A.12},

\begin{displaymath}
\{A,B\}_{\rm P} \equiv
\frac{\partial A}{\partial x} \frac{\partial B}{\partial p_x}
- \frac{\partial B}{\partial x} \frac{\partial A}{\partial p_x}
\end{displaymath}

From that it is im­me­di­ately seen that

\begin{displaymath}
\{x,p_x\}_{\rm P} = 1 \qquad \{x,x\}_{\rm P} = 0 \qquad \{p_x,p_x\}_{\rm P} = 0
\end{displaymath}

Cor­re­spond­ingly, in quan­tum me­chan­ics you take

\begin{displaymath}[x,{\widehat p}_x]= {\rm i}\hbar \qquad [x,x] = 0 \qquad [{\widehat p}_x,{\widehat p}_x] = 0
\end{displaymath}

In this way the nonzero Pois­son brack­ets bring in Planck’s con­stant that de­fines quan­tum me­chan­ics. (In case of fermi­ons, an­ti­com­mu­ta­tors take the place of com­mu­ta­tors.)
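The Poisson brackets above can be evaluated mechanically. The sketch below does so symbolically for the harmonic oscillator; it also confirms the standard Hamiltonian-mechanics result, {A.1}, that the classical Hamiltonian generates the equations of motion through the brackets.

```python
import sympy as sp

# Symbolic Poisson brackets, treating x and p_x as independent variables
# per the definition above.
x, px = sp.symbols('x p_x', real=True)
m, w = sp.symbols('m omega', positive=True)

def poisson(A, B):
    """{A,B}_P = dA/dx dB/dp_x - dB/dx dA/dp_x."""
    return sp.simplify(sp.diff(A, x)*sp.diff(B, px)
                       - sp.diff(B, x)*sp.diff(A, px))

assert poisson(x, px) == 1 and poisson(x, x) == 0 and poisson(px, px) == 0

# The classical Hamiltonian generates the equations of motion:
H = px**2/(2*m) + m*w**2*x**2/2
assert poisson(x, H) == px/m          # dx/dt = v_x
assert poisson(px, H) == -m*w**2*x    # m dv_x/dt = -k x, with k = m omega^2
```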

Be­cause of rea­sons dis­cussed for the Heisen­berg pic­ture of quan­tum me­chan­ics, {A.12}, the pro­ce­dure en­sures that the quan­tum me­chan­ics is con­sis­tent with the clas­si­cal me­chan­ics. And in­deed, the re­sults of the pre­vi­ous sub­sec­tion con­firmed that. You can check that the ex­pec­ta­tion po­si­tion and mo­men­tum had the cor­rect clas­si­cal har­monic de­pen­dence on time.

Fundamentally, quantization of a classical system is just an educated guess. Classical mechanics is a special case of quantum mechanics, but quantum mechanics is not a special case of classical mechanics. For the material covered in this book, there are simpler ways to make an educated guess than canonical quantization. Being less mathematical, they are more understandable and intuitive. That might make them more convincing too.


A.15.7 Spin as a fermion sys­tem

There is, of course, not much analy­sis that can be done with a fermion sys­tem with only one sin­gle-par­ti­cle state. There are only two in­de­pen­dent sys­tem states; no fermion or one fermion.

However, there is at least one physical example of such a simple system. As noted in subsection A.15.1, a particle with spin $\frac12$ like an electron can be considered to be a model for it. The vacuum state ${\left\vert\right\rangle}$ is the spin-down state of the electron. The state ${\left\vert 1\right\rangle}$ is the spin-up state. This state has one unit $\hbar$ more angular momentum in the $z$-​direction. If the electron is in a magnetic field, that additional momentum corresponds to a quantum of energy.

One rea­son­able ques­tion that can now be asked is whether the an­ni­hi­la­tion and cre­ation op­er­a­tors, and the ca­Her­mi­tians, have some phys­i­cal mean­ing for this sys­tem. They do.

Re­call that for fermi­ons, the Hamil­ton­ian was given in terms of the ca­Her­mi­tians $\widehat{P}_{\rm {f}}$ and $\widehat{Q}_{\rm {f}}$ as

\begin{displaymath}
H = {\vphantom' E}^{\rm p}\left({\textstyle\frac{1}{2}}
- {\rm i}[\widehat P_{\rm {f}},\widehat Q_{\rm {f}}]\right)
+ E_{\rm {ve}}
\end{displaymath}

The ex­pres­sion be­tween paren­the­ses is the par­ti­cle count op­er­a­tor, equal to zero for the spin-down state and 1 for the spin up state. So the sec­ond term within paren­the­ses in the Hamil­ton­ian must be the spin in the $z$-​di­rec­tion, nondi­men­sion­al­ized by $\hbar$. (Re­call that the spin in the $z$-​di­rec­tion has the val­ues $\pm\frac12\hbar$.) So ap­par­ently

\begin{displaymath}[\widehat P_{\rm {f}},\widehat Q_{\rm {f}}]= {\rm i}\frac{{\widehat S}_z}{\hbar}
\end{displaymath}

Rea­son­ably speak­ing then, the ca­Her­mi­tians them­selves should be the nondi­men­sion­al com­po­nents of spin in the $x$ and $y$ di­rec­tions,

\begin{displaymath}
\widehat P_{\rm {f}} = \frac{{\widehat S}_x}{\hbar}
\qquad
\widehat Q_{\rm {f}} = \frac{{\widehat S}_y}{\hbar}
\end{displaymath}

After all, what other variables are there in this problem? And so it is. The commutator above, with the caHermitians equal to the nondimensional spin components, is known as the “fundamental commutation relation.” Quantum field analysis is one way to understand that this relation applies.

Re­call an­other prop­erty of the ca­Her­mi­tians for fermi­ons:

\begin{displaymath}
\widehat P_{\rm {f}}^2 = {\textstyle\frac{1}{4}} \qquad \widehat Q_{\rm {f}}^2 = {\textstyle\frac{1}{4}}
\end{displaymath}

Ap­par­ently then, the square spin com­po­nents are just con­stants with no un­cer­tainty. Of course, that is no sur­prise since the only spin val­ues in any di­rec­tion are $\pm\frac12\hbar$.

Fi­nally con­sider the an­ni­hi­la­tion and cre­ation op­er­a­tors, mul­ti­plied by $\hbar$:

\begin{displaymath}
\hbar\widehat a= {\widehat S}_x - {\rm i}{\widehat S}_y \qquad
\hbar\widehat a^\dagger = {\widehat S}_x + {\rm i}{\widehat S}_y
\end{displaymath}

Ap­par­ently these op­er­a­tors can re­move, re­spec­tively add a unit $\hbar$ of an­gu­lar mo­men­tum in the $z$-​di­rec­tion. That is of­ten im­por­tant in rel­a­tivis­tic ap­pli­ca­tions where a fermion emits or ab­sorbs an­gu­lar mo­men­tum in the $z$-​di­rec­tion. This changes the spin of the fermion and that can be ex­pressed by the op­er­a­tors above. So you will usu­ally find $x$ and $y$ spin op­er­a­tors in the analy­sis of such processes.
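In terms of the standard spin-$\frac12$ matrices, $\frac{\hbar}{2}$ times the Pauli matrices, these statements can be verified directly. A sketch, with $\hbar$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 and basis ordered (spin-up, spin-down):

```python
import numpy as np

# Spin-1/2 realization: S = (hbar/2) times the Pauli matrices, basis
# ordered (spin-up, spin-down), with hbar = 1 for simplicity.
hbar = 1.0
Sx = (hbar/2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar/2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = (hbar/2) * np.array([[1, 0], [0, -1]], dtype=complex)

up = np.array([1., 0.])    # the state |1>
down = np.array([0., 1.])  # the vacuum state |>

a_up = (Sx + 1j*Sy) / hbar   # hbar a^dagger: adds hbar of z-momentum
a_dn = (Sx - 1j*Sy) / hbar   # hbar a: removes hbar of z-momentum

assert np.allclose(a_up @ down, up) and np.allclose(a_dn @ up, down)
assert np.allclose(a_up @ up, 0)   # cannot raise the spin twice
# The fundamental commutation relation [S_x, S_y] = i hbar S_z:
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j*hbar*Sz)
```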

Obviously, you can learn a lot by taking a quantum field type approach. To be sure, the current analysis applies only to particles with spin $\frac12$. But advanced analysis of angular momentum in general is very similar to quantum field analysis, chapter 12. It resembles some mixture of the boson and fermion cases.


A.15.8 More sin­gle par­ti­cle states

The pre­vi­ous sub­sec­tions dis­cussed quan­tum field the­ory when there is just one type of sin­gle-par­ti­cle state for the par­ti­cles. This sub­sec­tion con­sid­ers the case that there is more than one. An in­dex $n$ will be used to num­ber the states.

Graphically, the case of multiple single-particle states was illustrated in figures A.3 and A.4. There is now more than one box that particles can be in. Each box corresponds to one type of single-particle state $\psi_n$.

Each such sin­gle-par­ti­cle state has an oc­cu­pa­tion num­ber $i_n$ that gives the num­ber of par­ti­cles in that state. A com­plete set of such oc­cu­pa­tion num­bers form a Fock space ba­sis ket

\begin{displaymath}
\vert i_1,i_2,i_3,i_4,\ldots\rangle
\end{displaymath}

An an­ni­hi­la­tion op­er­a­tor $\widehat a_n$ and a cre­ation op­er­a­tor $\widehat a^\dagger _n$ must be de­fined for every oc­cu­pa­tion num­ber. The math­e­mat­i­cal de­f­i­n­i­tion of these op­er­a­tors for bosons is

\begin{displaymath}
\fbox{$\displaystyle
\begin{array}{l}
\displaystyle\strut
\widehat a_{{\rm{b}},n} \vert i_1,i_2,\ldots,i_{n-1},i_n,i_{n+1},\ldots\rangle
= \sqrt{i_n}\; \vert i_1,i_2,\ldots,i_{n-1},i_n{-}1,i_{n+1},\ldots\rangle
\\
\displaystyle\strut
\widehat a^\dagger _{{\rm{b}},n} \vert i_1,i_2,\ldots,i_{n-1},i_n,i_{n+1},\ldots\rangle
= \sqrt{i_n+1}\; \vert i_1,i_2,\ldots,i_{n-1},i_n{+}1,i_{n+1},\ldots\rangle
\end{array} $} %
\end{displaymath} (A.60)

The com­mu­ta­tor re­la­tions are

\begin{displaymath}
\fbox{$\displaystyle
\left[\widehat a_{{\rm{b}},n},\widehat a_{{\rm{b}},{\underline n}}\right] = 0
\qquad
\left[\widehat a^\dagger _{{\rm{b}},n},\widehat a^\dagger _{{\rm{b}},{\underline n}}\right] = 0
\qquad
\left[\widehat a_{{\rm{b}},n},\widehat a^\dagger _{{\rm{b}},{\underline n}}\right] = \delta_{n{\underline n}}
$} %
\end{displaymath} (A.61)

Here $\delta_{n{\underline n}}$ is the Kro­necker delta, equal to one if $n$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\underline n}$, and zero in all other cases. These com­mu­ta­tor re­la­tions ap­ply for $n$ $\raisebox{.2pt}{$\ne$}$ ${\underline n}$ be­cause then the op­er­a­tors do un­re­lated things to dif­fer­ent sin­gle-par­ti­cle states; in that case it does not make a dif­fer­ence in what or­der you ap­ply them. That makes the com­mu­ta­tor zero. For $n$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\underline n}$, the com­mu­ta­tor re­la­tions are un­changed from the case of just one sin­gle-par­ti­cle state.

For fermi­ons it is a bit more com­plex. The graph­i­cal rep­re­sen­ta­tion of the ex­am­ple fermi­onic en­ergy eigen­func­tion fig­ure A.4 cheats a bit, be­cause it sug­gests that there is only one clas­si­cal wave func­tion for a given set of oc­cu­pa­tion num­bers. Ac­tu­ally, there are two vari­a­tions, based on how the par­ti­cles are or­dered. The two are the same ex­cept that they have the op­po­site sign. Sup­pose that you cre­ate a par­ti­cle in a state $n$; clas­si­cally you would want to call that par­ti­cle 1, and then cre­ate a par­ti­cle in a state ${\underline n}$, clas­si­cally you would want to call it par­ti­cle 2. Do the par­ti­cle cre­ation in the op­po­site or­der, and it is par­ti­cle 1 that ends up in state ${\underline n}$ and par­ti­cle 2 that ends up in state $n$. That means that the clas­si­cal wave func­tion will have changed sign. How­ever, the Fock space ket will not un­less you do some­thing.

What you can do is de­fine the an­ni­hi­la­tion and cre­ation op­er­a­tors for fermi­ons as fol­lows:

\begin{displaymath}
\fbox{$\displaystyle
\begin{array}{l}
\displaystyle\strut
\widehat a_{{\rm{f}},n} \vert i_1,i_2,\ldots,i_{n-1},1,i_{n+1},\ldots\rangle
= (-1)^{i_1+i_2+\cdots+i_{n-1}} \vert i_1,i_2,\ldots,i_{n-1},0,i_{n+1},\ldots\rangle
\\
\displaystyle\strut
\widehat a_{{\rm{f}},n} \vert i_1,i_2,\ldots,i_{n-1},0,i_{n+1},\ldots\rangle = 0
\\
\displaystyle\strut
\widehat a^\dagger _{{\rm{f}},n} \vert i_1,i_2,\ldots,i_{n-1},0,i_{n+1},\ldots\rangle
= (-1)^{i_1+i_2+\cdots+i_{n-1}} \vert i_1,i_2,\ldots,i_{n-1},1,i_{n+1},\ldots\rangle
\\
\displaystyle\strut
\widehat a^\dagger _{{\rm{f}},n} \vert i_1,i_2,\ldots,i_{n-1},1,i_{n+1},\ldots\rangle = 0
\end{array} $} %
\end{displaymath} (A.62)

The only dif­fer­ence from the an­ni­hi­la­tion and cre­ation op­er­a­tors for just one type of sin­gle-par­ti­cle state is the po­ten­tial sign changes due to the $(-1)^{\ldots}$. It adds a mi­nus sign when­ever you swap the or­der of an­ni­hi­lat­ing/cre­at­ing two par­ti­cles in dif­fer­ent states. For the an­ni­hi­la­tion and cre­ation op­er­a­tors of the same state, it may change both their signs, but that does noth­ing much: it leaves the im­por­tant prod­ucts such as $\widehat a^\dagger _n\widehat a_n$ and the an­ti­com­mu­ta­tors un­changed.

Of course, you can de­fine the an­ni­hi­la­tion and cre­ation op­er­a­tors with what­ever sign you want, but putting in the sign pat­tern above may pro­duce eas­ier math­e­mat­ics. In fact, there is an im­me­di­ate ben­e­fit al­ready for the an­ti­com­mu­ta­tor re­la­tions; they take the same form as for bosons, ex­cept with an­ti­com­mu­ta­tors in­stead of com­mu­ta­tors:

\begin{displaymath}
\fbox{$\displaystyle
\left\{\widehat a_{{\rm{f}},n},\widehat a_{{\rm{f}},{\underline n}}\right\} = 0
\qquad
\left\{\widehat a^\dagger _{{\rm{f}},n},\widehat a^\dagger _{{\rm{f}},{\underline n}}\right\} = 0
\qquad
\left\{\widehat a_{{\rm{f}},n},\widehat a^\dagger _{{\rm{f}},{\underline n}}\right\} = \delta_{n{\underline n}}
$} %
\end{displaymath} (A.63)

These re­la­tion­ships ap­ply for $n$ $\raisebox{.2pt}{$\ne$}$ ${\underline n}$ ex­actly be­cause of the sign change caused by swap­ping the or­der of the op­er­a­tors. For $n$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\underline n}$, they are un­changed from the case of just one sin­gle-par­ti­cle state.
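The sign pattern and the resulting anticommutators can be checked explicitly for two single-particle states, where the operators are 4 by 4 matrices. In the sketch below the $(-1)^{i_1+\cdots+i_{n-1}}$ factor is implemented as a $\pm1$ diagonal factor on the preceding mode; this construction is what the literature calls a Jordan-Wigner representation.

```python
import numpy as np

# Two fermionic single-particle states: operators on the 4-dimensional
# Fock basis |i1 i2>.  The (-1)^(i1+...+i_{n-1}) sign of (A.62) becomes a
# diag(1,-1) factor on each mode preceding mode n (Jordan-Wigner style).
I2 = np.eye(2)
sign = np.diag([1., -1.])                 # (-1)^i on one mode (0 = empty)
amode = np.array([[0., 1.], [0., 0.]])    # single-mode annihilator

a1 = np.kron(amode, I2)     # mode 1: no preceding modes, no sign factor
a2 = np.kron(sign, amode)   # mode 2: sign from the occupation of mode 1

def anti(A, B):
    return A @ B + B @ A

assert np.allclose(anti(a1, a2), 0)            # {a_1, a_2} = 0
assert np.allclose(anti(a1, a2.T), 0)          # {a_1, a_2^dagger} = 0
assert np.allclose(anti(a1, a1.T), np.eye(4))  # {a_1, a_1^dagger} = 1
assert np.allclose(anti(a2, a2.T), np.eye(4))  # {a_2, a_2^dagger} = 1
```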

The Hamil­ton­ian for a sys­tem of non­in­ter­act­ing par­ti­cles is like the one for just one sin­gle-par­ti­cle state, ex­cept that you must now sum over all sin­gle-par­ti­cle states:

\begin{displaymath}
\fbox{$\displaystyle
H = \sum_n \left( {\vphantom' E}^{\rm p}_n \widehat a^\dagger _n \widehat a_n + E_{{\rm ve},n} \right)
$} %
\end{displaymath} (A.64)


A.15.9 Field op­er­a­tors

As noted at the start of this sec­tion, quan­tum field the­ory is par­tic­u­larly suited for rel­a­tivis­tic ap­pli­ca­tions be­cause the num­ber of par­ti­cles can vary. How­ever, in rel­a­tivis­tic ap­pli­ca­tions, it is of­ten nec­es­sary to work in terms of po­si­tion co­or­di­nates in­stead of sin­gle-par­ti­cle en­ergy eigen­func­tions. To be sure, prac­ti­cal quan­tum field com­pu­ta­tions are usu­ally worked out in terms of rel­a­tivis­tic en­ergy-mo­men­tum states. But to un­der­stand them re­quires con­sid­er­a­tion of po­si­tion and time. Rel­a­tivis­tic ap­pli­ca­tions must make sure that co­or­di­nate sys­tems mov­ing at dif­fer­ent speeds are phys­i­cally equiv­a­lent and re­lated through the Lorentz trans­for­ma­tion. There is also the “causal­ity prob­lem,” that an event at one lo­ca­tion and time may not af­fect an event at an­other lo­ca­tion and time that is not reach­able with the speed of light. These con­di­tions are posed in terms of po­si­tion and time.

To handle such problems, the annihilation and creation operators can be converted into so-called field operators $\widehat a({\underline{\skew0\vec r}})$ and $\widehat a^\dagger ({\underline{\skew0\vec r}})$ that annihilate, respectively create, particles at a given position ${\underline{\skew0\vec r}}$ in space. At least, roughly speaking that is what they do.

Now in clas­si­cal quan­tum me­chan­ics, a par­ti­cle at a given po­si­tion ${\underline{\skew0\vec r}}$ cor­re­sponds to a wave func­tion that is nonzero at only that sin­gle point. And if the wave func­tion is con­cen­trated at the sin­gle point ${\underline{\skew0\vec r}}$, it must then be in­fi­nitely large at that point. Re­lax­ing the nor­mal­iza­tion con­di­tion a bit, the ap­pro­pri­ate in­fi­nitely con­cen­trated math­e­mat­i­cal func­tion is called the “delta func­tion,” $\Psi$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\delta^3({\skew0\vec r}-{\underline{\skew0\vec r}})$, chap­ter 7.9. Here ${\underline{\skew0\vec r}}$ is the po­si­tion of the par­ti­cle and ${\skew0\vec r}$ the po­si­tion at which the delta func­tion is eval­u­ated. If ${\skew0\vec r}$ is not equal to ${\underline{\skew0\vec r}}$, the delta func­tion is zero; but at ${\skew0\vec r}$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\underline{\skew0\vec r}}$ it is in­fi­nite. A delta func­tion by it­self in­te­grates to 1; its square mag­ni­tude would in­te­grate to in­fin­ity. So it is def­i­nitely not nor­mal­ized.

Like any func­tion, a delta func­tion can be writ­ten in terms of the sin­gle-par­ti­cle en­ergy eigen­func­tions $\psi_n$ as

\begin{displaymath}
\delta^3({\skew0\vec r}-{\underline{\skew0\vec r}}) = \sum_{{\rm all\ }n} c_n \psi_n({\skew0\vec r})
\end{displaymath}

Here the co­ef­fi­cients $c_n$ can be found by tak­ing in­ner prod­ucts of both sides with an ar­bi­trary eigen­func­tion $\psi_{\underline n}$. That gives, not­ing that or­tho­nor­mal­ity of the eigen­func­tions only leaves $c_{\underline n}$ in the right-hand side,

\begin{displaymath}
c_{\underline n}= \int \psi_{\underline n}^*({\skew0\vec r}) \delta^3({\skew0\vec r}-{\underline{\skew0\vec r}}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

The in­te­gral is over all space. The in­dex ${\underline n}$ can be reno­tated as $n$ since the above ex­pres­sion ap­plies for all pos­si­ble val­ues of ${\underline n}$. Also, an in­ner prod­uct with a delta func­tion can eas­ily be eval­u­ated. The in­ner prod­uct above sim­ply picks out the value of $\psi_n^*$ at ${\underline{\skew0\vec r}}$. So

\begin{displaymath}
c_n = \psi_n^*({\underline{\skew0\vec r}})
\end{displaymath}

Af­ter all, ${\underline{\skew0\vec r}}$ is the only po­si­tion where the delta func­tion is nonzero. So fi­nally

\begin{displaymath}
\delta^3({\skew0\vec r}-{\underline{\skew0\vec r}}) = \sum_{{\rm all\ }n} \psi^*_n({\underline{\skew0\vec r}}) \psi_n({\skew0\vec r})
\end{displaymath}
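This completeness relation is easy to verify numerically. The sketch below (an illustration added here, not from the text) uses one-dimensional particle-in-a-box eigenfunctions $\psi_n(x)=\sqrt{2/L}\sin(n\pi x/L)$ on a grid of $N$ interior points; with all $N$ modes kept, the sum reproduces the discrete delta function $\delta_{ij}/\Delta x$ exactly, by the orthogonality of the discrete sine transform.

```python
import numpy as np

# Check sum_n psi_n*(x_i) psi_n(x_j) = delta_ij / dx (discrete delta
# function) for particle-in-a-box eigenfunctions on N interior grid points.
L = 1.0
N = 50                                  # grid points = number of modes kept
dx = L / (N + 1)
x = np.arange(1, N + 1) * dx            # interior grid points

n = np.arange(1, N + 1)
psi = np.sqrt(2.0 / L) * np.sin(np.outer(n, x) * np.pi / L)  # psi[n-1, j]

S = psi.T @ psi                         # S[i, j] = sum_n psi_n(x_i) psi_n(x_j)
assert np.allclose(S, np.eye(N) / dx)   # discrete delta: integrates to 1 vs dx
print("completeness relation verified on the grid")
```

Truncating the sum to fewer modes gives a tall narrow peak of finite height instead, which is how the infinite delta function is approached in the limit.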

Since $\psi^*_n({\underline{\skew0\vec r}})$ is the amount of eigen­func­tion $\psi_n$ that must be cre­ated to cre­ate the delta func­tion at ${\underline{\skew0\vec r}}$, the an­ni­hi­la­tion and cre­ation field op­er­a­tors should pre­sum­ably be

\begin{displaymath}
\widehat a({\underline{\skew0\vec r}}) = \sum_n \psi_n({\underline{\skew0\vec r}})\widehat a_n
\qquad
\widehat a^\dagger ({\underline{\skew0\vec r}}) = \sum_n \psi^*_n({\underline{\skew0\vec r}})\widehat a^\dagger _n
\end{displaymath} (A.65)

The an­ni­hi­la­tion op­er­a­tor is again the Her­mit­ian con­ju­gate of the cre­ation op­er­a­tor.

In the case of noninteracting particles in free space, the energy eigenfunctions are the momentum eigenfunctions $e^{{\rm i}{\skew0\vec p}\cdot{\skew0\vec r}/\hbar}$. The combination ${\vec k}$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\skew0\vec p}$$\raisebox{.5pt}{$/$}$$\hbar$ is commonly referred to as the “wave number vector.” Note that in infinite free space, the sums become integrals called Fourier transforms; see chapters 7.9 and 7.10.1 for more details.

To check the ap­pro­pri­ate­ness of the cre­ation field op­er­a­tor as de­fined above, con­sider its con­sis­tency with clas­si­cal quan­tum me­chan­ics. A clas­si­cal wave func­tion $\Psi$ can al­ways be writ­ten as a com­bi­na­tion of the en­ergy eigen­func­tions:

\begin{displaymath}
\Psi({\skew0\vec r}) = \sum_n c_n \psi_n({\skew0\vec r})
\qquad
c_n = \int \psi_n^*({\skew0\vec r}) \Psi({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

That is the same as for the delta func­tion case above. How­ever, any nor­mal func­tion also al­ways sat­is­fies

\begin{displaymath}
\Psi({\skew0\vec r}) = \int \Psi({\underline{\skew0\vec r}}) \delta^3({\skew0\vec r}-{\underline{\skew0\vec r}}) {\,\rm d}^3{\underline{\skew0\vec r}}
\end{displaymath}

That is because the delta function picks out the value of $\Psi({\underline{\skew0\vec r}})$ at ${\underline{\skew0\vec r}}$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\skew0\vec r}$, as already noted above. You can look at the expression above as follows: $\Psi({\skew0\vec r})$ is a combination of position states $\delta^3({\skew0\vec r}-{\underline{\skew0\vec r}}){\rm d}^3{\underline{\skew0\vec r}}$ with coefficients $\Psi({\underline{\skew0\vec r}})$. So here the classical wave function is written as a combination of position states instead of energy states.

Now this needs to be con­verted to quan­tum field form. The clas­si­cal wave func­tion then be­comes a com­bi­na­tion ${\left\vert\Psi\right\rangle}$ of Fock space kets. But by de­f­i­n­i­tion, the cre­ation field op­er­a­tor $\widehat a^\dagger ({\underline{\skew0\vec r}})$ ap­plied on the vac­uum state ${\left\vert\right\rangle}$ should pro­duce the Fock space equiv­a­lent of a delta func­tion at ${\underline{\skew0\vec r}}$. So the above clas­si­cal wave func­tion should con­vert to a Fock space wave func­tion as

\begin{displaymath}
{\left\vert\Psi\right\rangle} = \int \Psi({\underline{\skew0\vec r}}) \widehat a^\dagger ({\underline{\skew0\vec r}}) {\left\vert\right\rangle} {\,\rm d}^3{\underline{\skew0\vec r}}
\end{displaymath}

To check that, sub­sti­tute in the de­f­i­n­i­tion of the cre­ation field op­er­a­tor:

\begin{displaymath}
{\left\vert\Psi\right\rangle} = \sum_n \int \psi_n^*({\underline{\skew0\vec r}}) \Psi({\underline{\skew0\vec r}}) {\,\rm d}^3{\underline{\skew0\vec r}}\;\; \widehat a^\dagger _n{\left\vert\right\rangle}
\end{displaymath}

But $\widehat a^\dagger _n{\left\vert\right\rangle}$ is the Fock space equiv­a­lent of the clas­si­cal en­ergy eigen­func­tion $\psi_n$. The rea­son is that $\widehat a^\dagger _n$ puts ex­actly one par­ti­cle in the state $\psi_n$. And the in­te­gral is the same co­ef­fi­cient $c_n$ of this en­ergy eigen­state as in the clas­si­cal case. So the cre­ation field op­er­a­tor as de­fined does pro­duce the cor­rect com­bi­na­tion of en­ergy states.

As a check on the ap­pro­pri­ate­ness of the an­ni­hi­la­tion field op­er­a­tor, con­sider the Hamil­ton­ian. The Hamil­ton­ian of non­in­ter­act­ing par­ti­cles sat­is­fies

\begin{displaymath}
H {\left\vert\Psi\right\rangle} = \sum_n \widehat a^\dagger _n {\vphantom' E}^{\rm p}_n \widehat a_n {\left\vert\Psi\right\rangle}
\end{displaymath}

Here ${\vphantom' E}^{\rm p}_n$ is the single-particle energy and ${\left\vert\Psi\right\rangle}$ stands for a state described by Fock space kets. The ground state energy was taken to be zero for simplicity. Note the critical role of the trailing $\widehat a_n$. States with no particles should not produce energy. The trailing $\widehat a_n$ ensures that they do not; it produces 0 when state $n$ has no particles.

In terms of an­ni­hi­la­tion and cre­ation field op­er­a­tors, you would like the Hamil­ton­ian to be de­fined sim­i­larly:

\begin{displaymath}
H {\left\vert\Psi\right\rangle} = \int \widehat a^\dagger ({\skew0\vec r}) H^{\rm p}\widehat a({\skew0\vec r}) {\left\vert\Psi\right\rangle} {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

Note that the sum has be­come an in­te­gral, as ${\skew0\vec r}$ is a con­tin­u­ous vari­able. Also, the sin­gle-par­ti­cle en­ergy ${\vphantom' E}^{\rm p}$ has be­come the sin­gle-par­ti­cle Hamil­ton­ian; that is nec­es­sary be­cause po­si­tion states are not en­ergy eigen­states with def­i­nite en­ergy. The trail­ing $\widehat a({\skew0\vec r})$ en­sures that po­si­tions with no par­ti­cles do not con­tribute to the Hamil­ton­ian.

Now, if the de­f­i­n­i­tions of the field op­er­a­tors are right, this Hamil­ton­ian should still pro­duce the same an­swer as be­fore. Sub­sti­tut­ing in the de­f­i­n­i­tions of the field op­er­a­tors gives

\begin{displaymath}
H {\left\vert\Psi\right\rangle} = \int
\sum_{\underline n} \psi^*_{\underline n}({\skew0\vec r}) \widehat a^\dagger _{\underline n}
\; H^{\rm p} \sum_n \psi_n({\skew0\vec r}) \widehat a_n {\left\vert\Psi\right\rangle} {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

The sin­gle-par­ti­cle Hamil­ton­ian $H^{\rm p}$ ap­plied on $\psi_n$ gives a fac­tor ${\vphantom' E}^{\rm p}_n$. And or­tho­nor­mal­ity of the eigen­func­tions im­plies that the in­te­gral is zero un­less ${\underline n}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $n$. And in that case, the square en­ergy eigen­func­tion mag­ni­tude in­te­grates to 1. That then im­plies that the Hamil­ton­ian is in­deed the same as be­fore.

The above ar­gu­ment roughly fol­lows [43, pp. 22-29], but note that this source puts a tilde on $\widehat a^\dagger _n$ and $\widehat a_n$ as de­fined here. See also [35, pp. 19-24] for a some­what dif­fer­ent ap­proach, with a some­what dif­fer­ent de­f­i­n­i­tion of the an­ni­hi­la­tion and cre­ation field op­er­a­tors.

One final question that is much messier is in what sense these operators really create or annihilate a particle localized at ${\underline{\skew0\vec r}}$. An answer can be given using arguments like those used for the electromagnetic field in {A.23.4}. In particular, you want to leave some uncertainty in the number of particles created at position ${\underline{\skew0\vec r}}$. Then the expectation values for the observable field do become strongly localized near position ${\underline{\skew0\vec r}}$. The details will be skipped. But qualitatively, the fact that in quantum field theory there is uncertainty in the number of particles does of course add to the uncertainty in the measured quantities.

A big ad­van­tage of the way the an­ni­hi­la­tion and cre­ation op­er­a­tors were de­fined now shows up: the an­ni­hi­la­tion and cre­ation field op­er­a­tors sat­isfy es­sen­tially the same (anti)com­mu­ta­tion re­la­tions. In par­tic­u­lar

\begin{displaymath}
\fbox{$\displaystyle
\Big[\widehat a_{\rm{b}}({\skew0\vec r}),\widehat a_{\rm{b}}({\underline{\skew0\vec r}})\Big] = 0
\quad
\Big[\widehat a^\dagger _{\rm{b}}({\skew0\vec r}),\widehat a^\dagger _{\rm{b}}({\underline{\skew0\vec r}})\Big] = 0
\quad
\Big[\widehat a_{\rm{b}}({\skew0\vec r}),\widehat a^\dagger _{\rm{b}}({\underline{\skew0\vec r}})\Big] = \delta^3({\skew0\vec r}-{\underline{\skew0\vec r}})
$} %
\end{displaymath} (A.66)


\begin{displaymath}
\fbox{$\displaystyle
\Big\{\widehat a_{\rm{f}}({\skew0\vec r}),\widehat a_{\rm{f}}({\underline{\skew0\vec r}})\Big\} = 0
\quad
\Big\{\widehat a^\dagger _{\rm{f}}({\skew0\vec r}),\widehat a^\dagger _{\rm{f}}({\underline{\skew0\vec r}})\Big\} = 0
\quad
\Big\{\widehat a_{\rm{f}}({\skew0\vec r}),\widehat a^\dagger _{\rm{f}}({\underline{\skew0\vec r}})\Big\} = \delta^3({\skew0\vec r}-{\underline{\skew0\vec r}})
$} %
\end{displaymath} (A.67)

In other ref­er­ences you might see an ad­di­tional con­stant mul­ti­ply­ing the three-di­men­sion­al delta func­tion, de­pend­ing on how the po­si­tion and mo­men­tum eigen­func­tions were nor­mal­ized.

To check these commutators, plug in the definitions of the field operators. Then the zero commutators above follow immediately from the ones for $\widehat a_n$ and $\widehat a^\dagger _n$, (A.61) and (A.63). For the nonzero commutator, multiply by a completely arbitrary function $f({\skew0\vec r})$ and integrate over ${\skew0\vec r}$. That gives $f({\underline{\skew0\vec r}})$, which is the same result as obtained from integrating against $\delta^3({\skew0\vec r}-{\underline{\skew0\vec r}})$. That can only be true for every function $f$ if the commutator is the delta function. (In fact, producing $f({\underline{\skew0\vec r}})$ for any $f({\skew0\vec r})$ is exactly how a delta function would be defined by a conscientious mathematician.)
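The same check can be done numerically in a discretized one-dimensional setting (an illustration added here, not from the text). With $N$ grid points and all $N$ particle-in-a-box modes kept, the field operators built from the mode operators via (A.65) satisfy $\{\widehat a(x_i),\widehat a^\dagger(x_j)\}=\delta_{ij}/\Delta x$, the discrete delta function. The Jordan-Wigner matrix representation of the fermionic mode operators is an assumption for concreteness; any representation with the anticommutators (A.63) would do.

```python
import numpy as np
from functools import reduce

N = 3                                            # modes = grid points
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # single-mode annihilator
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def mode_op(n):
    """Annihilation operator for mode n (0-based) on the 2^N Fock space."""
    factors = [Z] * n + [sm] + [I2] * (N - n - 1)
    return reduce(np.kron, factors)

a = [mode_op(n) for n in range(N)]

# Particle-in-a-box eigenfunctions on the interior grid points:
L = 1.0
dx = L / (N + 1)
x = np.arange(1, N + 1) * dx
psi = np.sqrt(2.0 / L) * np.sin(np.outer(np.arange(1, N + 1), x) * np.pi / L)

# Field operators a(x_j) = sum_n psi_n(x_j) a_n, as in (A.65):
field = [sum(psi[n, j] * a[n] for n in range(N)) for j in range(N)]

for i in range(N):
    for j in range(N):
        anti = field[i] @ field[j].T + field[j].T @ field[i]
        expected = (1.0 / dx if i == j else 0.0) * np.eye(2 ** N)
        assert np.allclose(anti, expected)
print("{a(x_i), a^dagger(x_j)} = delta_ij/dx verified")
```

The factor $1/\Delta x$ is the discrete stand-in for the infinite height of the delta function; it is the same kind of normalization constant mentioned in the paragraph above.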

Field op­er­a­tors help solve a vex­ing prob­lem for rel­a­tivis­tic quan­tum me­chan­ics: how to put space and time on equal foot­ing, [43, p. 7ff]. Rel­a­tiv­ity un­avoid­ably mixes up po­si­tion and time. But clas­si­cal quan­tum me­chan­ics, as cov­ered in this book, needs to keep them rigidly apart.

Right at the beginning, this book told you that observable quantities are the eigenvalues of Hermitian operators. That was not completely true; there is an exception. Spatial coordinates are indeed the eigenvalues of Hermitian position operators, chapter 7.9. But time is not an eigenvalue of an operator. When this book wrote a wave function as, say, $\Psi({\skew0\vec r},S_z;t)$, the time $t$ was just a label. It indicated that at any given time, you have some wave function. Then you can apply purely spatial operators like $x$, ${\widehat p}_x$, $H$, etcetera to find out things about the measurable position, momentum, energy, etcetera at that time. At a different time you have a different wave function, for which you can do the same things. Time itself is left out in the cold.

Correspondingly, the classical Schrödinger equation ${\rm i}\hbar\partial\Psi$$\raisebox{.5pt}{$/$}$$\partial{t}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $H\Psi$ treats space and time quite differently. The spatial derivatives, in $H$, are second order, but the time derivative is first order. The first-order time derivative describes the change from one spatial wave function to the next one, a time $\partial{t}$ later. Of course, you cannot think of the spatial derivatives in the same way. Even if there was only one spatial coordinate instead of three, the second order spatial derivatives would not represent a change of wave function from one position to the next.

The dif­fer­ent treat­ment of time and space causes prob­lems in gen­er­al­iz­ing the Schrö­din­ger equa­tion to the rel­a­tivis­tic case.

For spin­less par­ti­cles, the sim­plest gen­er­al­iza­tion of the Schrö­din­ger equa­tion is the Klein-Gor­don equa­tion, {A.14}. How­ever, this equa­tion brings in states with neg­a­tive en­er­gies, in­clud­ing neg­a­tive rest mass en­er­gies. That is a prob­lem. For ex­am­ple, what pre­vents a par­ti­cle from tran­si­tion­ing to states of more and more neg­a­tive en­ergy, re­leas­ing in­fi­nite amounts of en­ergy in the process? There is no clean way to deal with such prob­lems within the bare con­text of the Klein-Gor­don equa­tion.

There is also the mat­ter of what to make of the Klein-Gor­don wave func­tion. It ap­pears as if a wave func­tion for a sin­gle par­ti­cle is be­ing writ­ten down, like it would be for the Schrö­din­ger equa­tion. But for the Schrö­din­ger equa­tion the in­te­grated square mag­ni­tude of the wave func­tion is 1 and stays 1. That is taken to mean that the prob­a­bil­ity of find­ing the par­ti­cle is 1 if you look every­where. But the Klein-Gor­don equa­tion does not pre­serve the in­te­grated square mag­ni­tude of the wave func­tion in time. That is not sur­pris­ing, since in rel­a­tiv­ity par­ti­cles can be cre­ated out of en­ergy or an­ni­hi­lated. But if that is so, in what sense could the Klein-Gor­don equa­tion pos­si­bly de­scribe a wave func­tion for a sin­gle, (i.e. ex­actly 1), par­ti­cle?

(Of course, this is not a prob­lem for sin­gle-par­ti­cle en­ergy eigen­states. En­ergy eigen­states are sta­tion­ary, chap­ter 7.1.4. It is also not a prob­lem if there are only par­ti­cle states, or only an­tipar­ti­cle states, {D.32}. The real prob­lems start when you try to add per­tur­ba­tions to the equa­tion.)

For fermions with spin $\leavevmode \kern.03em\raise.7ex\hbox{\the\scriptfont0 1}\kern-.2em
/\kern-.21em\lower.56ex\hbox{\the\scriptfont0 2}\kern.05em$, the appropriate generalization of the Schrödinger equation is the Dirac equation, chapter 12.12. However, there are still those negative-energy solutions. Dirac postulated that all, infinitely many, negative energy states in the universe are already filled with electrons. That is obviously a rather ugly assumption. Worse, it would not work for bosons. Any number of bosons can go into a single state, so bosons could never fill the negative-energy states.

Quan­tum field the­ory can put space and time on a more equal foot­ing, es­pe­cially in the Heisen­berg for­mu­la­tion, {A.12}. This for­mu­la­tion pushes time from the wave func­tion onto the op­er­a­tor. To see how this works, con­sider some ar­bi­trary in­ner prod­uct in­volv­ing a Schrö­din­ger op­er­a­tor $\widehat{A}$:

\begin{displaymath}
{\left\langle\Phi\hspace{0.3pt}\right\vert}\widehat A {\hspace{-\nulldelimiterspace}\left.\Psi\right\rangle}
\end{displaymath}

(Why look at in­ner prod­ucts? Sim­ply put, if you get all in­ner prod­ucts right, you get the quan­tum me­chan­ics right. Any­thing in quan­tum me­chan­ics can be found by tak­ing the right in­ner prod­uct.) Now re­call that if a wave func­tion $\Psi$ has def­i­nite en­ergy $E$, it varies in time as $e^{-{{\rm i}}Et/\hbar}\Psi_0$ where $\Psi_0$ is in­de­pen­dent of time, chap­ter 7.1.2. If $\Psi$ does not have def­i­nite en­ergy, you can re­place $E$ in the ex­po­nen­tial by the Hamil­ton­ian $H$. (Ex­po­nen­tials of op­er­a­tors are de­fined by their Tay­lor se­ries.) So the in­ner prod­uct be­comes

\begin{displaymath}
{\left\langle\Phi_0\hspace{0.3pt}\right\vert} e^{{\rm i}H t/\hbar} \widehat A \, e^{-{\rm i}H t/\hbar} {\hspace{-\nulldelimiterspace}\left.\Psi_0\right\rangle}
\end{displaymath}

(Re­call that ${\rm i}$ changes sign when taken to the other side of an in­ner prod­uct.) The Heisen­berg $\widetilde{A}$ op­er­a­tor ab­sorbs the ex­po­nen­tials:

\begin{displaymath}
\widetilde A \equiv e^{{\rm i}H t/\hbar} \widehat A e^{-{\rm i}H t/\hbar}
\end{displaymath}

Now note that if $\widehat{A}$ is a field operator, the position coordinates in it are not Hermitian operators. They are labels, just like time. They label what position the particle is annihilated or created at. So space and time are now treated much more equally.
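The absorption of the exponentials into the Heisenberg operator is easy to check with matrices (a numerical sketch added here, not from the text; random Hermitian matrices stand in for $H$ and $\widehat A$, with $\hbar$ set to 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M + M.conj().T                      # Hermitian stand-in for the Hamiltonian
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.conj().T                      # Hermitian stand-in for the observable

w, V = np.linalg.eigh(H)                # H = V diag(w) V^dagger

def evolve(t):
    """exp(-i H t / hbar) with hbar = 1, via the eigendecomposition."""
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

Phi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

t = 0.7
U = evolve(t)
# Schrodinger picture: the states carry the time dependence.
lhs = (U @ Phi0).conj() @ A @ (U @ Psi0)
# Heisenberg picture: the operator absorbs the exponentials.
A_tilde = evolve(-t) @ A @ evolve(t)    # e^{iHt} A e^{-iHt}
rhs = Phi0.conj() @ A_tilde @ Psi0
assert np.isclose(lhs, rhs)
print("Schrodinger and Heisenberg pictures give the same inner product")
```

Since every inner product comes out the same, the two pictures describe the same physics; only the bookkeeping of where the time dependence sits differs.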

Here is where the term field in “quantum field theory” comes from. In classical physics, a field is a numerical function of position. For example, a pressure field in a moving fluid has a value, the pressure, at each position. An electric field has three values, the components of the electric field, at each position. However, in quantum field theory, a field does not consist of values, but of operators. Each position has one or more operators associated with it. Each particle type is associated with a field. This field will involve both creation and annihilation operators of that particle, or the associated antiparticle, at each position.

Within the quan­tum field frame­work, equa­tions like the Klein-Gor­don and Dirac ones can be given a clear mean­ing. The eigen­func­tions of these equa­tions give states that par­ti­cles can be in. Since en­ergy eigen­func­tions are sta­tion­ary, con­ser­va­tion of prob­a­bil­ity is not an is­sue.

It may be men­tioned that there is an al­ter­nate way to put space and time on an equal foot­ing, [43, p. 10]. In­stead of turn­ing spa­tial co­or­di­nates into la­bels, time can be turned into an op­er­a­tor. How­ever, clearly wave func­tions do evolve with time, even if dif­fer­ent ob­servers may dis­agree about the de­tails. So what to make of the time pa­ra­me­ter in the Schrö­din­ger equa­tion? Rel­a­tiv­ity of­fers an an­swer. The time in the Schrö­din­ger equa­tion can be as­so­ci­ated with the proper time of the con­sid­ered par­ti­cle. That is the time mea­sured by an ob­server mov­ing along with the par­ti­cle, chap­ter 1.2.2. The time mea­sured by an ob­server in an in­er­tial co­or­di­nate sys­tem is then pro­moted to an op­er­a­tor. All this can be done. In fact, it is the start­ing point of the so-called “string the­ory.” In string the­ory, a sec­ond pa­ra­me­ter is added to proper time. You might think of the sec­ond pa­ra­me­ter as the arc length along a string that wig­gles around in time. How­ever, ap­proaches along these lines are ex­tremely com­pli­cated. Quan­tum field the­ory re­mains the work­horse of rel­a­tivis­tic quan­tum me­chan­ics.


A.15.10 Non­rel­a­tivis­tic quan­tum field the­ory

This ex­am­ple ex­er­cise from Sred­nicki [43, p. 11] uses quan­tum field the­ory to de­scribe non­rel­a­tivis­tic quan­tum me­chan­ics. It il­lus­trates some of the math­e­mat­ics that you will en­counter in quan­tum field the­o­ries.

The ob­jec­tive is to con­vert the clas­si­cal non­rel­a­tivis­tic Schrö­din­ger equa­tion for $I$ par­ti­cles,

\begin{displaymath}
{\rm i}\hbar \frac{\partial \Psi}{\partial t} = H_{\rm cl} \Psi %
\end{displaymath} (A.68)

into quan­tum field form. The clas­si­cal wave func­tion has the po­si­tions of the num­bered par­ti­cles and time as ar­gu­ments:
\begin{displaymath}
\mbox{classical quantum mechanics:}\quad
\Psi=\Psi({\skew0\vec r}_1,{\skew0\vec r}_2,{\skew0\vec r}_3,\ldots,{\skew0\vec r}_I;t) %
\end{displaymath} (A.69)

where ${\skew0\vec r}_1$ is the po­si­tion of par­ti­cle 1, ${\skew0\vec r}_2$ is the po­si­tion of par­ti­cle 2, etcetera. (You could in­clude par­ti­cle spin within the vec­tors ${\skew0\vec r}$ if you want. But par­ti­cle spin is in fact rel­a­tivis­tic, chap­ter 12.12.) The clas­si­cal Hamil­ton­ian is
\begin{displaymath}
H_{\rm cl}
= \sum_{i=1}^I\left(-\frac{\hbar^2}{2m}\nabla^2_i + V_{\rm ext}({\skew0\vec r}_i)\right)
+ {\textstyle\frac{1}{2}} \sum_{i=1}^I \sum_{{\underline i}=1,\,{\underline i}\ne i}^I
V({\skew0\vec r}_i-{\skew0\vec r}_{\underline i}) %
\end{displaymath} (A.70)

The $\nabla_i^2$ term rep­re­sents the ki­netic en­ergy of par­ti­cle num­ber $i$. The po­ten­tial $V_{\rm {ext}}$ rep­re­sents forces on the par­ti­cles by ex­ter­nal sources, while the po­ten­tial $V$ rep­re­sents forces be­tween par­ti­cles.

In quan­tum field the­ory, the wave func­tion for ex­actly $I$ par­ti­cles takes the form

\begin{displaymath}
{\left\vert\Psi\right\rangle} =
\int_{{\rm all\ }{\skew0\vec r}_1}\cdots\int_{{\rm all\ }{\skew0\vec r}_I}
\Psi({\skew0\vec r}_1,\ldots,{\skew0\vec r}_I;t)\;
\widehat a^\dagger ({\skew0\vec r}_1)\cdots\widehat a^\dagger ({\skew0\vec r}_I)
{\left\vert\vec 0\right\rangle} {\,\rm d}^3{\skew0\vec r}_1\ldots{\rm d}^3{\skew0\vec r}_I %
\end{displaymath} (A.71)

Here the ket ${\left\vert\Psi\right\rangle}$ in the left hand side is the wave func­tion ex­pressed as a Fock space ket. The ket ${\left\vert\vec0\right\rangle}$ to the far right is the vac­uum state where there are no par­ti­cles. How­ever, the pre­ced­ing cre­ation op­er­a­tors then put in the par­ti­cles at po­si­tions ${\skew0\vec r}_1$, ${\skew0\vec r}_2$, .... That pro­duces a ket state with the par­ti­cles at these po­si­tions.

The quan­tum am­pli­tude of that ket state is the pre­ced­ing $\Psi$, a func­tion, not a ket. This is the clas­si­cal non­rel­a­tivis­tic wave func­tion, the one found in the non­rel­a­tivis­tic Schrö­din­ger equa­tion. Af­ter all, the clas­si­cal wave func­tion is sup­posed to give the quan­tum am­pli­tude for the par­ti­cles to be at given po­si­tions. In par­tic­u­lar, its square mag­ni­tude gives the prob­a­bil­ity for them to be at given po­si­tions.

So far, all this gives just the ket for one par­tic­u­lar set of par­ti­cle po­si­tions. But then it is in­te­grated over all pos­si­ble par­ti­cle po­si­tions.

The Fock space Schrö­din­ger equa­tion for ${\left\vert\Psi\right\rangle}$ takes the form

\begin{displaymath}
{\rm i}\hbar\frac{{\rm d}{\left\vert\Psi\right\rangle}}{{\rm d}t} = H {\left\vert\Psi\right\rangle} %
\end{displaymath} (A.72)

That looks just like the clas­si­cal case. How­ever, the Fock space Hamil­ton­ian $H$ is de­fined by
\begin{eqnarray*}
H {\left\vert\Psi\right\rangle} & = & \int_{{\rm all\ }{\skew0\vec r}}
\widehat a^\dagger ({\skew0\vec r})
\left(-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm ext}({\skew0\vec r})\right)
\widehat a({\skew0\vec r}) {\left\vert\Psi\right\rangle}
{\,\rm d}^3{\skew0\vec r}
\\
& & +
{\textstyle\frac{1}{2}} \int_{{\rm all\ }{\skew0\vec r}}\int_{{\rm all\ }{\underline{\skew0\vec r}}}
\widehat a^\dagger ({\skew0\vec r})\widehat a^\dagger ({\underline{\skew0\vec r}})
V({\skew0\vec r}-{\underline{\skew0\vec r}})
\widehat a({\underline{\skew0\vec r}})\widehat a({\skew0\vec r}) {\left\vert\Psi\right\rangle}
{\,\rm d}^3{\skew0\vec r}{\rm d}^3{\underline{\skew0\vec r}} %
\end{eqnarray*} (A.73)

In or­der for this to make some sense, note that the Fock space ket ${\left\vert\Psi\right\rangle}$ is an ob­ject that al­lows you to an­ni­hi­late or cre­ate a par­ti­cle at any ar­bi­trary lo­ca­tion ${\skew0\vec r}$. That is be­cause it is a lin­ear com­bi­na­tion of ba­sis kets that al­low the same thing.

The goal is now to show that the Schrö­din­ger equa­tion (A.72) for the Fock space ket ${\left\vert\Psi\right\rangle}$ pro­duces the clas­si­cal Schrö­din­ger equa­tion (A.68) for clas­si­cal wave func­tion $\Psi(\ldots)$. This needs to be shown whether it is a sys­tem of iden­ti­cal bosons or a sys­tem of iden­ti­cal fermi­ons.

Be­fore try­ing to tackle this prob­lem, it is prob­a­bly a good idea to re­view rep­re­sen­ta­tions of func­tions us­ing delta func­tions. As the sim­plest ex­am­ple, a wave func­tion $\Psi(x)$ of just one spa­tial co­or­di­nate can be writ­ten as

\begin{displaymath}
\Psi(x) =
\int_{{\rm all\ }{\underline x}}
\;
\underbrace{\Psi({\underline x})}_{{\rm coefficients}}
\;
\underbrace{\delta(x - {\underline x}){\rm d}{\underline x}}_{{\rm basis\ states}}
\end{displaymath}

The way to think about the above in­te­gral ex­pres­sion for $\Psi(x)$ is just like you would think about a vec­tor in three di­men­sions be­ing writ­ten as $\vec{v}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $v_1{\hat\imath}+v_2{\hat\jmath}+v_3{\hat k}$ or a vec­tor in 30 di­men­sions as $\vec{v}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\sum_{i=1}^{30}v_i{\hat\imath}_i$. The $\Psi({\underline x})$ are the co­ef­fi­cients, cor­re­spond­ing to the $v_i$-​com­po­nents of the vec­tors. The $\delta(x-{\underline x}){\rm d}{\underline x}$ are the ba­sis states, just like the unit vec­tors ${\hat\imath}_i$. If you want a graph­i­cal il­lus­tra­tion, each $\delta(x-{\underline x}){\rm d}{\underline x}$ would cor­re­spond to one spike of unit height at a po­si­tion ${\underline x}$ in fig­ure 2.3, and you need to sum (in­te­grate) over them all, with their co­ef­fi­cients, to get the to­tal vec­tor.

Now as­sume that $H_1$ is the one-di­men­sion­al clas­si­cal Hamil­ton­ian. Then $H_1\Psi(x)$ is just an­other func­tion of $x$, so it can be writ­ten sim­i­larly:

\begin{eqnarray*}
H_1 \Psi(x)
& = &
\int_{{\rm all\ }{\underline x}}
\left[
H_1({\underline x}) \Psi({\underline x})
\right]
\delta(x - {\underline x})
{\,\rm d}{\underline x}
\end{eqnarray*}

Note that the Hamil­ton­ian acts on the co­ef­fi­cients, not on the ba­sis states.

You may be sur­prised by this, be­cause if you straight­for­wardly ap­ply the Hamil­ton­ian $H_1$, in terms of $x$, on the in­te­gral ex­pres­sion for $\Psi(x)$, you get:

\begin{displaymath}
H_1 \Psi(x) =
\int_{{\rm all\ }{\underline x}}
\Psi({\underline x})
\left[
H_1(x) \delta(x - {\underline x})
\right]
{\,\rm d}{\underline x}
\end{displaymath}

Here the Hamil­ton­ian acts on the ba­sis states, not the co­ef­fi­cients.

How­ever, the two ex­pres­sions are in­deed the same. Whether there is an $x$ or ${\underline x}$ in the po­ten­tial does not make a dif­fer­ence, be­cause the mul­ti­ply­ing delta func­tion is only nonzero when $x$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\underline x}$. And you can use a cou­ple of in­te­gra­tions by parts to get the de­riv­a­tives off the delta func­tion and on $\Psi({\underline x})$. Note here that dif­fer­en­ti­a­tion of the delta func­tion with re­spect to $x$ or ${\underline x}$ is the same save for a sign change.
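The manipulation with the derivatives can be written out explicitly for a second-derivative (kinetic-energy) term; this step is standard but worth seeing once. Since $\partial\delta(x-{\underline x})/\partial x$ $\vphantom0\raisebox{1.5pt}{$=$}$ $-\partial\delta(x-{\underline x})/\partial{\underline x}$, the two sign changes of the second derivative cancel, and two integrations by parts then move the derivatives onto $\Psi$; the boundary terms vanish because the delta function is zero away from $x$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\underline x}$:

\begin{eqnarray*}
\int_{{\rm all\ }{\underline x}} \Psi({\underline x})
\frac{\partial^2}{\partial x^2}\delta(x - {\underline x})
{\,\rm d}{\underline x}
& = &
\int_{{\rm all\ }{\underline x}} \Psi({\underline x})
\frac{\partial^2}{\partial {\underline x}^2}\delta(x - {\underline x})
{\,\rm d}{\underline x}
\\
& = &
\int_{{\rm all\ }{\underline x}}
\frac{\partial^2 \Psi({\underline x})}{\partial {\underline x}^2}
\;\delta(x - {\underline x})
{\,\rm d}{\underline x}
=
\frac{\partial^2 \Psi(x)}{\partial x^2}
\end{eqnarray*}

which is exactly the kinetic term of $H_1$ acting on the coefficient $\Psi$, as claimed.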

The bot­tom line is that you do not want to use the ex­pres­sion in which the Hamil­ton­ian is ap­plied to the ba­sis states, be­cause de­riv­a­tives of delta func­tions are highly sin­gu­lar ob­jects that you should not touch with a ten foot pole. (And if you have math­e­mat­i­cal in­tegrity, you would not re­ally want to use delta func­tions ei­ther. At least not the way that they do it in physics. But in that case, you bet­ter for­get about quan­tum field the­ory.)

It may here be noted that if you do have to dif­fer­en­ti­ate an in­te­gral for a func­tion $\Psi(x)$ in terms of delta func­tions, there is a much bet­ter way to do it. If you first make a change of in­te­gra­tion vari­able to $u$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\underline x}-x$, the dif­fer­en­ti­a­tion is no longer on the nasty delta func­tions.

Still, there is an im­por­tant ob­ser­va­tion here: you might ei­ther know what an op­er­a­tor does to the co­ef­fi­cients, leav­ing the ba­sis states un­touched, or what it does to the ba­sis states, leav­ing the co­ef­fi­cients un­touched. Ei­ther one will tell you the fi­nal ef­fect of the op­er­a­tor, but the math­e­mat­ics is dif­fer­ent.

Now that the gen­eral terms of en­gage­ment have been dis­cussed, it is time to start solv­ing Sred­nicki’s prob­lem. The Fock space wave func­tion ket can be thought of the same way as the ex­am­ple:

\begin{displaymath}
{\left\vert\Psi\right\rangle} =
\int_{{\rm all\ }{\skew0\vec r}_1}\cdots\int_{{\rm all\ }{\skew0\vec r}_I}
\;
\underbrace{\Psi({\skew0\vec r}_1,\ldots,{\skew0\vec r}_I;t)}_{{\rm coefficients}}
\;
\underbrace{\widehat a^\dagger ({\skew0\vec r}_1)\cdots\widehat a^\dagger ({\skew0\vec r}_I)
{\left\vert\vec 0\right\rangle}
{\,\rm d}^3{\skew0\vec r}_1\ldots{\rm d}^3{\skew0\vec r}_I}
_{{\rm Fock\ space\ basis\ state\ kets}}
\end{displaymath}

The ba­sis states are Fock space kets in which a par­ti­cle called 1 is in a delta func­tion at a po­si­tion ${\skew0\vec r}_1$, a par­ti­cle called 2 in a delta func­tion at po­si­tion ${\skew0\vec r}_2$, etcetera. The clas­si­cal wave func­tion $\Psi(\ldots)$ gives the quan­tum am­pli­tude of each such ket. The in­te­gra­tion gives ${\left\vert\Psi\right\rangle}$ as a com­bined ket.

Note that Fock states do not know about par­ti­cle num­bers. A Fock ba­sis state is the same re­gard­less what the clas­si­cal wave func­tion calls the par­ti­cles. It means that the same Fock ba­sis state ket reap­pears in the in­te­gra­tion above at all swapped po­si­tions of the par­ti­cles. (For fermi­ons read: the same ex­cept pos­si­bly a sign change, since swap­ping the or­der of ap­pli­ca­tion of any two $\widehat a^\dagger $ cre­ation op­er­a­tors flips the sign, com­pare sub­sec­tion A.15.2.) This will be­come im­por­tant at the end of the de­riva­tion.

The left hand side of the Fock space Schrö­din­ger equa­tion (A.72) is eval­u­ated by push­ing the time de­riv­a­tive in­side the above in­te­gral for ${\left\vert\Psi\right\rangle}$:

\begin{displaymath}
{\rm i}\hbar\frac{{\rm d}{\left\vert\Psi\right\rangle}}{{\rm d}t} =
\int_{{\rm all\ }{\skew0\vec r}_1}\cdots\int_{{\rm all\ }{\skew0\vec r}_I}
{\rm i}\hbar\frac{\partial\Psi}{\partial t}
\;
\widehat a^\dagger ({\skew0\vec r}_1)\cdots\widehat a^\dagger ({\skew0\vec r}_I)
{\left\vert\vec 0\right\rangle}
{\,\rm d}^3{\skew0\vec r}_1\ldots{\rm d}^3{\skew0\vec r}_I
\end{displaymath}

so the time de­riv­a­tive drops down on the clas­si­cal wave func­tion in the nor­mal way.

Ap­ply­ing the Fock-space Hamil­ton­ian (A.73) on the wave func­tion is quite a dif­fer­ent story, how­ever. It is best to start with just a sin­gle par­ti­cle:

\begin{displaymath}
H {\left\vert\Psi\right\rangle} =
\int_{{\rm all\ }{\skew0\vec r}}\int_{{\rm all\ }{\skew0\vec r}_1}
\widehat a^\dagger ({\skew0\vec r})
\left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm ext}({\skew0\vec r})\right]
\widehat a({\skew0\vec r})
\;
\Psi({\skew0\vec r}_1;t)
\widehat a^\dagger ({\skew0\vec r}_1)\vert\vec 0\rangle
{\,\rm d}^3{\skew0\vec r}_1{\rm d}^3{\skew0\vec r}
\end{displaymath}

The field op­er­a­tor $\widehat a({\skew0\vec r})$ may be pushed past the clas­si­cal wave func­tion $\Psi(\ldots)$; $\widehat a({\skew0\vec r})$ is de­fined by what it does to the Fock ba­sis states while leav­ing their co­ef­fi­cients, here $\Psi(\ldots)$, un­changed. That gives:

\begin{displaymath}
H {\left\vert\Psi\right\rangle} =
\int_{{\rm all\ }{\skew0\vec r}}\int_{{\rm all\ }{\skew0\vec r}_1}
\widehat a^\dagger ({\skew0\vec r})
\left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm ext}({\skew0\vec r})\right]
\Psi({\skew0\vec r}_1;t)
\;
\widehat a({\skew0\vec r})
\widehat a^\dagger ({\skew0\vec r}_1)\vert\vec 0\rangle
{\,\rm d}^3{\skew0\vec r}_1{\rm d}^3{\skew0\vec r}
\end{displaymath}

It is now that the (anti)commutator relations become useful. The fact that for bosons $[\widehat a({\skew0\vec r}),\widehat a^\dagger ({\skew0\vec r}_1)]$ or for fermions $\{\widehat a({\skew0\vec r}),\widehat a^\dagger ({\skew0\vec r}_1)\}$ equals $\delta^3({\skew0\vec r}-{\skew0\vec r}_1)$ means that you can swap the order of these operators as long as you add a delta function term:

\begin{eqnarray*}
& \widehat a_{\rm {b}}({\skew0\vec r})\widehat a^\dagger _{\rm {b}}({\skew0\vec r}_1)
= \widehat a^\dagger _{\rm {b}}({\skew0\vec r}_1)\widehat a_{\rm {b}}({\skew0\vec r})
+ \delta^3({\skew0\vec r}-{\skew0\vec r}_1) & \\
& \widehat a_{\rm {f}}({\skew0\vec r})\widehat a^\dagger _{\rm {f}}({\skew0\vec r}_1)
= - \widehat a^\dagger _{\rm {f}}({\skew0\vec r}_1)\widehat a_{\rm {f}}({\skew0\vec r})
+ \delta^3({\skew0\vec r}-{\skew0\vec r}_1) &
\end{eqnarray*}

But when you swap the or­der of these op­er­a­tors, you get a fac­tor $\widehat a({\skew0\vec r})\vert\vec0\rangle$. That is zero, be­cause ap­ply­ing an an­ni­hi­la­tion op­er­a­tor on the vac­uum state pro­duces zero, fig­ure A.6. So the delta func­tion term is all that re­mains:

\begin{displaymath}
H {\left\vert\Psi\right\rangle} =
\int_{{\rm all\ }{\skew0\vec r}}\int_{{\rm all\ }{\skew0\vec r}_1}
\widehat a^\dagger ({\skew0\vec r})
\left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm ext}({\skew0\vec r})\right]
\Psi({\skew0\vec r}_1;t)
\;
\delta^3({\skew0\vec r}-{\skew0\vec r}_1)
\vert\vec 0\rangle
{\,\rm d}^3{\skew0\vec r}_1{\rm d}^3{\skew0\vec r}
\end{displaymath}

In­te­gra­tion over ${\skew0\vec r}_1$ now picks out the value $\Psi({\skew0\vec r},t)$ from func­tion $\Psi({\skew0\vec r}_1,t)$, as delta func­tions do, so

\begin{displaymath}
H {\left\vert\Psi\right\rangle} =
\int_{{\rm all\ }{\skew0\vec r}}
\widehat a^\dagger ({\skew0\vec r})
\left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm ext}({\skew0\vec r})\right]
\Psi({\skew0\vec r};t)
\vert\vec 0\rangle
{\,\rm d}^3{\skew0\vec r}
\end{displaymath}

Note that the term in square brack­ets is the clas­si­cal Hamil­ton­ian $H_{\rm {cl}}$ for a sin­gle par­ti­cle. The cre­ation op­er­a­tor $\widehat a^\dagger ({\skew0\vec r})$ can be pushed over the co­ef­fi­cient $H_{\rm {cl}}\Psi({\skew0\vec r};t)$ of the vac­uum state ket for the same rea­son that $\widehat a({\skew0\vec r})$ could be pushed over $\Psi({\skew0\vec r}_1;t)$; these op­er­a­tors do not af­fect the co­ef­fi­cients of the Fock states, just the states them­selves.

Then, renaming ${\skew0\vec r}$ to ${\skew0\vec r}_1$, the grand total Fock state Schrödinger equation for a system of one particle becomes

\begin{eqnarray*}
\lefteqn{\int_{{\rm all\ }{\skew0\vec r}_1}
{\rm i}\hbar\frac{\partial\Psi({\skew0\vec r}_1;t)}{\partial t}\,
\widehat a^\dagger ({\skew0\vec r}_1)\vert\vec 0\rangle
{\,\rm d}^3{\skew0\vec r}_1 =} \\
&& \int_{{\rm all\ }{\skew0\vec r}_1}
\left[-\frac{\hbar^2}{2m}\nabla^2 + V({\skew0\vec r}_1)\right]
\Psi({\skew0\vec r}_1;t)\,
\widehat a^\dagger ({\skew0\vec r}_1)\vert\vec 0\rangle
{\,\rm d}^3{\skew0\vec r}_1
\end{eqnarray*}

It is now seen that if the classical wave function $\Psi({\skew0\vec r}_1;t)$ satisfies the classical Schrödinger equation, the Fock-space Schrödinger equation above is also satisfied. The converse holds too: if the Fock-space equation above is satisfied, the classical wave function must satisfy the classical Schrödinger equation. The reason is that Fock states can only be equal if the coefficients of all the basis states are equal, just like vectors can only be equal if all their components are equal. Here that means that the coefficient of $\widehat a^\dagger ({\skew0\vec r}_1)\vert\vec0\rangle$ must be the same at both sides, for every single value of ${\skew0\vec r}_1$.
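The equivalence can also be checked numerically on a small lattice. The sketch below (a hypothetical six-site one-dimensional model, with an arbitrary potential and unit constants, all assumptions for illustration) builds the Fock-space Hamiltonian $H=\sum_{ij}h_{ij}\,\widehat a^\dagger _i\widehat a_j$ from a finite-difference classical Hamiltonian $h$, applies it to the one-particle state $\sum_i\Psi_i\,\widehat a^\dagger _i\vert\vec0\rangle$, and confirms that the resulting coefficients are exactly $h\Psi$:

```python
import numpy as np

L, dx, hbar, m = 6, 1.0, 1.0, 1.0           # hypothetical tiny lattice
rng = np.random.default_rng(0)
V = rng.uniform(0, 1, L)                    # some potential on the sites

# Classical one-particle Hamiltonian: finite-difference kinetic term + V.
h = np.diag(V) + (hbar**2 / (m * dx**2)) * np.eye(L)
for i in range(L - 1):
    h[i, i + 1] = h[i + 1, i] = -hbar**2 / (2 * m * dx**2)

# Per-site ladder operators; occupations 0/1 suffice in the 1-particle sector.
a1 = np.array([[0., 1.], [0., 0.]])
def op(site, o):
    """Embed single-site operator o at the given site of the chain."""
    out = np.eye(1)
    for s in range(L):
        out = np.kron(out, o if s == site else np.eye(2))
    return out

A  = [op(s, a1) for s in range(L)]
Ad = [op(s, a1.T) for s in range(L)]

# Fock-space Hamiltonian  H = sum_ij h_ij a_i^dagger a_j.
H = sum(h[i, j] * Ad[i] @ A[j] for i in range(L) for j in range(L))

vac = np.zeros(2**L); vac[0] = 1.0          # |0>, all sites empty
psi = rng.normal(size=L) + 1j * rng.normal(size=L)  # classical wave function

ket  = sum(psi[i] * (Ad[i] @ vac) for i in range(L))   # sum_i psi_i a_i^dag |0>
Hket = H @ ket

# H|Psi> must have coefficients (h psi)_i on the basis kets a_i^dagger |0>.
hpsi = h @ psi
expected = sum(hpsi[i] * (Ad[i] @ vac) for i in range(L))
assert np.allclose(Hket, expected)
print("Fock-space H reproduces the classical Hamiltonian on one particle")
```

On the lattice, "coefficients of all the basis states being equal" is exactly the component-wise vector equality used by `np.allclose` here.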

If there is more than one particle, however, the latter conclusion is no longer justified. Remember that the same Fock space kets reappear in the integration at swapped positions of the particles. That now makes a difference. The following example from basic vectors illustrates the problem: yes, $a{\hat\imath}=a'{\hat\imath}$ implies that $a=a'$, but no, $(a+b){\hat\imath}=(a'+b'){\hat\imath}$ does not imply that $a=a'$ and $b=b'$; it merely implies that $a+b=a'+b'$. However, if additionally it is postulated that the classical wave function has the symmetry properties appropriate for bosons or fermions, then the Fock-space Schrödinger equation does imply the classical one. In terms of the example from vectors, $(a+a){\hat\imath}=(a'+a'){\hat\imath}$ does imply that $a=a'$.

In any case, the problem has been solved for a system with one particle. Doing it for $I$ particles will be left as an exercise for your mathematical skills.