
%% Lectures in QFT
%% by E. Horozov.

%%%%%%%     LaTeX2e, uses amsfonts.sty and latexsym.sty    %%%%%%%%%%%%%%%%%%%%
\usepackage{amsmath, amscd}

\input xypic

\usepackage[all, knot]{xy}




\setlength{\paperwidth}{8.5truein} % changed for title page with hsize

%%%%%%%%%%%%%%%%%%%%%%%%%%% Equation counting %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand {\sectionnew}[1]{\section{#1}\cleqn\clth}

\newcommand \nc {\newcommand}
\nc \proof {\noindent {\em{Proof.\/ }}} \nc \qed {$\Box$\hfill}

\nc \bth[1] {\begin{theorem}\label{t#1} } \nc \ble[1]
{\begin{lemma}\label{l#1} } \nc \bpr[1]
{\begin{proposition}\label{p#1} } \nc \bco[1]
{\begin{corollary}\label{c#1} } \nc \bde[1]
{\begin{definition}\label{d#1}\rm } \nc \bex[1]
{\begin{example}\label{e#1}\rm } \nc \bre[1]
{\begin{remark}\label{r#1}\rm } \nc \bcon[1]
{\begin{conjecture}\label{con#1}\rm } \nc \bque[1]
{\begin{question}\label{que#1}\rm }
\nc {\eth} { \end{theorem} } \nc {\ele} { \end{lemma} } \nc
{\epr}{ \end{proposition} } \nc {\eco} { \end{corollary} } \nc
{\ede} {\end{definition} } \nc {\eex} { \end{example} } \nc {\ere}
{\end{remark} } \nc {\econ} { \end{conjecture} } \nc {\eque}
{\end{question} }
 \nc \thref[1]{Theorem \ref{t#1}}
\nc \leref[1]{Lemma \ref{l#1}} \nc \prref[1]{Proposition
\ref{p#1}} \nc \coref[1]{Corollary \ref{c#1}} \nc
\deref[1]{Definition \ref{d#1}} \nc \exref[1]{Example \ref{e#1}}
\nc \reref[1]{Remark \ref{r#1}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nc \bth[1] { \begin{theorem}\label{t#1} }
\newcommand {\normprod}[1]{ {\textrm{:}}{#1}{\textrm{:}} } %%%normal product
\def \W {W_{1+\infty}}
\def \WN {\W(N)}
%\def \A {{\mathcal A}}
\def \M {{\mathcal M}}
\def \L {{\mathcal L}}
\def \O {{\mathcal O}}
\def \R {{\mathcal R}}
\def \D {{\mathcal D}}
\def \Dir{\partial \!\!\!/}
\def \Dmom{p \!\!\!/}
\def \g {\gamma}
\def \G {\Gamma}
\def \B {{\mathcal B}}
\def \bb {b}
\def\dd{{\mathrm{\, d}}}
\def \K {{\mathcal K}}

\def \e{\epsilon}
\def \ep{\varepsilon}
\def \d {{\partial}}
\def \Rset {{\mathbb R}}
\def \Cset {{\mathbb C}}
\def \Zset {{\mathbb Z}}
\def \Nset {{\mathbb N}}
\def \Vset {{\mathbb V}}
\def \A {{\mathbb A}}
\def \F {{\mathbb F}}
\def \N {{\mathbb N}}
\def \Z {{\mathbb Z}}
\def \Q {{\mathbb Q}}
\def \R {{\mathbb R}}
\def \C {{\mathbb C}}
\def \Hom{ {\mathrm{Hom}}}
\def \Aut{ {\mathrm{Aut}}}
\def \End{ {\mathrm{End}}}
\def \tr{ {\mathrm{Tr}}}
\def \coker { {\mathrm{Coker}} }
\def \ord { {\mathrm{ord}} }
\def \rank { {\mathrm{rank}} }
\def \span { {\mathrm{span}} }
\def \const { {\mathrm{const}} }
\def \mod { {\mathrm{mod}} }
\def \spec { {\mathrm{Spec}} }
\def \diag { {\mathrm{diag}} }
\def \deg { {\mathrm{deg}} }
\def \mult { {\mathrm{mult}} }
\def \res { {\mathrm{Res}} }
\def \ad { {\mathrm{ad}} }
\def \Ad { {\mathrm{Ad}} }
\def \wt { {\mathrm{wt}} }
\def \psd { {\mathrm{Psd}} }
\def \Im { {\mathrm{Im}} }
\def \Re { {\mathrm{Re}} }
\def \p { {\partial}}
\renewcommand \ker { {\mathrm{Ker}} }
\def \vect {\overrightarrow }
%%%%%%%%%%%%%   Grassmannians  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\nc \Wr {Wr} \nc \GRN { \Gr^{(N)} }
\nc \GRA[1] { \Gr_A^{(#1)} }   %% Gr_A
\nc \GRAN { \GRA{N} } \nc \GrA[1] { \Gr_A(#1) }\nc \GrAa {
\GrA{\alpha} }
\nc \GRB[1] { \Gr_B^{(#1)} }   %% Gr_B
\nc \GRBN { \GRB{N} } \nc \GrB[1] { \Gr_B(#1) } \nc \GrBb {
\GrB{\beta} }
\nc \GRMB[1] { \Gr_{MB}^{(#1)} }   %% Gr_{MB}
\nc \GRMBN { \GRMB{N} } \nc \GrMB[1] { \Gr_{MB}(#1) } \nc \GrMBb {
\GrMB{\beta} }


\headsep 10mm \oddsidemargin 0in \evensidemargin 0in



%%%%%%%%%%%%%%%%%%%%%%    Title    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{{\textsf {QUANTUM FIELD  THEORY}} \\for mathematicians}

\author{E. Horozov
\thanks{E-mail: horozov@fmi.uni-sofia.bg and horozov@math.bas.bg}
\\ \hfill\\ \normalsize \textit{Faculty of Mathematics and Informatics,}\\
\normalsize \textit{Sofia University ``St. Kliment Okhridski''},
 \\ \hfill\\
and \\ \hfill\\
\normalsize \textit{Institute of Mathematics and Informatics,}\\
\normalsize \textit{Bulg. Acad. of Sci., Acad. G. Bonchev Str.,
Block 8, 1113 Sofia, Bulgaria}}

%%%%%%%%%%%%%%%%%%%%   Introduction   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%



Since the time of Newton, Physics and Mathematics have lived mostly
in symbiosis. Physics has supplied Mathematics with
deep problems. For its part, Mathematics has developed a language
for stating physical problems and laws, and tools for solving
the problems posed by physicists. Only for part of the 20th
century was there a fashion of ``pure mathematics'', meaning
the neglect of any motivation for mathematical research other than
intrinsic mathematical problems. For some time this may have been
useful for the advance of abstract algebraic geometry, number
theory, etc. But after some genuinely fruitful years for Mathematics
there came a new marriage -- Mathematics and Physics once again
became very close. Now there is a new ingredient in their
relationship -- Physics supplies not only ideas but also
intuition and tools for {\it posing and solving mathematical
problems}. Let us mention here the problem of computing the
intersection numbers of Chern classes on moduli spaces of Riemann
surfaces \cite{Kon1, Wit1}, knot invariants, mirror symmetry, etc.





Thirty-one years ago Dick Feynman told me about his ``sum over
histories'' version of quantum mechanics. ``The electron does
anything it likes,'' he said. ``It goes in any direction with any
speed, forward or backward in time, however it likes, and then you
add up the amplitudes and it gives you the wave function.'' I said
to him, ``You are crazy.'' But he wasn't.


--F. J. Dyson


The title of this subsection is a little misleading. Here we present only one
simple (but very important) experiment, whose goal is to justify
the introduction of path integrals in physics. It is taken from R.
Feynman's numerous popular lectures (see e.g. \cite{Fey1}).


\subsection{Mathematical view on QFT}

Before presenting a more complete account of the path integral
method, we would like to explain in a few words some of the ideas on
which it rests. The Feynman path (or, more generally, functional)
integrals are integrals depending on parameters, where the
``integration'' is carried out over infinite-dimensional spaces. First we
are going to study integrals of Feynman type on
finite-dimensional spaces. These should not be considered only as
toy models for the real QFTs; they are a main ingredient in their
study. We are going to introduce the famous Feynman graphs, which
help express their asymptotic expansions in a simple way. With
their help we are going {\it to define} the Feynman integrals in
the cases relevant for QFT. The $0$-dimensional QFT, being a powerful
mathematical tool, has a lot of beautiful applications to areas far
from QFT, e.g. the topology of moduli spaces of Riemann surfaces.

From a mathematical point of view, QFT studies ``integrals'' that are
defined as follows. Let $\Sigma$ and $N$ be manifolds with a
Riemannian or pseudo-Riemannian metric. We shall denote by
$Map(\Sigma, N)$ the set of all smooth (= infinitely
differentiable) maps from $\Sigma$ to $N$. Let us also have an
action function (or rather functional) $S(\phi)$ of $\phi \in
Map(\Sigma, N)$. Let $\hbar$ be a small constant (Planck's
constant). We will be interested in the following object
(including giving sense to it):

\beq
\int_{Map(\Sigma, N)} V(\phi) \exp\Big(\frac{-S(\phi)}{\hbar}\Big) \label{2.1}
\eeq
Here $V(\phi)$ is ``an insertion function'' in the physicist's
language. This is a smooth function on $Map(\Sigma, N)$, whose
meaning will be explained later. The function
$\exp(\frac{-S(\phi)}{\hbar})$ has the meaning of the probability
amplitude with which the map $\phi \in Map(\Sigma, N)$ contributes
to the integral.

The collection of objects $\big(\Sigma, N, S(\phi), \phi \in
Map(\Sigma, N)\big)$ is called by physicists a ``theory''. In the case
when $V \equiv 1$ the above integral \eqref{2.1} is called the {\it
partition function} of the theory and is denoted by

\beq Z^E=\int_{Map(\Sigma, N)} \exp\Big(\frac{-S(\phi)}{\hbar}\Big) \label{2.2}
\eeq
 The superscript ``E'' means that the theory is {\it ``Euclidean''},
 i.e. the manifold $\Sigma$ is Riemannian. When $\Sigma$
is a pseudo-Riemannian manifold with Lorentzian metric (of signature
$(-,+,+,+)$) we call the theory a {\it relativistic QFT}. The first
coordinate is reserved for time. In that case we replace the
sign $(-)$ with the imaginary unit $i$:

\beq Z^M=\int_{Map(\Sigma, N)} \exp\Big(\frac{iS(\phi)}{\hbar}\Big) \label{2.3}
\eeq
   In this case the theory is a {\it Minkowskian QFT}, the letter
   $M$ designating this fact.
We are going to start with the $0$-dimensional theory. Of course, here
there are no time or spatial coordinates. Let us start with the
case when $\Sigma$ is one point and $N$ is the real line. Then the
set $Map(\Sigma, N)$ consists of all real constants, i.e. it is
the real line. The partition function becomes

\beq
  \int_{-\infty}^{\infty} e^{-S(x)/{\hbar}}dx \label{2.4}
\eeq
     This integral is studied by {\it the method of
steepest descent}, which will be explained in one of the
next sections.

Physicists are particularly interested in the integral
\eqref{2.3} when the manifold $\Sigma$ has dimension $\geq 3$. As
a rule its value cannot be computed explicitly. So they (starting
with Feynman) invented an algorithm to ``find'' its
     {\it asymptotic expansion} in $\hbar$. From a mathematical
     point of view even the definition needs clarification.
     One reasonable approach is to
define the integral \eqref{2.3} just by a series (asymptotic, not
convergent!), whose terms are indexed by different types of
graphs. (This is the reason why we put ``find'' in quotation marks.)

But this is not the end. The problem is that the terms of the
series are defined via integrals that are themselves not well defined.
Then comes a painful procedure of giving meaning to each of these
integrals ({\it renormalization}). The methods of renormalization
were found by physical intuition, but the resulting predictions are
confirmed by experiments.

\sectionnew{QFT in dimension $0$ -- first Feynman graphs}
In this chapter we are going to consider Feynman integrals for
$0$-dimensional manifolds $\Sigma$, i.e. when $\Sigma$ consists of
a finite number (say $d$) of points. This is the reason for the title of
the chapter. In this case the set of all mappings from $\Sigma$
to $\Rset$ is just the vector space $V=\Rset^d$ and the integrals
take the form

\beq
  \int_V e^{-S(x)/{\hbar}}dx.
\label{1.0} \eeq

\subsection{Free theory}

 In quantum field theory the case when $S$ contains only
      quadratic terms is called the {\it free theory}. We are going
      to consider the general theory as a perturbation of the free theory.
      Then the function $S$ is given in the form $S=B(x,x) +
      \sum_{m\geq 3}g_mB_m(x,x,\ldots ,x)$, where the
      $B_m(x,x,\ldots ,x)$ are {\it the interactions} and the $g_m$
      are formal parameters.

%%%%%%%%%%%%%%%%%%%%%%%%% To be reworked -- 11.03.09 %%%%%%%%%%%%%%%%%%%%%%%%%!!!!!
The simplest example of an integral of the form \eqref{2.4} is the
Gaussian integral:

\beq
\int_{-\infty}^{\infty}e^{-\frac{1}{2}ax^2}dx = \sqrt{\frac{2\pi}{a}},
\qquad a>0.
\eeq

Its multidimensional generalization is defined via a symmetric
positive-definite bilinear form $B(x,y)= (Bx,y)$ on a
$d$-dimensional space $V$, given by a positive-definite symmetric
matrix $B$. The integral we want to study is

\beq
 \int_Ve^{-\frac{1}{2}(Bx,x)}dx. \label{1.1}
\eeq
By a change of variables $x\rightarrow Sx$, where $S \in SO(d)$,
we can diagonalize the matrix $B$. Then the integral \eqref{1.1}
is easily computed:

\beq
 \int_Ve^{-\frac{1}{2}(Bx,x)}dx =
 \sqrt{\frac{(2\pi)^d}{\det B}}. \label{1.2}
\eeq
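The formula \eqref{1.2} is easy to test numerically. The following sketch (the positive-definite $2\times 2$ matrix $B$ is an arbitrary illustrative choice) compares a grid approximation of the integral with $\sqrt{(2\pi)^d/\det B}$.

```python
import numpy as np

# Numerical sanity check of the multidimensional Gaussian integral:
#   int_V exp(-(Bx,x)/2) dx = sqrt((2*pi)^d / det B),
# here for d = 2 and an arbitrarily chosen positive-definite
# symmetric matrix B (illustration only).
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Tensor-product grid on [-8, 8]^2; the Gaussian tail outside the
# box is negligible.
t = np.linspace(-8.0, 8.0, 801)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
quad_form = B[0, 0]*X*X + 2*B[0, 1]*X*Y + B[1, 1]*Y*Y   # (Bx, x)
numeric = np.exp(-0.5 * quad_form).sum() * h * h

exact = np.sqrt((2*np.pi)**2 / np.linalg.det(B))
print(numeric, exact)
```

The two printed values agree to many digits, since the trapezoidal-type sum converges extremely fast for Gaussian integrands.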
Sometimes it is useful to consider slightly more general
integrals, namely with a linear term in the exponent:

\beq
   Z_b = \int_V e^{-\frac{1}{2}(Bx,x) + (b,x)}dx. \label{1.2a}
\eeq
      It is easy to check, by completing the square, that

\beq
            Z_b=  (2 \pi)^{d/2}(\det B)^{-1/2}
            e^{\frac{1}{2}(b,B^{-1}b)} = Z_0\, e^{\frac{1}{2}(b,B^{-1}b)},
\eeq
where $Z_0 = Z_{b=0}$ is the integral \eqref{1.2}.
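The completion-of-the-square identity can be checked in the same numerical way; the matrix $B$ and the vector $b$ below are again arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of Z_b = Z_0 * exp((b, B^{-1}b)/2); B and b below
# are arbitrary illustrative choices (B positive-definite).
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([0.3, -0.7])
Binv = np.linalg.inv(B)

t = np.linspace(-10.0, 10.0, 1001)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
quad = B[0, 0]*X*X + 2*B[0, 1]*X*Y + B[1, 1]*Y*Y        # (Bx, x)
lin = b[0]*X + b[1]*Y                                   # (b, x)

Z0 = np.exp(-0.5*quad).sum() * h * h
Zb = np.exp(-0.5*quad + lin).sum() * h * h
predicted = Z0 * np.exp(0.5 * b @ Binv @ b)
print(Zb, predicted)
```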

We will also be interested in integrals with insertions:

\beq
 \int_V P(x) e^{-\frac{1}{2}(Bx,x)} dx. \label{1.3}
\eeq

       To compute the above integral it is enough to
       consider the case when the polynomial
       $P(x)$ is homogeneous and, moreover,
        a product of linear forms $l_1(x)l_2(x)\ldots
       l_{m}(x)$. Such an integral is a major object in quantum
       physics (and not only there!) and is called a {\it correlation
       function} or {\it correlator}. It is denoted as follows:

\beq
       <l_1,l_2,\ldots,l_m> = \frac{1}{Z_0}\int_V l_1(x)l_2(x)\ldots
       l_{m}(x)e^{-\frac{1}{2}(Bx,x)}dx.   \label{1.4}
\eeq
         The correlators can be computed by using the integral $Z_b$
         as follows. First notice that

\beq
       \frac{\partial}{\partial b_j}Z_b= \int_V e^{-\frac{1}{2}(Bx,x) + (b,x)}x_j\,dx.
\eeq
       Then for any product of coordinate functions
       $x^{i_1}x^{i_2}\ldots x^{i_k}$ (the indices not necessarily
       different) we have

\beq
       \frac{\partial}{\partial b_{i_1}}\ldots\frac{\partial}{\partial b_{i_k}}
       Z_b= \int_V e^{-\frac{1}{2}(Bx,x) + (b,x)}x^{i_1}x^{i_2}\ldots x^{i_k}\,dx.
\eeq

 From this formula we obtain the correlator

\beq
   <x^{i_1},x^{i_2},\ldots,x^{i_k}> = \frac{1}{Z_0} \frac{\partial}{\partial b_{i_1}}\ldots\frac{\partial}{\partial
       b_{i_k}} Z_b\Big|_{b=0} = \frac{\partial}{\partial b_{i_1}}\ldots\frac{\partial}{\partial
       b_{i_k}}e^{\frac{1}{2}(b,B^{-1}b)}\Big|_{b=0}. \label{3a.3}
\eeq
    In particular, the two-point correlation functions are given by
    {\it the matrix elements of $B^{-1}$}:

\beq
     <x^i,x^j> = (B^{-1})_{ij}.
\eeq
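This statement is also easy to verify numerically; in the sketch below (with an arbitrary positive-definite $B$) the four correlators are approximated by grid integrals and compared with $B^{-1}$.

```python
import numpy as np

# Numerical check that the two-point correlators <x^i, x^j> are the
# matrix elements of B^{-1}.  B is an arbitrary positive-definite
# example; the integrals are approximated on a truncated grid.
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Binv = np.linalg.inv(B)

t = np.linspace(-8.0, 8.0, 801)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
weight = np.exp(-0.5 * (B[0, 0]*X*X + 2*B[0, 1]*X*Y + B[1, 1]*Y*Y))
Z0 = weight.sum() * h * h

coords = (X, Y)
corr = np.array([[(coords[i]*coords[j]*weight).sum()*h*h / Z0
                  for j in range(2)] for i in range(2)])
print(corr)
print(Binv)
```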

We can use these results for more general functions, and even for
formal power series $f_1,\ldots,f_m$, to obtain

\bpr{4.1} The correlator $<f_1,\ldots,f_m>$ is given by the formula
\beq
   <f_1,f_2,\ldots,f_m> = f_1\Big( \frac{\partial}{\partial b}\Big)\ldots
   f_m\Big(\frac{\partial}{\partial b} \Big)\,
   e^{\frac{1}{2}(b,B^{-1}b)}\Big|_{b=0}.
\eeq
\epr
         \proof  The formula for monomials is \eqref{3a.3}. The
         general formula is obtained as a linear combination of the monomial
         formulas. \qed

  From this we obtain a simple but very important combinatorial
  theorem, known to physicists as {\it Wick's lemma}.
       Before formulating it we will introduce some combinatorics.

       Consider the set $\{1,2, \dots ,2m\}$. A pairing of this
       set is a partition $\sigma$ of the set into $m$ disjoint pairs.
       Let us denote the set of all pairings of the above set by
       $\Pi_m$. It is known that $|\Pi_m|= \frac{(2m)!}{2^m\, m!}$.
       Any $\sigma \in \Pi_m$ can be considered as a
       permutation of $\{1,2, \dots ,2m\}$ without fixed points
       and such that $\sigma^2 = 1$. Each pair consists of an element $i$
       and its image $\sigma (i)$. Now we are ready to formulate
       Wick's lemma.
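The count $|\Pi_m|= \frac{(2m)!}{2^m m!}$ can be confirmed by brute-force enumeration; the recursive function below is a minimal sketch of such a check.

```python
from math import factorial

# Enumerate all pairings of {0,...,2m-1} recursively and compare the
# count with (2m)! / (2^m m!).  The first element is paired with each
# possible partner, and the rest is paired recursively.
def pairings(elems):
    if not elems:
        return [[]]
    first, rest = elems[0], elems[1:]
    result = []
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k+1:]
        for p in pairings(remaining):
            result.append([(first, partner)] + p)
    return result

counts, formulas = [], []
for m in range(1, 6):
    counts.append(len(pairings(list(range(2*m)))))
    formulas.append(factorial(2*m) // (2**m * factorial(m)))
print(counts)    # [1, 3, 15, 105, 945]
print(formulas)  # [1, 3, 15, 105, 945]
```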

\bth{Wick} (Wick's lemma)
\beq
               < l_1,\ldots, l_{m}>= \begin{cases}
   \sum_{\sigma \in \Pi_{m/2}} \prod_{i \in \{1,\ldots, m\}/\sigma}
                   <l_i,l_{\sigma(i)}> & \textrm {if $m$ is even}\\
                   0 & \textrm {if $m$ is odd}
                   \end{cases} \label{3a.4}
\eeq
\eth
              \proof As before, we shall first prove the theorem for
              coordinate functions. In this case our formula takes
              the form

\beq
               <x^{i_1},\ldots, x^{i_m}>= \begin{cases}
                  \sum_{\sigma \in \Pi_{m/2}}
                  \prod_{j \in \{1,\ldots, m\}/\sigma}
                   <x^{i_j}, x^{i_{\sigma(j)}}> & \textrm{if $m$ is
                   even}\\ 0 & \textrm{if $m$ is odd}
                   \end{cases}
\eeq
               By \eqref{3a.3} we need to
              compute the derivatives
$$\frac{\partial}{\partial b_{i_1}}\ldots\frac{\partial}{\partial b_{i_m}}
e^{\frac{1}{2}(b,B^{-1}b)}\Big|_{b=0}.$$
                Let us do this computation by induction. We have
$$\frac{\partial}{\partial b_{i}}\, e^{\frac{1}{2}(b,B^{-1}b)} =
                \sum_j (B^{-1})_{ij}b_je^{\frac{1}{2}(b,B^{-1}b)}.$$
                Applying the next derivative
                $\partial_{k}$ produces, by Leibnitz' rule,
$$\partial_k\partial_i\, e^{\frac{1}{2}(b,B^{-1}b)} =
   \big((B^{-1})_{ik} + Q(b)\big)e^{\frac{1}{2}(b,B^{-1}b)},$$
   where $Q$ is a homogeneous polynomial of degree two;
    the free term $(B^{-1})_{ik}$
     gives the result in this case.  In general we proceed in the same way.
    Denote by $P_{i_1\ldots i_s}$ the corresponding polynomial in
    $b$ (when it is clear we will drop the indices):

    $$\frac{\partial}{\partial b_{i_1}}\ldots\frac{\partial}
    {\partial b_{i_s}}e^{\frac{1}{2}(b,B^{-1}b)} =
    P_{i_1\ldots i_s}(b)\, e^{\frac{1}{2}(b,B^{-1}b)}.$$
    Each new application of a derivative $\partial_j$ has the
   following effect on $P$:

$$
       \partial_j\big(P(b)e^{\frac{1}{2}(b,B^{-1}b)}\big) =
      \big(\partial_j P(b)\big) e^{\frac{1}{2}(b,B^{-1}b)}+
      P(b)\sum_m (B^{-1})_{jm}b_m\, e^{\frac{1}{2}(b,B^{-1}b)},
$$
      i.e. $P \rightarrow \big(\partial_j + \sum_m
      (B^{-1})_{jm}b_m\big) P(b)$. Notice that the polynomial $P_{i_1\ldots i_s}$
       is either even or odd, depending on the parity of $s$. This proves
       the formula for odd $m$. At the same time we have obtained a
       formula for $P_{i_1\ldots i_s}$:

$$
   P_{i_1\ldots i_s}= \big(\partial_{i_1} + \sum_m
      (B^{-1})_{{i_1}m}b_m\big)\ldots \big(\partial_{i_s} + \sum_m
      (B^{-1})_{{i_s}m}b_m\big)\cdot 1.
$$
      From this formula we see that the free term is a sum
      of products of the type
      $(B^{-1})_{{l_1}{m_1}}\ldots (B^{-1})_{{l_p}{m_p}}$, where
      $2p=s$ and  $\{l_j,m_j\}$  is a pair from the set of indices
      $i_1,\ldots, i_s$. Moreover, each pairing is present exactly once.

The general case can be obtained using linearity as above. \qed
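In dimension one, with $B$ equal to a positive scalar $a$, Wick's lemma reduces to the statement that the $2m$-point correlator $<x,\ldots,x>$ equals the number of pairings times $a^{-m}$. A quick numerical sketch (the value of $a$ is arbitrary):

```python
import numpy as np
from math import factorial

# One-dimensional check of Wick's lemma: with B equal to the scalar a,
# the 2m-point correlator <x,...,x> must equal the number of pairings
# (2m)!/(2^m m!) times (B^{-1})^m = a^{-m}, since every pairing
# contributes the same product of two-point functions.
a = 1.7   # arbitrary positive value

t = np.linspace(-10.0, 10.0, 200001)
h = t[1] - t[0]
w = np.exp(-0.5 * a * t * t)
Z0 = w.sum() * h

checks = []
for m in (1, 2, 3):
    moment = (t**(2*m) * w).sum() * h / Z0                  # <x,...,x>
    wick = factorial(2*m) // (2**m * factorial(m)) / a**m   # pairings * a^{-m}
    checks.append((moment, wick))
    print(moment, wick)
```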

        Notice that each summand in the formula can be represented
        by a simple graph. For each $\sigma$ and each pair
        $(ij) \in \sigma$, draw an unoriented
        subgraph with two vertices -- $i$ and $j$ -- and an edge connecting
         them. The disjoint union of these subgraphs is the
         desired graph $\Gamma_{\sigma}$, corresponding to
         the partition $\sigma$. Then our sum \eqref{3a.4}
         becomes a {\it sum over the graphs $\Gamma_{\sigma}$}.


$\quad \quad \quad \quad \quad \quad $
\xymatrix{1 \ar@{-}[r]& 2  & 1\ar@{-}[d]& 2\ar@{-}[d] &1\ar@{-}[rd] &2 \ar@{-}[ld]\\
   3 \ar@{-}[r]& 4 & 3 & 4 & 3 & 4 \\ }

   Figure 1. $\quad $

       Although this is just a change of notation, we are going to use it widely in
     computations involving the general action, i.e. when $S$ is the perturbed function

\beq
     S=\frac{B(x,x)}{2} +
     \sum_{r\geq 3} \frac{U_r(x,\ldots, x)}{r!}.
\eeq

      \subsection{Steepest descent and stationary phase methods}

The method of steepest descent gives the asymptotics of integrals
of the type \eqref{2.4}.

\bth{2.1} Let $f(x)$ and $g(x)$ be smooth functions
defined on an interval $[a,b] \subset \Rset $. Assume that the
function $f(x)$ has a unique global minimum at a point $c\in
(a,b)$ and $f''(c) > 0$. Then the integral

\beq
       \int_a^b g(x)e^{-f(x)/\hbar} dx \nn
\eeq has the following asymptotic behavior:

\beq
   \int_a^b g(x)e^{-f(x)/\hbar} dx = \hbar^{1/2} e^{-f(c)/\hbar} I(\hbar),
\label{2.6} \eeq where $I(\hbar)$ is a continuous function on
$(0,\infty)$ which extends continuously to $0$ with

\beq
  \lim_{\hbar \rightarrow 0} I(\hbar) =
 \sqrt{2\pi}\, \frac{g(c)}{\sqrt{{f''(c)}}}.
\eeq
\eth
       \proof To slightly simplify notation we may assume
that $c=0$. We are going to cut the critical point out of the
integration region, i.e. we define the integral over a small
neighborhood of $0$, as follows. Take a small real
number $\varepsilon$ satisfying $1/2 > \varepsilon > 0$ and define
$I_1(\hbar)$ by the equation

\beq
    \hbar^{1/2}e^{-f(0)/\hbar} I_1 = \int_{-\hbar ^{\frac{1}{2} -
\ep}} ^{\hbar ^{\frac{1}{2} - \ep}} g(x)e^{-f(x)/\hbar} dx.
\eeq
           Then it is clear
that the difference $|I(\hbar) - I_1(\hbar)|$ decays faster than
$\hbar^N$ for any $N$. So it suffices to show that $I_1(\hbar)$
has the asymptotics \eqref{2.6}. Let us introduce a new variable $y$
by $x=y\sqrt{\hbar}$. Then the function $I_1$ can be written as

\beq
     I_1 = \int_{-\hbar^{-\ep}}^{\hbar^{-\ep}}
      g(y\sqrt{\hbar})e^{(f(0)-f(y\sqrt{\hbar}))/\hbar}  dy.
\eeq
    Now it is clear that the integrand is a smooth function
of $\sqrt{\hbar}$. Then we can replace $I_1(\hbar)$ by
$I_2(\hbar)$, obtained by replacing the integrand with its Taylor
expansion in $\sqrt{\hbar}$ up to order $\hbar^N$.
 Then $|I_1(\hbar)-I_2(\hbar)|\leq C\hbar^N$. Finally we
 replace $I_2(\hbar)$ by $I_3(\hbar)$, which is the same
 integral but with limits from $-\infty$ to $\infty$.
 Then the difference $I_2(\hbar)- I_3(\hbar)$ is rapidly decaying.

Hence it is enough to show that $I_3(\hbar)$ has a Taylor
expansion in $\hbar^{1/2}$. In fact $I_3(\hbar)$ is a polynomial in
$\hbar^{1/2}$. Also, the odd powers of $\hbar^{1/2}$ vanish, as the
corresponding coefficients are integrals of odd functions. So we
find that the Taylor expansion exists. Let us compute the value of
$I_3(0)$. We have

\beq
 I_3(0) = g(0)\int_{-\infty}^{\infty} e ^{-\frac{f''(0)y^2}{2}}dy.
\nn \eeq
   Using the value of the Gaussian integral \eqref{1.1} we get the desired result. \qed

\bex {2.1}
        Consider the integral

\beq
\int_{-\infty}^{\infty}e^{-\frac{x^2+x^4}{2\hbar} }dx =
\sqrt{2\pi}\,\hbar^{1/2} I(\hbar).
\eeq
Then (after the substitution $x = y\sqrt{\hbar}$) the function $I(\hbar)$ is given by

\beq
I(\hbar) =
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{y^2+\hbar \,y^4}{2}}dy.
\eeq
  The integral has the asymptotic expansion

\beq
I(\hbar) = \sum_{n=0}^{\infty}a_n\hbar^n, \qquad
a_n = \frac{(-1)^n}{\sqrt{2\pi}\; 2^n\, n!}
\int_{-\infty}^{\infty}e^{-\frac{y^2}{2} }y^{4n}\, dy
= \frac{(-1)^n (4n-1)!!}{2^n\, n!}.
\eeq
\eex
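One can test this expansion numerically: the sketch below compares a quadrature value of $I(\hbar)$ with the partial sum $a_0+a_1\hbar+a_2\hbar^2$ for two small values of $\hbar$, and checks that the discrepancy shrinks like $\hbar^3$.

```python
import numpy as np
from math import factorial

# Numerical check of the asymptotic expansion in Example 2.1:
#   I(hbar) = (2*pi)^{-1/2} * int exp(-(y^2 + hbar*y^4)/2) dy
#           ~ sum_n a_n hbar^n,  with a_n = (-1)^n (4n-1)!! / (2^n n!).
def double_factorial(k):
    return 1 if k <= 0 else k * double_factorial(k - 2)

def a(n):
    return (-1)**n * double_factorial(4*n - 1) / (2**n * factorial(n))

y = np.linspace(-12.0, 12.0, 400001)
h = y[1] - y[0]

errors = {}
for hbar in (0.01, 0.001):
    I_num = np.exp(-(y*y + hbar*y**4) / 2).sum() * h / np.sqrt(2*np.pi)
    partial = sum(a(n) * hbar**n for n in range(3))   # a_0 + a_1*h + a_2*h^2
    errors[hbar] = abs(I_num - partial)
    print(hbar, I_num, partial)

# The discrepancy is of order |a_3| * hbar^3, so it shrinks by roughly
# a factor of 1000 when hbar shrinks by a factor of 10.
```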

The method of {\it stationary phase} is slightly more complicated
and uses the Fresnel integral

\beq
  \int_{-\infty}^{\infty} e^{ix^2/2} dx = \sqrt{ 2\pi } e^{\pi i /4}
\nn \eeq instead of the Gaussian integral.

We are only going to formulate the result.

\bth{2.2} Assume that $f$ has a unique critical point $c \in
(a,b)$ with $f''(c) \neq 0$, and that $g$ vanishes with all its
derivatives at the endpoints $a$ and $b$. Then

\beq \int_a^b g(x)e^{if(x)/\hbar} dx = \hbar^{1/2} e^{if(c)/\hbar}
I(\hbar), \label{2.7} \eeq
     where $I(\hbar)$ extends to a smooth function on $[0,\infty)$, such that
$$I(0)= \sqrt{2\pi}\, e^{\,\mathrm{sign}(f''(c))\, i\pi/4}\,
\frac{g(c)}{\sqrt{|f''(c)|}}.$$
\eth

The methods of steepest descent and stationary phase easily
extend to the multidimensional case. We introduce the following
notation. By $V$ we denote a real vector space of dimension $d$
and by $B$ a closed $d$-dimensional box in it. We assume that
the functions $f(x)$ and $g(x)$ are defined on $B$ and smooth.

\bth{2.3} Let the function $f$ have a unique global minimum at a point $c \in B$ and
 let the form  $D^2 f(c)$ be positive-definite. Then

\beq
          \int_Bg(x)e^{-f(x)/\hbar}dx = \hbar^{d/2} e^{-f(c)/\hbar}I(\hbar),
\label{2.8} \eeq where $I(\hbar)$ extends as a smooth function on
$[0,\infty)$, such that

\beq
 I(0)= (2\pi)^{d/2}\frac{g(c)}{\sqrt{\det D^2 f(c)}}.
\nn \eeq \eth

In a similar manner we formulate the stationary phase method.

\bth{2.4} Let the function $f$ have a unique critical point $c \in B$,
 let the form $D^2 f(c)$ be non-degenerate, and let $g$ vanish with
 all its derivatives on the boundary of $B$. Then

\beq
          \int_Bg(x)e^{if(x)/\hbar}dx = \hbar^{d/2} e^{if(c)/\hbar}I(\hbar),
\label{2.9} \eeq where $I(\hbar)$ extends as a smooth function on
$[0,\infty)$, such that

\beq
 I(0)= (2\pi)^{d/2}e^{{\pi i \sigma}/4}\frac{g(c)}{\sqrt{|\det D^2f(c)|}}.
\nn \eeq
      Here $\sigma$ is the signature of the symmetric bilinear
form $D^2f(c)$. \eth

Notice that the multidimensional Gaussian and Fresnel integrals
 become respectively

\beq
    \int_V e^{-\frac{1}{2}B(x,x)}dx = (2 \pi)^{d/2}(\det B)^{-1/2} \label{2.10}
\eeq
      for a positive-definite form $B$, and

\beq \int_V e^{\frac{i}{2}B(x,x)}dx = (2 \pi)^{d/2}e^{\pi i \sigma/4}|\det
B|^{-1/2} \eeq for a non-degenerate form $B$ with signature $\sigma$.
    We leave the details of the proofs to the reader.

\subsection{Definition of Feynman graphs}

      We aim to compute the entire asymptotic expansion of integrals of
      the form

\beq
            \int_V l_1\ldots l_N e^{-S(x)/\hbar} dx
\eeq
    in terms of some combinatorics. The result will be useful, as
    it gives the model used to define Feynman integrals in physically
    meaningful theories. Here

\beq
           S(x)= \frac{1}{2}(Bx,x) + \sum_{j\geq 3} \frac{g_jU_j(x)}{j!}.
\eeq
    The functions $U_j(x)$ are homogeneous polynomials
    of degree $j$, i.e. symmetric $j$-tensors.
      The integral is a formal power series in $\hbar$ and the $g_j$,
      in a sense that will be explained below.

      For simplicity assume that the critical point of $S$ is $c=0$ and that
      $S(0)=0$. The expansion will
      be done in terms of {\it Feynman diagrams}, which are a
      major object in quantum field theory. We also make the change
      of variables $x/\sqrt{\hbar} \rightarrow x$ and keep the
      same name for the new variables. The correlator above becomes:

\beq
           \hbar^{(N+d)/2} \int_V l_1\ldots l_N\,
           e^{-\frac{1}{2}(Bx,x) + \sum_j \hbar^{j/2-1}\frac{g_jU_j(x)}{j!}}\, dx.
\eeq
    In what follows we drop the factor
    $\hbar^{(N+d)/2}$. We expand the exponential
    function above as follows:

\begin{multline*}
    e^{-\frac{1}{2}(Bx,x) + \sum_j \hbar^{j/2-1}\frac{g_jU_j(x)}{j!} }=
    e^{-\frac{1}{2}(Bx,x)}\Big(1 +
    \frac{1}{1!}  \sum_j \hbar^{j/2-1}\frac{g_jU_j(x)}{j!} +
    \\  \frac{1}{2!} \big(\sum_j \hbar^{j/2-1}\frac{g_jU_j(x)}{j!} \big)^2 +
    \ldots + \frac{1}{k!}\big(\sum_j \hbar^{j/2-1}\frac{g_jU_j(x)}{j!} \big)^k  + \ldots \Big)
\end{multline*}
          The correlator becomes

\begin{multline*}
\int_V  l_1\ldots l_Ne^{-\frac{1}{2}(Bx,x)}\Big(1 +
    \frac{1}{1!} \sum_j \hbar^{j/2-1}\frac{g_jU_j(x)}{j!} +
    \frac{1}{2!} \big(\sum_j \hbar^{j/2-1}\frac{g_jU_j(x)}{j!} \big)^2 +
    \ldots \Big)dx=\\
\sum_{n=0}^{\infty}\frac{1}{n!}\int_V  l_1\ldots l_Ne^{-\frac{1}{2}(Bx,x)}
     \big(\sum_{j=3}^{\infty} \hbar^{j/2-1}\frac{g_jU_j(x)}{j!}
      \big)^n\, dx.
\end{multline*}
         We also expand the $n$-th power of the infinite sum
         $\big(\sum_{j=3}^{\infty} \hbar^{j/2-1}\frac{g_jU_j(x)}{j!}\big)^n$
          into monomials. This is the formal series we are interested in.

          In the case when there are no functionals $l_j$, the
          corresponding integral is called the
            {\it partition function}. Explicitly it is

\beq
    Z_U = \int_V e^{-\frac{1}{2}(Bx,x) + \hbar^{1/2} U(x)}dx.
\eeq
        We have the obvious identity
\beq
        Z_U=Z_0\,e^{\hbar^{1/2} U(\frac{\partial}{\partial b})}\,
        e^{\frac{1}{2} (b,B^{-1}b)}\Big|_{b=0}.
\eeq

Next we define the correlation functions of $f_1,\ldots, f_m$ with
respect to the above perturbed action:

\beq
<f_1,\ldots,f_m>_U= \frac{1}{Z_U}\int_V f_1\ldots
f_m\,e^{-\frac{1}{2}(Bx,x) + \hbar^{1/2} U(x)}  dx.
\eeq
         And again we have

\beq
<f_1,\ldots,f_m>_U= \frac{Z_0}{Z_U}\,e^{\hbar^{1/2}
U(\frac{\partial}{\partial b})}f_1\Big(\frac{\partial}{\partial
b}\Big)\ldots f_m\Big(\frac{\partial}{\partial b}\Big)\, e^{\frac{1}{2}
(b,B^{-1}b)}\Big|_{b=0}.
\eeq

        We want to express the correlator in terms of Feynman graphs,
        which we define below.

      We denote by $G_{\geq 3}(N)$ the set of isomorphism classes
      of graphs with $N$ $1$-valent external vertices,
      labeled by $1,\ldots , N$, and a finite number of unlabeled
      internal vertices of valency $\geq 3$.

      For each graph  $\Gamma$ we define the Feynman amplitude of
      $\Gamma$ by the following rules:

      (1) Put the covector $l_j$ at the $j$-th external vertex.

      (2) Put the tensor $-g_mU_m$ at each $m$-valent internal vertex.

      (3) Take the contraction of the tensors along the edges of
      $\Gamma$, using the bilinear form $B^{-1}$. The result will
      be a number denoted by $F_{\Gamma}$. This is the
      {\it Feynman amplitude}.


\subsection{Feynman's theorem}

      \bth{3.12} (Feynman) The correlation function $<l_1\ldots l_N>$
      is given by the asymptotic series:

<l_1\ldots l_N> = Z_0\sum_{\Gamma\in G_{\geq 3(N)}}

      We will give another version of this theorem, which is easier to
       prove. Before that, let us introduce some notation.

   Let $\bf{n} = (n_0,n_1,\ldots)$ be a sequence of nonnegative
   integers, only a finite number of which are nonzero. Let
    $G(\bf{n})$ be the set of isomorphism classes of graphs with
    $n_0$ $0$-valent vertices,   $n_1$ $1$-valent vertices, etc.

    The version of Feynman's theorem that we have in mind goes as follows.

    \bth{3.13} The partition function has the following asymptotic expansion:
\beq
 Z = Z_0 \sum_{\bf{n}} \big(\prod_i g_i^{n_i}\big)
 \sum_{\Gamma \in G(\bf{n})} \frac{\hbar^{b(\Gamma)}}{|Aut(\Gamma)|}\, F_{\Gamma},
\eeq
where $b(\Gamma)$ is the number of edges of $\Gamma$ minus the number
of its vertices.
\eth

      \proof  First expand the exponential function
      in Taylor series. The partition function becomes the sum over
      all sequences $\bf{n}$ of the terms

\beq
    Z_{\bf{n}}=\int_Ve^{-\frac{1}{2}B(x,x) }\prod_i   \frac{g_i^{n_i}}{(i!)^{n_i}n_i!}
    \big( -\hbar^{i/2 -1}\, U_i(x,\ldots ,x)\big)^{n_i}\,dx.
    \label{3.14a}
\eeq
          We can write the terms $U_i$ as sums of products of linear functions.
          Then we can apply Wick's lemma. It shows that each $Z_{\bf{n}}$
          can be computed as follows.


         (1) Define {\it a flower} -- a graph with one vertex and $i$ outgoing
          edges (see Figure 2). Attach to it the tensor $U_i$.


\begin{tikzpicture}\filldraw [black] (0,0) circle (2pt);
 \draw (0,0)..controls (1,0) ..(1.4,0) ;
  \draw (0,0)..controls (0.7,0.3) ..(1.4,0.6) ;
  \draw (0,0)..controls (0.7,-0.3) ..(1.4,-0.6) ;\draw (1.45,0)
  circle (2pt);\draw (1.5,0.6)   circle (2pt);\draw (1.5,-0.6)
    circle (2pt);
\end{tikzpicture}

\begin{flushleft} {\bf Figure 2.} \end{flushleft}

         (2) Consider the set $T$ of these outgoing edges
          and, for any pairing of this set, consider the corresponding
          contraction of the tensors $-U_i$ using the form $B^{-1}$.
          This gives a number $F_{\sigma}$ corresponding
          to this pairing.

          We can visualize a pairing $\sigma$ by drawing its elements as points
           and connecting the points in each pair by an edge (see Figure 3).
           In this way we obtain an unoriented graph $\Gamma=\Gamma_{\sigma}$.
            The number $F_{\sigma}$ is called {\it the amplitude} of the graph.

\begin{tikzpicture}\filldraw [black] (0,0) circle (2pt);
 \draw (0,0)..controls (1,0) ..(1.4,0) ;
  \draw (1.4,0)   circle (2pt);

\filldraw [black] (0,2) circle (2pt);
 \draw (0,2)..controls (1,2) ..(1.4,2) ;
  \draw (0,2)..controls (0.7,2.3) ..(1.4,2.6) ;
  \draw (0,2)..controls (0.7,1.7) ..(1.4,1.4) ;\draw (1.4,2)
  circle (2pt);\draw (1.4,1.4)   circle (2pt);\draw (1.4,2.6)
   circle (2pt);

\filldraw [black] (6,0) circle (2pt);
 \draw (4.4,0)..controls (5,0) ..(6,0) ;
  \draw (4.4,0.6)..controls (5.1,0.3) ..(6,0) ;
  \draw (4.4,-0.6)..controls (5.1,-0.3) ..(6,0) ;\draw (4.4,0)
  circle (2pt);\draw (4.4,0.6)   circle (2pt);\draw (4.4,-0.6)
    circle (2pt);

\filldraw [black] (6,2) circle (2pt);
 \draw (4.4,2)..controls (5,2) ..(6,2) ;

   \draw[step= 0.2, black,  dashed](1.4,0)..controls (2,-0.3) .. (4.4,-0.6);

\draw[step= 0.2, black,  dashed](1.4,1.4)..controls (3.6,1) ..
(4.4,0); \draw[step= 0.2, black,  dashed](1.4,2)..controls (3.6,1.8)
.. (4.4,0.6);

\draw[step= 0.2, black,  dashed](4.4,2)..controls (3.6,2.1) ..
(1.4,2.6);
\end{tikzpicture}

\begin{flushleft} {\bf Figure 3.} \end{flushleft}


\begin{tikzpicture}\filldraw [black] (0,0) circle (2pt);
 \draw (0,0)..controls (1,0) ..(1.4,0) ;
  \draw (0,0)..controls (0.7,0.3) ..(1.4,0.6) ;
  \draw (0,0)..controls (0.7,-0.3) ..(1.4,-0.6) ;\draw (1.45,0)
  circle (2pt);\draw (1.5,0.6)   circle (2pt);\draw (1.5,-0.6)
    circle (2pt);

\filldraw [black] (0,2) circle (2pt);
 \draw (0,2)..controls (1,2) ..(1.4,2) ;
  \draw (0,2)..controls (0.7,2.3) ..(1.4,2.6) ;
  \draw (0,2)..controls (0.7,1.7) ..(1.4,1.4) ;\draw (1.4,2)
  circle (2pt);\draw (1.4,1.4)   circle (2pt);\draw (1.4,2.6)
   circle (2pt);

\filldraw [black] (6,0) circle (2pt);
 \draw (4.4,0)..controls (5,0) ..(6,0) ;
  \draw (4.4,0.6)..controls (5.1,0.3) ..(6,0) ;
  \draw (4.4,-0.6)..controls (5.1,-0.3) ..(6,0) ;\draw (4.4,0)
  circle (2pt);\draw (4.4,0.6)   circle (2pt);\draw (4.4,-0.6)
    circle (2pt);

\filldraw [black] (6,2) circle (2pt);
 \draw (4.4,2)..controls (5,2) ..(6,2) ;
  \draw (4.4,2.6)..controls (5.1,2.3) ..(6,2) ;
  \draw (4.4,1.4)..controls (5.1,1.7) ..(6,2) ;\draw (4.4,2)
  circle (2pt);\draw (4.4,1.4)   circle (2pt);\draw (4.4,2.6)
   circle (2pt);
   \draw[step= 0.2, black,  dashed](1.4,0)..controls (2,0) .. (4.4,0);
\draw[step= 0.2, black,  dashed](1.4,-0.6)..controls (2,-0.4) ..
(4.4,-0.6);
\draw[step= 0.2, black,  dashed](1.4,2)..controls (1.6,1.8) ..
(1.4,1.4);
\draw[step= 0.2, black,  dashed](4.4,2.6)..controls (4,2.2) ..
(4.4,2);
\draw[step= 0.2, black,  dashed](4.4,1.4)..controls (4,.8) ..
(1.4,0.6);
\draw[step= 0.2, black,  dashed](4.4,0.6)..controls (4,.8) ..
(1.4,2.6);
\end{tikzpicture}

\begin{flushleft} {\bf Figure 3 (continued).} \end{flushleft}

\begin{tikzpicture}\filldraw [black] (0,0) circle (2pt);
\draw (0,0) ..controls (0.3,0)..(1,0);  \draw (1.7,0) circle
(20pt); \draw (-0.55,0) circle (15pt);
\filldraw [black] (1,0) circle (2pt); \filldraw [black] (2.4,0) circle
(2pt); \draw (2.4,0) ..controls (3,0)..(3.3,0);\filldraw [black]
(3.3,0) circle (2pt);

\draw (3.8,0) circle (15pt);
\end{tikzpicture}

\begin{flushleft} {\bf Figure 4.} \end{flushleft}

          It is easy to see that each graph with $n_i$ $i$-valent vertices
           can be obtained in this way. But it can be obtained
           many times, and we need to count this number. This means
           that we need to count how many $\sigma$-s can produce a
           fixed graph $\Gamma$. For this we need to find the
           group $G$ of permutations which preserve the ``flowers''. It
           consists of the following elements:

           (1)  Permutations which permute the flowers of a
           fixed valency;

           (2)  Permutations which permute the edges (legs) of a fixed
           flower.

 We see that the group $G$ is a semi-direct product
  $(\prod_i S_{n_i}) \ltimes (\prod_i S_i^{n_i})$, where $S_j$ is  the  permutation group of $j$ elements.
   Its cardinality $|G|$ is $\prod_i(i!)^{n_i}n_i!$. This is
   exactly the product of the integers in the denominators  in
   \eqref{3.14a}. The group $G$ acts on the set of all pairings of
   $T$. The action is transitive on the set $P_{\Gamma}$ of the
   pairings which produce a fixed graph $\Gamma$. On the other
   hand the stabilizer of a fixed pairing is $Aut(\Gamma)$. Thus
   the number of the pairings producing $\Gamma$ is
\ben
                \frac{\prod_i (i!)^{n_i}n_i! }{|Aut(\Gamma)|}.
\een
          In this way we obtain a formula connecting the sum
          of the numbers  $F_{\sigma}$ and the sum of the amplitudes
          with weights:
\ben
       \sum_{\sigma} F_{\sigma}= \sum_{\Gamma}
        \frac{\prod_i (i!)^{n_i}n_i! }{|Aut(\Gamma)|} F_{\Gamma}.
\een
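As a concrete illustration of this count (a small numerical sketch in Python, for the hypothetical case $n_3=2$, i.e. two trivalent flowers, so $|G|=(3!)^2\,2!=72$): among the $15$ pairings of the six legs, the ``theta'' graph with $|Aut|=2\cdot 3!=12$ should arise from $72/12=6$ pairings, and the ``dumbbell'' graph with $|Aut|=2^3=8$ from $72/8=9$.

```python
# Half-edges (legs): (vertex, leg) for two 3-valent flowers, n_3 = 2.
half_edges = [(v, l) for v in range(2) for l in range(3)]

def pairings(elems):
    """Enumerate all perfect pairings of a list of half-edges."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, other in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, other)] + sub

counts = {"theta": 0, "dumbbell": 0}
for p in pairings(half_edges):
    # number of edges joining the two distinct vertices
    between = sum(1 for (a, b) in p if a[0] != b[0])
    if between == 3:
        counts["theta"] += 1      # three parallel edges
    else:
        counts["dumbbell"] += 1   # one connecting edge plus a loop at each vertex

# |G| = (3!)^2 * 2! = 72; theta: 72/12 = 6, dumbbell: 72/8 = 9
print(counts)  # {'theta': 6, 'dumbbell': 9}
```

The two counts add up to $5!!=15$, the total number of pairings, as they must.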
           Finally we compute the powers of $\hbar$ in the
           amplitudes. We note that the power of $\hbar$ is given
           by the number of edges of $\Gamma$ minus the number of
           vertices, i.e. $b(\Gamma)$. This gives exactly $i/2-1$.
           This proves the theorem.\qed

           Now we are going to extract Feynman's theorem.

           \proof of \thref{3.12}. As in Wick's lemma we can use
           the symmetry of the correlation function with respect
           to the $l_j$. So it is enough to consider the case
           $l_1=l_2=\ldots=l_N=l$. The corresponding correlation
           function is denoted by $<l^N>$ and is also called the
           expectation value of $l^N$. Let us compute the
           expectation value $<e^{tl}>$. Obviously this is the
           generating function of the expectation values
           $<l^N>\frac{1}{N!}$.  If we put in \thref{3.13}
            $g_i=1, \,\,i \geq 3$, $g_0=g_2=0$, $g_1=-\hbar t$ and
            $B_1=l,\,\,B_0=B_2=0$, we get the result.\qed

\subsubsection{Sums over connected graphs}
Here we are going to show that one can reduce the computation of the
correlator to a sum over connected graphs only.  This is very
useful in studies of Feynman's integrals in real physics.  We
denote the set of connected graphs in $G({\bf n})$ by $G_c({\bf n})$.
\bth{3.14} The logarithm of the partition function $\ln(Z_U)$ has
the following asymptotic expansion:
\beq
         \ln(Z_U) = \sum_{\bf{n}} \prod_i g_i^{n_i}
         \sum_{\Gamma \in G_c(\bf{n})}
         \frac{\hbar^{b(\Gamma)}}{|Aut(\Gamma)|}F_{\Gamma} \label{3.17}
\eeq
\eth
    \proof Denote by $\Gamma_1\Gamma_2$ the disjoint union of two
    graphs $\Gamma_1$ and $\Gamma_2$. Following this notation we
    use $\Gamma^n$ for the disjoint union of $n$ copies of
    $\Gamma$. Thus any graph can be written as
    $\Gamma_1^{k_1}\ldots\Gamma_l^{k_l}$ with some connected
    graphs $\Gamma_j$. Then we have
     $b_{\Gamma_1\Gamma_2}=b_{\Gamma_1}+b_{\Gamma_2}$ and
$|Aut(\Gamma_1^{k_1}\Gamma_2^{k_2})| = |Aut(\Gamma_1)|^{k_1} k_1!
|Aut(\Gamma_2)|^{k_2} k_2!$.

 After exponentiating \eqref{3.17} and expanding the r.h.s.  in Taylor series we find the expression
of the partition function, given by \thref{3.13}. \qed

\subsection{Computations with Feynman's graphs}

\subsubsection{Loop expansions} Note that  the number $b({\Gamma})$
  in \thref{3.14} is the number of the loops of  $\Gamma$    minus $1$.
  For this reason this expansion is referred to as ``loop expansion''.
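Indeed, for a connected graph the number of independent loops (the cycle rank) is $E-V+1$ by Euler's formula, so $b(\Gamma)=E-V$ equals the number of loops minus $1$. A quick check on a hypothetical small graph, a triangle with a tail:

```python
# Cycle rank of a connected graph: loops = E - V + 1, so b = E - V = loops - 1.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]    # triangle with one extra tail edge
V = len({v for e in edges for v in e})      # number of vertices: 4
E = len(edges)                              # number of edges: 4
loops = E - V + 1
print(V, E, loops, E - V)  # 4 4 1 0
```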

  Denote by $G^{(j)}({\bf n})$ the set of graphs from $G_c({\bf n})$
  with $j$ loops. Also denote the $j$-loop term of $\ln (Z)$ by
\beq
   \big(\ln(Z)\big)_j = \sum_{\bf{n}} \prod_i g_i^{n_i}
         \sum_{\Gamma \in G^{(j)}(\bf{n})}
         \frac{\hbar^{b(\Gamma)}}{|Aut(\Gamma)|}F_{\Gamma}. \label{3.18}
\eeq
    We are especially interested in the $0$-th and the first
    terms, i.e. in {\it the tree expansion} and {\it the one-loop
    approximation}.

\bth{3.15}
        (i)  The tree expansion of $\ln(Z)$ is given by the value of the
          action $S$ with minus sign at the critical point $x_0$:
\beq
     \big(\ln(Z)\big)_0= -S(x_0). \label{3.19}
\eeq

          (ii) The value of $\big(\ln(Z)\big)_1$ is:
\beq
     \big(\ln(Z)\big)_1 = \frac{1}{2}\ln \frac{\det(B)}{\det
     D^2S(x_0)}. \label{3.20}
\eeq
\eth

     \proof   It is enough to study the case when $S$ is a polynomial
     $U=\sum^m_j g_jU_j/j!$. Also assume that the numbers  $g_j$ are
     small enough and that the integration takes place on a small
     box $B$ around $x_0$. Then the function $S$ has a global
     minimum at $x_0$ and we can apply the method of steepest
     descent. It gives
\ben
          Z Z_0= \hbar^{d/2}e^{-S(x_0)/\hbar} I(\hbar),
\een
where
\ben
  I(\hbar)   = (2\pi)^{d/2} \sqrt {\frac{1}{\det D^2S(x_0) }}
  (1+a_1\hbar  +   \ldots)\,\,\,  \textrm{(asymptotically)}.
\een
  Using the value of $Z_0 = (2\pi)^{d/2}\hbar^{d/2}(\det
  B)^{-1/2}$ we find:
\ben
          Z = e^{-S(x_0)/\hbar} \sqrt {\frac{\det
  B}{\det D^2S(x_0) }}.
\een
  After taking a logarithm this yields
\ben
   \ln (Z) = -S(x_0)/\hbar +\frac{1}{2} \ln \frac{\det B}{\det
   D^2S(x_0)}+ O(\hbar),
\een
  which are exactly the desired equalities. \qed
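The leading behaviour is easy to test numerically in one dimension (a Python sketch with a hypothetical quartic action $S(x)=x^2+0.1x^4$ and $B=1$, so $x_0=0$, $S(x_0)=0$, $\det D^2S(x_0)=2$, and the prediction is $Z\to\sqrt{1/2}$ as $\hbar\to 0$):

```python
import numpy as np

# Hypothetical 1-d example: B = 1, S(x) = x^2 + 0.1 x^4, minimum at x0 = 0;
# steepest descent predicts Z -> sqrt(det B / det D^2 S(x0)) = sqrt(1/2).
hbar = 1e-3
x = np.linspace(-1.0, 1.0, 400001)
S = x**2 + 0.1 * x**4
# the common grid spacing cancels in the ratio of Riemann sums
Z = np.exp(-S / hbar).sum() / np.exp(-x**2 / (2 * hbar)).sum()
print(Z, np.sqrt(0.5))  # agree up to O(hbar)
```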

  \subsubsection{1-particle irreducible diagrams}

  A powerful method widely used by physicists to compute the
  partition function is to find a new action $S_{\texttt{eff}}$, called
  {\it effective action}, such that
\ben
    \big(\ln (Z_{S_{\texttt{eff}}})\big)_0 = \ln (Z_S).
\een
    Then using the simple formula \eqref{3.19} for
    $\big(\ln (Z_{S_{\texttt{eff}}})\big)_0$ we can find the partition
    function for the  initial action. Before that we need some
    definitions.

      An edge of a connected graph is called {\it a bridge}
      if the graph becomes disconnected when the edge is removed. A connected
      graph without bridges is called {\it 1-particle
      irreducible} (1PI).

\begin{tikzpicture}\filldraw [black] (1,0) circle (2pt);
\draw (1,0) ..controls (2,0)..(2.4,0);  \draw (1.7,0) circle
(20pt); \filldraw [black] (2.4,0) circle (2pt);
\end{tikzpicture}

\begin{flushleft} {\bf Figure 6.} \end{flushleft}

 The graph on Fig.4  obviously isn't 1-particle
      irreducible. The graph on Fig.6   is an example of a 1PI graph.
Note that the 1PI graphs are  what in mathematics are known as
2-edge-connected graphs.

We are ready to describe the rules for computing the effective
action.
We will consider graphs with at least one internal and one
external vertex. Such a graph is called 1PI if the graph
obtained by removing the external vertices is 1PI. Denote by
$G_{1PI}({\bf n},N)$ the set of isomorphism classes of 1PI graphs
with $N$ external vertices and $n_i$ $i$-valent internal vertices. Here the
isomorphisms are taken to keep the external vertices fixed.

\bth{3.17} The effective action is given by the formula
\beq
    S_{\texttt{eff}}= \frac{(Bx,x)}{2} -
    \sum_{i\geq 0} \frac{\mathcal{U}_i}{i!},
\quad
    \mathcal{U}_i(x,x,\ldots,x) =
    \sum_{\Gamma \in G_{1PI}({\bf n},N)}
    \frac{\hbar^{b(\Gamma)+1}}{|Aut \Gamma|}F_{\Gamma}
    (x_*,\ldots, x_*),
\eeq
    where the functional $x_*\in V^*$ is defined as $x_*(y):=B(x,y)$.
\eth

Before giving the proof let us make a few comments. Write
 $ S_{\texttt{eff}}$ as a power series:
\ben
     S_{\texttt{eff}} = S + \hbar S_1 +  \hbar^2 S_2 + \ldots
\een
The expression $\hbar^j S_j$ is called the $j$-{\it loop correction
to the effective action.} The theorem formulated above shows that
we can work only with 1PI diagrams.  Physicists rarely use other
diagrams, see e.g. the cited textbooks. Notice that the 1PI
diagrams are considerably fewer than all the diagrams.

\proof of \thref{3.17}.  We will make use of the following theorem
from graph theory (see e.g. \cite{Bol}).

  Any connected graph $\Gamma$ can be uniquely represented as a tree (called its skeleton), whose
  vertices are 1PI subgraphs (with external edges) and whose edges
  are the bridges of $\Gamma$.

Graph \quad \quad \quad \quad \quad  \quad \quad \quad \quad \quad
\quad \quad Skeleton

\begin{tikzpicture}
\draw (-.30,0) ..controls (0.3,0)..(2.7,0);\draw (1,.30)
..controls (1,0.5)..(1,1.3); \draw (0,0) circle (8pt);
\draw (1,0) circle (8pt); \draw  (2.4,0) circle
(8pt); \draw [black] (1,1)  circle (8pt);


\filldraw [black] (5,0) circle (2pt); \draw (5,0) ..controls
(5.3,0)..(7.4,0);\draw (6,0) ..controls (6,0.5)..(6,1);
\filldraw [black] (6,0) circle (2pt); \filldraw [black] (7.4,0) circle
(2pt); \filldraw [black] (6,1)  circle (2pt);
\end{tikzpicture}


\begin{flushleft} {\bf Figure 7.} The skeleton of a graph.\end{flushleft}


\subsubsection{Legendre Transform}
In this section we are going to express the effective action in
terms of the Legendre transform of the logarithm of the partition
function.
Consider an action $S$ and perturb it with a linear term:
\ben
           S(b,x)=S(x)- (b,x).
\een
   Consider the corresponding partition function
\ben
    Z_U(b) = \frac{ \int_V e^{\frac{-S(x)+(b,x)}{\hbar}}dx}
    {\int_V e^{-(Bx,x)/2}dx}.
\een
    Using \thref{3.15} we have
\ben
    \ln(Z_U(b))= -S_{\texttt{eff}}(0,b).
\een
     Let us find the perturbed effective action
     $S_{\texttt{eff}}(x,b)$.
   \thref{3.17} tells us that $S_{\texttt{eff}}(x,b)$ is given by
   the expansion in 1PI graphs. Among these graphs only one
   involves the perturbation: the wedge connecting two vertices
   labeled by the tensor $(b,x)$.

 \sectionnew{Quantum mechanics}


There is a dictionary that translates the objects from classical
mechanics into the corresponding objects from quantum mechanics.
Naturally we start with the phase space $M$. Its analog in quantum
mechanics is a Hilbert space $\mathcal{H}$. This Hilbert space
here will be the space  $L^2(M)$ of functions on the configuration
space with integrable square. {\it The observables}, i.e.
functions of positions and momenta become self-adjoint operators
in this Hilbert space. The eigenvalues and the eigenvectors are
interpreted as follows.  An eigenvalue $a$ of a self-adjoint
operator $A$ is a possible result of a measurement of the observable $A$
 at the eigenstate $|a>$ (= normed eigenvector).

 In particular, the position $q_j$
translates into the operator $\hat{q_j}$ of multiplication by
$q_j$ and the momentum $p_j$ translates into the differentiation operator $-i\hbar
\partial_{q_j}$. Then we see that the Hamiltonian translates into
the   Schr\"odinger operator:
\beq
  \hat{H}= \frac{-\hbar^2}{2m}\sum_j \partial^2_{q_j} + V(q) \label{3.1}
\eeq
The function $V(q)$ is again called potential and obviously
$\hat{H}= -\frac{\hbar^2}{2m}\Delta + V(q)$. The constant $\hbar$
is called Planck's constant. The (na\"ive) rule to write the
Schr\"odinger operator is obvious: we put $-i\hbar\partial_{q_j}$
instead of $p_j$. Much more important are the analog and the
interpretation of the Hamiltonian equations. They read

\beq
   i\hbar\frac{\partial\psi}{\partial t} = \hat{H}\psi \label{3.2}
\eeq
This is the famous Schr\"odinger equation. It describes a particle
(or more particles) under the action of a potential $V$. The
unknown function $\psi(x,t)$ is called wave function. Its physical
interpretation is that $|\psi(x,t)|^2$ is a probability density,
i.e. the probability to find a particle, described by the equation
\eqref{3.2}  in an infinitesimally small volume $d^3x$ at the
point $x$ and the time $t$, is $|\psi(x,t)|^2d^3 x$. The standard
way of solving the Schr\"odinger equation is the method of
separation of variables. We seek a solution of the form $\psi(x,t)=
\psi(x) e^{-iEt/\hbar}$, where  the constant $E$ is the energy. Then
the spatial part $\psi(x)$ of the wave function  satisfies the
time-independent Schr\"odinger equation

\beq
    \hat{H}\psi = E\psi \label{3.3}
\eeq

The main problem of quantum mechanics is to solve the eigenvalue
problem \eqref{3.3}.



Unfortunately most of the operators needed in quantum mechanics
have no eigenfunctions in $\mathcal{H}$. E.g., the operator
$-i\frac{\partial}{\partial q}$, acting in $L^2(\mathbb{R})$,
which is basic for quantum physics, has no eigenfunction in that
space. On the other hand, na\"ively  one can say that any function
of the form $e^{ipq}$ is an eigenfunction with an eigenvalue $p$
in some bigger space. The operator $q$ is even worse; it  has no
eigenfunction in the class of functions but only in the class of
distributions. One standard way to get out of this situation (but
not the only one) is to consider the sequence $\mathcal{S}\subset
\mathcal{H} \subset \mathcal{S}^*$, where $\mathcal{S}$ is the
space of $C^{\infty}$-functions that decay faster than
any polynomial and $\mathcal{S}^*$ is the space of its
continuous linear functionals (tempered distributions).
   {\it (Fourier transform.)} Consider the operator  $-i\frac{d}{dq}$.
    Its eigenfunctions are
$e^{ipq}$ with any fixed $p$. The linear functional
\ben
  f(q)\rightarrow \hat{f}(p)\,=\, \int f(q) e^{-ipq}dq,
\een
where $f$ is a test function, belongs to $\mathcal{S}^*$. The
inverse Fourier transform
\ben
        f(q)= \frac{1}{2\pi}\, \int \hat{f}(p) e^{ipq}dp
\een
     gives the expansion of $f$ in the eigenfunctions of the operator
     $-i\frac{d}{dq}$. Of course, they do not belong to the
     Hilbert space.
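The inversion formula is easy to check numerically at a point (a Python sketch, using the Gaussian $f(q)=e^{-q^2/2}$, whose transform in the convention above is $\hat f(p)=\sqrt{2\pi}\,e^{-p^2/2}$):

```python
import numpy as np

# f(q) = exp(-q^2/2)  =>  fhat(p) = sqrt(2*pi) * exp(-p^2/2)
q0 = 0.8
p = np.linspace(-40.0, 40.0, 400001)
dp = p[1] - p[0]
fhat = np.sqrt(2 * np.pi) * np.exp(-p**2 / 2)
# inverse transform: f(q) = (1/2pi) Int fhat(p) e^{ipq} dp
f_rec = (fhat * np.exp(1j * p * q0)).sum().real * dp / (2 * np.pi)
print(f_rec, np.exp(-q0**2 / 2))  # both ~ 0.7261
```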


Each physical state is represented by a vector, i.e. an
$L^2$-function. We are going to use Dirac's ``ket'' and ``bra'' notation.
By the ``ket'' $|\psi>$ we (following Dirac) are going to denote the
states (vectors in $\mathcal{H}$). Here $\psi$ could be, e.g., an
eigenvalue, a vacuum state or any letter denoting  some physical
object. In a similar way we denote by ``bra'' $<\phi|$ the dual
vector. The scalar product $(\phi,A\psi)$ will be denoted by
$<\phi|A|\psi>$ and called {\it matrix element} of $A$. The name
comes from the situation when $|\phi>$  and $|\psi>$ are both
members of an orthogonal basis of $\mathcal{H}$. In that case
$<\phi|A|\psi>$ is really an element of the matrix of $A$ in that
basis.
Let $\{|a>\}$ be a complete orthonormal set of eigenvectors of a
self-adjoint  operator $A$ in $\mathcal{H}$. One can expand any
vector $\psi$ as
\ben
    |\psi> = \sum_a|a> <a|\psi>,
\een
 i.e. in a Fourier series. This equality will be used
 quite frequently and referred to as {\it insertion of a complete set of
 states}. In a general form it reads:
\ben
   \sum_a|a> <a| = {\bf 1},
\een
    where by ${\bf 1}$ we denote the identity operator in
    $\mathcal{H}$.  Here is an example.
 \bex{3.2} Let $|\psi>$ be a state. We want to find the average
 value  of the measurements of the observable $A$ at the state
 $|\psi>$. We have
\ben
 \sum_a a|<a|\psi>|^2 = \sum_a a<\psi|a><a|\psi>=
 \sum_a <\psi|A|a><a|\psi>= <\psi|A|\psi>.
\een
\eex
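This identity is easy to test numerically (a Python sketch with a randomly generated Hermitian matrix standing in for the observable $A$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2                   # a self-adjoint "observable"
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                 # a normalized state |psi>

evals, evecs = np.linalg.eigh(A)           # complete orthonormal set {|a>}
amps = evecs.conj().T @ psi                # the coefficients <a|psi>
avg_spectral = np.sum(evals * np.abs(amps) ** 2)   # sum_a a |<a|psi>|^2
avg_matrix = (psi.conj() @ A @ psi).real           # <psi|A|psi>
print(avg_spectral, avg_matrix)  # equal
```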

      The most important observables are the coordinates $q_j$ and
      the momenta $p_j$. Using their definition
\ben
      \hat{q}_j f(q) := q_jf(q), \,\,\, \hat{p}_jf(q) :=
      -i\hbar\frac{ \,d f(q)}{d q_j},
\een
    we find that they satisfy  the following identities
\ben
    [q_i,q_j]=0,\,\,  [p_i,p_j]=0,\,\, [q_i,p_j]=i\hbar\delta_{ij}.
\een
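The last commutator follows from a one-line computation, $[q,p]f = -i\hbar q f' + i\hbar (qf)' = i\hbar f$, and can also be seen numerically (a Python sketch for one degree of freedom with $\hbar=1$, realizing the momentum by central differences on a grid):

```python
import numpy as np

hbar = 1.0
q = np.linspace(-5.0, 5.0, 2001)
dq = q[1] - q[0]
f = np.exp(-q**2 / 2)            # a test function

def p_op(g):
    """Momentum operator -i*hbar d/dq via central differences."""
    return -1j * hbar * np.gradient(g, dq)

comm = q * p_op(f) - p_op(q * f)  # [q, p] applied to f
# away from the grid boundary: [q,p]f = i*hbar*f
print(np.allclose(comm[10:-10], 1j * hbar * f[10:-10], atol=1e-3))  # True
```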
      Another  important observable is the {\it energy} given by the
      Hamiltonian $\hat{H}$. Further we are going to skip the hat
      denoting quantization.

      We can consider  Schr\"odinger equation \eqref{3.1} as a
      dynamical system in the Hilbert space $\mathcal{H}$. Then we
      can solve it  by the formula:
   \psi(t)= e^{-itH}|\psi(0)>. \label{3.5}
   The evolution is one parameter family of unitary operators

\subsection{Heisenberg picture}
Up to now the main role in our discussion was played by the
Schr\"odinger equation \eqref{3.2}. This setting is referred to as
the {\it Schr\"odinger picture}. There is an equivalent quantum
mechanical picture, called the {\it Heisenberg picture}. The state
$|\psi>$ at time $t$ is mapped to $e^{iHt}|\psi>$, and the
operators $A$ are mapped to $e^{iHt}A e^{-iHt}$. The operator
$e^{iHt}$ is unitary and hence it preserves the scalar products.
Notice that all measurable quantities are given by  matrix
elements, i.e. by scalar products. This shows that we do not
change the physical picture.

In  Schr\"odinger picture   the observables do not change and the
states change with time. In Heisenberg picture the situation is
the opposite --

 the observables change by the law
\beq
   \frac{dA}{dt}= -i[A,H], \label{3.6}
\eeq
   (this is  obtained by differentiation) but the states stay
   unchanged.
\subsection{The Harmonic oscillator} In  classical mechanics
the simplest but very important system is the harmonic oscillator.
The importance lies in the fact that, roughly speaking, all other
systems can be considered as sets of coupled oscillators. The
situation in quantum mechanics and quantum field theory is the
same.
The classical harmonic oscillator is governed by the Hamiltonian

\beq
   H= \frac{p^2}{2m} + \frac{kx^2}{2} =
   \frac{p^2}{2m} + \frac{m\omega^2x^2}{2} \label{3.7}
\eeq
     ``Quantizing'' it gives for the Schr\"odinger operator
\beq
        H=\frac{-\partial_x^2}{2m} + \frac{m\omega^2x^2}{2} \label{3.8}
\eeq
       Here we assume that the Planck constant $\hbar=1$. Our Hilbert space
       will be $L^2(\Rset)$. The above operator is essentially the
       Hermite operator, whose  eigenfunctions are expressed in terms of
       the Hermite polynomials. This is a well-known fact, but we
       will derive it below.

       In what follows we are going to use simple arguments
       from representation theory. Instead of
       using the operators $x$ and $p$ we are going to present $H$
       in terms of the following two operators:

\ben
       a\,=\, x \sqrt{\frac{m\omega}{2}}\, + \, ip
       \sqrt{\frac{1}{2m\omega} }\,, \qquad
      a^{\dag}\,=\, x \sqrt{\frac{m\omega}{2}}\, - \, ip
       \sqrt{\frac{1}{2m\omega} }.
\een
     Notice that the operators $a$ and $a^{\dag}$ satisfy {\it the
     canonical commutation relation} $[a,a^{\dag}]= 1$, which
     plays a crucial role below.
         Obviously the Hamiltonian can be written in the form
\ben
          H = \frac{\omega}{2}(a^{\dag}a +aa^{\dag})=
          \omega\Big(N+\frac{1}{2}\Big),
\een
    where $N=a^{\dag}a$. The Hermitian operator $N$ satisfies the
    relations
\beq
       [N,a^{\dag}]=a^{\dag}\,\,\, \textrm{and}\,\,\,
       [N,a]=-a. \label{3.9}
\eeq
        The above operators define an algebra, called
        {\it Heisenberg algebra}.
    We are going to study the representations of this algebra in
    order to obtain the spectrum of $N$.

    Let $|n>$ be a normalized eigenvector of $N$, i.e. $N|n>=n|n>$ and $<n|n>=1$.
    Consider the vectors $a^{\dag}|n>$ and $a|n>$.
    If we apply to them $N$ and use the commutation relations
    \eqref{3.9} we     obtain
\beq
\begin{split}
   Na^{\dag}|n>&=(a^{\dag}N +a^{\dag})|n> = a^{\dag}(N+1)|n>=
   (n+1)a^{\dag}|n>, \\
   Na|n> &= (aN-a)|n> = a(N-1)|n>=(n-1)a|n>.
\end{split} \label{3.10}
\eeq

  The  equations \eqref{3.10} explain the names of the
  operators $a^{\dag}$   and $a$ -- operators of
  creation and annihilation.
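The relations \eqref{3.9} can be checked on truncated matrices (a Python sketch; on the span of the first $d$ number states the two ladder commutators hold exactly, even though the truncated $[a,a^{\dag}]=1$ fails in the last diagonal entry):

```python
import numpy as np

d = 8
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
adag = a.T                                    # creation operator
N = adag @ a                                  # number operator: diag(0, 1, ..., d-1)

def comm(X, Y):
    return X @ Y - Y @ X

print(np.allclose(comm(N, adag), adag))   # [N, a^+] = a^+  -> True
print(np.allclose(comm(N, a), -a))        # [N, a]   = -a   -> True
```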

The last equations show that we can build new eigenstates from old
ones. In particular it seems that we can obtain eigenstates with arbitrarily
 negative eigenvalues. Below we are going to show  that this
 cannot be true.

  The  operator $H$ in \eqref{3.8} is a sum of squares of Hermitian
  operators. This shows that it cannot have negative eigenvalues.
  Hence from some positive $k$ on,  the vectors $a^k|n>$
  are zero and we do not produce new eigenvectors from them.
  Let us denote by $|0>$ the last non-zero vector of the sequence
  $|n>, a|n>,\ldots, a^k|n>,\ldots$. The vector $|0>$ is called
  {\it vacuum}.   (Notice that here we have denoted
  {\bf a non-zero state} by  $|0>$!  This is the vacuum
  and  not the zero vector.)  The uniqueness of the vacuum
   is also easy to prove, see below. We have $a|0>=0$.   On the other
   hand all eigenvectors
     $|0>, a^{\dag}|0>,\ldots, (a^{\dag})^k|0>,\ldots$ are non-zero.
  Let us show this.

  Take a normalized eigenvector $|n>$ as above. The squared
   norm of $a^{\dag}|n>$  can be computed as follows:
\ben
 <a^{\dag}n|a^{\dag}n>= <n|aa^{\dag}|n>   =<n|(a^{\dag}a+1)|n> = n+1.
\een
(Why is the first equality true?)

 One can easily show that the eigenspaces of $N$, corresponding to
 the eigenvalues $n$,    are one-dimensional. Let
 us start with the vacuum $|0>$. It satisfies an ordinary
 differential equation of order one, $a|0>=0$. Hence the statement
 is true. Assume that we have proved the statement for the
 eigenvalue $n-1$. If for the eigenvalue $n$ we have at least two
 independent eigenvectors $|n>$ and $|n^{'}>$, we can act upon them
 by $a$. Then we obtain
\ben
         Na|n> = (n-1)a|n>, \,\,\, Na|n^{'}> = (n-1)a|n^{'}>.
\een
   By the induction hypothesis the two vectors $a|n>$ and $a|n^{'}>$ are proportional,
   so $a(|n>-\lambda|n^{'}>) = 0$ for a suitable constant $\lambda$, and hence $|n>-\lambda|n^{'}>$ is the
   vacuum. On the other hand $N(|n>-\lambda|n^{'}>)=n(|n>-\lambda|n^{'}>)$, contradicting
   the fact that the vacuum has zero eigenvalue.

   Finally, the fact that the eigenvectors of $N$ form a complete
   orthogonal system in $L^2(\mathbb{R})$ is well known, e.g.
    from   the theory of Hermite polynomials.

    In this way we obtained an orthogonal basis of
    $L^2(\mathbb{R})$ formed by the eigenvectors of $H$ with
    eigenvalues $\omega(n+1/2)$.
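This spectrum is easy to see numerically (a Python sketch with $m=\omega=\hbar=1$; the operators are truncated to the span of the first $d$ number states, so only the low-lying eigenvalues are reliable):

```python
import numpy as np

d = 12
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # truncated annihilation operator
adag = a.T
x = (a + adag) / np.sqrt(2)                  # position (m = omega = hbar = 1)
p = 1j * (adag - a) / np.sqrt(2)             # momentum
H = p @ p / 2 + x @ x / 2                    # harmonic-oscillator Hamiltonian
evals = np.linalg.eigvalsh(H)
print(evals[:4])  # ~ [0.5, 1.5, 2.5, 3.5], i.e. omega*(n + 1/2)
```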

\sectionnew{Path integral formulation of quantum mechanics}
We are going to define the path integrals for quantum mechanics by
the same expansion \eqref{3.13} we used in $0$-dimensional QFT.
For this we need to define the Feynman amplitudes, which means we
have to define  the function $S$, the quadratic form $B$, to find
its inverse $B^{-1}$, and finally to define the covectors $l_j$.

Let us consider a classical particle with action functional
\ben
          S(q)=\int L(q_j,\dot{q}_j) dt.
\een
       Then we need to define the Feynman integral, having the
       meaning of a correlation function:
\beq
   \mathcal{G}(t_1,\ldots , t_N) =  <q(t_1),\ldots, q(t_N)>:=
   \frac{ \int q(t_1)\ldots q(t_N)
    e^{iS(q)/\hbar} Dq}{\int e^{iS(q)/\hbar} Dq} \label{4.1}
\eeq
       An obvious but important remark is that $q(t_j)$ has the
       meaning of a functional. Here $t_j$ is fixed and $q$ is the
       variable.
    The notation  $\mathcal{G}_n(t_1,\ldots,t_n)$ refers to
    another name of the correlator -- {\it Green's function}.
   Of course we consider first the Euclidean picture. For this we
   need to make a Wick rotation, i.e. to rotate the time in the
   complex domain. We are going to consider only Lagrangians
   of the form $L(q,\dot{q})= \dot{q}^2/2 - U(q)$.
   Then our action will become:
\ben
    S= \int\big( - (\dot{q})^2/2 - U(q)\big)i \,dt
\een
and the Green's function will be given by the formula

\beq
\mathcal{G}^E(t_1,\ldots , t_N) =  <q(t_1),\ldots, q(t_N)>:=
   \frac{ \int q(t_1)\ldots q(t_N)
    e^{-S_E(q)/\hbar} Dq}{\int e^{-S_E(q)/\hbar} Dq} \label{4.1e}
\eeq
with $S_E=\int \big((\dot{q})^2/2 + U(q)\big)dt$.

       We may assume for simplicity that the particle moves
       in a one-dimensional space; the general case is not much
       different. The potential $U$ will be taken to be a power series of the
       form $U=\sum_{j=2}^{\infty}U_j$, i.e. without constant
        and linear terms.
        Then, in analogy with the $0$-dimensional
        case, we take the quadratic form $B$ to be
\ben
           B=\int (\dot{q}^2 + m^2q^2 )dt.
\een
      Here $m^2q^2=2U_2$. The coefficient $m$ has the meaning of
      mass.  Integrating by parts we obtain
\ben
           B= <Aq|q> ,
\een
        where $A= -d^2/dt^2+m^2$.  This will help us define the
        inverse $B^{-1}$; namely we put $B^{-1}(f,f)=<A^{-1}f|f>$.
        The operator $A^{-1}$ is defined as in differential
        equations: if $Aq=f$, then the solution of this equation
        is given by $q=A^{-1}f$. In differential equations this is
        the integral operator with  kernel {\it the Green
        function} $G(x,y)$:
\ben
        q(x)= \int G(x,y)f(y)dy.
\een
     It is well known that in our case the Green's function is given
     explicitly by the formula:
\beq
              G(x,y) = \frac{e^{-m|x-y|}}{2m}. \label{4.2}
\eeq
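Formula \eqref{4.2} can be confirmed numerically by evaluating the Fourier integral $G(t)=\int \frac{e^{ipt}\,dp}{2\pi(p^2+m^2)}$ directly (a Python sketch with a finite momentum cutoff):

```python
import numpy as np

m, t = 1.0, 0.7
p = np.linspace(-200.0, 200.0, 2000001)
dp = p[1] - p[0]
# the odd (sin) part of e^{ipt} integrates to zero, so only cos survives
G_num = (np.cos(p * t) / (2 * np.pi * (p**2 + m**2))).sum() * dp
G_exact = np.exp(-m * abs(t)) / (2 * m)
print(G_num, G_exact)  # both ~ 0.248
```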
    We see that our Hilbert space $\mathcal{H}$ has to be the
    space of quadratically integrable functions $L^2$. But we are
    going to work with Schwartz spaces $S(\mathbb{R}^n)$ and
    $S^*(\mathbb{R}^n)$ as explained in {\bf Section 2.}

    Now we are ready to give the definition of the Feynman integral
    \eqref{4.1} (Euclidean version). Introduce some numeration of
    the internal vertices. The formula below does not depend on
    the choice.

     The correlation (Green's) function \eqref{4.1} is given by the
     asymptotic series
\beq
     \mathcal{G}(t_1,\ldots,t_N ) = \sum_{\Gamma\in G^*_{\geq 3}(N)}
     \frac{\hbar^{b(\Gamma)}}{|Aut(\Gamma)|}F_{\Gamma}.
                 \label{4.3}     \eeq
To define the numbers $F_{\Gamma}$ we fix the graph $\Gamma$. Then
 the following rules hold:

\begin{itemize}
\item  Put the variable $t_j$ (the functional $q(t_j)$) at the $j$-th
external vertex of $\Gamma$.

\item Put the variable $s_k$ at the internal  vertex  $k$.

\item For each  edge connecting the vertices carrying the variables
$\alpha$ and $\beta$ write the Green's function $G(\alpha,\beta)$.

\item The number $F_{\Gamma}$ is defined by the formula
\beq
          F_{\Gamma} = \prod_j(-u_{v(j)})\int G({\bf t},
 {\bf s})d{\bf s},  \label{4.3a}
\eeq
    where $v(j)$ is the valency of the $j$-th vertex of  $\Gamma$.
\end{itemize}

  \bex{4.1} (Wick's Lemma.) Let us examine in detail the free theory:
\ben
      S=\int (-\frac{\dot{q}^2}{2} - \frac{m^2q^2}{2})dt.
\een
In this case each  graph is a disjoint union  of subgraphs with
two vertices and an edge connecting them.
 The above formula gives us that
\ben
  \mathcal{G}(t_1,\ldots , t_{2k}) = \hbar^k\sum_{\sigma
                   \in \Pi_k} \prod_{i \in \{1,\ldots, 2k\}/\sigma}
                  G(t_i- t_{\sigma(i)}).
\een
\eex
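In zero dimensions Wick's lemma reduces to the statement that the $2n$-th moment of a standard Gaussian equals the number of pairings of $2n$ points, $(2n-1)!!$; a quick Python check:

```python
import numpy as np

def num_pairings(n):
    """(2n-1)!! = number of pairings of 2n points."""
    r = 1
    for k in range(1, 2 * n, 2):
        r *= k
    return r

x = np.linspace(-12.0, 12.0, 1000001)
dx = x[1] - x[0]
weight = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian density
moments = [(x**(2 * n) * weight).sum() * dx for n in (1, 2, 3)]
print([round(m, 4) for m in moments], [num_pairings(n) for n in (1, 2, 3)])
# [1.0, 3.0, 15.0] vs [1, 3, 15]
```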

    {\it $\phi^3$-theory.} Consider the action with Lagrangian
    $L=\dot{\phi}^2/2 - m^2\phi^2/2 + \phi^3$. Let us compute
    the two-point correlation function up to some order.


\begin{tikzpicture}\filldraw [black] (0,0) circle (2pt);
\draw (0,0) ..controls (0.3,0)..(3.4,0);  \draw (1.7,0) circle
(20pt); \filldraw [black] (1,0) circle (2pt); \filldraw [black] (2.4,0) circle
(2pt); \filldraw [black] (3.4,0) circle (2pt);
\end{tikzpicture}

\begin{flushleft} {\bf Figure 8.} \end{flushleft}


   \subsubsection{The partition function}  Let us consider the
   partition function with a slight modification -- {\it partition
    function with  external current} $J$:
\beq
Z(J):=\int e^{(-S_E(q)+<J|q>)/\hbar} Dq. \label{4.2a}
\eeq
   Here $J$ is an arbitrary function in the space $S$ of fast decaying
   functions. Then we have the   equality (only formally!):
\ben
 \frac{Z(J)}{Z(0)} = \sum_{n\geq 0}\frac{1}{n!}\int
 \mathcal{G}_n(t_1,\ldots,t_n)J(t_1)\ldots J(t_n)dt_1\ldots dt_n.
\een

    This will be our definition for $\frac{Z(J)}{Z(0)}$. We see
    that this is the generating function of all the Green's functions
$ \mathcal{G}_n(t_1,\ldots,t_n)$.

As in the $0$-dimensional QFT here we have

\bpr{4.1a} The following formula holds:
\ben
  W(J):=\ln \frac{Z(J)}{Z(0)}= \sum_{n\geq 0}\frac{1}{n!}\int
 \mathcal{G}^c_n(t_1,\ldots,t_n)J(t_1)\ldots J(t_n)dt_1\ldots dt_n.
\een
\epr
    The \proof is the same as in the $0$-dimensional QFT. In this
    way we have a generating function of all the connected Green's functions
$ \mathcal{G}^c_n(t_1,\ldots,t_n)$.

Also, as in $0$-dimensional QFT, we have the $j$-loop expansion:
\ben
    W(J)= \hbar^{-1}W_0(J) +W_1(J) +\ldots +\hbar^{j-1}W_j(J)
    +\ldots
\een where $W_0$ is the sum over trees, $W_1$ is the $1$-loop
contribution, etc. Furthermore,

\bpr{4.2} (0) The tree approximation is given by
\ben
    W_0(J)= -S_E(q_J) + <q_J,J>,
\een
    where $q_J$ is the extremal of the functional
    $S^J_{E}:=S_E(q) -     <q,J>$;

(1) The one-loop contribution is given by
\ben
 W_1(J)= -\frac{1}{2}\ln\det L_J,
\een
    where $L_J$ is the linear operator on $\mathcal{H}$ such that
    $d^2S^J_{E}(q_J)(f_1,f_2)=d^2 S^0_{E}(0)(L_Jf_1,f_2)$.
\epr

    In a similar vein we can write explicitly a generating
    function for the one-particle irreducible Green functions
    $\mathcal{G}^{1PI}_n(t_1,\ldots,t_n)$, i.e. the Green
    functions that are defined only over the 1PI-graphs.

\subsubsection{``Derivation'' of Feynman's formula}
 \subsubsection{Feynman-Kac formula}

\subsection{Example - the Harmonic Oscillator}

\subsection{Example - $\phi^3$ Theory}

\subsection{Momentum space formulation} The computations in the
position variables are quite heavy. In particular the Feynman
amplitude is given by an integral over a space of dimension equal
to the number of internal vertices, which can be enormous even for
trees. Instead one can pass to the {\it momentum representation} by
applying the Fourier transform. Let us start with the equation for
the Green's function:
\ben
     (-\frac{\partial^2}{\partial t^2} + m^2)G(t)=\delta.
\een
    Applying the Fourier transform to it (with the variable $p$ instead
    of $\xi$) we obtain:
\ben
    (p^2   +m^2 )\hat{G}= 1.
\een
This gives
\ben
    \hat{G}(p) = \frac{1}{p^2 +m^2}.
\een
  Of course
\ben
     G(t-s)=\int \frac{e^{ip(t-s)}dp}{2\pi(p^2 +m^2)}.
\een

  Below we introduce the following notation. We denote by $p_j$
  the momenta assigned to the edges of a fixed graph $\Gamma$ and by
  $\alpha(p_j), \beta(p_j)$ the vertices adjacent  to the $j$-th edge.
  Both  $\alpha(p_j)$ and $\beta(p_j)$ denote either a $t$- or an $s$-variable.
We  plug the above  expression for $G$ with the corresponding
variables into the formula for the amplitude
$F_{\Gamma}(t_1,\ldots,t_N)$. We can also perform a Fourier
transform of $F_{\Gamma}(t_1,\ldots,t_N)$ (with respect to
the variables $t_1,\ldots,t_N$).

 Denote the dual variables by $E_1,\ldots, E_N$.
      Then we get $E_j=p_{k_j}$. All the exponentials will
      disappear. The integrations with respect to the $p$-s
       will remain, but with  some relations between the $p$-s.

     We obtain
\ben
      \hat{F}({\bf E}) = \prod_k
      \int_{t_k \in \mathbb{R}} \Big[\int_{\bf s} \Big(\prod_j  \int_{p_j
      \in \mathbb{R}}
     \frac {e^{ip_j(\alpha(p_j)-\beta(p_j))}dp_j}{2\pi(p_j^2
     +m^2)}\Big)d{\bf s}\,e^{iE_k(\alpha(p_k)-t_k)}\Big]dt_k.
\een
          We can change the order of integration: first integrate
          with respect to ${\bf s}$ and ${\bf t}$ and then with respect to
          ${\bf p}$:
\ben
      \hat{F}({\bf E}) = \prod_j \int_{p_j
      \in \mathbb{R}} \frac {1}{2\pi(p_j^2      +m^2)}
      \int_{\bf t} \int_{\bf s}
    d{\bf s}\, d{\bf t}\, dp_j.
\een

          The integration with respect to ${\bf s}$ and ${\bf t}$
          will produce some delta-functions involving ${\bf p}$ and $E$.
           In more  detail, each fixed $s_j$  gives a  $\delta$-function involving all
          the edges (i.e. the variables $p$) connecting
          $s_j$ with the other vertex  of the corresponding edge.  Then using the
          meaning of $\delta(p_{i_1} +  \epsilon_2 p_{i_2} +
           \ldots), \,\, \epsilon_j =\pm 1$, we obtain a linear
           relation between the $p$-s with  coefficients  $\pm 1$.

          Consider as an example the diagram on Fig. 8. We can
          first write the propagator as an inverse Fourier transform:
\ben
      G(t-s) = \int \frac{ e^{ip(t-s)}dp}{2\pi (p^2+m^2)}.
\een

   Then we plug it in the formula for the amplitude:
\ben
   F(t)=  \int \Big( \prod_j
    \int \frac{e^{i\sum p_j(t_1-s_1)} e^{i\sum p_j(t_2-s_2)}
    d{\bf p}}{2\pi (p_j^2+m^2)}
   \Big)d{\bf s}.
\een
   Next perform the Fourier transform with respect to $t$, where the dual
   variable is denoted by $E$. Also perform  the integration
   with respect to $s$.
     This gives:

\ben
     \hat{F}(E) =
      \int \frac{ \delta{(E_1- \sum_jp_j)} \delta{(E_2- \sum_jp_j)}}
   {\prod_{j=1}^3 2\pi(p_j^2+m^2)}   d{\bf p}.
\een
       This gives
\ben
     \hat{F}(E) =
      \int \frac{  1}
   {\prod_{j=1}^3 2\pi(p_j^2+m^2)}   d{\bf p},
\een
    where $E_1=E_2$ and $\sum_j p_j=E_1$.

     Below we give the rules defining Feynman's amplitude in the
       momentum variables.  Recall that the dual variables to $t$
       are denoted by $E$.
           The dual variables to $s$ will be denoted by $Q$.
           The rules include fixing the signs.
          In fact these signs can be chosen quite arbitrarily, but
           still there are some rules.

\bde{5.1} (Feynman's rules for the amplitudes in momentum
variables.) The Fourier transform of an amplitude $F_{\Gamma}$ is
computed as follows:


\item Put a variable $E_j$ at each external edge and a variable
$Q_j$ at each internal one;

\item Assign a propagator $\frac{1}{p^2 +m^2}$ to each edge and
substitute $p$ by $E_j$ for the external edges and by $Q_j$ by the
internal ones. Multiply all the propagators and denote the result by

\item Orient the external edges  inward;

\item Orient the internal edges  arbitrarily;

\item For each internal vertex write "the Kirchhoff law": the sum of
the incoming variables is equal to the sum of the outgoing ones.
This will produce relations among the variables $Q$ and $E$. One
of them is $\sum_j^N E_j=0$. The rest define a linear subspace
$Y(E)$ of the space of the $Q$-s;

\item Define the momentum-space amplitude of $\Gamma$ by

      \hat{F}_{\Gamma}(E)= \prod_l (-a_{v(l)})\int_{Y(E)} \Phi(E,Q)dQ.

\item The measure $dQ$ on $Y(E)$ is defined to be in such a way
that the volume of $Y(E)/Y_Z(0) = 1$, where $Y_Z(0)$ is the set of
integer points on $Y(0)$.



      Consider the Feynman graph given on Fig.~5.1.


\sectionnew{Symmetries} Symmetries are everywhere around us. Quite
often we attribute beauty to some visible symmetry. In science
they are less visible but no less important. Even in classical
mechanics the symmetries are responsible for the integrability of
mechanical equations. Some of the corresponding symmetries can be
seen easily, e.g. the rotational symmetry yields the
conservation of angular momentum. But others, such as the symmetries of the
rigid body equations, are not at all obvious.

The adequate mathematical tool describing symmetries is group
theory. In this section we assume some knowledge of groups and
present some of the theory needed in the course. On the other hand
we are going to consider  simple enough examples that presumably
would help the reader to get more insight even without preliminary
acquaintance with groups.

As the definition of a group is simple, let's recall it.

\bde{4a.1} A group is a set $G$ with the following properties:

\begin{itemize}

\item Multiplication. For any ordered pair of elements $g_1,g_2 \in G$
 there exists an element $g_1\cdot g_2 \in G$;

\item Inversion. For any element $g \in G$ there exists an element
$g^{-1} \in G$ such that $g^{-1}\cdot g = {\bf 1}$;

\item Unit. There exists an element ${\bf 1}\in G$ such that
${\bf 1}\cdot g = g$ for any $g \in G$;

\item Associativity. For any three elements $g_1,g_2,g_3 \in G$
the associativity equation holds:

\ben
    g_1\cdot (g_2\cdot g_3)= (g_1\cdot g_2)\cdot g_3.
\een

\end{itemize}

  If the order of multiplication is irrelevant, i.e.
  $g_1\cdot g_2= g_2\cdot g_1$, we say that the
  group is {\it commutative} or {\it Abelian}.
\ede


   In the subsections that follow we study a few examples, all
   important for QFT.
\subsection{The Group SO(2)} This is the group of rotations of the
circle. (Show that it is a group.) We can identify its elements
with the $2\times 2$ matrices of the form:

\ben
   \left(\begin{array}{cc}
      \cos \theta  & \sin \theta     \\
      -\sin \theta & \cos \theta    \end{array}\right).
\een
   Obviously such a matrix describes a rotation by angle $\theta$.
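As a quick sanity check, the group axioms for these matrices can be verified numerically; a minimal sketch (assuming `numpy` is available, purely illustrative):

```python
import numpy as np

def rot(theta):
    # The matrix from the text: rotation of the plane by angle theta
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

a, b = 0.7, 1.9
# Closure: composing two rotations gives the rotation by the sum of angles
assert np.allclose(rot(a) @ rot(b), rot(a + b))
# Unit and inverse
assert np.allclose(rot(0.0), np.eye(2))
assert np.allclose(rot(a) @ rot(-a), np.eye(2))
# Each element lies in SO(2): orthogonal with determinant 1
assert np.allclose(rot(a).T @ rot(a), np.eye(2))
assert np.isclose(np.linalg.det(rot(a)), 1.0)
```

The first assertion is exactly the homomorphism property $\theta \mapsto R(\theta)$ from the additive group of angles to $2\times 2$ matrices.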

   In this example we meet one of the most important notions in
   mathematics and physics -- the notion of {\it representation}.
   In simple terms, we described above our group as a subgroup of a
   matrix group. The description of the representations of groups
   is a major goal of mathematics. Soon we will see its importance
   for quantum theory.

   First we will give a precise definition.

   \bde{4a.2} Let $V$ be a vector space (it could be
   infinite-dimensional). Denote by $Inv(V)$ the group of
   invertible linear operators on $V$. A homomorphism of a group
   $G$ into a subgroup of $Inv(V)$ is called a representation. \ede

\subsection{The Groups SO(3) and SU(2)}

\subsection{The Lorentz  and Poincar\'e groups}

\subsection{Clifford algebras} Let $V$ be a complex space with a
scalar product.

     The Clifford algebra is the algebra spanned by the elements of $V$
     and the complex numbers $\Cset$, satisfying the relation

\ben
        \xi\eta +\eta \xi = 2( \xi,\eta), \,\,  \xi,\eta \in V.
\een
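The Pauli matrices give a concrete representation of this relation for three-dimensional space with the standard bilinear form: for $v = \sum a_i\sigma^i$ and $w = \sum b_i\sigma^i$ one has $vw + wv = 2(a,b)\cdot\mathrm{Id}$. A small numerical check (assuming `numpy`; illustrative, not part of the text's formalism):

```python
import numpy as np

# Pauli matrices (they also appear below in the subsection on the Dirac equation)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s1, s2, s3]

a = np.array([0.3, -1.2, 0.5])
b = np.array([2.0, 0.1, -0.7])
v = sum(ai * si for ai, si in zip(a, paulis))
w = sum(bi * si for bi, si in zip(b, paulis))

# The Clifford relation: vw + wv = 2 (a, b) * Identity
assert np.allclose(v @ w + w @ v, 2 * np.dot(a, b) * np.eye(2))
```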


\sectionnew{Classical fields}
\subsection{Multidimensional  Variational Problems} Here we are going to
generalize the variational approach in mechanics to some other
physical problems, where the configuration space is
\subsection{Klein-Gordon equation}  The simplest
non-trivial Poincar\'e-invariant  Lagrangian is:

\ben
    \mathcal{L}= \frac{1}{2}\partial_{\mu}\phi
    \partial^{\mu}\phi - \frac{1}{2}m^2\phi^2.
\een
 The Klein-Gordon equation is the corresponding Euler-Lagrange
  equation for the action $S=\int \mathcal{L}(\phi,\nabla \phi)d^4x$:

\ben
   (\Box + m^2)\varphi = 0. \label{5.1}
\een
\subsection{Electromagnetic field}
The electromagnetic field is governed by the Maxwell equations. We
recall them. Let ${\bf E(x,t)} =(E^1,E^2,E^3)$ and  ${\bf B(x,t)}
=(B^1,B^2,B^3)$ be the electric and the magnetic field in
three-dimensional space respectively. Denote also by $j$ and by
$\rho$   the current density and the charge density. Then
Maxwell's equations are:

\ben
 a)\,\,\, \nabla \cdot{\bf B}= 0,\qquad   \qquad     b)\,\,
  \nabla \times\, {\bf E} +
 \frac{\partial {\bf B}}{\partial t}=0,\\
c)\,\,\, \nabla\cdot{\bf E}= \rho,\qquad   \qquad     d)\,\,
  \nabla \times\, {\bf B} -
 \frac{\partial {\bf E}}{\partial t}=j.
\een
 The meaning of the equations is the following. Equation a) means
 that there are no magnetic charges. Next comes Faraday's law b) of
 induction: if the magnetic field is changing, then an electric
 field appears. Equation c) is nothing but Gauss's (Stokes, Green,
 Ostrogradsky, etc.)  theorem
  in differential form. Finally, equation
 d) is Amp\`ere's circuital law, with the Maxwell correction.

 By Helmholtz's theorem, $\bf B$ can be written in terms
 of a vector field $\bf A$, called the {\it magnetic potential}:

\ben
         {\bf B} = \nabla \times {\bf A}.
\een
     Differentiating and using Faraday's law we find

\ben
          \nabla \times
          ({\bf E} + \frac{\partial {\bf A}}{\partial t})=0.
\een
        This shows, again by Helmholtz's theorem, that there exists
        a function $\varphi$ such that
        ${\bf E} + \frac{\partial {\bf A}}{\partial t}=-\nabla\varphi$.
        Denote  $A^{\mu} = (\varphi, {\bf A})$. It is called the
        {\it 4-potential}.

 We are going to write the Maxwell equations in terms of the
 4-potential. Introduce the   {\it
 electromagnetic tensor} $F^{\mu \nu}$ by the equalities:

\ben
F^{\mu \nu} = \partial^{\mu}A^{\nu}- \partial^{\nu}A^{\mu} = -
F^{\nu \mu}. \label{5.2}
\een
   Component-wise it reads:

\ben
F^{\mu\nu} = \left(\begin{array}{cccc}
       0 &-E^1 &-E^2 & -E^3 \\
       E^1 & 0 &-B^3 & B^2\\
       E^2 & B^3 &0 & -B^1\\
      E^3 & -B^2 &B^1 &0 \end{array}\right).        \label{5.3}
\een
           It is quite obvious that the electromagnetic tensor is
           invariant under the transformation

\ben
         A^{\mu}\rightarrow  A^{\mu} + \partial^{\mu}\chi,
\een
where $\chi$ is an arbitrary smooth function.

\subsection{Dirac Equation}

Introduce the {\it Dirac matrices} for four-dimensional Minkowski
space. They are:

\ben
       \gamma^0= \left(\begin{array}{cc}
       0 & 1  \\
       1 & 0       \end{array}  \right),
       \quad \gamma^i= \left(\begin{array}{cc}
            0 & \sigma^i  \\
       -\sigma^i & 0 \end{array}  \right),
\een
         where $\sigma^i$ are the Pauli matrices.\begin{footnote}
  {Warning! This notation is not the only one used in the literature.}
\end{footnote}

   This representation is called the {\it Weyl} or {\it chiral} representation.

         In terms of the Dirac matrices we can write the Dirac equation as:

\ben
      (i\gamma^{\mu}\partial_{\mu} - m)\psi(x)=0,
\een
               or in Dirac's notation $(i\partial \!\!\!/ - m)\psi(x)=0$.

   \bpr{5.1} (i) The Dirac equation is Lorentz invariant.

   (ii) The Klein-Gordon operator factors as
\ben
   \partial^2 +m^2 = (-i\gamma^{\mu}\partial_{\mu} - m)
   (i\gamma^{\mu}\partial_{\mu} -  m),
\een
      i.e. the Klein-Gordon equation follows from the Dirac equation.
   \epr

     \proof Elementary computation, left to the reader. \qed

The Lagrangian for the Dirac theory is:

\ben
    L_{Dirac} = \bar{\Psi} (i\partial \!\!\!/ -  m)\Psi,
\een
     where $\bar{\Psi}=\Psi^{\dag}\gamma^0$.

The Dirac propagator is the fundamental solution of the Dirac equation.

\sectionnew{Quantum fields}

We start with one scalar field $\phi$ on Minkowski  space. This
means we have a vector space $V$ with a metric of signature $(1,-1,\ldots,-1)$.
We also have an action $ S=\int L dx$ with Lagrangian
$L(x,\partial_x)$. In QFT there is an operator of {\it time
ordering} $T$, acting on fields as follows: if $x^0\geq y^0$,
then $T(\phi(x) \psi(y) )= \phi(x) \psi(y)$;
otherwise $T(\phi(x) \psi(y) )=  \psi(y)\phi(x)$.

     We want to make sense of expressions of the form:

\ben
   \mathcal{G}(x^1,\ldots , x^N) =  <\phi(x^1),\ldots, \phi(x^N)>:=\\
   \frac{ \int T( \phi(x^1)\ldots \phi(x^N))
    e^{iS(\phi)/\hbar} D\phi}{\int e^{iS(\phi)/\hbar} D\phi}. \label{6.1}
\een

       Of course after that we need to learn how to compute them.

     As explained earlier we are going to study first the
     Euclidean theory. In the Euclidean theory we have to find the
     correlation functions

\ben
   \mathcal{G}(x^1,\ldots , x^N) =  <\phi(x^1),\ldots, \phi(x^N)>:=\\
   \frac{ \int T( \phi(x^1)\ldots \phi(x^N))
    e^{-S(\phi)/\hbar} D\phi}{\int e^{-S(\phi)/\hbar} D\phi}.
\een

       We will proceed as in quantum mechanics.

\subsection{$\phi^4$ Theory}

       Our first   example will be the Klein-Gordon Lagrangian:

\ben
         \mathcal{L}_{KG} =\frac{1}{2}
          (\partial_0\phi)^2 - \frac{1}{2}\sum_{j=1}^{m-1}
          (\partial_j\phi)^2 -\frac{1}{2}m^2\phi^2,
\een
     perturbed by $\sum_j U_j\phi^j$, i.e.

\ben
          \mathcal{L}= \mathcal{L}_{KG}+\sum_j U_j\phi^j. \label{6.2}
\een
 After Wick's rotation it becomes

\ben
   \mathcal{L}_{KG}^E= - \frac{1}{2}\big((\nabla \phi)^2 + m^2\phi^2\big).
\een

   Denote the Green's function of the Euclidean Klein-Gordon equation by
   $G_{KG}^E(x-y)$. In other words we look for a solution
   of the equation

\ben
       (-\Delta+m^2) G_{KG}^E(x-y)=\delta(x-y).
\een
    Performing a Fourier transform on both sides gives
    $(k^2+m^2)\hat{G}_{KG}^E(k)=1$.
   Then the Green's function is given by the inverse Fourier transform:

\ben
       G_{KG}^E(x-y) = \frac{1}{(2\pi)^d}  \int e^{-i(x-y)k}\frac{dk}{k^2+m^2}.
\een
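In $d=1$ this inverse Fourier transform has the closed form $G_{KG}^E(x) = e^{-m|x|}/(2m)$, which gives a convenient numerical check of the formula; a sketch (assuming `numpy`, with an ad-hoc momentum cutoff for the numerical integral):

```python
import numpy as np

m, x = 1.0, 1.5
k = np.linspace(-2000.0, 2000.0, 400_001)   # truncated momentum grid
dk = k[1] - k[0]
# Inverse Fourier transform of 1/(k^2 + m^2), as in the formula above (d = 1)
G_num = (np.exp(1j * k * x) / (k**2 + m**2)).sum().real * dk / (2 * np.pi)
# Known closed form of the 1d Euclidean Klein-Gordon Green's function
G_exact = np.exp(-m * abs(x)) / (2 * m)
assert abs(G_num - G_exact) < 1e-3
```

Note that the Euclidean integrand indeed has no poles on the real axis, which is what makes this direct numerical evaluation possible.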
    As in quantum mechanics we see the advantages of  Wick's rotation -- the
    integrand     has no poles in the real domain.
            We can define the Euclidean correlation functions as follows.

       The correlation function, corresponding to the Lagrangian
       \eqref{6.2} and to functionals $\phi(x^j)$ is given by
       the following rules:

\begin{itemize}

\item Put the variable $y^j$ at the $j$-th external vertex.

\item Put the variable $w^k$ at each internal vertex.

\item Put the Green's function $G_{KG}(w^j-w^k)$ at each edge
connecting two internal vertices $w^j,w^k$.

\item Put the Green's function $G_{KG}(y^j-w^k)$ at each edge
connecting the external vertex $y^j$ with the internal vertex
$w^k$.

\item Put the Green's function $G_{KG}(y^j-y^k)$ at each edge
connecting two external vertices $y^j,y^k$.

\item The number $F_{\Gamma}$ is defined by the formula
\ben F_{\Gamma} = \prod_j (-U_{v(j)}) \int G({\bf{y}};{\bf{w}})
d{\bf w}. \een

\end{itemize}



Elementary particles are divided into two types: bosons and
fermions. Examples of the former are photons, $W$, and $Z$
particles. Electrons, protons and neutrons are examples of
fermions. The bosons are characterized by the fact that several
bosons can occupy the same quantum state, while fermions cannot.
Mathematically this difference is expressed by the corresponding
Hilbert spaces. If the Hilbert space for a single particle is
$\mathcal{H}$ then for $k$ bosons it is $S^k\mathcal{H}$ (the
$k$-th symmetric power  of $ \mathcal{H}$), while for $k$ fermions
it is $\Lambda^k \mathcal{H}$ (the $k$-th exterior power of
$\mathcal{H}$). The quantum theory we have developed up to now
describes mostly bosons -- the fields commute. Now we need to
develop field theory of anti-commuting fields. The relevant
mathematical tool is the notion of supermanifolds.

\subsection{Linear Superspaces} Of course we  start with the
relevant linear algebra.

\bde{7.1} A supervector space (or superspace) $V$ is a
$\Zset_2$-graded vector space -- $V= V_0\oplus V_1$ with the
following  additional structure. We define a tensor product
$v\otimes u$ of two vectors, where $v\in V_i, u \in V_j,\quad i,j
\in \{0,1\}$, satisfying the rule $v\otimes u = (-1)^{ij} u\otimes
v$. Let us define the operation of changing parity $\Pi$ by $\Pi
V_i= V_{1-i},\quad i \in \{0,1\}$. With this notation we can
define the following extension of the notions of symmetric and
exterior powers:

\ben
     S^m V= \Pi(\Lambda^m (\Pi V)), \quad
        \Lambda^m V = \Pi(S^m (\Pi V)).
\een
    When  $V_0= \Rset^n,\quad V_1=\Rset^m$ we denote $V$ by
    $\Rset^{n|m}$. In general we say that $V$ has dimension
    ${n|m}$, where ${n,m}$ are the dimensions of $V_0,V_1$.
  The elements of $V_0$ are called {\it even} and the
  elements of $V_1$ are called {\it odd}.
\ede

We define the algebra of polynomial functions $\mathcal{O}(V)$ on
a superspace $V$ as $SV^*$, where $S$ acts as defined above on the
superspace $V^*$. In more detail, if $x_1,\ldots, x_n$ are
linear coordinates on $V_0$, called {\it even variables}, and
$\xi_1,\ldots, \xi_m$, called {\it odd variables}, are linear
coordinates on $V_1$, then $\mathcal{O}(V)$ is $\Rset[x_1,\ldots,
x_n,\xi_1,\ldots, \xi_m]$ with the relations

\ben
   x_ix_j = x_jx_i, \quad \xi_i\xi_j = - \xi_j\xi_i,
   \quad x_i\xi_j=\xi_jx_i.
\een
  The algebra spanned only by the odd variables $\xi_1,\ldots, \xi_m$ is
  called the {\it Grassmann or exterior algebra}. Using the standard
  notation for the {\it anticommutator} -- $\{a,b\} = ab+ba$ -- we can write
 the defining relations as $\{\xi_i,\xi_j\}=0$ for the generators of the Grassmann algebra.
  It is a finite-dimensional space, while $\mathcal{O}(V)$
 is (in general) an infinite-dimensional supervector space.

 \subsection{Supermanifolds} More generally we can define the algebra
 of smooth functions $C^{\infty}(V)$ on  $V$ as
 $C^{\infty}(V_0) \otimes \Lambda V_1^*$. We can view the
 smooth functions on a superspace $V$ as functions of the form:

\ben
     F(x,\xi) = \sum f_{\alpha}(x)\xi_1^{\alpha_1}\ldots \xi_m^{\alpha_m},
\een
   where $\alpha_i = 0\,\, \textrm{or} \,\,1$.

        A supermanifold $M$ is an ordinary manifold $M_0$ but,
        instead of the standard sheaf of smooth functions, we
        consider a sheaf of smooth functions $C^{\infty}(V)$ on
        a superspace. This means that the structure sheaf is
        locally isomorphic to
         $C^{\infty}_{M_0}\otimes \Lambda(\xi_1,\ldots , \xi_m)$.

\subsection{Calculus on Supermanifolds}


  Let us define the notion of the integral of Grassmann functions.
  It will have the  properties:

\ben
     \int 1 \,d\xi=0,\,\,\, \int \xi \,d\xi=1,\,\,\,\\
     \int \xi_2 \Big( \int\xi_1 d\xi_1\Big)d\xi_2 = \int \int \xi_2\xi_1 d\xi_1d\xi_2 =1.
\een

Next we define an integral for  functions in both even and odd
variables. Consider functions $f(x,\xi)$ that in the even variables
are compactly supported, i.e.

\ben
     f(x,\xi) = \sum_{\alpha} f_{\alpha}(x)\xi^{\alpha},
\een
   where the functions $f_{\alpha}$ have compact support.
   It is enough to define the integral for
   the summands $ f_{\alpha}(x)\xi^{\alpha}$. The integral will be

\ben
     \int_{V} f_{\alpha}(x)\xi^{\alpha}dx\, d\xi =
     \int_{V_0}f_{\alpha}(x)dx \int_{V_1}\xi^{\alpha}d \xi.
\een
         The general case is defined by linearity.

    We need to learn how to make  changes of variables.
    Consider  the case when there are only odd variables.
    To get an idea about the natural formulas
     we start with 2-dimensional $V=V_1$.
    The linear change of variables $F$ has the form:

\ben
   \xi_1 = f_{11}\eta_1 + f_{12}\eta_2,\quad
    \xi_2 = f_{21}\eta_1 + f_{22}\eta_2.
\een
    Then the function $\xi_1\xi_2$ transforms into

\ben
(f_{11}\eta_1 + f_{12}\eta_2)(f_{21}\eta_1 + f_{22}\eta_2)=
(f_{11}f_{22}-f_{12}f_{21})\eta_1\eta_2 = \det{(F)}\eta_1\eta_2.
\een
    We want to keep the value of the integral
    $\int\xi_1\xi_2 d\xi_2 d\xi_1= 1$.
       This yields that the change of variables must be:

\ben
        \xi_1\xi_2d\xi_2  d\xi_1 =
        \det{(F)}^{-1}\eta_1\eta_2 d\eta_2d\eta_1.
\een
     Obviously  the same formula has to be applied to odd spaces
     in any dimension.
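The $\det(F)$ factor can be reproduced mechanically. The following is a toy implementation of a Grassmann algebra on a few anticommuting generators (the helpers `gmul` and `_sort_sign` are hypothetical names introduced here, assuming plain Python); the Berezin integral $\int\!\!\int \cdot\, d\xi_1 d\xi_2$ simply picks out the coefficient of $\eta_1\eta_2$:

```python
def _sort_sign(idx):
    # Bubble-sort the generator indices, tracking the sign of the permutation
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def gmul(a, b):
    # Multiply Grassmann elements; keys are tuples of generator indices
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            if set(ka) & set(kb):
                continue  # xi * xi = 0
            sign, key = _sort_sign(list(ka + kb))
            out[key] = out.get(key, 0.0) + sign * ca * cb
    return out

# The linear change xi = F eta, with xi_1, xi_2 written in the eta basis
f11, f12, f21, f22 = 2.0, 3.0, 1.0, 4.0
xi1 = {(1,): f11, (2,): f12}
xi2 = {(1,): f21, (2,): f22}

# Coefficient of eta_1 eta_2 in xi_1 xi_2 -- this is what the Berezin
# integral extracts -- equals det(F), as computed in the text
coeff = gmul(xi1, xi2).get((1, 2), 0.0)
assert abs(coeff - (f11 * f22 - f12 * f21)) < 1e-12
```

This is only a sketch for checking signs; it is not an efficient representation of the exterior algebra.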

To guess the formula for the change of variables
 in general, i.e. when the integrand is $f(x)\xi_1\ldots\xi_m$, we can
 apply the above arguments. Again take two odd variables.

 The linear map will be

\ben
    x  = Ay +b_{11} \eta_1 +b_{12}\eta_2,\\
      \xi_1  =   c_1 y +d_{11}\eta_1 +d_{12}\eta_2,\\
      \xi_2  = c_2 y +d_{21}\eta_1 +d_{22}\eta_2.   \label{7.1}
\een
    Here   $A,D$ have even elements while $B,C$ have odd elements.
    The matrix $B$ is $m\times 2$ and the matrix $C$ is $2\times m$.

  The change \eqref{7.1} gives

\ben
    f(x)\xi_1\xi_2dx d\xi_2 d\xi_1 =
    f(x)\eta_1\eta_2( A dy +b_1d \eta_1 +b_2d \eta_2)\\
    ( c_1 dy +d_{11}d \eta_1 +d_{12}d \eta_2)
    ( c_2 dy +d_{21}d \eta_1 +d_{22}d \eta_2).
\een

 Assume that  $\det D \neq 0$. After some manipulations we obtain

\ben
  \xi_1 \xi_2 dx d\xi_1 d\xi_2 =
   \eta_1  \eta_2 (\det A \det D)  dy d\eta_1 d\eta_2 -
   \eta_1   \eta_2 \det( B  D^{-1}C )\det D  dy d\eta_1 d\eta_2 =
     \eta_1  \eta_2 \det\big( A -B  D^{-1}C \big)  \det D dy d\eta_1 d\eta_2.
\een
   Having in mind that the integral of $\xi_1\xi_2 d \xi_1 d\xi_2$
   must be $1$ we finally find that the formula for the change of
   variables is given by

\ben
   \int f(x)dx\, d\xi = \int f\, \textrm{Ber}(F)\,dy\, d\eta,
\een
where  {\it the Berezinian} of $F$ is given by

\ben
          \textrm{Ber}(F) = \frac{\det(A - B D^{-1}C)}{\det D}.
\een

     We also need to learn how to  differentiate functions
      of anticommuting variables.  Here we are  going to distinguish
      between the {\it left derivative} $\frac{ \partial^L}{\partial\xi}$
      and the {\it right derivative}  $ \frac{\partial^R}{\partial\xi}$.
     It is enough to define them on the function $\xi_1\xi_2$. We have

\ben
     \frac{\partial^L}{\partial\xi_i}(\xi_1\xi_2)
     =\delta_{i1}\xi_2 - \delta_{i2}\xi_1, \quad
     \frac{\partial^R}{\partial\xi_i}(\xi_1\xi_2)
     =\delta_{i2}\xi_1 - \delta_{i1}\xi_2.
\een

\subsection{ Fermionic Quantum Mechanics } The simplest
fermionic Lagrangian is

\ben
    \mathcal{L}= \psi \dot{\psi}.
\een
   This is the quantum-mechanical Lagrangian of a single massless fermion.

\subsection{Path Integrals for Free  Fermionic Fields}
We already know  some fermionic  Lagrangians -- Weyl, Majorana --
and now we turn to the Dirac Lagrangian:

\ben
      \mathcal{L}_D = \psi^{\dag}_L\sigma \partial \psi_L +
      \psi^{\dag}_R\bar{\sigma} \partial \psi_R +
      i m\big(\psi^{\dag}_R \psi_L + \psi^{\dag}_L \psi_R \big).
\een
  It describes a particle-antiparticle pair, for example the electron
  and the positron. Unlike Majorana's Lagrangian, here the antiparticle
  is  different from the particle.

    Using the four-component Dirac spinor

\ben
\Psi_D = \left(\begin{array}{c} \psi_L\\
\psi_R \end{array} \right),
\een
    we can express Dirac's Lagrangian in a compact form:

\ben
 \mathcal{L}_D = \bar{\Psi}_D (i\Dir -  m)\Psi_D.
\een

Let us write the Feynman rules for free theories. We simply add a
source coupling to get the generating functional:

\ben
     Z =\frac{1}{N}\int e^{i\int [ \mathcal{L}_D + i\bar{\Psi}_D \zeta +
     i\bar{\zeta}\Psi_D]dx}D\Psi_D D\bar{\Psi}_D.
\een
  Denote by $\hat{\Psi}_D({\xi})$ the Fourier transform of $\Psi_D$.
  Then the equation for the propagator in momentum variables reads:

\ben
   ( -\Dmom -m)\hat{\Psi}_D({\xi})=1.
\een


Unfortunately many of the diagrams have divergent amplitudes.
Let's consider the following example. In momentum variables the
Klein-Gordon propagator is (after Wick rotation)

\ben
   \hat{G} = \frac{1}{k^2 +m^2}.
\een
  Let us study  $\phi^4$ theory.  Consider the four-point function.
  It  contains diagrams like the one on Fig. 8. Then by the Feynman
  rules  the amplitude for this graph is

\ben
    F_{\Gamma}(x^{(1)}) =
    \int_{\Rset^d} \frac{dk}{(k^2 +m^2)((k-x^{(1)})^2 +m^2)}.
\een
   In the case of $d\geq 4$, which we need in QFT, the integral is
   divergent at $\infty$. This is the so-called ultra-violet (UV) divergence.
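At $d=4$ and zero external momentum the divergence is only logarithmic: each decade of a radial cutoff $\Lambda$ contributes roughly the same amount. A numerical sketch (assuming `numpy`; the radial reduction uses the area $2\pi^2$ of the unit 3-sphere, and the cutoff is an ad-hoc regulator introduced here):

```python
import numpy as np

def I(cut, m=1.0, n=200_000):
    # Radial form of \int_{|k| < cut} d^4k / (k^2 + m^2)^2 :
    # (area of S^3) * \int_0^cut r^3 dr / (r^2 + m^2)^2
    r = np.linspace(0.0, cut, n)
    dr = r[1] - r[0]
    return 2 * np.pi**2 * np.sum(r**3 / (r**2 + m**2)**2) * dr

# Each decade of cutoff adds (asymptotically) the same amount: log divergence
d1 = I(100.0) - I(10.0)
d2 = I(1000.0) - I(100.0)
assert d1 > 0 and d2 > 0
assert abs(d1 / d2 - 1) < 0.01
```

The integral grows without bound as the cutoff is removed, which is exactly what regularization and renormalization, discussed below, are designed to handle.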

  Physicists have invented a number of ways to overcome  this
  difficulty. The objective of this section is to give an idea
  of some of these methods.

\subsection{Renormalizability of Field Theories}

  \subsection{Dimensional Regularization}

\sectionnew{Quantum electrodynamics}

\sectionnew{Gauge theories}

\subsection{Chern-Simons theory}

\subsection{Yang-Mills Theories}


\subsection{Linear and Multilinear Algebra}

Here we give some definitions. For a more detailed treatment of
the topics see e.g. \cite{Gel, Gr}.

Let $U, V$ and $E$ be vector spaces.

\bde{9.1} We say that the mapping $\phi$ from $U\bigoplus V$ to $E$
is bilinear if it is linear in each argument when the other is
fixed. \ede

Exactly in the same manner we define a multilinear mapping from
$\bigoplus_jV_j$ to $E$.

\bde{9.2} Let    the mapping $\phi$ from $U\bigoplus V$ to $E$ be
bilinear. We say that $E$ is a tensor product of $U$ and $V$ if the
image of $\phi$ spans $E$ and if, for every bilinear map $\psi$ from $U\bigoplus V$ to
some vector space $F$, there exists a linear mapping $\chi$ from
$E$ to $F$ such that $\psi = \chi \circ \phi$. In other words the
following diagram is commutative:


\xymatrix{V\bigoplus U \ar[r]^{\phi}\ar[rd]^{\psi} & E \ar[d]^{\chi}\\
  & F\\ }
\ede

\subsection{Differential Geometry} The main object of differential
geometry is the notion of {\it connection}. This notion  makes
precise the idea of transporting data along a curve or family of
curves in a parallel and consistent manner.

\subsection{Classical Mechanics} Here we give a brief account of
some notions of classical mechanics. Our exposition follows
\cite{Arn}, and for a more thorough course the same book is
excellent. Of course there are  many other books that could serve
the purpose.

We start with the notion of a functional. Roughly speaking, this is
a function whose arguments are functions. We will be interested in
functionals defined as follows in a particular but very important
case, the one describing classical mechanics. Let $\mathcal{L}(r,q)$ be a
function defined on an open set $R\times U$ of $ \Rset^{2d}$. Let
$q(t)$ be a smooth path with $\dot{q},q$ taking values in $R, U$ respectively.

  The {\it action} is the
functional (= function in which the variable is the path $q(t)$):

\ben
   S(q) = \int_{t_0}^{t_1}\mathcal{L}(\dot{q}(t), q(t))dt.
\een
  The function $\mathcal{L}$ is called the {\it Lagrangian}. Most of the time we
  will consider Lagrangians of the form

\ben
   \mathcal{L}=T-U(q) = \frac{||\dot{q}||^2}{2} -U(q).
\een
   The quadratic form $T$ is called the {\it kinetic energy} and the
   function $U$ is called the {\it potential (energy)}.

   One can define the {\it variational derivative} of $S$ with respect
   to the path $q$ as usual. Let $\delta q(t) $ be a small
   change of the path $q(t)$. The difference

\ben
     \delta S(q)= S(q + \delta q(t)) - S(q)
\een
     is small and can be written as

\ben
     \delta S(q) = F(q)\delta q(t) + \mathcal{O}(|\delta q|^2).
\een
          The function $F(q)$ is called the variational derivative of
          $S$ and is denoted by

\ben
         \frac{\delta S}{\delta q}.
\een

   The paths for which  the variational derivative vanishes
   satisfy the {\em Euler-Lagrange equations}:

\ben
     \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}_j} -
     \frac{\partial \mathcal{L}}{\partial q_j}=0, \quad j=1,\ldots,d. \label{12.1}
\een
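For the harmonic oscillator $\mathcal{L} = \dot q^2/2 - q^2/2$ the path $q(t)=\sin t$ solves the Euler-Lagrange equation $\ddot q = -q$, so the first variation of $S$ along it should vanish. A numerical sketch of this stationarity (assuming `numpy`; the discretization is ad hoc):

```python
import numpy as np

t = np.linspace(0.0, np.pi / 2, 20_001)
dt = t[1] - t[0]

def action(q):
    # Discretized S = \int_0^{pi/2} (qdot^2/2 - q^2/2) dt  (trapezoid rule)
    qdot = np.gradient(q, dt)
    f = qdot**2 / 2 - q**2 / 2
    return np.sum((f[1:] + f[:-1]) / 2) * dt

q = np.sin(t)                 # solves the Euler-Lagrange equation q'' = -q
eta = t * (np.pi / 2 - t)     # perturbation vanishing at the endpoints
eps = 1e-4

# First variation dS/d(eps) vanishes on the true path ...
dS = (action(q + eps * eta) - action(q - eps * eta)) / (2 * eps)
assert abs(dS) < 1e-5

# ... but not on a path that fails the Euler-Lagrange equations
q_bad = q + 0.1 * eta
dS_bad = (action(q_bad + eps * eta) - action(q_bad - eps * eta)) / (2 * eps)
assert abs(dS_bad) > 0.01
```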
  We will need also the equivalent formulation of classical
  mechanics (and field theory) in Hamiltonian form.
    Let us first recall the notion of the Legendre transform. Consider
   a convex (or concave) function $f(x)$ and define the
   function in $p$ and $x$

\ben
   F(x,p)= (p,x)-f(x).
\een
   For a fixed $p \in V^*$ find the unique extremum
   of   $F(x,p)$ as a function in $x$. This yields the equation:

\ben
     f'(x) = p.
\een

     Due to the convexity of $f$ this equation has a unique
     solution $x=x_0(p) \in V$. {\it The Legendre transform of $f$}
     is the function $g(p)$ defined by

\ben
     g(p) = L(f)(p)= (p,x_0(p))- f(x_0(p)).
\een
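For $f(x) = a x^2/2$ with $a>0$ the Legendre transform works out to $g(p) = p^2/(2a)$, which can be checked by a brute-force maximization over a grid (a sketch assuming `numpy`; the grid bounds are arbitrary):

```python
import numpy as np

a = 3.0                                  # f(x) = a x^2 / 2, a convex function
x = np.linspace(-50.0, 50.0, 2_000_001)  # grid wide enough to contain x_0(p)

def legendre(p):
    # L(f)(p) = max_x [ p*x - f(x) ]; the maximum is attained at x_0(p) = p/a
    return np.max(p * x - a * x**2 / 2)

for p in (-2.0, 0.0, 1.5):
    assert abs(legendre(p) - p**2 / (2 * a)) < 1e-6
```

Applied to the kinetic term $\mathcal{L} = \dot q^2/2$ (i.e. $a=1$), this is exactly how the Hamiltonian $H = p^2/2$ arises below.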
       Using the Legendre transform we can arrive at the Hamiltonian
   formulation of classical mechanics in the following manner.
    Let $\mathcal{L}(\dot{q},q)$ be a Lagrangian. Fix the variable
    $q$ and perform the Legendre transform with respect to the
    variable $\dot{q}$. We obtain a new function $H(p,q)$, which is
    called the {\it Hamiltonian}. Then the Euler-Lagrange equations
    \eqref{12.1} are equivalent to the Hamiltonian system of equations:

\ben
            \dot{q}_j=\frac{\partial H}{\partial p_j},\quad
              \dot{p}_j=-\frac{\partial H}{\partial q_j},\quad
              j=1,\dots,d. \label{12.2}
\een

  We will often use the following terminology. The variables $q$ will be
  called {\it position variables}, implying that the $\dot{q}$ are
  velocities. The variables $p$ are {\it momenta}. The set where the
  position variables are defined is called the {\it configuration space.}
  The entire space where the Hamiltonian is defined is called the {\it
   phase space.}

 \subsection{Functional Analysis and Differential Equations}

We will need quite often Hilbert spaces. Here is the definition.

\bde{12.1} A Hilbert space $\mathcal{H}$ is a linear space over
$\Rset$ (or $\Cset$) with a scalar product $(x,y)$ (a hermitian
scalar product in the complex case) which is complete with respect to the
norm $||x||=\sqrt{(x,x)}$. \ede

\bex{12.1} Let $X$ be a set with a Lebesgue measure
   $d\mu$. Denote by $L^2(X)$ the space of complex functions
   with integrable square $\int_X |f|^2d\mu$.
    Define a scalar product by $\int_X f\bar{g}d\mu$, where
   $f,g \in L^2(X)$. Then by the theory of the Lebesgue integral
$L^2(X)$ is a Hilbert space. This is the most important example. \eex

 The continuous operators are exactly those which satisfy $||Ax|| \leq
c||x||$ with some constant $c$. They are also called {\it
bounded}. However we will need operators that are unbounded as
well as operators that are defined only on a subspace of
$\mathcal{H}$. For example the operators $\hat{x}_j$ in
$L^2(\Rset^n)$ (multiplication by ${x}_j$) are neither defined
everywhere, nor bounded. The same is true for the Schr\"odinger
operator $-\Delta + U(x)$. We will need to find the spectrum of
such operators. In fact this problem is at the center of quantum
mechanics. More generally we will need to find solutions of
partial differential equations. Even when they have "good
solutions" (which is not so often) it is very convenient to have
broader spaces of "functions" to operate with. The corresponding
spaces are different spaces of {\it distributions} (= {\it
generalized functions}). We are going to work with the space of
tempered distributions, which we define below. First define the
Schwartz space $\mathcal{S}$ of all infinitely differentiable
functions on $\Rset^n$ which decay at infinity faster than any
power of $x_j$. We define a topology on this space by the semi-norms

\ben p_{\alpha, \beta}(\phi) = \sup_{x\in \Rset^n}| x^{\alpha}
D^{\beta}\phi| . \een
 The space of continuous functionals on
this space is called the {\it space of tempered distributions}. It
is denoted by $\mathcal{S}^*$. A very important example of a
tempered distribution is {\it Dirac's delta-function}. It is
defined as

\ben
       \delta(f) = f(0).
\een

{\it Fourier transform} of a function $f\in
\mathcal{S}(\mathbb{R}^n)$ is defined by the formula:

\ben
    \hat{f}(\xi) = \int_{\mathbb{R}^n}f(x) e^{-i(x,\xi)}dx.
\een
   The inverse transform is given by

\ben
    f(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n}
    \hat{f}(\xi) e^{i(x,\xi)}d\xi.
\een
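For a Gaussian $f(x) = e^{-x^2/2}$ on $\Rset$ this convention gives $\hat f(\xi) = \sqrt{2\pi}\, e^{-\xi^2/2}$, which is easy to verify numerically; a sketch (assuming `numpy`, with an arbitrary truncation of the real line):

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 120_001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

def ft(xi):
    # \hat f(xi) = \int f(x) e^{-i x xi} dx, the convention used above
    return (f * np.exp(-1j * x * xi)).sum() * dx

for xi in (0.0, 0.5, 1.7):
    assert abs(ft(xi) - np.sqrt(2 * np.pi) * np.exp(-xi**2 / 2)) < 1e-8
```

Because the Gaussian decays so fast, the truncated Riemann sum is accurate to essentially machine precision here.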
     We can define the {\it Fourier transform of a tempered
     distribution} by the formula:

\ben
         \hat{F}(\phi) := F \Big(\frac{1}{(2\pi)^n}\hat{\phi}\Big),
\een
       where $F\in \mathcal{S}^*$ and $\phi \in \mathcal{S}$
       is any test function. Let us
       compute the Fourier transform of $\delta$. We have

\ben
            \hat{\delta}(\phi) = \delta \Big(\frac{1}{(2\pi)^n}\hat{\phi}\Big)=
          \frac{1}{(2\pi)^n}   \hat{\phi}(0) = \frac{1}{(2\pi)^n} \int_{\Rset^n} \phi(x)dx.
\een
       This yields that $\hat{\delta}=\frac{1}{(2\pi)^n}$. In physics and mathematics
       we often need $\delta$-functions supported at more
       complicated sets than one point.

A Hermitian operator $A$ is an operator satisfying the equality
$(Ax,y) = (x,Ay)$ for all vectors from the domain of definition of $A$.

 Differential equations

Spectral theorem

Representation theory

\subsection{Relativistic Notations}

{\it Minkowski space} is a space $\Rset^n$ with a Minkowskian
metric, i.e. a metric with signature $(1,-1,\ldots,-1)$. The Minkowski
inner product is defined by $(x,y)_M:=x_0y_0 -x_1y_1-\ldots -
x_{n-1}y_{n-1}$.
In Minkowski space we define the {\it light cone} by the equation
$x^2_0 -x^2_1-\ldots - x^2_{n-1}=0$. A point with coordinates
$(x_0,x_1\ldots,x_{n-1})$ is said to be {\it space-like}
 if  $(x,x)_M< 0$.
If $(x,x)_M> 0$ the point is said to be {\it time-like}.

We would like to introduce a {\it time ordering}. If $x,y$ are
points we
 say that $x$ chronologically precedes $y$ if $(x-y)_M^2>0$ and $x_0 < y_0$.

 \subsection{Miscellaneous Notations}

\ben
& \nabla \cdot {\bf A }  =  \partial_{1} A^1 + \partial_{2} A^2
+\partial_{3} A^3, \quad  \textrm{called the {\it divergence} of} \, {\bf A},\\
& \nabla \times{\bf A}  = (\partial_{2}A^3-\partial_{3}A^2,
\partial_{3}A^1-\partial_{1}A^3, \partial_{1}A^2-\partial_{2}A^1),
  \quad \textrm{called the {\it rotor}, or {\it curl}, of} \, {\bf A}.
\een




%%%%%%%%%%%%%%%%%% References %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{thebibliography}{99}

\bibitem{Arn}
V. I.  Arnol'd, {\it Mathematical methods of  classical
mechanics}. Moscow,

\bibitem{Ber}
A. Berezin, {\em The Method of Second Quantization}, Academic
Press, 1966.

\bibitem{Bol}
B. Bolobas

\bibitem{Del}
P. Deligne, P. Etingof, D. S. Freed, L. C. Jeffrey, D. Kazhdan,
J. W. Morgan, D. R. Morrison, and E. Witten, editors,
{\em Quantum fields and strings: a course for mathematicians},
Vol. 1, 2, American Mathematical Society, Providence, RI, 1999.
Material from the Special Year on Quantum Field Theory held at the
Institute for Advanced Study, Princeton, NJ, 1996--1997 (also
lecture notes available online).

\bibitem{DMS}
Ph. Di Francesco, P. Mathieu, D. S\'en\'echal,  {\em Conformal
Field Theory}, Springer, New York, 1997.

\bibitem{Dol}
I. Dolgachev,  {\it Introduction to string theory}, preprint - Ann
Arbor, lecture notes available online:
http://www.math.lsa.umich.edu/~idolga/lecturenotes.html.

\bibitem{DNF}
B. Dubrovin, S. P. Novikov, A. Fomenko, {\em Modern Geometry}, Part 1
and Part 2, Springer, 1992.

\bibitem{Et}
P. Etingof,  {\em Mathematical ideas and notions of quantum field
theory}, preprint - MIT lecture notes available online.

\bibitem{FY}
L. D. Faddeev, O. A. Yakubovsky, {\em Lectures in quantum mechanics
for students in mathematics}, Leningradskii universitet, 1980
(in Russian). English translation: {\em Lectures on Quantum Mechanics for Mathematics
Students}, L. D. Faddeev, Steklov Mathematical Institute,
and O. A. Yakubovskii, St. Petersburg University,
with an appendix by Leon Takhtajan, AMS, 2009.

\bibitem{Fe1}
R. P. Feynman,  {\em The character of physical laws}, Cox and
Wyman Ltd., London, 1965.

\bibitem{Fe2}
R. P. Feynman, R. B. Leighton and M. Sands, {\em The Feynman
Lectures on Physics}, Addison-Wesley, 1963, Vol. III.

\bibitem{Gel}
I. M. Gel'fand, {\em Lectures in linear algebra}

\bibitem{Gr}
W. H. Greub, {\em Multilinear algebra}, Springer, 1967.

\bibitem{IZ}
C. Itzykson  and  J. B. Zuber, {\em Quantum Field Theory},
McGraw-Hill, 1980.

\bibitem{Ka}
M. Kaku, {\em Quantum Field Theory, A Modern Introduction}, Oxford
University Press, 1993.

\bibitem{Kon1}
M. Kontsevich, Intersection theory on the moduli space of curves
and the matrix Airy function, Comm. Math. Phys., vol. 147 (1992), 1-23.

\bibitem{Kon2}
M. Kontsevich, Vassiliev's knot invariants, Adv. Soviet
Math., vol. 16, Part 2 (1993), 137-150.

\bibitem{PS}
M. E. Peskin, D. V. Schroeder, {\em An introduction to quantum
field theory}, Perseus Books, Reading, Massachusetts, 1995.

\bibitem{Pol}
M. Polyak, {\em Feynman diagrams for pedestrians and
mathematicians}, in: Graphs and Patterns in Mathematics and Theoretical Physics,
edited by Mikhail Lyubich and Leon Takhtajan, Proceedings of
Symposia in Pure Mathematics, AMS, 2005.

\bibitem{Rab}
J. Rabin, {\em Introduction to QFT for mathematicians}, in:
Freed, D. and Uhlenbeck, K., eds., Geometry and Quantum Field Theory,
American Mathematical Society, 1995.

\bibitem{Ram}
P. Ramond, {\em Field theory: A modern primer}, 2nd ed., Westview,

\bibitem{Ry}
L. Ryder, {\em Quantum Field Theory}, Cambridge University Press.

\bibitem{Schw}
L. Schwartz, {\em Cours d'analyse} (French edition), 1981.

\bibitem{Tay}
M. E. Taylor, {\em Partial differential equations 1. Basic theory},
AMS 115, Springer, 1996.

\bibitem{Tic}
R. Ticciati,  {\em Quantum field theory for mathematicians} (CUP,

\bibitem{W1}
E. Witten, Quantum field theory and the Jones polynomial, CMP,

\bibitem{W2}
E. Witten,

\bibitem{Woolf}
H. Woolf, ed., {\em Some strangeness in proportion},
Addison-Wesley, 1980.

\end{thebibliography}