Monday, March 25, 2024

Solution to Partial Differential Equations: Methods and Applications (Robert McOwen) Section 6.1

    1. If $X$ is a normed vector space, prove that $|\|x\|-\|y\||\leq\|x-y\|$.
    2. Show that the norm defines a continuous function on $X$.
    3. If $X$ is a real inner product space, prove the Cauchy-Schwarz inequality $|\langle x,y\rangle|\leq\|x\|\|y\|$.
  1. Solution
    1. By the triangle inequality, we have

      $\begin{aligned} &\|x\|-\|y\|=\|(x-y)+y\|-\|y\|\leq\|x-y\|+\|y\|-\|y\|=\|x-y\|,\\&\|y\|-\|x\|=\|(y-x)+x\|-\|x\|\leq\|y-x\|+\|x\|-\|x\|=\|x-y\|.\end{aligned}$

      Thus, we obtain $-\|x-y\|\leq\|x\|-\|y\|\leq\|x-y\|$, i.e., $|\|x\|-\|y\||\leq\|x-y\|$.
    2. Let $f:X\to\mathbb R$ be defined by $f(x)=\|x\|$ for $x\in X$. For any $\epsilon>0$, we pick $\delta=\epsilon$. Then for $x,y\in X$ and $\|x-y\|<\delta$, we use (a) to find

      $|f(x)-f(y)|=|\|x\|-\|y\||\leq\|x-y\|<\delta=\epsilon$.

      This shows $f$ is continuous on $X$.
    3. If $y=0$, then we have $\|y\|=\sqrt{\langle0,0\rangle}=0$ and

      $\begin{aligned}\langle x,0\rangle&=\overline{\langle0,x\rangle}=\overline{\langle0+0,x\rangle}=\overline{\langle0,x\rangle+\langle0,x\rangle}\\&=\overline{\langle0,x\rangle}+\overline{\langle0,x\rangle}=\langle x,0\rangle+\langle x,0\rangle,\end{aligned}$

      which shows $\langle x,0\rangle=0$. Thus, $|\langle x,0\rangle|=0=\|x\|\|y\|$, i.e., the equality holds true (even over the complex field). Now we suppose $y\neq0$ and consider the function $g:\mathbb R\to\mathbb R$ defined by $g(t)=\|x-ty\|^2$. Clearly, $g$ is nonnegative on $\mathbb R$. Then we observe that

      $\begin{aligned}0\leq g(t)=\|x-ty\|^2&=\langle x-ty,x-ty\rangle=\langle x,x-ty\rangle-t\langle y,x-ty\rangle\\&=\langle x-ty,x\rangle-t\langle x-ty,y\rangle\quad\text{(since $X$ is a real inner product space)}\\&=\langle x,x\rangle-t\langle y,x\rangle-t\langle x,y\rangle+t^2\langle y,y\rangle\\&=\|x\|^2-2t\langle x,y\rangle+t^2\|y\|^2\quad\text{for}~t\in\mathbb R.\end{aligned}$

      Taking $\displaystyle t=\frac{\langle x,y\rangle}{\|y\|^2}$, we get

      $\|x\|^2-\frac{2\langle x,y\rangle^2}{\|y\|^2}+\frac{\langle x,y\rangle^2}{\|y\|^4}\cdot\|y\|^2\geq0,$

      which implies $\langle x,y\rangle^2\leq\|x\|^2\|y\|^2$, i.e., $|\langle x,y\rangle|\leq\|x\|\|y\|$. The proof is complete.
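
      Remark. The two inequalities above are easy to sanity-check numerically. The sketch below (not part of the proof; it assumes NumPy is available) tests the reverse triangle inequality and the Cauchy-Schwarz inequality on random vectors in $\mathbb R^{10}$.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      for _ in range(1000):
          x = rng.normal(size=10)
          y = rng.normal(size=10)
          # reverse triangle inequality: | ||x|| - ||y|| | <= ||x - y||
          assert abs(np.linalg.norm(x) - np.linalg.norm(y)) <= np.linalg.norm(x - y) + 1e-12
          # Cauchy-Schwarz: |<x, y>| <= ||x|| * ||y||
          assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
      print("both inequalities hold on all samples")
      ```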

  2. Use Hölder's inequality to prove the generalized Hölder's inequality

    $\displaystyle\int_\Omega\!|uvw|\,\mathrm dx\leq\|u\|_p\|v\|_q\|w\|_r$

    for $u\in L^p(\Omega)$, $v\in L^q(\Omega)$, and $w\in L^r(\Omega)$, where $p^{-1}+q^{-1}+r^{-1}=1$.
  3. Solution. First, we claim that $vw\in L^{p'}$, where $p'$ satisfies $\displaystyle(p')^{-1}=1-p^{-1}=q^{-1}+r^{-1}$. Since $v\in L^q(\Omega)$ and $w\in L^r(\Omega)$, we have $v^{p'}\in L^{q/p'}(\Omega)$ and $w^{p'}\in L^{r/p'}(\Omega)$. Since $\displaystyle\frac1{q/p'}+\frac1{r/p'}=1$, we can apply Hölder's inequality to get $v^{p'}w^{p'}\in L^1(\Omega)$ and

    $\displaystyle\|vw\|_{p'}^{p'}=\int_\Omega\!|vw|^{p'}\,\mathrm dx=\int_\Omega\!|v^{p'}w^{p'}|\,\mathrm dx\leq\|v^{p'}\|_{q/p'}\|w^{p'}\|_{r/p'}=\left(\int_\Omega\!|v|^q\,\mathrm dx\right)^{p'/q}\left(\int_\Omega\!|w|^r\,\mathrm dx\right)^{p'/r}=\|v\|_q^{p'}\|w\|_r^{p'},$

    which gives $\|vw\|_{p'}\leq\|v\|_q\|w\|_r$. Therefore, we can use Hölder's inequality again to get

    $\displaystyle\int_\Omega\!|uvw|\,\mathrm dx=\int_\Omega\!|u||vw|\,\mathrm dx\leq\|u\|_p\|vw\|_{p'}\leq\|u\|_p\|v\|_q\|w\|_r.$

    The proof is complete.
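
    Remark. As a quick numerical sanity check (not part of the proof, assuming NumPy is available), the same inequality can be tested for the discrete measure obtained by replacing the integral over $\Omega=(0,1)$ with a Riemann sum on a uniform grid; the argument above applies verbatim to that measure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, dx = 1000, 1.0 / 1000            # uniform grid on (0, 1)
    p, q, r = 2.0, 3.0, 6.0             # 1/p + 1/q + 1/r = 1
    u, v, w = rng.normal(size=(3, n))   # arbitrary sample functions

    norm = lambda f, s: (np.sum(np.abs(f) ** s) * dx) ** (1 / s)
    lhs = np.sum(np.abs(u * v * w)) * dx
    rhs = norm(u, p) * norm(v, q) * norm(w, r)
    print(lhs, "<=", rhs, lhs <= rhs + 1e-12)
    ```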

  4. Use Hölder's inequality to prove

    $\|u\|_q\leq\|u\|_p^{\lambda}\|u\|_r^{1-\lambda}\quad\text{for}~u\in L^p(\Omega)\cap L^r(\Omega)$,

    where $p\leq q\leq r$ and $q^{-1}=\lambda p^{-1}+(1-\lambda)r^{-1}$.
  5. Solution. Since $\displaystyle\frac1{p/(\lambda q)}+\frac1{r/((1-\lambda)q)}=1$, we can use Hölder's inequality to get

    $\begin{aligned}\|u\|_q&=\|u^q\|_1^{1/q}=\|u^{\lambda q}\cdot u^{(1-\lambda)q}\|_1^{1/q}\leq\|u^{\lambda q}\|_{\frac p{\lambda q}}^{1/q}\|u^{(1-\lambda)q}\|_{\frac r{(1-\lambda)q}}^{1/q}\\&=\left(\int_\Omega\!|u^{\lambda q}|^{\frac p{\lambda q}}\,\mathrm dx\right)^{\frac\lambda p}\left(\int_\Omega\!|u^{(1-\lambda)q}|^{\frac r{(1-\lambda)q}}\,\mathrm dx\right)^{\frac{(1-\lambda)}r}=\left[\left(\int_\Omega\!|u|^p\,\mathrm dx\right)^{\frac1p}\right]^\lambda\left[\left(\int_\Omega\!|u|^r\,\mathrm dx\right)^{\frac1r}\right]^{1-\lambda}=\|u\|_p^\lambda\|u\|_r^{1-\lambda}.\end{aligned}$

    Remark. When $\Omega$ has a finite Lebesgue measure, i.e., $\mu(\Omega)<\infty$, $L^r(\Omega)\subseteq L^p(\Omega)$ for $p<r$. To see this, we let $f\in L^r(\Omega)$ and define $\Omega_1=\{x\in\Omega\,:\,|f|\leq1\}$ and $\Omega_2=\{x\in\Omega\,:\,|f|>1\}$. Then

    $\displaystyle\int_\Omega\!|f|^p\,\mathrm d\mu=\int_{\Omega_1}\!|f|^p\,\mathrm d\mu+\int_{\Omega_2}\!|f|^p\,\mathrm d\mu\leq\int_{\Omega_1}\!1\,\mathrm d\mu+\int_{\Omega_2}|f|^r\,\mathrm d\mu<\infty,$

    which shows $f\in L^p(\Omega)$. Here we have used the fact that $\mu(\Omega_1)\leq\mu(\Omega)<\infty$. However, the inclusion may fail when $\mu(\Omega)=\infty$: for example, $f(x)=x^{-1}$ on $(1,\infty)$ belongs to $L^2$ but not to $L^1$. One therefore has to modify the statement in that case.
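
    Remark. The interpolation inequality can also be checked numerically on a grid (again only a sanity check, assuming NumPy is available); as above, the discrete version holds by the same Hölder argument.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, dx = 1000, 1.0 / 1000
    p, q, r = 2.0, 4.0, 8.0                    # p <= q <= r
    lam = (1 / q - 1 / r) / (1 / p - 1 / r)    # so that 1/q = lam/p + (1 - lam)/r
    u = rng.normal(size=n)

    norm = lambda f, s: (np.sum(np.abs(f) ** s) * dx) ** (1 / s)
    lhs, rhs = norm(u, q), norm(u, p) ** lam * norm(u, r) ** (1 - lam)
    print(lam, lhs, "<=", rhs, lhs <= rhs + 1e-12)
    ```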


  6. Let $\ell^2$ denote the space of all sequences of real numbers $\{a_n\}_{n=1}^\infty$ such that $\sum a_n^2<\infty$.
    1. Verify that $\langle\{a_n\},\{b_n\}\rangle=\sum a_nb_n$ is an inner product on $\ell^2$.
    2. Verify that $\ell^2$ is a Hilbert space (i.e., complete in the induced norm).
  7. Solution
    1. Note first that the pairing is well defined: for $\{a_n\},\{b_n\}\in\ell^2$, we have $\displaystyle\sum|a_nb_n|\leq\frac12\sum(a_n^2+b_n^2)<\infty$, so the series $\sum a_nb_n$ converges absolutely. Let ${\bf 0}=\{0\}_{n=1}^\infty$. Then $\langle{\bf 0},{\bf 0}\rangle=\sum0^2=0$. For $\{a_n\}\neq{\bf 0}$, i.e., there exists $m\in\mathbb N$ such that $a_m\neq0$, we have

      $\langle\{a_n\},\{a_n\}\rangle=\sum a_n^2\geq a_m^2>0.$

      Since $\{a_n\}$ and $\{b_n\}$ are sequences of real numbers, it is clear that

      $\langle\{a_n\},\{b_n\}\rangle=\sum a_nb_n=\overline{\sum b_na_n}=\overline{\langle\{b_n\},\{a_n\}\rangle}$.

      For $\lambda,\mu\in\mathbb R$ and $\{a_n\},\{b_n\},\{c_n\}\in\ell^2$, we have

      $\langle\lambda\{a_n\}+\mu\{b_n\},\{c_n\}\rangle=\sum(\lambda a_n+\mu b_n)c_n=\lambda\sum a_nc_n+\mu\sum b_nc_n=\lambda\langle\{a_n\},\{c_n\}\rangle+\mu\langle\{b_n\},\{c_n\}\rangle$.

      Therefore, $\langle\cdot,\cdot\rangle$ is an inner product on $\ell^2$.
    2. Let $\{a_n\}^{(k)}=\{a_n^{(k)}\}$, $k\in\mathbb N$, be a Cauchy sequence in $\ell^2$, i.e., for any $\epsilon>0$, there exists $K_1\in\mathbb N$ such that $\|\{a_n\}^{(k)}-\{a_n\}^{(m)}\|<\epsilon$ for $k,m>K_1$, where $\|\{a_n\}\|=\sqrt{\langle\{a_n\},\{a_n\}\rangle}$. In addition, every Cauchy sequence is bounded: applying the Cauchy condition with $\epsilon=1$, there exists $K_2\in\mathbb N$ such that $\|\{a_n^{(k)}\}\|<1+\|\{a_n^{(K_2+1)}\}\|=:M$ for $k>K_2$. For any $n\in\mathbb N$ and $k,m>K_1$, we have

      $\displaystyle|a_n^{(k)}-a_n^{(m)}|\leq\sqrt{(a_n^{(k)}-a_n^{(m)})^2}\leq\sqrt{\sum_{j=1}^\infty(a_j^{(k)}-a_j^{(m)})^2}=\|\{a_n\}^{(k)}-\{a_n\}^{(m)}\|<\epsilon.$

      Thus, for any fixed $n\in\mathbb N$, $\{a_n^{(k)}\}_{k=1}^\infty$ is a Cauchy sequence in $\mathbb R$, so $\displaystyle\lim_{k\to\infty}a_n^{(k)}$ exists; we denote it by $a_n^{(\infty)}$. Now we claim that the sequence $\{a_n^{(\infty)}\}$ is in $\ell^2$. For any $N\in\mathbb N$, we notice that

      $\displaystyle\sum_{n=1}^N(a_n^{(\infty)})^2=\sum_{n=1}^N\lim_{k\to\infty}(a_n^{(k)})^2=\lim_{k\to\infty}\sum_{n=1}^N(a_n^{(k)})^2\leq\limsup_{k\to\infty}\|\{a_n\}^{(k)}\|^2\leq M^2$.

      Since the upper bound for $\displaystyle\sum_{n=1}^N(a_n^{(\infty)})^2$ is independent of $N$, the partial sums are bounded and increasing, so the series converges, i.e., $\displaystyle\sum_{n=1}^\infty(a_n^{(\infty)})^2<\infty$ and hence $\{a_n^{(\infty)}\}\in\ell^2$. Finally, we note that

      $\begin{aligned}\sum_{n=1}^N(a_n^{(k)}-a_n^{(\infty)})^2&=\sum_{n=1}^N\left(a_n^{(k)}-\lim_{m\to\infty}a_n^{(m)}\right)^2=\lim_{m\to\infty}\sum_{n=1}^N(a_n^{(k)}-a_n^{(m)})^2\\&\leq\limsup_{m\to\infty}\|\{a_n\}^{(k)}-\{a_n\}^{(m)}\|^2\leq\epsilon^2\quad\text{for}~N\in\mathbb N~\text{and}~k>K_1.\end{aligned}$

      Since this bound is uniform in $N$, we obtain $\|\{a_n^{(k)}\}-\{a_n^{(\infty)}\}\|\leq\epsilon$ for $k>K_1$, which implies $\displaystyle\lim_{k\to\infty}\|\{a_n^{(k)}\}-\{a_n^{(\infty)}\}\|=0$, i.e., $\{a_n^{(k)}\}$ converges to $\{a_n^{(\infty)}\}$ in $\ell^2$. Therefore, $\ell^2$ is a Hilbert space.
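
      Remark. The construction can be illustrated numerically (a sketch assuming NumPy, not part of the proof): the truncations of the square-summable sequence $(1/n)$ form a Cauchy sequence in $\ell^2$ whose coordinatewise limit $(1/n)$ is also its $\ell^2$-limit.

      ```python
      import numpy as np

      M = 100_000                              # work with the first M coordinates
      a_inf = 1.0 / np.arange(1, M + 1)        # the limit sequence (1/n)

      def a(k):
          """a^{(k)}: first k coordinates of (1/n), then zeros."""
          out = np.zeros(M)
          out[:k] = a_inf[:k]
          return out

      for k in (10, 100, 1000, 10_000):
          print(k, np.linalg.norm(a(k) - a_inf))   # ell^2 distance ~ k**(-1/2) -> 0
      ```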

  8. Let $\Omega=(0,1)$.
    1. Verify that $\mathrm du/\mathrm dx$ (given in Example 4) is the weak derivative of $u$.
    2. For what values of $\alpha$ is $u(x)=|x|^\alpha$ in $H^{1,2}(\Omega)$?
  9. Solution
    1. For $\phi\in C_0^\infty(\Omega)=C_0^\infty((0,1))$, it is easy to verify that $\displaystyle\int_\Omega\!u\frac{\mathrm d\phi}{\mathrm dx}\,\mathrm dx=-\int_\Omega\!\phi\frac{\mathrm du}{\mathrm dx}\,\mathrm dx$ as follows:

      $\begin{aligned}\int_0^1\!u\frac{\mathrm d\phi}{\mathrm dx}\,\mathrm dx&=\int_0^{1/2}\!x\frac{\mathrm d\phi}{\mathrm dx}\,\mathrm dx+\int_{1/2}^1\!(1-x)\frac{\mathrm d\phi}{\mathrm dx}\,\mathrm dx\\&=x\phi(x)\Big|_0^{1/2}-\int_0^{1/2}\!\phi(x)\cdot1\,\mathrm dx+(1-x)\phi(x)\Big|_{1/2}^1-\int_{1/2}^1\!\phi(x)\cdot(-1)\,\mathrm dx\\&=-\int_0^{1/2}\!\phi(x)\,\mathrm dx+\int_{1/2}^1\!\phi(x)\,\mathrm dx=-\int_0^1\!\phi\frac{\mathrm du}{\mathrm dx}\,\mathrm dx.\end{aligned}$

      Thus, $\mathrm du/\mathrm dx$ is the weak derivative of $u$.

      Remark. The original question reads "Compute the weak derivative of the function $u$ of Example 4." However, the weak derivative is already given in Example 4 on page 166, so I have modified the statement so that it makes sense.
    2. For $x\in\Omega=(0,1)$, the function $u$ can be written as $u(x)=x^\alpha$. Then $u'(x)=\alpha x^{\alpha-1}$ for $x\in(0,1)$. Clearly, $u\in C^1(\Omega)$. Hence it suffices to find the values of $\alpha$ for which $\|u\|_{1,2}^2<\infty$. When $\alpha>1/2$, we find

      $\displaystyle\|u\|_{1,2}^2=\int_0^1\!(|u'|^2+|u|^2)\,\mathrm dx=\int_0^1(\alpha^2x^{2\alpha-2}+x^{2\alpha})\,\mathrm dx=\left.\frac{\alpha^2x^{2\alpha-1}}{2\alpha-1}+\frac{x^{2\alpha+1}}{2\alpha+1}\right|_0^1=\frac{\alpha^2}{2\alpha-1}+\frac1{2\alpha+1}<\infty$.

      However, when $\alpha=1/2$, we have

      $\displaystyle\|u\|_{1,2}^2=\int_0^1\!(|u'|^2+|u|^2)\,\mathrm dx=\int_0^1\left(\frac14x^{-1}+x\right)\,\mathrm dx=\left.\frac{\ln x}4+\frac{x^2}2\right|_0^1=\infty$.

      For $\alpha<1/2$ with $\alpha\neq0$ and $\alpha\neq-1/2$, we have

      $\displaystyle\|u\|_{1,2}^2=\int_0^1(|u'|^2+|u|^2)\,\mathrm dx=\int_0^1(\alpha^2x^{2\alpha-2}+x^{2\alpha})\,\mathrm dx=\left.\frac{\alpha^2x^{2\alpha-1}}{2\alpha-1}+\frac{x^{2\alpha+1}}{2\alpha+1}\right|_0^1=\infty.$

      For $\alpha=-1/2$, we have

      $\displaystyle\|u\|_{1,2}^2=\int_0^1(|u'|^2+|u|^2)\,\mathrm dx=\int_0^1\!\left(\frac14x^{-3}+x^{-1}\right)\,\mathrm dx=\left.-\frac{x^{-2}}8+\ln x\right|_0^1=\infty$.

      Finally, for $\alpha=0$ we have $u\equiv1$, so $\|u\|_{1,2}^2=1<\infty$. Therefore, $u\in H^{1,2}(\Omega)$ if and only if $\alpha>1/2$ or $\alpha=0$.
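
      Remark. The case analysis above can be double-checked by evaluating the antiderivative on $(\epsilon,1)$ and letting $\epsilon\to0^+$; the short sketch below (assuming NumPy, purely a sanity check) does this for one value of $\alpha$ on each side of $1/2$.

      ```python
      import numpy as np

      def h12_norm_sq(alpha, eps):
          """Exact value of int_eps^1 (alpha^2 x^{2a-2} + x^{2a}) dx for alpha not in {0, 1/2, -1/2}."""
          grad_part = alpha**2 * (1.0 - eps**(2 * alpha - 1)) / (2 * alpha - 1)
          func_part = (1.0 - eps**(2 * alpha + 1)) / (2 * alpha + 1)
          return grad_part + func_part

      for alpha in (0.8, 0.4):
          print(alpha, [round(h12_norm_sq(alpha, eps), 3) for eps in (1e-2, 1e-4, 1e-6)])
      # alpha = 0.8: values stabilise at alpha^2/(2*alpha-1) + 1/(2*alpha+1), so u is in H^{1,2};
      # alpha = 0.4: values blow up as eps -> 0, so u is not in H^{1,2}.
      ```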

  10. Prove that $H_0^{1,p}(\mathbb R^n)=H^{1,p}(\mathbb R^n)$ for $1\leq p<\infty$.
  11. Solution. For $u\in C_0^1(\mathbb R^n)$, we have $\|u\|_{1,p}<\infty$, which implies

    $C_0^1(\mathbb R^n)\subseteq\{u\in C^1(\mathbb R^n)\,:\,\|u\|_{1,p}<\infty\}\subseteq\overline{\{u\in C^1(\mathbb R^n)\,:\,\|u\|_{1,p}<\infty\}}=H^{1,p}(\mathbb R^n).$

    Taking the closure in the norm $\|\cdot\|_{1,p}$, we obtain $H_0^{1,p}(\mathbb R^n)=\overline{C_0^1(\mathbb R^n)}\subseteq H^{1,p}(\mathbb R^n)$. Conversely, let $u\in H^{1,p}(\mathbb R^n)$, i.e., there exists a sequence $\{u_m\}_{m=1}^\infty\subseteq C^1(\mathbb R^n)$ with $\|u_m\|_{1,p}<\infty$ and $\displaystyle\lim_{m\to\infty}\|u_m-u\|_{1,p}=0$. Let $\phi\in C_0^\infty([0,\infty))$ satisfy $\phi(t)=1$ for $0\leq t\leq1$, $\phi(t)=0$ for $2\leq t<\infty$, and $0\leq\phi(t)\leq1$ for $t\geq0$. Then we define $u_{m,k}(x)=u_m(x)\phi(k^{-1}|x|)$ for $x\in\mathbb R^n$ and $m,k\in\mathbb N$. Clearly, $u_{m,k}\in C_0^1(\mathbb R^n)$. For $i\in\{1,\dots,n\}$, we have

    $\displaystyle\frac{\partial u_{m,k}}{\partial x_i}(x)=\frac{\partial u_m}{\partial x_i}(x)\phi(k^{-1}|x|)+u_m(x)\phi'(k^{-1}|x|)\cdot\frac{x_i}{k|x|}\quad\text{for}~x\in\mathbb R^n.$

    For any fixed $m\in\mathbb N$, we find

    $\begin{aligned}\left\|\frac{\partial u_{m,k}}{\partial x_i}-\frac{\partial u_m}{\partial x_i}\right\|_p^p&=\int_{\mathbb R^n}\!\left|\frac{\partial u_m}{\partial x_i}(x)[\phi(k^{-1}|x|)-1]+u_m(x)\phi'(k^{-1}|x|)\cdot\frac{x_i}{k|x|}\right|^p\,\mathrm dx\\&=\int_{|x|\geq k}\!\left|\frac{\partial u_m}{\partial x_i}(x)[\phi(k^{-1}|x|)-1]+u_m(x)\phi'(k^{-1}|x|)\cdot\frac{x_i}{k|x|}\right|^p\,\mathrm dx\end{aligned}$

    Recall an elementary inequality: $(|a|+|b|)^p\leq2^{p-1}(|a|^p+|b|^p)$ for $p\geq1$. Then we have

    $\begin{aligned}\left\|\frac{\partial u_{m,k}}{\partial x_i}-\frac{\partial u_m}{\partial x_i}\right\|_p^p\leq2^{p-1}\left(\int_{|x|\geq k}\!\left|\frac{\partial u_m}{\partial x_i}\right|^p\,\mathrm dx+\frac{\sup\limits_{[0,\infty)}|\phi'|^p}{k^p}\int_{|x|\geq k}\!|u_m|^p\,\mathrm dx\right)\to0~\text{as}~k\to\infty.\end{aligned}$

    Similarly, $\displaystyle\|u_{m,k}-u_m\|_p^p=\int_{|x|\geq k}\!|u_m|^p\,|\phi(k^{-1}|x|)-1|^p\,\mathrm dx\leq\int_{|x|\geq k}\!|u_m|^p\,\mathrm dx\to0$ as $k\to\infty$. Thus, for each $m\in\mathbb N$ there exists $k=k(m)\in\mathbb N$ such that $u_{m,k(m)}\in C_0^1(\mathbb R^n)$ satisfies $\|u_{m,k(m)}-u_m\|_{1,p}<1/m$, and hence $\|u_{m,k(m)}-u\|_{1,p}\leq\|u_{m,k(m)}-u_m\|_{1,p}+\|u_m-u\|_{1,p}\to0$ as $m\to\infty$, i.e., $u\in H_0^{1,p}(\mathbb R^n)$. The proof is complete.

    Remark. The exercise in the textbook does not specify the range of $p$; the range $1\leq p<\infty$ stated above is the natural one in view of the definitions of $H_0^{1,p}(\mathbb R^n)$ and $H^{1,p}(\mathbb R^n)$ on page 166 of the textbook.


  12. Suppose $\Omega$ is a bounded domain, and let $S\equiv C(\bar\Omega)\subset X\equiv L^p(\Omega)$. Pick $x_0\in\Omega$. Does the functional $F_{x_0}(u)=u(x_0)$ for $u\in S$ extend to a bounded linear functional on $X$? If not, why not?
  13. Solution. No. Suppose by contradiction that $F_{x_0}$ can be extended to a bounded linear functional $\tilde F_{x_0}:X\to\mathbb R$. Then for any $u\in X$ with $\|u\|_p=1$, we have $|\tilde F_{x_0}(u)|\leq\|\tilde F_{x_0}\|$, where $\|\tilde F_{x_0}\|$ is a constant independent of $u$. However, there exists a function $\bar u\in C(\bar\Omega)$ satisfying $\bar u(x_0)\geq\|\tilde F_{x_0}\|+1$ and $\|\bar u\|_p=1$: for small $\delta>0$, let $v_\delta(x)=\max\{0,\,1-|x-x_0|/\delta\}$, which is continuous on $\bar\Omega$ with $v_\delta(x_0)=1$ and $\|v_\delta\|_p\leq|B_\delta(x_0)|^{1/p}\to0$ as $\delta\to0^+$, and set $\bar u=v_\delta/\|v_\delta\|_p$, so that $\bar u(x_0)=1/\|v_\delta\|_p$ can be made as large as we wish. Then we get the contradiction

    $\|\tilde F_{x_0}\|+1\leq\bar u(x_0)=|\tilde F_{x_0}(\bar u)|\leq\|\tilde F_{x_0}\|$.

    Therefore, $F_{x_0}$ cannot be extended to a bounded linear functional on $X$.
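
    Remark. The bump-function construction above is easy to visualise numerically. The sketch below (assuming NumPy; a one-dimensional illustration with $\Omega=(0,1)$, $x_0=1/2$ and $p=2$, not part of the proof) shows that the normalized bumps $\bar u=v_\delta/\|v_\delta\|_p$ keep $\|\bar u\|_p=1$ while $\bar u(x_0)$ blows up as $\delta\to0^+$.

    ```python
    import numpy as np

    p, x0 = 2.0, 0.5
    x = np.linspace(0.0, 1.0, 2_000_001)    # fine grid on Omega = (0, 1)
    dx = x[1] - x[0]

    for delta in (1e-1, 1e-2, 1e-3):
        v = np.maximum(0.0, 1.0 - np.abs(x - x0) / delta)   # triangular bump with v(x0) = 1
        v_norm = (np.sum(v**p) * dx) ** (1 / p)              # ~ (2*delta/(p+1))**(1/p)
        print(delta, round(1.0 / v_norm, 2))                 # value of v/||v||_p at x0
    # the point value 1/||v||_p blows up as delta -> 0, so no bound |u(x0)| <= C ||u||_p can hold.
    ```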

  14. If $X$ is a Hilbert space and $S$ is any subset of $X$, define

    $S^\bot=\{y\in X\,:\,\langle x,y\rangle=0~\text{for all}~x\in S\}.$

    1. Show that $S^\bot$ is a closed subspace of $X$.
    2. Show that $S\cap S^\bot$ can contain only the zero vector.
    3. If $S\subset T$ are both subsets of $X$, show that $T^\bot\subset S^\bot$.
    4. If $\bar S$ is the closure of $S$ in $X$, show that $S^\bot=\bar S^\bot$.
  15. Solution
    1. For $y_1,y_2\in S^\bot$ and $c_1,c_2\in\mathbb R$, we can find

      $\langle x,c_1y_1+c_2y_2\rangle=\overline{\langle c_1y_1+c_2y_2,x\rangle}=\overline{c_1\langle y_1,x\rangle+c_2\langle y_2,x\rangle}=c_1\overline{\langle y_1,x\rangle}+c_2\overline{\langle y_2,x\rangle}=c_1\langle x,y_1\rangle+c_2\langle x,y_2\rangle=0\quad\text{for}~x\in S$,

      which shows $c_1y_1+c_2y_2\in S^\bot$. To show the closedness of $S^\bot$, we let $\{y_n\}_{n=1}^\infty$ be a sequence in $S^\bot$ that converges to some $y\in X$, i.e., $\displaystyle\lim_{n\to\infty}\|y_n-y\|=0$. By the Cauchy-Schwarz inequality, for any $x\in S$ we have

      $|\langle x,y\rangle|=|\langle x,y-y_n\rangle|\leq\|x\|\|y-y_n\|.$

      By the squeeze theorem, we have $\langle x,y\rangle=0$, which means $y\in S^\bot$. Therefore, $S^\bot$ is a closed subspace.
    2. Let $x_0\in S\cap S^\bot$. Since $x_0\in S^\bot$, we know $\langle x,x_0\rangle=0$ for all $x\in S$. In particular, $x_0\in S$ so $\langle x_0,x_0\rangle=0$, which implies $x_0=0$ by the definition of inner product. Thus, $S\cap S^\bot\subseteq\{0\}$ and hence $S\cap S^\bot=\{0\}$.
    3. For $y\in T^\bot$, we have $\langle x,y\rangle=0$ for all $x\in T$. Since $S\subset T$, the equality $\langle x,y\rangle=0$ holds true for all $x\in S$. This means $y\in S^\bot$. Therefore, $T^\bot\subset S^\bot$.
    4. Since $S\subseteq\bar S$, by (c), we have $\bar S^\bot\subseteq S^\bot$. It suffices to show that $S^\bot\subseteq\bar S^\bot$. Fix $y\in S^\bot$ arbitrarily. For $x\in\bar S$, there exists a sequence $\{x_n\}_{n=1}^\infty\subseteq S$ such that $\displaystyle\lim_{n\to\infty}\|x_n-x\|=0$. Note that $\langle x_n,y\rangle=0$ for all $n\in\mathbb N$. Then by the Cauchy-Schwarz inequality, we have

      $|\langle x,y\rangle|=|\langle x-x_n,y\rangle|\leq\|x-x_n\|\|y\|.$

      By the squeeze theorem, we have $\langle x,y\rangle=0$ for any $x\in\bar S$, which implies $y\in \bar S^\bot$ and hence $S^\bot\subseteq\bar S^\bot$. The proof is complete.

  16. For vector-valued functions $\vec u=(u_1,\dots,u_N)$ on a domain $\Omega\subset\mathbb R^n$, it is natural to define $L^p(\Omega,\mathbb R^N)$ as the functions $\vec u:\Omega\to\mathbb R^N$ for which $|\vec u|\in L^p(\Omega)$; that is,

    $\displaystyle\|\vec u\|_p\equiv\left(\int_\Omega\!|\vec u|^p\,\mathrm dx\right)^{1/p}<\infty,\quad\text{where}~|\vec u|^2=\sum_{i=1}^Nu_i^2.$

    Similarly, if each $u_i\in C^1(\Omega)$, we may define

    $\displaystyle\|\nabla\vec u\|_p=\left(\int_\Omega\!|\nabla\vec u|^p\,\mathrm dx\right)^{1/p},\quad\text{where}~|\nabla\vec u|^2=\sum_{i=1}^N\sum_{j=1}^n\left(\frac{\partial u_i}{\partial x_j}\right)^2.$

    1. Define $H_0^{1,p}(\Omega,\mathbb R^N)$.
    2. If $N=n$, show that $\text{div}:H_0^{1,p}(\Omega,\mathbb R^n)\to L^p(\Omega)$ is a continuous linear operator.
    3. Show that $\tilde H_0^{1,p}(\Omega,\mathbb R^n)\equiv\{\vec u\in H_0^{1,p}(\Omega,\mathbb R^n)\,:\,\text{div}(\vec u)=0\}$ is a closed subspace of $H_0^{1,p}(\Omega,\mathbb R^n)$.
  17. Solution
    1. In analogy with the definition of $H_0^{1,p}(\mathbb R^n)$, the space $H_0^{1,p}(\Omega,\mathbb R^N)$ can be defined by

      $H_0^{1,p}(\Omega,\mathbb R^N)=\overline{C_0^1(\Omega,\mathbb R^N)}$

      where the bar denotes the completion in the following norm

      $\|\vec u\|_{1,p}=(\|\vec u\|_p^p+\|\nabla\vec u\|_p^p)^{1/p}$ for $\vec u\in C_0^1(\Omega,\mathbb R^N)$.

    2. Recall that for $\vec u=(u_1,\dots,u_n)\in H_0^{1,p}(\Omega,\mathbb R^n)$, $\displaystyle\text{div}(\vec u)=\sum_{i=1}^n\frac{\partial u_i}{\partial x_i}$. Since $\partial u_i/\partial x_i\in L^p(\Omega)$, the divergence operator is well-defined. For any $\vec u,\vec v\in H_0^{1,p}(\Omega,\mathbb R^n)$ and $c\in\mathbb R$, we have

      $\displaystyle\text{div}(c\vec u+\vec v)=\sum_{i=1}^n\frac{\partial}{\partial x_i}(cu_i+v_i)=c\sum_{i=1}^n\frac{\partial u_i}{\partial x_i}+\sum_{i=1}^n\frac{\partial v_i}{\partial x_i}=c\cdot\text{div}(\vec u)+\text{div}(\vec v)$,

      which means $\text{div}$ is a linear operator. By Theorem 1 on page 167 of the textbook, the continuity of a linear operator is equivalent to its boundedness. For nonzero $\vec u\in H_0^{1,p}(\Omega,\mathbb R^n)$ with $\|\vec u\|_{1,p}=1$, the Cauchy-Schwarz inequality implies

      $\displaystyle\left|\sum_{i=1}^n\frac{\partial u_i}{\partial x_i}\right|^p\leq n^{p/2}\left(\sum_{i=1}^n\left(\frac{\partial u_i}{\partial x_i}\right)^2\right)^{p/2}\leq n^{p/2}|\nabla\vec u|^p$.

      Then we obtain

      $\displaystyle\|\text{div}(\vec u)\|_p=\left(\int_\Omega\!\left|\sum_{i=1}^n\frac{\partial u_i}{\partial x_i}\right|^p\,\mathrm dx\right)^{1/p}\leq\left(n^{p/2}\int_\Omega\!|\nabla\vec u|^p\,\mathrm dx\right)^{1/p}=\sqrt n\|\nabla\vec u\|_p\leq\sqrt n\|\vec u\|_{1,p}=\sqrt n$.

      Hence the operator $\text{div}$ is bounded. The proof is complete.
    3. Clearly, $\tilde H_0^{1,p}(\Omega,\mathbb R^n)$ is a subset of $H_0^{1,p}(\Omega,\mathbb R^n)$ and is nonempty because the zero function is in $\tilde H_0^{1,p}(\Omega,\mathbb R^n)$. By (b), it is easy to see that $c\vec u+\vec v\in\tilde H_0^{1,p}(\Omega,\mathbb R^n)$ for any $\vec u,\,\vec v\in\tilde H_0^{1,p}(\Omega,\mathbb R^n)$ and $c\in\mathbb R$. Thus, $\tilde H_0^{1,p}(\Omega,\mathbb R^n)$ is a subspace of $H_0^{1,p}(\Omega,\mathbb R^n)$. To see that $\tilde H_0^{1,p}(\Omega,\mathbb R^n)$ is closed, we pick $\vec u\in\overline{\tilde H_0^{1,p}(\Omega,\mathbb R^n)}$. By the definition of closure, there exists a sequence of functions $\{\vec u_k\}_{k=1}^\infty\subseteq\tilde H_0^{1,p}(\Omega,\mathbb R^n)$ such that $\displaystyle\lim_{k\to\infty}\|\vec u_k-\vec u\|_{1,p}=0$. Since $\vec u_k\in\tilde H_0^{1,p}(\Omega,\mathbb R^n)$, we have $\text{div}(\vec u_k)=0$. By (b), we have

      $\displaystyle\text{div}(\vec u)=\text{div}\left(\lim_{k\to\infty}\vec u_k\right)=\lim_{k\to\infty}\text{div}(\vec u_k)=\lim_{k\to\infty}0=0.$

      This implies $\vec u\in\tilde H_0^{1,p}(\Omega,\mathbb R^n)$. The proof is complete.

      Remark. A direct observation is that $\tilde H_0^{1,p}(\Omega,\mathbb R^n)=H_0^{1,p}(\Omega,\mathbb R^n)\cap\text{div}^{-1}(\{0\})$. Since $\text{div}$ is continuous, the set $\text{div}^{-1}(\{0\})$ is closed, which implies $\tilde H_0^{1,p}(\Omega,\mathbb R^n)$ is closed.
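
      As a numerical aside (assuming NumPy, not part of the proof), the key pointwise bound $\left|\sum_i\partial u_i/\partial x_i\right|^2\leq n\,|\nabla\vec u|^2$ used in (b) is just the Cauchy-Schwarz inequality applied to the diagonal of the Jacobian matrix, and it can be checked on random samples:

      ```python
      import numpy as np

      rng = np.random.default_rng(6)
      n = 5
      for _ in range(1000):
          J = rng.normal(size=(n, n))     # a sample Jacobian (J[i, j] = du_i/dx_j) at one point
          div = np.trace(J)               # sum_i du_i/dx_i
          grad_sq = np.sum(J**2)          # |grad u|^2 = sum_{i,j} (du_i/dx_j)^2
          assert div**2 <= n * grad_sq + 1e-12
      print("pointwise bound |div u|^2 <= n |grad u|^2 holds on all samples")
      ```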

  18. In Lemma 2 in this section, the vector $v\in Y$ was claimed (but not proved) to be unique. Establish this uniqueness.
  19. Solution. Suppose by contradiction that there exist $v'\in Y$ and $w'\in X$ such that $u=v'+w'$ and $\langle w',Y\rangle=0$ but $v'\neq v$. Then from $v+w=u=v'+w'$, we have $w'-w=v-v'\in Y$. Since $\langle w'-w,Y\rangle=\langle w',Y\rangle-\langle w,Y\rangle=0$, we have $\langle w'-w,w'-w\rangle=0$, which means $w'=w$ and hence $v'=v$. This contradicts $v'\neq v$, and the uniqueness of $v$ and $w$ follows.

  20. If $F:X\to Y$ is a bounded linear operator between normed vector spaces $X$ and $Y$, show that the nullspace $N=\{x\in X\,:\,F(x)=0\}$ is a closed subspace of $X$.
  21. Solution. For $x_1,x_2\in N$ and $c\in\mathbb R$, we have $F(cx_1+x_2)=cF(x_1)+F(x_2)=c\cdot0+0=0$, which implies $cx_1+x_2\in N$. Moreover, since $N\subseteq X$ and $0\in N$, $N$ is a subspace of $X$. Note that $F$ is a continuous linear operator due to its boundedness, by Theorem 1 on page 167 of the textbook. Since $\{0\}$ is a closed set, $N=F^{-1}(\{0\})$ is also closed. Therefore, $N$ is a closed subspace and we complete the proof.

    Remark. Alternatively, let $\{x_n\}_{n=1}^\infty$ be a sequence in $N$ (so that $F(x_n)=0$ for all $n\in\mathbb N$) that converges to some $x\in X$. Then by the continuity of $F$, we have $\displaystyle F(x)=F\left(\lim_{n\to\infty}x_n\right)=\lim_{n\to\infty}F(x_n)=\lim_{n\to\infty}0=0$, which implies $x\in N$, i.e., $N$ is a closed set. The proof is complete.


  22. If $Y$ is a subspace of a Hilbert space $X$, define $Y^\bot$ as in Exercise 8. Show $(Y^\bot)^\bot=\bar Y$.
  23. Solution. Recall that the closure of a set $A$ is the smallest closed set containing $A$, i.e., if $A\subseteq B$ and $B$ is closed, then $\bar A\subseteq B$. For $y\in Y$, we know $\langle y,x\rangle=0$ for all $x\in Y^\bot$. Moreover, we have

    $\langle x,y\rangle=\overline{\langle y,x\rangle}=0$ for any $x\in Y^\bot$.

    This means $y\in(Y^\bot)^\bot$, i.e., $Y\subseteq(Y^\bot)^\bot$. Since $(Y^\bot)^\bot$ is closed by part (a) of Exercise 8, we have $\bar Y\subseteq(Y^\bot)^\bot$. It remains to show that $(Y^\bot)^\bot\subseteq\bar Y$. Let $y\in(Y^\bot)^\bot$. Since $\bar Y$ is a closed subspace of $X$, Lemma 2 (the Projection Theorem) gives a decomposition $y=v+w$ with $v\in\bar Y$ and $\langle w,\bar Y\rangle=0$; in particular, $w\in\bar Y^\bot=Y^\bot$ by part (d) of Exercise 8. Since $y\in(Y^\bot)^\bot$ and $w\in Y^\bot$, we have $0=\langle w,y\rangle=\langle w,v\rangle+\langle w,w\rangle=\|w\|^2$, so $w=0$ and $y=v\in\bar Y$. Hence $(Y^\bot)^\bot\subseteq\bar Y$. The proof is complete.

  24. Suppose $S$ is a subspace of a Hilbert space $X$ and $f:S\to\mathbb R$ is a linear functional with $|f(s)|\leq C\|s\|$ for all $s\in S$. Prove that there is a unique linear functional $F:X\to\mathbb R$ extending $f$ (i.e., $F(s)=f(s)$ for $s\in S$) and preserving the norm:

    $\displaystyle\|F\|=\sup_{x\in X,\,x\neq0}\frac{|F(x)|}{\|x\|}=\sup_{s\in S,\,s\neq0}\frac{|F(s)|}{\|s\|}.$

    (Notice that this special case of the Hahn-Banach theorem does not require the axiom of choice.)
  25. Solution. Since $|f(s)|\leq C\|s\|$ for all $s\in S$, $f$ extends uniquely by continuity to a bounded linear functional on the closure $\bar S$, which is a closed subspace of $X$ and hence a Hilbert space itself. By the Riesz Representation Theorem (Theorem 3) applied on $\bar S$, there exists a unique element $x_f\in\bar S$ such that

    $f(s)=\langle s,x_f\rangle\quad\text{for all}~s\in S$.

    Then we define the extension of $f$ to $X$ by

    $F(x)=\langle x,x_f\rangle\quad\text{for}~x\in X$.

    Clearly, $F$ is a bounded linear functional on $X$ with $F(s)=f(s)$ for $s\in S$. Moreover, its operator norm satisfies

    $\displaystyle\|F\|=\|x_f\|=\|f\|=\sup_{s\in S,\,s\neq0}\frac{|f(s)|}{\|s\|}=\sup_{s\in S,\,s\neq0}\frac{|F(s)|}{\|s\|}$.

    Here $\|x_f\|=\|f\|$ holds because $|f(s)|=|\langle s,x_f\rangle|\leq\|s\|\|x_f\|$ for $s\in S$, while choosing $s_n\in S$ with $s_n\to x_f$ gives $|f(s_n)|/\|s_n\|\to\|x_f\|$ whenever $x_f\neq0$.

    Now suppose that $F'$ is another bounded linear extension of $f$ to $X$ preserving the norm, i.e., $\|F'\|=\|f\|$. Then by the Riesz Representation Theorem again, there exists an element $x_{F'}\in X$ such that $F'(x)=\langle x,x_{F'}\rangle$ for all $x\in X$. Since $F'(s)=f(s)=F(s)$ for all $s\in S$, we have $\langle s,x_{F'}-x_f\rangle=0$ for all $s\in S$, and hence $x_{F'}-x_f\in S^\bot=\bar S^\bot$ by part (d) of Exercise 8; in particular, $x_{F'}-x_f$ is orthogonal to $x_f\in\bar S$. But we find

    $\|x_f\|=\|F\|=\|F'\|=\|x_{F'}\|=\|x_f+(x_{F'}-x_f)\|=\sqrt{\|x_f\|^2+\|x_{F'}-x_f\|^2}$,

    which shows $\|x_{F'}-x_f\|=0$ and hence $x_{F'}=x_f$ and $F=F'$.

  26. Suppose $X$ is a Hilbert space and $\{x_n\}_{n=1}^\infty$ is a collection of orthonormal vectors. Given $u\in X$, define its Fourier coefficients by $\alpha_n=\langle u,x_n\rangle$.
    1. Prove Bessel's inequality: $\displaystyle\sum_{n=1}^\infty\alpha_n^2\leq\|u\|^2$.
    2. Let $Y$ be the finite-dimensional subspace of $X$ generated by taking linear combinations of $x_1,\dots,x_N$. Show that $\displaystyle v=\sum_{n=1}^N\alpha_nx_n$ is the element of $Y$ that minimizes $\|u-v\|$ as in Lemma 1 of this section.
    3. Repeat (b) when $Y$ is the infinite-dimensional subspace generated by taking linear combinations and limits of all the $x_n$'s.
  27. Solution
    1. For any $N\in\mathbb N$, we have $\displaystyle\left\|u-\sum_{n=1}^N\alpha_nx_n\right\|^2\geq0$, which implies

      $\displaystyle\|u\|^2-\sum_{n=1}^N\alpha_n^2=\|u\|^2-2\sum_{n=1}^N\alpha_n\langle u,x_n\rangle+\sum_{n=1}^N\alpha_n^2\geq0$.

      Here we have used the fact that $\langle x_n,x_m\rangle=0$ if $n\neq m$ and $\langle x_n,x_n\rangle=1$ for all $n\in\mathbb N$. Since the sequence $\displaystyle\sum_{n=1}^N\alpha_n^2$ is increasing in $N$ and is bounded by $\|u\|^2$, we obtain Bessel's inequality and complete the proof.
    2. For any $v\in Y$, there exist $c_1,\dots,c_N\in\mathbb R$ such that $\displaystyle v=\sum_{n=1}^Nc_nx_n$. Then we find

      $\begin{aligned}\|u-v\|^2&=\|u\|^2-2\sum_{n=1}^Nc_n\langle u,x_n\rangle+\sum_{n=1}^Nc_n^2=\|u\|^2-2\sum_{n=1}^N\alpha_nc_n+\sum_{n=1}^Nc_n^2\\&=\|u\|^2+\sum_{n=1}^N[(c_n-\alpha_n)^2-\alpha_n^2]\geq\|u\|^2-\sum_{n=1}^N\alpha_n^2.\end{aligned}$

      The equality holds if and only if $c_n=\alpha_n$ for $n=1,\dots,N$. Hence $\displaystyle\sum_{n=1}^N\alpha_nx_n$ is the unique element of $Y$ that minimizes $\|u-v\|$.
    3. By (a), $\displaystyle\sum_{n=1}^\infty\alpha_n^2\leq\|u\|^2<\infty$, so the partial sums $\displaystyle v_N=\sum_{n=1}^N\alpha_nx_n$ form a Cauchy sequence in $X$; indeed, $\displaystyle\|v_N-v_M\|^2=\sum_{n=M+1}^N\alpha_n^2$ for $N>M$ by orthonormality. Since $X$ is complete, $\displaystyle v=\sum_{n=1}^\infty\alpha_nx_n=\lim_{N\to\infty}v_N$ exists, and $v\in Y$. Since the inner product $\langle\cdot,\cdot\rangle$ is continuous in each argument because of the Cauchy-Schwarz inequality, for each $m\in\mathbb N$ we have

      $\displaystyle\langle u-v,x_m\rangle=\alpha_m-\lim_{N\to\infty}\sum_{n=1}^N\alpha_n\langle x_n,x_m\rangle=\alpha_m-\alpha_m=0.$

      By linearity and continuity again, $\langle u-v,y\rangle=0$ for every $y\in Y$, since every element of $Y$ is a limit of linear combinations of the $x_n$'s. Hence, for any $w\in Y$, we have $v-w\in Y$ and

      $\|u-w\|^2=\|(u-v)+(v-w)\|^2=\|u-v\|^2+\|v-w\|^2\geq\|u-v\|^2,$

      with equality if and only if $w=v$. Therefore, $\displaystyle v=\sum_{n=1}^\infty\alpha_nx_n$ is again the unique element of $Y$ that minimizes $\|u-v\|$. The proof is complete.
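
      Remark. In a finite-dimensional Hilbert space the statements in (a) and (b) are easy to test numerically. The sketch below (assuming NumPy, not part of the proof) draws a random orthonormal family via a QR factorization, checks Bessel's inequality, and compares $\|u-v\|$ for $v=\sum\alpha_nx_n$ against randomly chosen competitors from $Y$.

      ```python
      import numpy as np

      rng = np.random.default_rng(3)
      dim, N = 20, 5
      X = np.linalg.qr(rng.normal(size=(dim, N)))[0]   # columns x_1, ..., x_N are orthonormal
      u = rng.normal(size=dim)

      alpha = X.T @ u                                   # Fourier coefficients alpha_n = <u, x_n>
      assert np.sum(alpha**2) <= np.linalg.norm(u)**2 + 1e-12   # Bessel's inequality

      v = X @ alpha                                     # v = sum_n alpha_n x_n
      best = np.linalg.norm(u - v)
      others = [np.linalg.norm(u - X @ rng.normal(size=N)) for _ in range(1000)]
      print(best, "<=", min(others), best <= min(others) + 1e-12)
      ```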

  28. A bounded bilinear form on a real Hilbert space $X$ is a map $B:X\times X\to\mathbb R$ satisfying
    1. $B(\alpha x+\beta y,z)=\alpha B(x,z)+\beta B(y,z)$,
    2. $B(x,\alpha y+\beta z)=\alpha B(x,y)+\beta B(x,z)$,
    3. $|B(x,y)|\leq C\|x\|\|y\|$ for all $x,y,z\in X$ and $\alpha,\beta\in\mathbb R$.
    Under these assumptions, show that there is a unique bounded linear operator $A:X\to X$ such that $B(x,y)=\langle Ax,y\rangle$ for all $x,y\in X$, where $\langle\cdot,\cdot\rangle$ denotes the inner product on $X$.
  29. Solution. Fix $x\in X$ arbitrarily. Then we define $F_x:X\to\mathbb R$ by $F_x(y)=B(x,y)$ for $y\in X$. By (ii), we know

    $F_x(cy_1+y_2)=B(x,cy_1+y_2)=cB(x,y_1)+B(x,y_2)=cF_x(y_1)+F_x(y_2)$,

    which shows $F_x$ is a linear functional on $X$. In addition, by (iii), we know $\|F_x\|\leq C\|x\|$, which means $F_x$ is bounded. Hence by Theorem 3 (Riesz Representation), there exists a unique $v_x\in X$ such that $F_x(y)=\langle y,v_x\rangle$ for all $y\in X$. Note that $\langle y,v_x\rangle=\langle v_x,y\rangle$ because $X$ is a real Hilbert space. Now we define the operator $A:X\to X$ by $Ax=v_x$ for all $x\in X$. It suffices to show that $A$ is a bounded linear operator. For $x_1,x_2\in X$ and $d\in\mathbb R$, it remains to show $v_{dx_1+x_2}=dv_{x_1}+v_{x_2}$, which, by the uniqueness in the Riesz representation, is equivalent to verifying that $F_{dx_1+x_2}=dF_{x_1}+F_{x_2}$. This follows directly from (i):

    $F_{dx_1+x_2}(y)=B(dx_1+x_2,y)=dB(x_1,y)+B(x_2,y)=dF_{x_1}(y)+F_{x_2}(y)$.

    Hence $A$ is a linear operator. By (iii), we have

    $|\langle Ax,y\rangle|=|B(x,y)|\leq C\|x\|\|y\|$,

    which implies, upon taking $y=Ax$, that $\|Ax\|^2\leq C\|x\|\|Ax\|$ and hence $\|Ax\|\leq C\|x\|$. This means $A$ is bounded. To end this proof, we establish the uniqueness of $A$. Suppose that $A':X\to X$ is another bounded linear operator such that $B(x,y)=\langle A'x,y\rangle$ for all $x,y\in X$. Then we have $\langle(A'-A)x,y\rangle=0$ for all $x,y\in X$. Taking $y=(A'-A)x$ gives $\|(A'-A)x\|=0$ and hence $A'x=Ax$ for all $x\in X$, i.e., $A'=A$. Therefore, we complete the proof.

    Remark. The linearity of $A$ can be proved as follows. For $x_1,x_2\in X$ and $c\in\mathbb R$, we have

    $\langle A(cx_1+x_2),y\rangle=B(cx_1+x_2,y)=cB(x_1,y)+B(x_2,y)=c\langle Ax_1,y\rangle+\langle Ax_2,y\rangle=\langle cAx_1+Ax_2,y\rangle$,

    which gives $\langle A(cx_1+x_2)-(cAx_1+Ax_2),y\rangle=0$ for all $y\in X$. Thus, $A(cx_1+x_2)=cAx_1+Ax_2$.


  30. If $X$ is a Hilbert space and $T:X\to X$ is a bounded linear operator, the adjoint of $T$ is an operator $T^*:X\to X$ defined as follows:
    1. For $y\in X$, use the Riesz representation theorem to define $T^*y\in X$ satisfying $\langle Tx,y\rangle=\langle x,T^*y\rangle$ for all $x,y\in X$.
    2. Show that $T^*:X\to X$ is a bounded linear operator with $\|T^*\|=\|T\|$.
  31. Solution
    1. Given $y\in X$, we define the linear functional $f_y:X\to\mathbb R$ by $f_y(x)=\langle Tx,y\rangle$ for $x\in X$. Since $T$ is bounded, the Cauchy-Schwarz inequality gives $|f_y(x)|\leq\|Tx\|\|y\|\leq\|T\|\|x\|\|y\|$, which implies $|f_y(x)|/\|x\|\leq\|T\|\|y\|$ for $x\neq0$. This shows that $f_y$ is bounded. Hence by Theorem 3 (Riesz Representation), there exists a unique element $v_y\in X$ such that $f_y(x)=\langle x,v_y\rangle$ for $x\in X$, and we define $T^*:X\to X$ by $T^*y=v_y$ for $y\in X$.
    2. To get the linearity of $T^*$, we need to show $v_{cy_1+y_2}=cv_{y_1}+v_{y_2}$ for $y_1,y_2\in X$ and $c\in\mathbb R$, which follows from $f_{cy_1+y_2}=cf_{y_1}+f_{y_2}$. This can be verified by

      $f_{cy_1+y_2}(x)=\langle Tx,cy_1+y_2\rangle=c\langle Tx,y_1\rangle+\langle Tx,y_2\rangle=cf_{y_1}(x)+f_{y_2}(x)\quad\text{for}~x\in X$.

      This shows that $T^*$ is a linear operator. Let $y\in X$ be a unit vector. If $T^*y=0$, then surely $\|T^*y\|\leq\|T\|$. If $T^*y\neq0$, then we have

      $\|T^*y\|^2=\langle T^*y,T^*y\rangle=\langle TT^*y,y\rangle\leq\|TT^*y\|\|y\|=\|TT^*y\|\leq\|T\|\|T^*y\|$,

      which gives $\|T^*y\|\leq\|T\|$. Taking the supremum over all $y$ with $\|y\|=1$, we get $\|T^*\|\leq\|T\|$. On the other hand, we note that $\langle T^*x,y\rangle=\langle y,T^*x\rangle=\langle Ty,x\rangle=\langle x,Ty\rangle$, which implies $T=(T^*)^*$ and hence $\|T\|=\|(T^*)^*\|\leq\|T^*\|$. Therefore, $T^*$ is bounded and $\|T^*\|=\|T\|$.
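
      Remark. For $X=\mathbb R^n$ with the standard inner product, a bounded linear operator is given by a matrix $T$ and its adjoint is the transpose $T^{\mathsf T}$; the defining identity and the equality of operator norms can be checked directly, as in the sketch below (assuming NumPy, not part of the proof).

      ```python
      import numpy as np

      rng = np.random.default_rng(4)
      T = rng.normal(size=(6, 6))
      x, y = rng.normal(size=6), rng.normal(size=6)

      # defining identity of the adjoint: <Tx, y> = <x, T^T y>
      assert np.isclose((T @ x) @ y, x @ (T.T @ y))
      # equality of operator (spectral) norms: ||T^*|| = ||T||
      assert np.isclose(np.linalg.norm(T, 2), np.linalg.norm(T.T, 2))
      print("adjoint identities verified")
      ```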

  32. If $T:X\to X$ is a bounded linear operator on a Hilbert space $X$, let $N=\{x\in X\,:\,T(x)=0\}$ be the nullspace of $T$ and $R=\{Tx\,:\,x\in X\}$ be the range of $T$. Similarly, let $N^*$ and $R^*$ denote the nullspace and range of the adjoint $T^*$ defined in Exercise 16. Prove $(N^*)^\bot=\bar R$ (where $S^\bot$ is defined in Exercise 8).
  33. Solution. We first prove that $R\subseteq(N^*)^\bot$. Let $y\in R$. Then by the definition of the range, there is an element $x\in X$ such that $y=Tx$. For any $z\in N^*$, we find

    $\langle z,y\rangle=\langle y,z\rangle=\langle Tx,z\rangle=\langle x,T^*z\rangle=\langle x,0\rangle=0$.

    This means $y\in(N^*)^\bot$, and hence $R\subseteq(N^*)^\bot$. By part (a) of Exercise 8, $(N^*)^\bot$ is closed, so $\bar R\subseteq(N^*)^\bot$. Here we have used the fact that if $A\subseteq B$ and $B$ is closed, then $\bar A\subseteq B$. On the other hand, we want to prove that $(N^*)^\bot\subseteq\bar R$, which is equivalent to showing that $(\bar R)^c\subseteq((N^*)^\bot)^c$, where $A^c$ denotes the complement of $A$. Suppose by contradiction that $y\in(\bar R)^c$ but $y\in(N^*)^\bot$. By Lemma 2 (the Projection Theorem), there exist a unique $v_y\in\bar R$ and $w_y\in X$ such that $y=v_y+w_y$ and $\langle w_y,\bar R\rangle=0$; moreover $w_y\neq0$, for otherwise $y=v_y\in\bar R$. The condition $\langle w_y,\bar R\rangle=0$ implies $\langle w_y,Tu\rangle=0$ and hence $\langle T^*w_y,u\rangle=0$ for all $u\in X$, so $T^*w_y=0$. Thus, $w_y\in N^*$. From $y\in(N^*)^\bot$, we have

    $0=\langle z,y\rangle=\langle z,v_y+w_y\rangle=\langle z,v_y\rangle+\langle z,w_y\rangle=\langle z,w_y\rangle\quad\text{for}~z\in N^*,$

    which means $w_y\in(N^*)^\bot$. Here we have used the fact that $v_y\in\bar R\subseteq(N^*)^\bot$ (proved above), so that $\langle z,v_y\rangle=0$ for every $z\in N^*$.

    Hence we arrive at $w_y\in N^*\cap(N^*)^\bot$. By part (b) of Exercise 8, this means $w_y=0$, which leads to a contradiction. Therefore, we complete the proof of $(N^*)^\bot=\bar R$.
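
    Remark. For a matrix $T$ acting on $\mathbb R^n$, the identity $(N^*)^\bot=\bar R$ reduces to the familiar fact that the range of $T$ is the orthogonal complement of the nullspace of $T^{\mathsf T}$. A numerical check (assuming NumPy and SciPy are available; not part of the proof) is sketched below.

    ```python
    import numpy as np
    from scipy.linalg import null_space, orth

    rng = np.random.default_rng(5)
    T = rng.normal(size=(7, 7))
    T[:, -1] = T[:, 0] + T[:, 1]           # force a nontrivial nullspace of T^T / non-full range

    R = orth(T)                             # orthonormal basis of the range of T
    N_star = null_space(T.T)                # orthonormal basis of the nullspace of T^T
    assert np.allclose(R.T @ N_star, 0.0)   # every range vector is orthogonal to N^*
    assert R.shape[1] + N_star.shape[1] == T.shape[0]   # dimensions add up, so range(T) = (N^*)^perp
    print("range(T) is the orthogonal complement of null(T^T)")
    ```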
