Banach-Steinhaus Theorem

Foundations of Complex Analysis in Non Locally Convex Spaces

Aboubakr Bayoumi , in North-Holland Mathematics Studies, 2003

1.4 Uniform Boundedness Principle

As usual E and F denote F-spaces. If a subset B of L(E, F) is bounded, i.e. sup{∥A∥; A ∈ B} ≤ M for some M > 0, then

for every x ∈ E there exists M_x > 0 such that sup{∥A(x)∥; A ∈ B} ≤ M_x.

That is, B is bounded pointwise on E. (We may also say that each orbit Γ(x) is bounded).

For a p-Banach space E we shall show that the converse is also true, i.e. if sup{∥A(x)∥; A ∈ B} ≤ M_x for each x ∈ E, then sup{∥A∥; A ∈ B} ≤ M. That is, B is bounded with respect to the F-norm.

Thus an extension of the Banach-Steinhaus Theorem (which is known as the principle of uniform boundedness) is given here to locally bounded F-spaces, and the proof is analogous to that for Banach spaces.

Theorem 16 [26] Generalized Banach-Steinhaus Theorem

Let E be a p-Banach space and F a q-normed space. Given a family {A_k}_{k∈I} in L(E, F), the following are equivalent:

(a)

{A_k}_{k∈I} is an equicontinuous family, i.e. for any ϵ > 0 there exists δ > 0 such that

∥x∥ < δ ⟹ ∥A_k(x)∥ < ϵ for all k ∈ I.

(b)

{A_k}_{k∈I} is bounded pointwise, i.e. for each x ∈ E there exists M_x > 0 such that

∥A_k(x)∥ ≤ M_x for all k ∈ I.

(c)

{A_k}_{k∈I} is uniformly bounded, i.e. there exists M > 0 such that

∥A_k∥ ≤ M for all k ∈ I.

Proof

(a) ⟹ (b). Assuming statement (a), we can find a δ > 0 such that ∥x∥ < δ ⟹ ∥A_k(x)∥ < 1 for all k ∈ I. If x ≠ 0, then

∥A_k(δ^{1/p} x / ∥x∥^{1/p})∥ ≤ 1,

i.e.

∥A_k(x)∥ ≤ ∥x∥^{q/p} / δ^{q/p} = M_x

for all k ∈ I.

(b) ⟹ (c). For each natural number n, let

D_n = {x ∈ E; ∥A_k(x)∥ ≤ n for all k ∈ I}.

Since each A_k is continuous, it follows that D_n is closed. By (b) we have E = ∪_{n ≥ 1} D_n. Then the Baire category theorem ensures that some D_n contains a closed ball B̄_E(ξ, r). Consequently we have

∥A_k(x)∥ ≤ n

for all x ∈ B̄(ξ, r) and k ∈ I. This implies that if ∥x∥ ≤ r, then

∥A_k(x)∥ ≤ ∥A_k(x + ξ)∥ + ∥A_k(ξ)∥ ≤ 2n

for all k ∈ I; note that ∥(x + ξ) − ξ∥ = ∥x∥ ≤ r, so x + ξ ∈ B̄(ξ, r) ⊂ D_n.

Therefore, ∥A_k∥^q ≤ 2n / r^{q/p} = M for all k ∈ I, since ∥A_k(x)∥ ≤ ∥A_k∥^q ∥x∥^{q/p} (see Part I); hence ∥A_k∥ ≤ (2n)^{1/q} / r^{1/p} = M^{1/q}.

(c) ⟹ (a): Let ϵ > 0 be given and let δ = ϵ/M. Then for ∥x∥ < δ^{p/q}, we have

∥A_k(x)∥ ≤ M ∥x∥^{q/p} < ϵ

for all k ∈ I; so the family {A_k}_{k∈I} is equicontinuous. ■
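The role of the completeness hypothesis in this argument can be illustrated numerically. The sketch below is an illustration added here (it is not part of the original text): on the non-complete space c_00 of finitely supported sequences with the sup norm, the functionals f_n(x) = n·x_n are bounded at every point, yet their norms are unbounded, so the implication (b) ⟹ (c) genuinely needs the Baire-category step.

```python
import numpy as np

# Illustration (an assumption of this sketch, not from the text): on the
# non-complete space c_00 of finitely supported sequences with the sup norm,
# the functionals f_n(x) = n * x_n are pointwise bounded but not uniformly
# bounded, which is exactly what completeness rules out.

def f(n, x):
    """f_n applied to a finitely supported sequence x (a 1-D array)."""
    return n * x[n] if n < len(x) else 0.0

rng = np.random.default_rng(0)
x = rng.standard_normal(10)                      # a fixed element of c_00

pointwise = [abs(f(n, x)) for n in range(1, 200)]
print("sup_n |f_n(x)| =", max(pointwise))        # finite: a pointwise bound M_x

# Operator norms: ||f_n|| = sup_{||x|| <= 1} |f_n(x)| = n (take x = e_n).
norms = [abs(f(n, np.eye(n + 1)[n])) for n in range(1, 200)]
print("max_n ||f_n||  =", max(norms))            # grows like n: no uniform bound
```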

For a subset B of L(E, F), where E and F are Fréchet spaces (complete metrizable spaces), we have the following result, see Rolewicz [186].

Theorem 17

[186]

Let E and F be Fréchet spaces. Then every bounded family B = {A_k}_{k∈I} in L(E, F) is equicontinuous.

Proof

First we claim that a closed absorbing set V of E contains an open set. Since V is absorbing,

E = ∪_{n=1}^∞ nV.

By the Baire theorem, the space E is of the second category. Therefore, there is a positive integer n_0 such that n_0V is of the second category. Since n_0V is a closed set, it contains an open set U. Hence (1/n_0)U ⊂ V.

Secondly we prove that the bounded family B is equicontinuous.

Let ϵ >   0 and let

U_1 = ∩_{A∈B} {x ∈ E; ∥A(x)∥ ≤ ϵ}.

Since the maps A are continuous, the set U_1 is closed. Let us show that U_1 is an absorbing set. In fact, if x ∈ E is arbitrary, then the hypothesis implies that the set {A(x); A ∈ B} is bounded. Hence there exists a positive number a such that for b with 0 < b < a, we get

∥bA(x)∥ < ϵ for all A ∈ B.

Therefore bx ∈ U_1. From the first part, the set U_1 contains an open set U_2. Let x_0 ∈ U_2. The set {A(x_0); A ∈ B} is bounded. Hence there is a positive number b with |b| < 1 such that ∥bA(x_0)∥ < ϵ; thus bx_0 ∈ U_1. Let U = b(U_2 − x_0). Then U is a neighborhood of zero. Let x ∈ U. Then x = by − bx_0, where y ∈ U_2. Therefore ∥A(x)∥ ≤ ∥bA(x_0)∥ + ∥bA(y)∥ ≤ ϵ + sup_{|b| ≤ 1, ∥z∥ ≤ ϵ} ∥bz∥.

Hence the continuity of multiplication by scalars implies the Theorem.■

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0304020803800191

THE WEAK TOPOLOGY IN A BANACH SPACE

L.V. KANTOROVICH , G.P. AKILOV , in Functional Analysis (Second Edition), 1982

THEOREM 4.

If X is a normed space, then a set E ⊂ X is weakly bounded if and only if it is bounded relative to the norm of X.

We promised a proof of this theorem as far back as Chapter III (see Theorem III.3.3), but it is only now that the Banach–Steinhaus Theorem enables us to give it. Since weakly compact sets are weakly bounded, it follows from Theorem 4 that every weakly compact set is bounded relative to the norm.

We also recall that the weak closure and the closure relative to the norm coincide for convex subsets of a normed space X (see Theorem III.3.2.). We shall show below that, in spite of this property, the weak topology and the norm topology are distinct for an infinite-dimensional B -space.

The reasoning at the beginning of 1.1 shows that, if X is a B-space, then X* is weak* sequentially complete: that is, if the numerical sequence {f_n(x)} (f_n ∈ X*) has a limit for each x ∈ X, then there exists an f ∈ X* with f_n → f (σ(X*, X)).

The space X itself does not always have the analogous property. A B-space X is said to be weakly sequentially complete if the LCS (X, σ(X, X*)) is sequentially complete, that is, if the following condition is satisfied: if a numerical sequence {f(x_n)} (x_n ∈ X) has a limit for each f ∈ X*, then there exists an x ∈ X such that x_n → x (σ(X, X*)). It follows from the fact that X* is weak* sequentially complete that a reflexive B-space X is weakly sequentially complete. In Chapter X we shall see that the space c_0 is not weakly sequentially complete, while the non-reflexive space L_1[0, 1] is weakly sequentially complete (see Theorem X.4.9). An essential difference between the weak topology and the strong topology is apparent from

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978008023036850014X

Handbook of Dynamical Systems

Luis Barreira , ... Omri Sarig , in Handbook of Dynamical Systems, 2006

Definition 1

A probability preserving transformation (X, ℬ, m, T) is called strongly mixing if for every f, g ∈ L², Cov(f, g ∘ Tⁿ) := ∫ f · (g ∘ Tⁿ) − ∫ f ∫ g → 0 as n → ∞.

It is natural to ask for the speed of convergence (the faster it is, the less predictable the system seems to be). Unfortunately, without extra assumptions, the convergence can be arbitrarily slow: for every sequence ε_n → 0 and every 0 ≠ g ∈ L² s.t. ∫ g = 0, there exists f ∈ L² with Cov(f, g ∘ Tⁿ) ≠ O(ε_n).

We will therefore refine the question stated above and ask: how fast does Cov(f, g ∘ Tⁿ) → 0 for f, g in a given collection of functions ℒ ⊂ L²? The collection ℒ varies from problem to problem. In practice, the challenge often reduces to the problem of identifying a class of functions ℒ which is large enough to generate ℬ, but small enough to admit analysis.

We discuss this problem below. The literature on this subject is vast, and cannot be covered in an appendix of this size. We will therefore focus on the methods used to attack the problem, rather than their actual application (which is almost always highly nontrivial, but also frequently very technical). The reader is referred to Baladi's book [28] for a more detailed account and a more complete bibliography.

In what follows, (X,ℬ,m,T) is a probability preserving transformation, and ℒ is a collection of square integrable functions. We assume for simplicity that T is noninvertible (the methods we describe below can be applied in invertible situations, but are easier to understand in the noninvertible setting). A key concept is:

Definition 2

The transfer operator (or dual operator, or Frobenius–Perron operator) of T is T̂ : L¹ → L¹, where T̂f is the unique L¹-function s.t.:

∀ g ∈ L^∞:  ∫ g · T̂f = ∫ (g ∘ T) · f.

The definition of T̂ is tailored to make the following statement correct: if dμ = f dm, then dμ ∘ T⁻¹ = (T̂f) dm. Thus, T̂ is the action of T on density functions.

It is easy to check that T̂ is a positive operator, a contraction (i.e., ∥T̂f∥₁ ≤ ∥f∥₁), and that ∥T̂f∥₁ = ∥f∥₁ for all f ≥ 0. The T-invariance of m implies that T̂1 = 1. The relation between T̂ and Cov(f, g ∘ Tⁿ) is the following identity:

(A.1) Cov(f, g ∘ Tⁿ) = ∫ [T̂ⁿf − ∫ f] g.

We see that the asymptotic behavior of Cov(f, g ○ Tn ) can be studied by analyzing the asymptotic behavior of T ^ n as n → ∞. This is the viewpoint we adopt here.
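Identity (A.1) can be checked directly on a simple example. The sketch below is an added illustration under assumptions not made in the text: T is the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure, for which the transfer operator has the closed form (T̂ⁿf)(x) = 2⁻ⁿ Σ_{k=0}^{2ⁿ−1} f((x + k)/2ⁿ); both sides of (A.1) are approximated by Riemann sums.

```python
import numpy as np

# A numerical sketch of identity (A.1) for the doubling map T(x) = 2x mod 1
# with Lebesgue measure (an assumed example, not from the text).  The
# transfer operator has the explicit form
#     (T-hat^n f)(x) = 2^{-n} * sum_{k=0}^{2^n - 1} f((x + k) / 2^n).

def transfer_n(f, n):
    """Return the function x -> (T-hat^n f)(x) for the doubling map."""
    def Tnf(x):
        return sum(f((x + k) / 2**n) for k in range(2**n)) / 2**n
    return Tnf

f = lambda x: x                       # observable with slowly decaying Fourier modes
g = lambda x: np.sin(2 * np.pi * x)   # test observable

x = np.arange(2**18) / 2**18          # quadrature grid on [0, 1)
dx = 1.0 / x.size
int_f, int_g = f(x).sum() * dx, g(x).sum() * dx

for n in range(1, 7):
    lhs = (f(x) * g((2**n * x) % 1.0)).sum() * dx - int_f * int_g   # Cov(f, g o T^n)
    rhs = ((transfer_n(f, n)(x) - int_f) * g(x)).sum() * dx          # right side of (A.1)
    print(f"n={n}:  Cov = {lhs:+.6f}   via (A.1) = {rhs:+.6f}")
# Both columns agree and decay geometrically (here like -1/(2*pi*2^n)),
# illustrating the 'speed of mixing' question discussed above.
```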

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S1874575X06800275

History of Functional Analysis

In North-Holland Mathematics Studies, 1981

§5 Banach's Book and Beyond

In 1932 S. Banach published a book [15] containing a comprehensive account of all results known at that time in the theory of normed spaces, and in particular the theorems he had published in his papers of 1923 and 1929. A large part was devoted to the concept of weak convergence and its generalizations, which he had begun to study in 1929; we shall postpone to chap. VIII, §1 the discussion of these questions. The most remarkable result contained in that book is another consequence of Baire's theorem, discovered by Banach, and much deeper than the Banach-Steinhaus theorem: if u is a continuous linear mapping from a complete normed space E into a complete normed space F, then either u(E) is meager in F (a set "of first category" in the terminology of Baire), or u(E) = F. An immediate consequence is the famous closed graph theorem: if u is a linear mapping from E to F having a closed graph in E × F, then u is continuous. These surprising results have become two of the most powerful tools in all applications of Functional Analysis.

These features, as well as many applications to classical Analysis, gave the book a great appeal, and it had on Functional Analysis the same impact that van der Waerden's book had on Algebra two years earlier. Analysts all over the world began to realize the power of the new methods and to apply them to a great variety of problems; Banach's terminology and notations were universally adopted, complete normed spaces became known as Banach spaces, and soon their theory was considered as a compulsory part in most curricula of graduate students. After 1935, the theory of normed spaces became part of the more general theory of locally convex spaces, which we shall discuss in chapter VIII; more recently however, there has been a renewed surge of interest in the special properties of normed spaces and their "geometry"; it is too soon, as yet, to have a clear idea of the scope of these results and of their relation to other parts of mathematics, and we refer the interested reader to [4], [17], [47], [50], [116], [134], [149], [150] and [185].

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S030402080871853X

Topological Vector Spaces

Henri Bourlès , in Fundamentals of Advanced Mathematics 2, 2018

3.9 Continuous multilinear mappings

3.9.1 Continuous bilinear mappings

(I) Separately continuous bilinear mappings Let E, F, and G be three topological vector spaces over K and suppose that u : E × F → G is a K-bilinear mapping. Let (x_0, y_0) ∈ E × F. If the two partial linear mappings u(x_0, .) : y ↦ u(x_0, y) and u(., y_0) : x ↦ u(x, y_0) are continuous, then the mapping u is said to be separately continuous at the point (x_0, y_0). It is said to be separately continuous if it is separately continuous at every point (x_0, y_0) ∈ E × F.

Lemma 3.135

Let ℒ_s(E; G) (resp. ℒ_s(F; G)) be the space of continuous linear mappings from E into G (resp. from F into G) equipped with the topology of pointwise convergence. The correspondences u ↦ [y ↦ u(., y)], u ↦ [x ↦ u(x, .)] are canonical isomorphisms between the space of separately continuous mappings from E × F into G and the spaces ℒ_s(F; ℒ_s(E; G)) and ℒ_s(E; ℒ_s(F; G)) respectively, which allows us to identify these three spaces (exercise).

(II) Continuous bilinear mappings With the same hypotheses as in (I), we can define the notion of a continuous bilinear mapping at the point (x 0, y 0) ∈ E × F by equipping E × F with the product topology. Any bilinear mapping that is continuous at (x 0, y 0) is clearly separately continuous at (x 0, y 0). Furthermore:

Lemma 3.136

The bilinear mapping u : E × F → G is continuous at (x_0, y_0) if and only if it is continuous at (0, 0).

Proof

Simply observe that u(x − x_0, y − y_0) = u(x, y) − u(x_0, y) − u(x, y_0) + u(x_0, y_0).

Suppose that E, F, and G are locally convex. With this notation, the following result also holds:

Theorem 3.137

(Bourbaki [BKI 50] , Grothendieck ( [GRO 54] , Corollary p. 66 )) If u is separately continuous, then it is continuous in both of the following two cases:

a)

E and F are metrizable and at least one of these spaces is barreled.

b)

E and F are barreled (DF)-spaces and G is Hausdorff.

Proof

(a): exercise* : use the Banach-Steinhaus theorem ( Theorem 3.58); cf. ([SCF 99], Chapter III, Theorem 5.1). (b): cf. loc. cit. or ([GRO 73], Chapter 4, Part 1, section 2, Corollary 1 of Theorem 2).

We also have the following result (exercise*: cf. [BKI 81], Chapter III, section 5.5, Corollaries 1 and 2 of Proposition 9):

Theorem 3.138

Let R, S, T be three locally convex spaces.

1)

Given any equicontinuous subset H of ℒ(S; T), the bilinear mapping (u, v) ↦ v ∘ u from ℒ_s(R; S) × H into ℒ_s(R; T) is continuous.

2)

Hence, if S is barreled, (u_n) → u in ℒ_s(R; S), and (v_n) → v in ℒ_s(S; T), then (v_n ∘ u_n) → v ∘ u in ℒ_s(R; T).

3.9.2 Hypocontinuous bilinear mappings

The notion of hypocontinuous bilinear mapping was introduced by Bourbaki [BKI 50] as an intermediate stage between separately continuous bilinear mappings and continuous bilinear mappings. Let E, F, and G be three locally convex spaces and suppose that S is a bornology on E (section 2.5.1).

Definition 3.139

A bilinear mapping u : E × F → G is said to be S-hypocontinuous if it is separately continuous and furthermore satisfies the property that, given any neighborhood W of 0 in G and any set M ∈ S, there exists a neighborhood V of 0 in F such that u(M × V) ⊂ W.

A bilinear mapping u : E × F → G is S-hypocontinuous if and only if, for every set M ∈ S, the image of M under the mapping x ↦ u(x, .) is an equicontinuous subset of ℒ(F; G) (exercise). We can similarly define the notion of a T-hypocontinuous bilinear mapping from E × F into G when T is a bornology on F; we say that a bilinear mapping from E × F into G is (S, T)-hypocontinuous if it is both S-hypocontinuous and T-hypocontinuous. The next result (which may be proved as an exercise) generalizes Lemma 3.135:

Theorem 3.140

The correspondences u ↦ [y ↦ u(., y)], u ↦ [x ↦ u(x, .)] are canonical isomorphisms between the space of (S, T)-hypocontinuous mappings from E × F into G and the spaces ℒ_T(F; ℒ_S(E; G)) and ℒ_S(E; ℒ_T(F; G)) respectively, which allows us to identify these three spaces.

Furthermore, the Banach–Steinhaus theorem (Theorem 3.58) implies the following result (exercise*: cf. [BKI 81], Chapter III, section 5.3, Proposition 6):

Theorem 3.141

If E is barreled, every separately continuous mapping from E × F into G is S -hypocontinuous for any bornology on E.

The space ℒ_b(F; ℒ_b(E; G)) ≅ ℒ_b(E; ℒ_b(F; G)) is called the space of hypocontinuous bilinear mappings.

3.9.3 Bounded multilinear mappings

(I) If E and F are two locally convex spaces, then every continuous linear mapping from E into F is bounded (Lemma 3.37). Moreover, if E is bornological, then conversely every bounded linear mapping is continuous (Theorem 3.62). In general, let E_1, …, E_n, and F be locally convex spaces. The mapping u from E_1 × … × E_n into F is bounded if, for every bounded subset B of E_1 × … × E_n, u(B) is bounded in F (section 2.5.1). For every integer k ∈ {1, …, n − i + 1}, write ℬ(E_i, …, E_{i+k−1}; F) for the vector space of bounded k-linear mappings from E_i × … × E_{i+k−1} into F, and write ℬ_b(E_i, …, E_{i+k−1}; F) for this space equipped with its equibornology (section 2.7.4).

Theorem 3.142

For all k ∈ {1, …, n − 1}, the canonical mapping

ℬ_b(E_1, …, E_n; F) → ℬ_b(E_1, …, E_{n−k}; ℬ_b(E_{n−k+1}, …, E_n; F))

is an isomorphism of bornological sets.

Proof

Suppose that n = 2. We will show that ℬ_b(E_1, E_2; F) → ℬ_b(E_1; ℬ_b(E_2; F)) is an isomorphism of bornological sets. Let H be a set of bilinear mappings from E_1 × E_2 into F. It suffices to show that the following conditions are equivalent:

i)

For every bounded subset A_i of E_i (i = 1, 2), H(A_1 × A_2) is bounded in F.

ii)

For every bounded subset A_1 of E_1, {u(x_1, .) : u ∈ H, x_1 ∈ A_1} is an equibounded subset of ℬ(E_2; F).

Now, observe that (ii) is equivalent to saying that, for every bounded subset A_i of E_i (i = 1, 2), ∪_{u ∈ H, x_1 ∈ A_1} u(x_1, A_2) is bounded in F; we also know that ∪_{u ∈ H, x_1 ∈ A_1} u(x_1, A_2) = H(A_1 × A_2).

This result may be extended to any arbitrary integer n ≥ 2 by induction.

Let ℒ(E_1, …, E_n; F) be the space of continuous n-linear mappings from E_1 × … × E_n into F. Then ℒ(E_1, …, E_n; F) ⊂ ℬ(E_1, …, E_n; F) (cf. Lemma 3.37). The reader may wish to show the next result (which gives a partial generalization of Theorem 3.42) as an exercise:

Theorem 3.143

Let E 1, …, E n , and F be normed vector spaces with norms |.|.

1)

Let u be an n-linear mapping from E 1 × … × E n into F. The following conditions are equivalent:

i)

u ∈ ℬ(E_1, …, E_n; F).

ii)

u ∈ ℒ(E_1, …, E_n; F).

iii)

‖u‖ < ∞, where

[3.10] ‖u‖ ≔ sup_{x_1, …, x_n ≠ 0} |u(x_1, …, x_n)| / (|x_1| ⋯ |x_n|) = sup_{|x_1|, …, |x_n| ≤ 1} |u(x_1, …, x_n)|

2)

‖.‖ is a norm on ℒ(E_1, …, E_n; F), and, whenever F is a Banach space, ℒ(E_1, …, E_n; F) is also a Banach space when equipped with this norm.

(II) More generally, if E_1, …, E_n are normed vector spaces all equipped with the norm |.| and F is a locally convex space whose topology is defined by a family of semi-norms (|.|_γ)_{γ ∈ Γ}, then the family (‖.‖_γ)_{γ ∈ Γ} defined on ℒ(E_1, ..., E_n; F) by

‖u‖_γ ≔ sup_{|x_1|, …, |x_n| ≤ 1} |u(x_1, …, x_n)|_γ

is a family of semi-norms. When equipped with this family of semi-norms, ℒ(E_1, ..., E_n; F) = ℬ(E_1, ..., E_n; F) is a locally convex space that is Hausdorff whenever F is Hausdorff, and quasi-complete (resp. complete) whenever F is quasi-complete (resp. complete). With these conventions, the linear mapping ℒ(E_1, E_2; F) → ℒ(E_1; ℒ(E_2; F)) defined by u ↦ ũ : x_1 ↦ u(x_1, .) is an isomorphism of locally convex spaces, and is furthermore an isometry (see the proof of Theorem 2.80) whenever F is a normed vector space (exercise).
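In finite dimension the norm [3.10] can be computed exactly, which gives a quick sanity check of Theorem 3.143. The sketch below is an added illustration (the matrix A and the Euclidean norms are assumptions of the example): for u(x_1, x_2) = x_1ᵀ A x_2 the supremum in [3.10] equals the largest singular value of A.

```python
import numpy as np

# A minimal sketch (illustrative, not from the text): for the bilinear map
# u(x1, x2) = x1^T A x2 on R^4 x R^3 with Euclidean norms, the norm defined
# in [3.10] equals the largest singular value of A.

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
u = lambda x1, x2: x1 @ A @ x2

# ||u|| = sup over the unit balls, estimated by random sampling.
samples = []
for _ in range(20000):
    x1 = rng.standard_normal(4); x1 /= np.linalg.norm(x1)
    x2 = rng.standard_normal(3); x2 /= np.linalg.norm(x2)
    samples.append(abs(u(x1, x2)))

print("sampled sup  :", max(samples))
print("sigma_max(A) :", np.linalg.svd(A, compute_uv=False)[0])
# The sampled supremum approaches sigma_max(A) from below.
```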

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781785482496500031

The Riesz Theorem

Joe Diestel, Johan Swart, in Handbook of Measure Theory, 2002

THEOREM 3.2 (Grothendieck)

Any weakly compact linear operator u: C(K) → X carries weakly convergent sequences to norm convergent sequences.

L_1-spaces share with the C(K)-spaces the phenomena described above, a fact discovered by N. Dunford and B.J. Pettis; in light of this, Grothendieck called such spaces 'spaces with the Dunford-Pettis property'. After Grothendieck's discovery that C(K)-spaces have the Dunford-Pettis property it was to be several decades before other significant examples were uncovered, and when they were it was with the aid of the Riesz theorem in subtle, analytically delicate situations.

A word or two about how the Bartle–Dunford–Schwartz description of weakly compact operators provides an approach to Theorem 3.2. If (f_n) is a weakly null sequence in C(K), then the Banach-Steinhaus theorem tells us there's a constant M > 0 so that |f_n(k)| ≤ M for all n ∈ N and all k ∈ K; further, Banach tells us that (f_n(k)) tends to 0 for each k ∈ K. But the point of a norm countably additive integral is that the Lebesgue bounded convergence theorem holds, for much the same reason as for scalar integrals: Egoroff's theorem. So (∫ f_n dF_u) tends to 0 in X's norm and with it lim_n u(f_n) = 0, too.

Before you get carried away with the apparent elegance of this proof, we rush to warn you that Grothendieck's proof, while never explicitly naming names or specifying a representing vector measure, follows pretty much the same line of attack as that of Bartle–Dunford–Schwartz. He does more. His deep analysis of weak compactness in C(K)* permits him to prove the converse of Theorem 3.2. In fact he shows the following.

THEOREM 3.3 (Grothendieck)

Let K be a compact Hausdorff space and X be a Banach space. Suppose u: C(K) → X is a bounded linear operator. Then the following are equivalent.

(1)

u is weakly compact.

(2)

u is completely continuous, that is, it takes weakly convergent sequences to norm convergent sequences; or, what's the same, u takes (relatively) weakly compact sets to (relatively) compact sets.

(3)

u is weakly completely continuous, that is, u carries weakly Cauchy sequences onto weakly convergent sequences.

(4)

u takes weakly Cauchy sequences into norm convergent sequences.

It is Grothendieck's supple handling of regularity that wins the day; paving the way is his stunning improvement of a result of his mentor, J. Dieudonné. The result?

THEOREM 3.4 (Dieudonne-Grothendieck)

For a bounded subset B of C(K)* to be relatively weakly compact it is necessary and sufficient that given any sequence (G_n) of pairwise disjoint open subsets of K we have

lim_n sup_{μ ∈ B} |μ(G_n)| = 0.

Grothendieck was not the only one who had something to say about interesting variants of weakly compact operators on C(K). Soon after Grothendieck, A. Pełczyński introduced the notion of an unconditionally converging operator: u: X → Y is unconditionally converging if whenever Σ_n x_n is a series of terms in X for which Σ_n |x*(x_n)| < ∞ for each x* ∈ X*, then Σ_n u(x_n) is unconditionally convergent in Y. The celebrated theorem of W. Orlicz and B.J. Pettis assures that weakly compact operators are unconditionally converging regardless of their domain and codomain. Pełczyński showed the converse for operators acting on C(K)'s.

THEOREM 3.5 (Pelczyński)

A bounded operator u: C(K) → X is weakly compact if and only if u is unconditionally converging.

Again this special result about operators on C(K)'s leads to the isolation of an important Banach space invariant. With Pełczyński, we say that a Banach space X has property V if any unconditionally converging operator u: X → Y is weakly compact.

It's an elegant piece of functional analysis that there is but one possible obstruction to an operator being unconditionally converging: the classical Banach space c_0 of all null sequences of scalars; indeed, as noted by Pełczyński, a bounded linear operator u: X → Y fails to be unconditionally converging if and only if there is a subspace X_0 of X that's isomorphic to c_0 such that u's restriction to X_0 is an isomorphism.

This leads to a fundamental consequence about the structure of Banach spaces of C(K) ilk.

THEOREM 3.6 (Pełczyński)

If X is a complemented (closed linear) subspace of C(K) and X is infinite dimensional, then X contains an isomorphic copy of c0 .

Complementation means there is a bounded linear projection P: C(K) → C(K) whose range is X. Were X to be without a subspace isomorphic to c_0, then by what we've said above P is unconditionally converging; after all, P's range has no c_0's in it. But Theorem 3.5 tells us that P is weakly compact. Theorem 3.3(2) tells us that P is also completely continuous. Let's take stock: start with a bounded sequence (f_n) in C(K); apply P and the resulting sequence (Pf_n) has a weakly convergent subsequence (Pg_n); apply P again and the result (P²g_n) = (P(Pg_n)) is norm convergent. P² (= P) takes bounded sets to relatively compact sets; P is a compact linear operator! The only closed linear subspaces that could possibly serve as the range of a compact linear operator are finite-dimensional ones.

We rush to point out that Pełczyński called on the Bartle–Dunford–Schwartz Theorem 3.1 to prove a version of Theorem 3.5 that led him to Theorem 3.6; more precisely, he showed that if X contains no copy of c_0, then every u: C(K) → X is represented by an X-valued F_u and so is a weakly compact operator.

As yet no one has proved Theorem 3.5 by purely vector measure theoretic techniques using Theorem 3.1.

Before leaving this aspect of the Riesz theorem for operators we'd like to point out that Bartle, Dunford and Schwartz based much of their analysis on a basic feature of vector measures discovered during their work: if Σ is a σ-algebra, X is a Banach space and F: Σ → X is norm-countably additive, then there is a countably additive scalar-valued μ on Σ so that the family {x*F: ‖x*‖ ⩽ 1} is uniformly absolutely continuous with respect to μ. Consequently, if u: C(K) → X is a weakly compact linear operator then there is a regular Borel probability μ on K so that u*X* ⊆ L_1(μ); of course, u*: X* → L_1(μ) ⊆ C(K)* is still a weakly compact operator and u*B_{X*} is bounded and uniformly integrable. An old chestnut of de la Vallée Poussin now provides us with a convex increasing Φ: [0, ∞[ → [0, ∞[ such that

Φ(x) > 0 (if x > 0),  lim_{x→0} Φ(x)/x = 0,  lim_{x→+∞} Φ(x)/x = +∞  and  ∫_K Φ(|u*x*(k)|) dμ(k) ≤ 1

for each x* ∈ B_{X*}; u* is actually a bounded linear operator into the Orlicz space L_Φ(μ). A bit of tender love and care allows us to factor u through L_Ψ(μ), where Ψ is the N-function conjugate to Φ, in fact through the 'absolutely continuous' part of L_Ψ(μ). Much can be derived from this; we mention but one analytic consequence: if u: C(K) → X is a weakly compact linear operator and (f_n) is a bounded sequence in C(K), then (u(f_n)) has a subsequence with norm convergent arithmetic means.

What of other classes of operators? Naturally, compact linear operators are weakly compact; what of their representing measures? Bartle, Dunford and Schwartz were up to the task: a bounded linear operator u: C(K) → X is compact precisely when the representing Borel measure F_u is X-valued and has a relatively compact range.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780444502636500105

SEQUENCES OF LINEAR OPERATORS

L.V. KANTOROVICH , G.P. AKILOV , in Functional Analysis (Second Edition), 1982

§ 2 Some applications to the theory of functions

The theorems of the preceding section have various applications. Let us look at some of these.

2.1. First we consider the question of the convergence of mechanical quadrature formulae.

For the approximate evaluation of integrals one usually makes use of mechanical quadrature formulae having the form

∫_a^b x(t) dt ≈ Σ_{k=0}^n A_k x(t_k)     (a ≤ t_0 < t_1 < … < t_n ≤ b).

The rectangular, trapezium and Simpson formulae are examples. More complicated examples of exactly the same type are the Newton–Cotes and Gauss formulae. A general theory of cubature formulae has been developed by S. L. Sobolev (see Sobolev-II).

Since we cannot ensure a desired level of accuracy from a single formula, it is natural to consider sequences of formulae

(1) ∫_a^b x(t) dt ≈ Σ_{k=0}^n A_k^{(n)} x(t_k^{(n)})     (a ≤ t_0^{(n)} < t_1^{(n)} < … < t_n^{(n)} ≤ b;  n = 0, 1, …)

and to pose the question: under what conditions will the error in calculating integrals by these formulae tend to zero as n → ∞? If this does happen for a given function x, we shall say that the mechanical quadrature formulae (1) converge for x.

One answer to the question posed above is given by

Theorem 1 (Szegö).

The following conditions are necessary and sufficient for the mechanical quadrature formulae (1) to converge for every continuous function:

1)

Σ_{k=0}^n |A_k^{(n)}| ≤ M     (n = 0, 1, …);

2)

the formulae converge for every polynomial.

Proof. Consider the following functionals on the space C [a, b]:

f_n(x) = Σ_{k=0}^n A_k^{(n)} x(t_k^{(n)})     (n = 0, 1, …),     f(x) = ∫_a^b x(t) dt.

As we showed in V.2.1,

‖f_n‖ = Σ_{k=0}^n |A_k^{(n)}|     (n = 0, 1, …).

Thus condition 1) means that the norms of the f n are bounded in aggregate, and condition 2) that f n (x) → f (x) for x belonging to the dense subset of all polynomials in C [a, b]. Hence the stated result is a special case of the Banach–Steinhaus Theorem (if we take the Remark following it into account).

REMARK 1.

If the coefficients A_k^{(n)} are positive for all k and n, then the first condition follows from the second.

For, taking x (t) ≡ 1, we deduce the convergence of the formulae from the second condition: that is, we have

b − a = ∫_a^b dt = lim_{n→∞} Σ_{k=0}^n A_k^{(n)},

and from this it also follows that the sums Σ_{k=0}^n |A_k^{(n)}| = Σ_{k=0}^n A_k^{(n)} are bounded.

REMARK 2.

In the second condition, the set of all polynomials can be replaced by another dense subset of C [a, b], for example the set of all piecewise linear functions, or even by a set which is complete in C [a, b], for example, the set of powers of the independent variable (see Remark 2 following the Banach–Steinhaus Theorem).

REMARK 3.

What we have said above about the formulae (1) carries over without any change to the more general case of formulae

(2) ∫_a^b p(t) x(t) dt ≈ Σ_{k=0}^n A_k^{(n)} x(t_k^{(n)})     (a ≤ t_0^{(n)} < t_1^{(n)} < … < t_n^{(n)} ≤ b;  n = 0, 1, …),

where p (t) is a fixed summable function, called the weight function.

The following is one of the basic methods of obtaining mechanical quadrature formulae.

For n = 0, 1, …, we specify values t_0^{(n)}, t_1^{(n)}, …, t_n^{(n)} in [a, b] in any manner and construct the Lagrange interpolation polynomials P_n(x; t) associated with x(t), coinciding with x(t) at t_0^{(n)}, t_1^{(n)}, …, t_n^{(n)}. It is well known that

P_n(x; t) = Σ_{k=0}^n l_k^{(n)}(t) x(t_k^{(n)}),

where

l_k^{(n)}(t) = ω_n(t) / [(t − t_k^{(n)}) ω_n′(t_k^{(n)})]     (ω_n(t) = (t − t_0^{(n)})(t − t_1^{(n)}) ⋯ (t − t_n^{(n)});  k = 0, 1, …, n;  n = 0, 1, …).

If we replace x(t) in the integral ∫_a^b p(t) x(t) dt by its interpolation polynomial, we obtain the mechanical quadrature formulae

(3) ∫_a^b p(t) x(t) dt ≈ Σ_{k=0}^n A_k^{(n)} x(t_k^{(n)})     (A_k^{(n)} = ∫_a^b p(t) l_k^{(n)}(t) dt).

Formulae obtained by this method are called interpolation formulae (see Natanson-I).

If x(t) is a polynomial of degree at most n, then it coincides with its interpolation polynomial, so in this case (3) is an exact formula. Thus, if x(t) is an arbitrary polynomial, then the error of the formula is zero for sufficiently large n; that is, interpolation formulae for mechanical quadrature always converge on the set of all polynomials. Hence the first condition of Theorem 1 is, by itself, a necessary and sufficient condition for such formulae to converge for all continuous functions. In particular, the convergence is guaranteed when all the coefficients A_k^{(n)} are non-negative, by Remark 1.

This latter situation arises, for example, when the weight function is positive and the points t_0^{(n)}, t_1^{(n)}, …, t_n^{(n)} are chosen such that the polynomials ω_n(t) form an orthogonal system with respect to the weight p(t). The quadrature formulae thus obtained are called formulae of Gaussian type. They are distinguished from other interpolation formulae for mechanical quadrature by being exact for polynomials of degree 2n + 1 (see Natanson-I).
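The contrast between the two kinds of interpolation formulae can be observed numerically. The sketch below is an added illustration (the interval [−1, 1], equidistant Newton–Cotes nodes and Gauss–Legendre nodes are assumptions of the example): Gauss-type weights are positive and sum to b − a, so condition 1) of Theorem 1 holds with M = b − a, whereas for the closed Newton–Cotes formulae the sums Σ_k |A_k^{(n)}| grow without bound.

```python
import numpy as np

# A minimal numerical sketch (assumed setting: interval [-1, 1], equidistant
# nodes for Newton-Cotes, Gauss-Legendre nodes from numpy).  Condition 1) of
# Szego's theorem holds automatically for Gauss-type formulae (positive
# weights summing to b - a = 2, cf. Remark 1), while for closed Newton-Cotes
# formulae the sums sum_k |A_k^{(n)}| grow without bound.

def newton_cotes_weights(n):
    """Weights A_k = integral over [-1, 1] of the Lagrange basis l_k at equidistant nodes."""
    t = np.linspace(-1.0, 1.0, n + 1)
    w = np.empty(n + 1)
    for k in range(n + 1):
        others = np.delete(t, k)
        lk = np.poly(others) / np.prod(t[k] - others)    # coefficients of l_k
        w[k] = np.polyval(np.polyint(lk), 1.0) - np.polyval(np.polyint(lk), -1.0)
    return w

for n in (4, 8, 16, 24):
    nc = newton_cotes_weights(n)
    gauss_w = np.polynomial.legendre.leggauss(n + 1)[1]
    print(f"n={n:2d}  Newton-Cotes sum|A_k| = {np.abs(nc).sum():12.3e}   "
          f"Gauss sum|A_k| = {gauss_w.sum():.6f}")
```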

2.2. Now we consider the space C̃, whose elements are the continuous periodic functions defined on the whole real line and having the same period (which, for definiteness, we take to be 2π). Every such function may clearly be regarded as a function defined on some interval [a, a + 2π] of length 2π and satisfying x(a + 2π) = x(a). This enables us to identify C̃ with a closed subspace of C[a, a + 2π].

It follows from this that the operator y = U (x) given by

(4) y(s) = ∫_a^{a+2π} K(s, t) x(t) dt     (s ∈ [a, a + 2π]),

where the kernel K (s, t) is continuous, is a linear operator from C ˜ into C. We leave it to the reader to check that the norm of U is given by the expression

(5) ‖U‖ = max_s ∫_a^{a+2π} |K(s, t)| dt

(cf. V.2.4).

In general, the operator (4) maps periodic functions into non-periodic ones. An obvious necessary and sufficient condition for U to be an operator from C ˜ into C ˜ is that K (s, t)have period 2π in its first argument; that is, K (s + 2π, t) = K (s, t). Finally, if K (s, t) is defined in the whole plane and has period 2π in its second argument also, then the integration in (4) and (5) can be carried out over any interval of length 2π.

Let us form the Fourier series of the continuous 2π-periodic function x (t):

x(t) ~ a_0/2 + Σ_{k=1}^∞ (a_k cos kt + b_k sin kt),  a_k = (1/π) ∫_0^{2π} x(t) cos kt dt,  b_k = (1/π) ∫_0^{2π} x(t) sin kt dt  (k = 0, 1, …).

If we associate with each function x (t) in C ˜ the partial sum S n (x) of its Fourier series, we obtain an operator S n mapping C ˜ into C ˜ . It is well known that this sum is expressible as a Dirichlet integral:

y = S_n(x),  y(s) = (1/2π) ∫_0^{2π} x(t) [sin((2n + 1)(t − s)/2) / sin((t − s)/2)] dt,

that is, S n is of the form (4), and furthermore, by the continuity of the kernel, S n is a continuous linear operator. Let us show that || S n || → ∞ as n → ∞.

In fact, using (5) and the periodicity of the kernel, we have

‖S_n‖ = (1/2π) ∫_0^{2π} |sin((2n + 1)(t − s)/2) / sin((t − s)/2)| dt = (1/2π) ∫_0^{2π} |sin((2n + 1)t/2) / sin(t/2)| dt = (1/π) ∫_0^π |sin mt / sin t| dt,

where m = 2 n + 1. Using the following well-known inequalities from analysis,

|sin t| ≤ |t|,  sin s ≥ (2/π) s  (0 ≤ s ≤ π/2),

we have also

‖S_n‖ = (1/π) ∫_0^π |sin mt / sin t| dt > (1/π) Σ_{k=1}^{m−1} ∫_{kπ/m}^{(2k+1)π/(2m)} (|sin mt| / sin t) dt = (1/π) Σ_{k=1}^{m−1} ∫_0^{π/(2m)} (|sin mt| / sin(t + kπ/m)) dt

Hence

‖S_n‖ ≥ (1/(8π)) ln n,

from which we obtain the required result.

From Theorem 1.3 we conclude that there exists a continuous periodic function whose Fourier series does not converge uniformly to any function.
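The growth of ‖S_n‖ is easy to observe numerically. The sketch below is an added illustration: it evaluates the Lebesgue constant ‖S_n‖ = (1/π)∫_0^π |sin((2n + 1)t)/sin t| dt by quadrature and compares it with the lower bound (1/(8π)) ln n obtained above (the classical asymptotic (4/π²) ln n is quoted for orientation only).

```python
import numpy as np

# A numerical sketch of the Lebesgue constants
#     ||S_n|| = (1/pi) * int_0^pi |sin((2n+1)t) / sin t| dt,
# computed by a midpoint rule and compared with the lower bound used in the
# text and with the classical asymptotic (4/pi^2) ln n (quoted here only for
# orientation).

def lebesgue_constant(n, pts=400000):
    m = 2 * n + 1
    t = (np.arange(pts) + 0.5) * np.pi / pts        # midpoints avoid the endpoints
    return np.mean(np.abs(np.sin(m * t) / np.sin(t)))

for n in (4, 16, 64, 256):
    Ln = lebesgue_constant(n)
    print(f"n={n:4d}  ||S_n|| ~ {Ln:7.3f}   "
          f"(1/(8 pi)) ln n = {np.log(n) / (8 * np.pi):6.3f}   "
          f"(4/pi^2) ln n = {4 * np.log(n) / np.pi**2:6.3f}")
```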

The arguments we have given also enable us to establish the existence of a continuous periodic function whose Fourier series diverges at an arbitrary preassigned point.

For this we consider the sequence of functionals f n on the space C ˜ , defined by

f_n(x) = S_n(x)(t_0) = (1/2π) ∫_0^{2π} [sin((2n + 1)(t − t_0)/2) / sin((t − t_0)/2)] x(t) dt.

Exactly as in V.2.2, the norm of f n is determined by the equation

‖f_n‖ = (1/2π) ∫_0^{2π} |sin((2n + 1)(t − t_0)/2) / sin((t − t_0)/2)| dt = (1/2π) ∫_0^{2π} |sin((2n + 1)t/2) / sin(t/2)| dt = ‖S_n‖

and hence ‖f_n‖ → ∞ as n → ∞. Therefore by Theorem 1.1 there exists an x_0 ∈ C̃ such that sup_n |f_n(x_0)| = ∞, which is what we required to prove.

Now take an arbitrary countable set e = {t_k} on the real line and form the functionals f_n^{(k)}, given by

f_n^{(k)}(x) = S_n(x)(t_k)     (k, n = 1, 2, …).

Applying the principle of condensation of singularities to these, we find an element x 0 C ˜ such that

sup_n |f_n^{(k)}(x_0)| = ∞     (k = 1, 2, …),

that is, we have a function x 0(t) whose Fourier series diverges at each point of the set e.

An example of a continuous function whose Fourier series is nowhere convergent was first given by du Bois Reymond (see Zygmund).

If S_n is regarded as a linear operator from L_1 into L_1, then, in view of the symmetry of the kernel, ‖S_n‖ keeps its former value; and thus we conclude from Theorem 1.3 that there exists a summable function whose Fourier series does not converge in mean on L_1.

2.3. We can obtain a wide generalization of the preceding results if we introduce the concept of a polynomial operator.

We again consider the space C ˜ of continuous periodic functions (with period 2π) and denote by H ˜ n the subspace consisting of all trigonometric polynomials of degree at most n.

A continuous linear operator U on C ˜ is called a (trigonometric) polynomial operator of degree n if

1)

U (x) ∈ H ˜ n for every x C ˜ ;

2)

U (x) = x for every x H ˜ n .

In other words, a polynomial operator assigns to each 2π-periodic function a trigonometric polynomial of degree at most n and leaves these polynomials themselves fixed.

The simplest example of a polynomial operator is the operator S n studied in 2.2. Another example is provided by the operator that associates with a function one of its (trigonometric) interpolation polynomials, constructed with respect to a fixed system of weights.

Let us introduce the following notation. If y = U(x), then the value of the function y for a given s will be denoted by U(x; s). For example, S_n(x; s) = S_n(x)(s). Further, if x(t) is a function in C̃, then we shall denote by x_h(t) the function obtained from x(t) by translating the argument:

x h ( t ) = x ( t + h ) .

Clearly x h C ˜ , for every h. Notice also that, as x C ˜ is uniformly continuous,

(6) ‖x_h − x‖ = max_t |x_h(t) − x(t)| → 0  when h → 0.

It is possible to establish some very important general facts concerning polynomial operators and sequences of polynomial operators. These are all based on the following lemma, which connects an arbitrary polynomial operator with the simplest one, namely S n .

LEMMA 1.

If U is a polynomial operator of degree n, then we have the identity

(7) (1/2π) ∫_0^{2π} U(x_τ; s − τ) dτ = S_n(x; s)     (x ∈ C̃)

(the Zygmund–Marcinkiewicz–Berman identity).

Proof. First suppose that x H ˜ n , so that x τ H ˜ n also. Then

U(x_τ; s − τ) = x_τ(s − τ) = x(s).

But, since S n is a polynomial operator of degree n, we have also

S_n(x; s) = x(s),

which proves the identity in this case.

Now suppose that x(t) = cos mt or x(t) = sin mt, where m > n. Restricting ourselves for definiteness to the first case, we have

x_τ(t) = cos m(t + τ) = cos mt cos mτ − sin mt sin mτ = x_1(t) cos mτ + x_2(t) sin mτ.

Therefore, if we set

y_1 = U(x_1),  y_2 = U(x_2)

(y_1, y_2 being elements of H̃_n), then

U(x_τ; s − τ) = y_1(s − τ) cos mτ + y_2(s − τ) sin mτ.

But y 1(s − τ) and y 2(s − τ) are trigonometric polynomials in τ of degree not exceeding n, so they are orthogonal to the functions cos m τ and sin m τ. Hence

(1/2π) ∫_0^{2π} U(x_τ; s − τ) dτ = 0.

The right-hand side of (7) is clearly also zero.

Thus identity (7) is proved in this case, and therefore—since both sides are additive in x —also for arbitrary trigonometric polynomials.

Now we consider the left-hand side of (7) and prove that, for fixed s, it is a continuous linear functional on C ˜ .

Denoting the left-hand side of (7) by f s (x), we verify that the functional f s makes sense for any element x C ˜ . We do this by showing that the integrand is continuous in τ. We have

|U(x_{τ+h}; s − τ − h) − U(x_τ; s − τ)| ≤ |U(x_{τ+h}; s − τ − h) − U(x_τ; s − τ − h)| + |U(x_τ; s − τ − h) − U(x_τ; s − τ)| ≤ ‖U(x_{τ+h} − x_τ)‖ + |U(x_τ; s − τ − h) − U(x_τ; s − τ)| ≤ ‖U‖ ‖x_{τ+h} − x_τ‖ + |U(x_τ; s − τ − h) − U(x_τ; s − τ)|.

For sufficiently small h, the first term here is as small as we please, by (6), and the same is true of the second term, since U (x τ, t) is continuous.

The functional f s is obviously additive, and, since

|f_s(x)| ≤ (1/2π) ∫_0^{2π} |U(x_τ; s − τ)| dτ ≤ (1/2π) ∫_0^{2π} ‖U‖ ‖x_τ‖ dτ = ‖U‖ ‖x‖,

it is also a bounded functional.

Finally, if we write g_s(x) = S_n(x; s), then (7) amounts to the assertion that f_s and g_s coincide. But it was shown earlier that they coincide on the dense subset of trigonometric polynomials in C̃. Since they are continuous functionals, it follows from this that they coincide on the whole of C̃, which is what we required to prove.
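Identity (7) can also be verified numerically for a concrete polynomial operator. The sketch below is an added illustration under an assumption not made in the text: U is taken to be trigonometric interpolation of degree n at the 2n + 1 equidistant points t_j = 2πj/(2n + 1); averaging U(x_τ; s − τ) over τ then reproduces S_n(x; s).

```python
import numpy as np

# A numerical sketch of identity (7), assuming the polynomial operator U is
# trigonometric interpolation of degree n at the 2n+1 equidistant points
# t_j = 2*pi*j/(2n+1) (an example chosen here, not specified in the text).

n = 3
N = 2 * n + 1
t_nodes = 2 * np.pi * np.arange(N) / N
x = lambda t: np.exp(np.sin(t)) + 0.5 * np.cos(5 * t)      # a test function in C~

def dirichlet(u):
    """D_n(u) = sin((2n+1)u/2)/sin(u/2), with the removable singularity filled."""
    u = np.where(np.abs(np.sin(u / 2)) < 1e-12, 1e-12, u)  # crude guard; limit is N
    return np.sin(N * u / 2) / np.sin(u / 2)

def U(xfun, s):
    """Trigonometric interpolation of degree n, evaluated at s."""
    return sum(xfun(tj) * dirichlet(s - tj) for tj in t_nodes) / N

def S_n(xfun, s, grid=20000):
    """Partial Fourier sum of degree n at s, coefficients by quadrature."""
    t = 2 * np.pi * np.arange(grid) / grid
    xt = xfun(t)
    total = xt.mean()                                       # a_0 / 2
    for k in range(1, n + 1):
        a_k = 2 * np.mean(xt * np.cos(k * t))
        b_k = 2 * np.mean(xt * np.sin(k * t))
        total += a_k * np.cos(k * s) + b_k * np.sin(k * s)
    return total

s = 1.3
taus = 2 * np.pi * (np.arange(4096) + 0.5) / 4096
average = np.mean([U(lambda t, tau=tau: x(t + tau), s - tau) for tau in taus])
print("average of U(x_tau; s - tau):", average)
print("S_n(x; s)                   :", S_n(x, s))           # the two values agree
```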

THEOREM 2.

Among all trigonometric polynomial operators of degree n, the operator S n has the least norm: that is, we always have

‖U‖ ≥ ‖S_n‖ > A ln n.

For, by (7),

‖S_n(x)‖ = max_s |S_n(x; s)| ≤ (1/2π) ∫_0^{2π} max_s |U(x_τ; s − τ)| dτ ≤ ‖U‖ ‖x‖,

so that || S n || ⩽ || U ||.

THEOREM 3 (Lozinskii–Kharshiladze).

If {U n } is a sequence of trigonometric polynomial operators, where U n has degree n, then the norms of these operators tend to infinity. In particular, no such sequence can be convergent on the whole space C ˜ .

The first statement follows immediately from the last theorem. The second is obtained using Theorem 1.1.

As a special case of the theorem just stated, we note the following important fact. For any given points of interpolation, there exists a continuous 2π-periodic function whose associated sequence of interpolation polynomials is not uniformly convergent (Faber's Theorem).

We note without proof that the statements of Theorems 2 and 3 apply also to the space L 1.

One can, in an analogous fashion, consider (algebraic) polynomial operators on C [0, 1], by which we mean continuous linear operators that map C [0, 1] into the subspace H n of all algebraic polynomials of degree at most n and leave the elements of H n fixed. However, it is more convenient to study the algebraic case by reducing it to the trigonometric case (see Natanson-I).

Let us consider the (algebraic) polynomial operator U n assigning to a continuous function the n -th partial sum of its Fourier series with respect to a given system of orthogonal polynomials. By applying the algebraic analogue of Theorem 3 to sequences of operators, we obtain the result (Nikolaev [1]): for any system of orthogonal polynomials, there exists a continuous function whose Fourier series with respect to this system is not uniformly convergent.

Theorems 2 and 3 were established by S. M. Lozinskii and F. I. Kharshiladze (see Lozinskii [1], [2]). S. M. Lozinskii has made a far-reaching development of these ideas.

2.4. Let us consider the problem of representing functions by singular integrals.

Suppose we are given a sequence of functions {K n (s, t)} on the square [a, b; a, b]. A function x (s) is said to be representable by a singular integral if the sequence

(8) x_n(s) = ∫_a^b K_n(s, t) x(t) dt     (n = 1, 2, …)

converges to x (s) in one sense or another.

Singular integrals occur regularly in various problems in analysis. By way of example we mention the Dirichlet integral, the Fejér integral, the de la Vallée–Poussin integral, the Hilbert integral, etc.

Let us prove a theorem on the convergence in mean of singular integrals in L 1(a, b), restricting ourselves to continuous kernels.

THEOREM 4.

A necessary and sufficient condition for the sequence (8) to converge to x in the space L 1, for every summable function x, is that

1)

(9) ∫_a^b |∫_a^b K_n(s, t) x(t) dt − x(s)| ds → 0  (n → ∞)

for every x in a complete subset D of L 1 ; and
2)

(10) ∫_a^b |K_n(s, t)| ds ≤ M     (t ∈ [a, b]; n = 1, 2, …).

For if U n denotes the operator on L 1associated with the kernel K n (s, t):

y = U n ( x ) , y ( s ) = a b K n ( s , t ) x ( t ) d t .

then the first condition amounts to requiring that U n (x) → x for xD, and the second that || U n || ⩽ M, by V.2.5. Thus the stated result is a special case of the Banach–Steinhaus Theorem (or, more precisely, of Remark 2 following it) (see 1.2).

REMARK 1.

For the sufficiency part, condition 1) can be replaced by two simpler conditions, namely

(11) ∫_α^β K_n(s, t) dt → 1  (n → ∞)     (s ∈ (α, β) ⊂ [a, b]),

(12) ∫_a^b |K_n(s, t)| dt ≤ M     (s ∈ [a, b]; n = 1, 2, …).

It is easy to verify that in this case one can take D to be the collection of characteristic functions of intervals contained in [a, b].

In fact, if x ˜ is the characteristic function of [α, β], then, for s ∈ (α, β),

x̃_n(s) = ∫_α^β K_n(s, t) dt → 1 = x̃(s)  (n → ∞);

while if s ∉ [α, β] and we have, say, s < α, then

x̃_n(s) = ∫_α^β K_n(s, t) dt = ∫_a^β K_n(s, t) dt − ∫_a^α K_n(s, t) dt → 1 − 1 = 0 = x̃(s)  (n → ∞).

Thus x̃_n(s) → x̃(s) for all s ≠ α, β; that is, almost everywhere. Condition (12) ensures that we can take the limit under the integral sign in ∫_a^b |∫_a^b K_n(s, t) x̃(t) dt − x̃(s)| ds, from which we see that this limit is zero.

REMARK 2.

If the kernels K n (s, t) are symmetric, the condition (12) coincides with condition 2) of the theorem; hence conditions (11) and (12) are in this case both necessary and sufficient.

If the operators U n associated with the kernels K n (s, t) are regarded as operators on C [a, b], then one obtains, in an analogous way, conditions for the sequence (8) to converge uniformly to a continuous function x (s).

The reader can obtain more detailed information on singular integrals in Dunford and Schwartz-II and in Natanson [1].
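For a concrete check of conditions (10)–(12), one can take the periodic analogue of (8) with the Fejér kernel, which the text lists among the classical singular integrals. The sketch below is an added illustration (the periodic setting of 2.2, the point s and the interval (α, β) are assumptions of the example): the kernel is nonnegative, its integral over a period equals 1 in each variable, and ∫_α^β K_n(s, t) dt → 1 for s inside (α, β).

```python
import numpy as np

# A numerical sketch (assumed example: the Fejer kernel in the periodic
# setting of 2.2):  K_n(s, t) = F_n(s - t) / (2*pi),
#     F_n(u) = (1/(n+1)) * (sin((n+1)u/2) / sin(u/2))^2.
# The kernel is nonnegative, its integral over a period is 1 in each
# variable (so M = 1 works in (10) and (12)), and the mass concentrates on
# any interval (alpha, beta) around s, as required by (11).

def fejer(u, n):
    u = np.asarray(u, dtype=float)
    num = np.sin((n + 1) * u / 2) ** 2
    den = np.sin(u / 2) ** 2
    out = np.where(den < 1e-15, (n + 1.0) ** 2, num / np.maximum(den, 1e-15))
    return out / (n + 1)

grid = 2 * np.pi * (np.arange(100000) + 0.5) / 100000      # midpoints on [0, 2*pi]
dt = 2 * np.pi / grid.size
s = 1.0                                                     # a fixed point
alpha, beta = 0.5, 1.5                                      # an interval containing s

for n in (4, 16, 64, 256):
    Kn_row = fejer(s - grid, n) / (2 * np.pi)               # t -> K_n(s, t)
    total = Kn_row.sum() * dt                               # condition (12): = 1
    mask = (grid > alpha) & (grid < beta)
    local = Kn_row[mask].sum() * dt                         # condition (11): -> 1
    print(f"n={n:3d}  int|K_n(s,t)|dt = {total:.6f}   int_alpha^beta = {local:.6f}")
```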

2.5. In conclusion, we consider the problem of generalized summation of series (see Zygmund).

Suppose we have a numerical sequence

(13) a_1 + a_2 + … + a_n + …

and let {s k } be the sequence of its partial sums. We introduce the infinite matrix

(14) (α_{nk})  (n, k = 1, 2, …)

and form the expression

(15) σ_n = Σ_{k=1}^∞ α_{nk} s_k     (n = 1, 2, …),

assuming that all the series on the right-hand side converge.

The series (13) is said to be generalized-summable by means of the matrix (14) if the sequence {σ n } has a finite limit. The value

σ = lim n σ n

of this limit is called the generalized sum of the series (13).

For example, the Cesàro summation method, where

σ_n = (1/n) Σ_{k=1}^n s_k,

is characterized by the matrix

Instead of talking of generalized sums of series, one can consider the problem of defining the generalized limit of a sequence, associating with a given numerical sequence x = {ξ k } the sequence

σ_n(x) = Σ_{k=1}^∞ α_{nk} ξ_k     (n = 1, 2, …)

and studying its behaviour as n → ∞. Clearly, one way of formulating the problem is easily reduced to the other. Since the sequence point of view turns out to be more convenient, we shall restrict ourselves to this, though we retain the term "summation method".

It is natural to consider only those summation methods that are applicable to every sequence which converges in the usual sense, and that assign the usual limit as the generalized limit of such a sequence. Summation methods having this property are called permanent or regular.

Conditions for the permanence of the summation method defined by (14) are formulated in the following theorem.

THEOREM 5 (Toeplitz).

A necessary and sufficient condition for the summation method defined by the matrix (14) to be permanent is that

1

lim_{n→∞} α_{nk} = 0     (k = 1, 2, …);

2

lim_{n→∞} Σ_{k=1}^∞ α_{nk} = 1;

3

Σ_{k=1}^∞ |α_{nk}| ≤ M     (n = 1, 2, …).

Proof. Consider the following functionals on the space c of convergent sequences:

σ_n(x) = Σ_{k=1}^∞ α_{nk} ξ_k     (x = {ξ_k}; n = 1, 2, …)

and

σ(x) = lim_{k→∞} ξ_k.

It was shown in VI.2.2 that

‖σ_n‖ = Σ_{k=1}^∞ |α_{nk}|     (n = 1, 2, …),

so condition 3) means that the functionals σ n have bounded norms.

We also introduce the sequences

x_0 = (1, 1, …, 1, …)  and  x_k = (0, …, 0, 1, 0, …)

(with unity in the k -th place).

Since

σ_n(x_0) = Σ_{k=1}^∞ α_{nk},  σ_n(x_k) = α_{nk}     (n, k = 1, 2, …),

conditions 1) and 2) can be expressed in the form

σ_n(x_k) → 0 = σ(x_k)  (n → ∞),  σ_n(x_0) → 1 = σ(x_0)  (n → ∞).

Since the collection of elements x 0, x 1,…, x k ,… is complete in c, the conditions of the theorem agree with those of the Banach–Steinhaus Theorem (taking into account Remarks 1 and 2).

Hence 1)–3) are necessary and sufficient conditions for

σ_n(x) → σ(x)  (n → ∞),

and this is just the required permanence condition for the summation method.

The matrix associated with the Cesàro summation method obviously satisfies the conditions of the theorem. From this follows the well-known fact that arithmetic mean (Cesàro) summation is a permanent summation method.
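The Cesàro method also gives a concrete check of Theorem 5. The sketch below is an added illustration: the matrix α_{nk} = 1/n (k ≤ n), 0 (k > n) satisfies conditions 1)–3), preserves the limit of a convergent sequence, and assigns the generalized limit 1/2 to the divergent sequence 1, 0, 1, 0, … (the partial sums of 1 − 1 + 1 − …).

```python
import numpy as np

# A small sketch of Theorem 5 for the Cesaro method: alpha_{nk} = 1/n for
# k <= n and 0 otherwise.  Conditions 1)-3) hold, the method is permanent,
# and it sums the divergent sequence 1, 0, 1, 0, ... to 1/2.

def cesaro_row(n, length):
    row = np.zeros(length)
    row[:n] = 1.0 / n
    return row

N = 2000
print("column k=3 tends to 0:", [cesaro_row(n, N)[2] for n in (10, 100, 1000)])
print("row sums (condition 2):", [cesaro_row(n, N).sum() for n in (10, 100, 1000)])
print("sum |alpha_nk| (condition 3):",
      [np.abs(cesaro_row(n, N)).sum() for n in (10, 100, 1000)])

# Permanence: a convergent sequence keeps its limit ...
xi = 1.0 + 1.0 / np.arange(1, N + 1)               # xi_k -> 1
print("sigma_n for a convergent sequence:", cesaro_row(1000, N) @ xi)

# ... and the divergent sequence 1, 0, 1, 0, ... gets the generalized limit 1/2.
s = np.arange(1, N + 1) % 2
print("sigma_n for 1,0,1,0,...:", cesaro_row(1001, N) @ s)
```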

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780080230368500138

Spectral collocation method for solving Fredholm integral equations on the half-line

Azedine Rahmoune , in Applied Mathematics and Computation, 2013

4 Convergence analysis

Before starting our analysis of the convergence, it should be noted that the main advantage of the approach used previously via scaled Laguerre–Gauss interpolation is to truncate the unbounded domain Λ to a bounded domain containing all the collocation points, which may be taken as the smallest closed one, denoted by Λ̂, such that |u(t)| ≈ 0 for all t outside of Λ̂. This truncation also enables us to use the results of standard spectral projection methods.

Here, for simplicity, denote K_n = P_n K and f_n = P_n f; then Eq. (5) is rewritten as

(31) u n - K n u n = f n .

It is well known that to establish convergence for projection methods, the usual assumptions made are

(i) ‖K − K_n‖ → 0,

(ii) f_n → f;

under these assumptions we have the following result (see [2]).

Theorem 1

Assume that (i) and (ii) hold. Then for all n ≥ n_0, (I − K_n)^{−1} exists and is uniformly bounded, and ‖u − u_n‖ → 0. The error estimate

(32) ‖u − u_n‖ ≤ ‖(I − K_n)^{−1}‖ ‖u − P_n u‖

holds.

First, since K is assumed to be compact, according to [3] the Banach–Steinhaus theorem implies that ‖K − K_n‖ → 0. On the other hand, according to the Erdös–Turán theorem (see [9]) we have:

Theorem 2

Let L_n(f) be the polynomial interpolating a continuous function f on the bounded truncated domain Λ̂ at the scaled Laguerre–Gauss–Radau points {ξ_k^β}_{k=0}^n. Then

lim_{n→∞} ∫_{Λ̂} |f − L_n(f)|² dξ = 0.

Using this, it follows that f_n → f for all f ∈ C(Λ̂). Then the two assumptions of Theorem 1 are satisfied, and it follows from Eq. (32) that u_n(t) converges in mean square to u(t).

4.1 Condition number

We consider κ(A_n), the condition number of the related sequence of matrix problems A_n c = γ, where A_n is the (n + 1)-dimensional representation of the operator I − K_n defined by (26). It is well known (see [2,3]) that

(33) κ(A_n) = ‖A_n‖ ‖A_n^{−1}‖ ≤ ‖A_n‖ ‖P_n‖ ‖Γ_n^{−1}‖ ‖(I − K_n)^{−1}‖,

where Γ_n = (L_k^β(ξ_j^β))_{j,k=0}^n. First, it can be verified numerically that for n less than 100 we get 3 ≤ ‖Γ_n^{−1}‖ < 5. Furthermore, according to Theorem 1 we have, for some n_0,

(34) sup_{n ≥ n_0} ‖(I − K_n)^{−1}‖ ≤ σ.

It is also clear that P_n is linear and of finite rank. In addition, it is assumed to act from C(Λ) to itself. So, κ(A_n) will be reasonably well behaved if σ is not large.
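The behaviour of κ(A_n) described by (33) and (34) can be illustrated on a toy problem. The sketch below is an added illustration and not the paper's Laguerre collocation scheme: it uses a generic Nyström-type discretization of I − K on a truncated interval, with an assumed smooth kernel of norm less than 1, and shows that cond(A_n) stays bounded as n grows.

```python
import numpy as np

# A hedged numerical sketch (a generic Nystrom-type discretization, not the
# paper's Laguerre collocation scheme): for an assumed smooth kernel with
# operator norm below 1, the matrices A_n approximating I - K have condition
# numbers that stay bounded as n grows, in line with (33)-(34).

def discretize(n, L=10.0):
    # composite midpoint rule on an assumed truncated interval [0, L]
    t = (np.arange(n) + 0.5) * L / n
    w = np.full(n, L / n)
    kernel = 0.4 * np.exp(-np.abs(t[:, None] - t[None, :]))   # assumed test kernel
    return np.eye(n) - kernel * w[None, :]

for n in (16, 32, 64, 128, 256):
    A = discretize(n)
    print(f"n={n:3d}   cond(A_n) = {np.linalg.cond(A):.4f}")
```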

Read full article

URL:

https://www.sciencedirect.com/science/article/pii/S0096300313002968