Enumerating Matroids and Linear Spaces

We show that the number of linear spaces on a set of $n$ points and the number of rank-3 matroids on a ground set of size $n$ are both of the form $(cn+o(n))^{n^2/6}$, where $c=e^{\sqrt 3/2-3}(1+\sqrt 3)/2$. This is the final piece of the puzzle for enumerating fixed-rank matroids at this level of accuracy: the numbers of rank-1 and rank-2 matroids on a ground set of size $n$ have exact representations in terms of well-known combinatorial functions, and it was recently proved by van der Hofstad, Pendavingh, and van der Pol that for constant $r\ge 4$ there are $(e^{1-r}n+o(n))^{n^{r-1}/r!}$ rank-$r$ matroids on a ground set of size $n$. In our proof, we introduce a new approach for bounding the number of clique decompositions of a complete graph, using quasirandomness instead of the so-called entropy method that is common in this area.


Introduction
Matroids (also sometimes known as combinatorial geometries) are fundamental objects that abstract the combinatorial properties of linear independence in vector spaces. Specifically, a matroid consists of a ground set E and a collection I of subsets of E called independent sets. The defining properties of a matroid are that:
• the empty set is independent (that is, ∅ ∈ I);
• subsets of independent sets are independent (if A′ ⊆ A ⊆ E and A ∈ I, then A′ ∈ I);
• if A and B are independent sets, and |A| > |B|, then an independent set can be constructed by adding an element of A \ B to B (there is a ∈ A \ B such that B ∪ {a} ∈ I).
Observe that any finite set of elements in a vector space (over any field) naturally gives rise to a matroid, though most matroids do not arise this way. The rank of a matroid is the maximum size of an independent set.
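For concreteness, the three axioms can be checked by brute force on small examples; the following sketch (our own illustrative code, not part of the paper) does exactly that.

```python
from itertools import combinations

def is_matroid(ground_set, independent_sets):
    """Brute-force check of the three matroid axioms."""
    I = {frozenset(s) for s in independent_sets}
    # Axiom 1: the empty set is independent.
    if frozenset() not in I:
        return False
    # Axiom 2: every subset of an independent set is independent.
    for A in I:
        for r in range(len(A)):
            if any(frozenset(s) not in I for s in combinations(A, r)):
                return False
    # Axiom 3 (exchange): if |A| > |B|, some a in A \ B extends B.
    for A in I:
        for B in I:
            if len(A) > len(B) and not any(B | {a} in I for a in A - B):
                return False
    return True

# The uniform matroid U_{2,3}: all subsets of {0,1,2} with at most 2 elements.
U23 = [s for r in range(3) for s in combinations(range(3), r)]
```

For instance, `is_matroid(range(3), U23)` holds, while the independence system with independent sets ∅, {0}, {1}, {2}, {0,1} fails the exchange axiom (take A = {0,1} and B = {2}).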
Enumeration of matroids is a classical topic, though the state of our knowledge is rather incomplete. Some early upper and lower bounds on the total number of matroids on a ground set of size n were obtained in the 1970s by Piff and Welsh [16], Piff [15] and Knuth [9], and these bounds were improved only recently by Bansal, Pendavingh, and van der Pol [2]. It is also of interest to enumerate matroids of fixed rank: let m(n, r) be the number of rank-r matroids on a ground set of size n. It is trivial to see that m(n, 1) = 2^n − 1, and it is also possible to prove the exact identity m(n, 2) = b(n + 1) − 2^n, where b(m) is the mth Bell number (which counts the number of partitions of an m-element set). This identity seems to have been first observed by Acketa [1].
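The identity m(n, 2) = b(n + 1) − 2^n is easy to illustrate numerically for small n; below is a quick sketch (the Bell numbers are computed with the standard Bell-triangle recurrence; the helper names are ours).

```python
def bell(m):
    """The m-th Bell number, via the Bell triangle."""
    row = [1]
    for _ in range(m):
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
    return row[0]

def rank1_count(n):  # m(n, 1) = 2^n - 1
    return 2**n - 1

def rank2_count(n):  # m(n, 2) = b(n + 1) - 2^n
    return bell(n + 1) - 2**n
```

For example, by the identity, rank2_count(3) = b(4) − 2³ = 15 − 8 = 7 rank-2 matroids on a 3-element ground set.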
For r ≥ 3, an exact expression for m(n, r) in terms of well-known functions does not seem to be possible, but after some exciting recent developments, rather precise asymptotic expressions have become available. First, Pendavingh and van der Pol [14] observed that (for constant r ≥ 1) the lower bound m(n, r) ≥ (e^{1−r}n + o(n))^{n^{r−1}/r!} follows from Keevash's breakthrough work [7, 8] on existence and enumeration of combinatorial designs. They also proved an upper bound of the form m(n, r) ≤ (en + o(n))^{n^{r−1}/r!}. Even more recently, van der Hofstad, Pendavingh and van der Pol [18] closed the gap for all r ≥ 4, proving that m(n, r) = (e^{1−r}n + o(n))^{n^{r−1}/r!} for constant r ≥ 4. In the remaining case r = 3 they were able to prove m(n, 3) ≤ (ne^{1+β} + o(n))^{n²/6} ≈ (1.4n)^{n²/6}, where −0.67 < β < −0.65 is the solution to a certain variational problem. In this paper, we close the gap completely in this remaining case r = 3.
In fact, Theorem 1.1 is really a corollary of the following theorem, estimating the number of linear spaces on a set of n points. In incidence geometry, a linear space on a point set P is a collection of subsets of at least two points of P (called lines) such that each pair of points lies in a unique line (see for example [3, 17] for more on linear spaces). For reasons that will become clear in a moment, we denote the number of linear spaces on a set of n points by p(n, 3).
We remark that one may also be interested in linear spaces in which no line has exactly 2 points (these are called proper linear spaces). It should be possible to adapt our proof to show that the expression in Theorem 1.2 is also a valid estimate for the number of proper linear spaces on a set of n points (though this would require some rather deep machinery due to Keevash [7] and McKay and Wormald [12]). See Remark 3.3 for discussion.
To explain the connection between Theorems 1.1 and 1.2 we need to make a few more definitions. A d-partition (or generalised partition of type d) of a ground set E is a collection of subsets of E (called parts) each having size at least d, such that every subset of d elements of E is contained in exactly one of the parts. So, a 1-partition is an ordinary partition, and a 2-partition is a linear space. For any r ≥ 2, there is a correspondence between the set of (r − 1)-partitions of E and the set of so-called paving matroids of rank r on the ground set E. Namely, a paving matroid of rank r is a matroid whose set of hyperplanes (maximal subsets with rank r − 1) forms an (r − 1)-partition of its ground set. See for example [20, Section 3] for more details.
For r ≥ 2 let p(n, r) be the number of paving matroids of rank r, or equivalently the number of (r − 1)-partitions, on a ground set of size n. Given the above correspondence, we trivially have p(n, r) ≤ m(n, r), and it was proved by Pendavingh and van der Pol [14, Theorem 3] that p(n, r) ≤ m(n, r) ≤ p(n, r)^{1+O(1/n)} for constant r. So Theorem 1.1 is a direct consequence of Theorem 1.2, and for the rest of the paper we will abandon the language of matroids and focus on Theorem 1.2.
In fact, we find it convenient to use the language of graph theory: note that a linear space on a set of n points is precisely equivalent to a clique-decomposition of the complete graph K_n (meaning, a decomposition of the edges of K_n into nonempty cliques of arbitrary sizes).
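For example, the seven lines of the Fano plane form a linear space on 7 points, i.e. a decomposition of the edges of K_7 into seven triangles. The sketch below (our own illustrative code) checks the defining property directly.

```python
from itertools import combinations

def is_clique_decomposition(n, cliques):
    """Check that the given cliques cover each edge of K_n exactly once
    (equivalently, that the cliques form a linear space on n points)."""
    covered = [frozenset(e) for c in cliques for e in combinations(sorted(c), 2)]
    return len(covered) == len(set(covered)) == n * (n - 1) // 2

# Lines of the Fano plane, as triangles on the vertex set {0, ..., 6}.
fano_lines = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
              (1, 4, 6), (2, 3, 6), (2, 4, 5)]
```

Each of the 7 lines covers 3 of the 21 edges of K_7, with no edge covered twice, so the lines decompose K_7.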
1.1. Discussion of proof techniques. If one is interested in counting the number of decompositions of K_n into cliques which each have a fixed number of vertices k, this is a problem about enumerating combinatorial designs. Specifically, such a decomposition corresponds exactly to a design called a (2, k, n)-Steiner system. Such Steiner systems can be enumerated using powerful tools due to Keevash [7, 8]: in particular, if n satisfies certain necessary divisibility conditions, the number of such Steiner systems can be written as $n^{(\alpha_k+o(1))n^2}$, where $\alpha_k = \frac{k-2}{k(k-1)}$. Note that α_k is maximised for two different k: namely, when k = 3 and when k = 4. This suggests that decompositions containing mostly 3-cliques and 4-cliques comprise the bulk of the clique-decompositions counted by p(n, 3).
The above observation motivates our proof strategy, and we believe it also explains why counting (r − 1)-partitions and rank-r matroids is most difficult when r = 3 (if r ≥ 4, then one can do a similar calculation for hypergraph clique-decompositions and see that there is a single maximising value of k).
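The maximisation mentioned above can be checked directly, under our reading that the relevant exponent coefficient is α_k = (k − 2)/(k(k − 1)) (roughly, n^{k−2} choices for each of about n²/(k(k − 1)) cliques):

```python
from fractions import Fraction

def alpha(k):
    # Assumed exponent coefficient for decompositions of K_n into k-cliques:
    # roughly n^(k-2) choices for each of ~n^2/(k(k-1)) cliques.
    return Fraction(k - 2, k * (k - 1))

values = {k: alpha(k) for k in range(3, 21)}
best = max(values.values())
maximisers = sorted(k for k, v in values.items() if v == best)
```

Indeed α_3 = α_4 = 1/6, matching the exponent n²/6 in the main theorems, and α_k < 1/6 for all k ≥ 5.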
For the lower bound in Theorem 1.2 (namely, that there are at least about (0.16n)^{n²/6} clique-decompositions), we proceed in a very similar fashion as in [8]: we consider a random process that builds a clique-decomposition by iteratively removing random 3-cliques and 4-cliques from K_n (with a particular carefully chosen ratio between the two), until a very small number of edges remain (these edges are then treated as 2-cliques in our decomposition). We then study the number of possible outcomes of this process. The details of the lower bound appear in Section 3.
The upper bound is more interesting. In [8], Keevash is able to upper-bound the number of Steiner systems by adapting an approach of Linial and Luria [11], using the so-called entropy method. Roughly speaking, the idea is as follows. To prove an upper bound on the number of k-clique decompositions of K_n, it suffices to prove an upper bound on the entropy of a uniformly random k-clique decomposition P. In order to specify an outcome of P, it suffices to specify, for each edge e ∈ K_n, the clique C_e containing e. Therefore, one can upper-bound the entropy of P by considering an ordering $e_1, \dots, e_{\binom n2}$ of the edges of K_n, and upper-bounding the conditional entropy of each $C_{e_i}$, given the previous cliques $C_{e_1}, \dots, C_{e_{i-1}}$. If $e_1, \dots, e_{\binom n2}$ is a random ordering, then it is possible to upper-bound these conditional entropies by studying the expected number of possible choices for $C_{e_i}$ given $C_{e_1}, \dots, C_{e_{i-1}}$. This is possible due to a certain symmetry of k-clique decompositions: namely, Keevash makes crucial use of the fact that in any k-clique decomposition, for any edge e and almost all k-cliques C ⊆ K_n containing e, there are exactly $(\binom k2 - 1)^2$ edges e′ ∉ C such that $C_{e'}$ and C share an edge (meaning that after $C_{e'}$ is revealed, C can be ruled out as a possible outcome of C_e).
For decompositions of K_n into cliques of mixed sizes, an analogous symmetry property does not hold, and the number of edges e′ whose clique $C_{e'}$ intersects a particular clique C depends on the structure of our clique-decomposition. So, we cannot prove the upper bound in Theorem 1.2 by a straightforward generalisation of Keevash's proof. Instead, we exploit a different symmetry property of clique-decompositions, generalising an observation in [10], as follows. Suppose P is a clique-decomposition into cliques of bounded size (say, each of the cliques in P has at most 100 vertices). Then, if we take the union of a random subset of the cliques in P, where each clique is included independently with probability p ∈ (0, 1), we are very likely to arrive at a quasirandom graph with density about p (i.e., a graph whose "local statistics" resemble a random subgraph of K_n obtained by including each edge with probability p independently). Sweeping some details under the rug, this means that we can give an upper bound on the number of ways to choose a clique-decomposition with a prescribed number of cliques of each size (the precise statement is in Lemma 2.2), by counting in a clique-by-clique manner, where at each step the number of choices for a k-clique is roughly the expected number of k-cliques in a random graph of the appropriate density. We remark that our approach seems to be more flexible than the entropy method, for problems of this type: it is possible to recover all of Keevash's upper bounds in this way (though with slightly weaker quantitative aspects). Also, in our view, our clique-by-clique approach is more naturally in correspondence with the clique-by-clique processes used to prove lower bounds in this area.
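The starting point of this symmetry property — each edge of K_n lies in exactly one clique of P, so a p-random subset of the cliques covers each edge with probability exactly p — can be simulated on a small example. Below we use the 12 lines of the affine plane AG(2, 3), which form a triangle-decomposition of K_9 (an illustrative sketch with our own helper names; checking genuine quasirandomness would require inspecting local statistics too).

```python
import random
from itertools import combinations

# The 12 lines of AG(2,3): points are Z_3 x Z_3, lines are {p, p+d, p+2d}.
points = [(x, y) for x in range(3) for y in range(3)]
lines = {frozenset(((x + i * dx) % 3, (y + i * dy) % 3) for i in range(3))
         for (x, y) in points for (dx, dy) in [(0, 1), (1, 0), (1, 1), (1, 2)]}
lines = sorted(lines, key=sorted)

def average_union_density(p, trials, seed=0):
    """Average edge density of the union of a p-random subset of the cliques."""
    rng = random.Random(seed)
    total = 9 * 8 // 2
    acc = 0.0
    for _ in range(trials):
        covered = {frozenset(e) for line in lines if rng.random() < p
                   for e in combinations(sorted(line), 2)}
        acc += len(covered) / total
    return acc / trials
```

Since each edge lies in exactly one line, the expected density is exactly p; concentration (and genuine quasirandomness) is where the bounded clique sizes come in.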
Finally, having an upper bound for the number of clique-decompositions with a prescribed number s_k of k-cliques for each k ≤ 100 (and no cliques with more than 100 vertices), it remains to show that the contribution from cliques with more than 100 vertices is negligible, and to optimise our formula over choices of s_2, …, s_100. For the former, we use a very crude encoding argument (Lemma 2.1). The latter is a simple calculus exercise (essentially, we use the method of Lagrange multipliers; see Lemma 2.5). In agreement with the heuristic mentioned earlier, we find that our formula is maximised when only s_3 and s_4 are non-negligible.
1.2. Notation. We use standard asymptotic notation throughout, as follows. For functions f = f(n) and g = g(n), we write f = O(g) to mean that there is a constant C such that |f| ≤ C|g|, f = Ω(g) to mean that there is a constant c > 0 such that f(n) ≥ c|g(n)| for sufficiently large n, and f = o(g) to mean that f/g → 0 as n → ∞. Also, following [7], we write x = (1 ± ε)y to mean (1 − ε)y ≤ x ≤ (1 + ε)y. We write N_G(v) to denote the neighbourhood of a vertex v in a graph G (i.e., the set of vertices adjacent to v). For a real number x, the floor and ceiling functions are denoted ⌊x⌋ = max{i ∈ Z : i ≤ x} and ⌈x⌉ = min{i ∈ Z : i ≥ x}. We will however mostly omit floor and ceiling symbols and assume large numbers are integers, wherever divisibility considerations are not important. Finally, all logarithms in this paper are in base e.

The upper bound
2.1. Removing the contribution from large parts. We first reduce to the case where all cliques have bounded size. Related ideas appeared in [18].
Proof of Lemma 2.1. Fix a clique-decomposition P ∈ Γ_{s_2,…,s_L;E}. Let P_1 be the "truncated" clique-decomposition obtained from P by first removing each clique with more than L vertices, and then adding two-vertex cliques (i.e., single edges) for each of the edges which are no longer covered by a clique. Then P is uniquely determined by the pair (P_1, P_2), where P_2 contains all the cliques in P with more than L vertices.
There are at most 2^{|E|−1} ways to choose a sequence s_{L+1}, …, s_n such that $\sum_{t=L+1}^{n} \binom t2 s_t = |E|$. Indeed, such a sequence can be interpreted as an integer partition of |E| (where we are only allowed to use parts which have size of the form $\binom t2$ for t > L), and the number of partitions of an integer N is at most its number of compositions, which is 2^{N−1}. For each such s_{L+1}, …, s_n, the number of possibilities for P_2 which contain exactly s_t cliques of each size t > L is at most $\prod_{t=L+1}^{n} \binom nt^{s_t} \le n^{\Sigma}$, where $\Sigma = \sum_{t=L+1}^{n} t\, s_t$. Then, we observe that 2^{|E|−1} n^Σ ≤ n^{|E|/5} for L ≥ 11 and n large enough.
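The bound used here (the number of partitions of N is at most the number 2^{N−1} of compositions of N) is easy to confirm for small N; a quick sketch with our own helper:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_partitions(N, max_part=None):
    """Number of partitions of N into parts of size at most max_part."""
    if max_part is None:
        max_part = N
    if N == 0:
        return 1
    # Choose the largest part p, then partition the rest into parts <= p.
    return sum(num_partitions(N - p, p) for p in range(1, min(N, max_part) + 1))
```

For instance, N = 4 has the five partitions 4, 3+1, 2+2, 2+1+1, 1+1+1+1, compared with 2³ = 8 compositions.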

2.2. Counting decompositions into prescribed numbers of bounded-size cliques. We now estimate the number of clique-decompositions with prescribed numbers of cliques of each (bounded) size. We will later optimise over choices of these prescribed numbers.
Lemma 2.2. Fix a constant L ∈ N and integers s_2, …, s_L ≥ 0 such that $\sum_{k=2}^{L}\binom k2 s_k = \binom n2$. Let Γ_{s_2,…,s_L} be the set of all clique-decompositions of K_n whose number of t-cliques is s_t for each t. Then
$$|\Gamma_{s_2,\dots,s_L}| \le e^{n^{2-\Omega(1)}}\, e^{-\binom n2} \prod_{k=2}^{L} \frac{(n^k/k!)^{s_k}}{s_k!}.$$
We define an ordered clique-decomposition of K_n to be an ordered list of cliques whose edge-disjoint union is equal to K_n. Let Ξ_{s_2,…,s_L} be the set of all orderings of clique-decompositions in Γ_{s_2,…,s_L}. First, we need the following modification of [10, Lemma 2.6], showing that for initial segments of a random ordered clique-decomposition, the graph of uncovered edges is "typical"/"quasirandom".
Lemma 2.3. Fix a constant L ∈ N and any integers s_2, …, s_L ≥ 0 such that $\sum_{k=2}^{L}\binom k2 s_k = \binom n2$. Consider a uniformly random ordered clique-decomposition of K_n from Ξ_{s_2,…,s_L}, which has N := s_2 + ⋯ + s_L cliques, and let G_m be the random subgraph of K_n consisting of the edges not appearing in the first m cliques of our random ordered clique-decomposition. Then with probability 1 − o_{n→∞}(1), for all 0 ≤ m ≤ N and all sets of vertices A with |A| ≤ L, we have
$$\Bigl|\bigcap_{v\in A} N_{G_m}(v)\Bigr| = (1 - m/N)^{|A|}\, n + O(\sqrt n \log n).$$
Proof. Fix a particular choice of m and A; we will take a union bound over all such choices. It suffices to consider a uniformly random ordering of a fixed clique-decomposition P ∈ Γ_{s_2,…,s_L} (i.e., we prove the desired statement conditioned on any outcome of the unordered set of cliques in our random ordered clique-decomposition).
The first m cliques in our random ordering comprise a uniformly random subset R ⊆ P of m cliques in P. Consider the closely related "binomial" random subset R′ ⊆ P, where each clique is included with probability m/N independently; let G′_m contain the edges of K_n which do not appear in any of the cliques in R′.
Since the cliques in P are edge-disjoint, note that there are at most $\binom{|A|}{2} = O(1)$ cliques in P that include more than one vertex of A. Let U be the set of vertices in these atypical cliques. Now, for each v ∉ A and w ∈ A there is exactly one clique $e^w_v$ in P containing v and w, whose presence in R′ would prevent v from appearing in $\bigcap_{w\in A} N_{G'_m}(w)$. For each fixed v ∉ U the cliques $e^w_v$, for w ∈ A, are distinct, so writing $Q = |\bigcap_{w\in A} N_{G'_m}(w)|$ we have $\mathbb{E}Q = (1 - m/N)^{|A|}\, n + O(1)$. Now let I be the set of cliques of P which contain a vertex of A. We have |I| = O(n) since the cliques in P are edge-disjoint. Note that Q is entirely determined by I ∩ R′, and adding or removing any clique from R′ affects Q by at most L − 1 = O(1). So by the Azuma–Hoeffding inequality, for t ≥ 0 we have Pr(|Q − EQ| ≥ t) ≤ 2 exp(−Ω(t²/n)). It follows that with probability at least 1 − n^{−10L} we have |Q − (1 − m/N)^{|A|} n| ≤ √n log n. Recall that we have been considering the "binomial" random subset R′; we can transfer this result to the "uniform" random subset R using a standard inequality (for example, the so-called Pittel inequality; see [6, p. 17]). Then, we take a union bound over choices of m and A.
We also need the fact that the cliques of different sizes are "well-distributed" in a random ordered clique-decomposition.
Lemma 2.4. Fix a constant L ∈ N and integers s_2, …, s_L ≥ 0 such that $\sum_{k=2}^{L}\binom k2 s_k = \binom n2$. Consider a uniformly random ordered clique-decomposition (of K_n) from Ξ_{s_2,…,s_L} (which has N := s_2 + ⋯ + s_L cliques). Then with probability 1 − o_{n→∞}(1), for any 0 ≤ m < m′ ≤ N and any 2 ≤ k ≤ L, if we consider all the cliques ranging from the (m + 1)-th to the m′-th in our random ordered clique-decomposition, the number of such cliques that have exactly k vertices differs from s_k(m′ − m)/N by at most n log n.
Proof. As in the proof of Lemma 2.3, it suffices to consider a uniformly random ordering of a fixed clique-decomposition P ∈ Γ_{s_2,…,s_L}. The desired result then follows from a concentration inequality for the hypergeometric distribution (see for example [6, (2.5) and Theorem 2.10]) and the union bound.
Now we are ready to prove Lemma 2.2.
Proof of Lemma 2.2. Let N = s_2 + ⋯ + s_L, and let c be a very small constant (c = 1/(10L²) will do). We will count ordered clique-decompositions in Ξ_{s_2,…,s_L}, and then at the end of the proof we will divide by N!.
Partition the interval {1, …, N} into sub-intervals I_1, …, I_{n^c}, each of length about Nn^{−c}, by taking $I_i = \{\lfloor (i-1)Nn^{-c}+1\rfloor, \dots, \lfloor iNn^{-c}\rfloor\}$. Let $m_i = \min I_i = \lfloor (i-1)Nn^{-c}+1\rfloor$ be the first index in each I_i. Say that an ordered clique-decomposition P ∈ Ξ_{s_2,…,s_L} is ordinary if for each 1 ≤ i ≤ n^c, the following hold.
Almost all ordered clique-decompositions in Ξ_{s_2,…,s_L} are ordinary by Lemmas 2.3 and 2.4, so it suffices to prove an upper bound on the number of ordinary decompositions. For each 1 ≤ i ≤ n^c, we consider separately the number of choices for the cliques indexed by indices in I_i, for an ordinary ordered clique-decomposition. Let γ_i = 1 − (i − 1)n^{−c}. Now, (1) implies that for all k ≤ L, the number of k-cliques in G^{(i)} is at most
$$(1 + n^{-\Omega(1)})\, \gamma_i^{\binom k2}\, n^k/k!. \quad (2.1)$$
To see this, we count the number of ways to choose an ordered list of k vertices inducing a clique, in a vertex-by-vertex fashion, then divide by k!.
So, given (2), we can multiply these estimates to see that the number of ways to choose the cliques indexed by I_i is at most
$$\prod_{k=2}^{L}\Bigl((1 + n^{-\Omega(1)})\, \gamma_i^{\binom k2}\, n^k/k!\Bigr)^{s_k|I_i|/N + O(n\log n)}.$$
We next take the product of this expression over all 1 ≤ i ≤ n^c, and divide by the number of orderings N! of each clique-decomposition, to obtain the desired result. We note that this involves an approximation by a Riemann integral:
$$n^{-c}\sum_{i=1}^{n^c} \log \gamma_i = \int_0^1 \log(1 - x)\,dx + o(1) = -1 + o(1).$$
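The Riemann-integral approximation used here, with γ_i = 1 − (i − 1)n^{−c}, amounts to (1/M) ∑_{i=1}^{M} log(1 − (i − 1)/M) → ∫₀¹ log(1 − x) dx = −1; a quick numerical sketch:

```python
import math

M = 10**5  # stands in for n^c, the number of sub-intervals
riemann_sum = sum(math.log(1 - (i - 1) / M) for i in range(1, M + 1)) / M
# The sum equals log(M! / M^M) / M, which tends to -1 by Stirling's approximation.
```

With M = 10⁵ the sum already agrees with −1 to roughly 10⁻⁴.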

2.3. Optimising over prescribed clique numbers. Given Lemmas 2.1 and 2.2, the upper bound in Theorem 1.2 will be a simple consequence of the following lemma.
Lemma 2.5. Fix a constant L ∈ N, let
$$D = \Bigl\{(s_2,\dots,s_L)\in \mathbb{R}_{\ge 0}^{L-1} : \sum_{k=2}^{L}\tbinom k2 s_k = \tbinom n2\Bigr\},$$
and consider the real-valued function f : D → R defined by
$$f(s_2,\dots,s_L) = \sum_{k=2}^{L}\bigl(s_k\log(n^k/k!) - s_k\log s_k + s_k + \tau(k)s_k\bigr) - \tbinom n2,$$
where τ(2) = (log n)/5 and τ(k) = 0 for k > 2. Then, the maximum value of f(s_2, …, s_L) is $\tfrac{n^2}{6}\log(cn) + o(n^2)$, where c = e^{√3/2−3}(1 + √3)/2 is as in Theorem 1.2.
Proof. Since f is continuous on D and D is compact, our function f attains a maximum. First, we claim that a maximum can only be attained when all s_k are strictly positive. Indeed, consider some (s_2, …, s_L) ∈ D for which s_k = 0, in which case there must be some s_j > 0. We make a slight perturbation: increase s_k to $\binom j2 \varepsilon$ and decrease s_j by $\binom k2 \varepsilon$, for some very small ε > 0 (note that we are still in D), and consider the corresponding change to the value of f. Note that the terms containing s_j decrease by O(ε) but the terms containing s_k increase by Ω(ε log(1/ε)). So, our perturbation has increased the value of f, which proves the claim.
Note that if we increase s_k by $\binom j2 \varepsilon$ and decrease s_j by $\binom k2 \varepsilon$, for some very small ε > 0, then the value of f increases by
$$\varepsilon\Bigl(\tbinom j2\bigl(\log(n^k/k!) - \log s_k + \tau(k)\bigr) - \tbinom k2\bigl(\log(n^j/j!) - \log s_j + \tau(j)\bigr)\Bigr) + o(\varepsilon),$$
where we set τ(k) = (log n)/5 when k = 2, and τ(k) = 0 when k ≠ 2. (This essentially follows from taking a derivative.) So, a maximum can only occur when each
$$\tbinom k2^{-1}\bigl(\log(n^k/k!) - \log s_k + \tau(k)\bigr)$$
takes a common value λ. For this λ, we see that $s_k = (n^k/k!)\,e^{\tau(k) - \binom k2 \lambda}$. Now, recalling the definition of D, we have
$$\sum_{k=2}^{L}\tbinom k2\,\frac{n^k}{k!}\,e^{\tau(k) - \binom k2\lambda} = \tbinom n2.$$
There is a unique λ satisfying this equation, because the left-hand side of the equation is monotonically decreasing in λ. Now, if λ = (log n)/3 + α, for |α| ≤ 1, then we may compute $s_3 = (1+o(1))e^{-3\alpha}n^2/6$ and $s_4 = (1+o(1))e^{-6\alpha}n^2/24$, while $\sum_{k\ne 3,4}\binom k2 s_k = o(n^2)$; the constraint then forces $e^{-3\alpha} = \sqrt 3 - 1 + o(1)$, and substituting back yields the claimed maximum value.
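As a numerical sanity check, one can verify that the optimum reproduces the constant c = e^{√3/2−3}(1 + √3)/2 from Theorem 1.2. The intermediate expression for log c below is our own rearrangement (writing a₃ and a₄ for the fractions of edges covered by 3-cliques and 4-cliques at the optimum); only the closed form for c appears in the theorem.

```python
import math

a3 = math.sqrt(3) - 1   # fraction of edges in 3-cliques at the optimum (our derivation)
a4 = 2 - math.sqrt(3)   # fraction of edges in 4-cliques; a3 + a4 = 1
# First-order condition from the Lagrange-multiplier computation: a3^2 = 2*a4.
log_c = -3 + a3 * (1 - math.log(a3)) + (a4 / 2) * (1 - math.log(2) - math.log(a4))
c_closed_form = math.exp(math.sqrt(3) / 2 - 3) * (1 + math.sqrt(3)) / 2
```

Both expressions evaluate to c ≈ 0.1616, consistent with the "(0.16n)^{n²/6}" heuristic in Section 1.1.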

The lower bound
The lower bound in Theorem 1.2 is an immediate consequence of the following estimate.
Lemma 3.1. Let $s_3 = \lfloor(\sqrt 3 - 1)n^2/6\rfloor$ and $s_4 = \lfloor(2 - \sqrt 3)n^2/12\rfloor$. For c > 0, let Γ_c be the collection of clique-decompositions of K_n in which there are s_3 − ⌊n^{2−c}⌋ cliques with 3 vertices, s_4 cliques with 4 vertices, and the rest are cliques with 2 vertices. If c > 0 is sufficiently small then
$$|\Gamma_c| \ge \bigl(e^{\sqrt 3/2 - 3}(1 + \sqrt 3)/2 \cdot n + o(n)\bigr)^{n^2/6}.$$
To prove Lemma 3.1 we need a notion of "typicality" (called "quasirandomness" in [10]), closely related to the property in Lemma 2.3.
Definition 3.2. For an n-vertex, m-edge graph G, we define its density $p(G) = m/\binom n2$. We say that G is (ε, h)-typical if for every set A of at most h vertices of G, the vertices in A have $(1 \pm \varepsilon)\, p(G)^{|A|}\, n$ common neighbours.
Note that if an n-vertex graph G with density p is (ε, h)-typical, then for each k ≤ h + 1 it has $(1 + O(\varepsilon))\, p^{\binom k2}\, n^k/k!$ copies of K_k (counting ordered cliques vertex-by-vertex, as for (2.1)).
Proof. Given a graph G and an integer k, we define its K_k-removal process as follows. Starting from the graph G, at each step we consider the set of all copies of K_k in our graph, choose one uniformly at random, and remove its edges. (Eventually we will run out of copies of K_k, at which point the process aborts.) We will need the following facts about the behaviour of the K_3-removal process and the K_4-removal process.
(1) There is a > 0 such that the following holds. If we run the K_4-removal process on K_n, then with probability 1 − o(1): (a) the process does not abort before s_4 steps, and (b) for each t ≤ s_4, the graph at step t is (n^{−a}, 3)-typical.
(2) For every a > 0 there is c > 0 such that the following holds. Let G be an n-vertex graph with $m := \binom n2 - 6s_4 = (3 - O(1/n))s_3$ edges which is (n^{−a}, 2)-typical. If we run the K_3-removal process on G, then with probability 1 − o(1): (a) the process does not abort before s_3 − n^{2−c} steps, and (b) for each t ≤ s_3 − n^{2−c}, the graph at step t is (n^{−c}, 2)-typical.
For a simple proof of Fact (2), see [10, Theorem 4.1]. Fact (1) can be proved in basically exactly the same way (in fact it is slightly simpler, because we start from the complete graph instead of a general typical graph). See [8, Section 6] for some discussion of (a generalisation of) the K_k-removal process starting from a complete graph, which implies the desired result. Now, we simply concatenate the K_4-removal process and the K_3-removal process. Indeed, starting from the complete graph K_n, we first run s_4 steps of the K_4-removal process, then s_3 − n^{2−c} steps of the K_3-removal process. In this way, either we abort or we produce a clique-decomposition in Γ_c, in which our set of 4-cliques and our set of 3-cliques are both equipped with an ordering. Let Q be the set of outcomes of our concatenated process for which in each of the first s_4 steps, our graph is (n^{−b}, 3)-typical, and in each of the next s_3 − n^{2−c} steps, our graph is (n^{−c}, 2)-typical.
The probability of each outcome in Q is at most
$$e^{n^{2-\Omega(1)}} \prod_{t=1}^{s_4}\Bigl(\gamma_t^{6}\, n^4/4!\Bigr)^{-1} \prod_{t=1}^{s_3 - n^{2-c}}\Bigl(\delta_t^{3}\, n^3/3!\Bigr)^{-1}, \quad (3.1)$$
where γ_t and δ_t denote the densities of our graph before the t-th step of the K_4-removal and K_3-removal processes respectively (here we use the clique counts in typical graphs noted after Definition 3.2). By (1)–(2) above, the probabilities of the outcomes in Q sum to 1 − o(1), so the number of outcomes in Q is at least 1 − o(1) divided by the expression in (3.1). It follows that
$$|\Gamma_c| \ge \frac{1 - o(1)}{s_4!\, s_3!}\, e^{-n^{2-\Omega(1)}} \prod_{t=1}^{s_4} \gamma_t^{6}\, n^4/4! \prod_{t=1}^{s_3 - n^{2-c}} \delta_t^{3}\, n^3/3!.$$
(The difference between taking a sum up to s_3 and up to s_3 − n^{2−c} is easily seen to contribute to the negligible exp(−n^{2−Ω(1)}) factor.) Let $a_3 = 3s_3/\binom n2$ and $a_4 = 6s_4/\binom n2$, and note that a_3 + a_4 = 1 − O(1/n). By Stirling's approximation we compute that log |Γ_c| is at least
$$s_3\log(n^3/3!) + s_4\log(n^4/4!) - s_3\log s_3 + s_3 - s_4\log s_4 + s_4 - \binom n2 - n^{2-\Omega(1)}.$$
Substituting the values of s_3 and s_4 and simplifying (or alternatively, comparing with the expressions in the proof of Lemma 2.5) yields the desired result.
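The removal processes at the heart of this proof can be simulated directly. The naive sketch below (our own code; nothing like the careful analysis needed for Facts (1)–(2)) runs the K_k-removal process until no copy of K_k remains.

```python
import random
from itertools import combinations

def clique_removal_process(n, k, seed=0):
    """Run the K_k-removal process on K_n: repeatedly pick a uniformly random
    copy of K_k among the remaining edges and remove its edges.
    Returns (removed cliques, leftover edges)."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in combinations(range(n), 2)}
    cliques = []
    while True:
        candidates = [c for c in combinations(range(n), k)
                      if all(frozenset(e) in edges for e in combinations(c, 2))]
        if not candidates:
            return cliques, edges
        chosen = rng.choice(candidates)
        cliques.append(chosen)
        for e in combinations(chosen, 2):
            edges.remove(frozenset(e))
```

For k = 3 the leftover graph is triangle-free, so by Mantel's theorem it has at most n²/4 edges; in practice the process typically covers almost all edges, which is what Fact (2) quantifies.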
Remark 3.3. In Lemma 3.1 we consider clique-decompositions that have a small number of "trivial" cliques with two vertices. We believe that it is possible to adapt the proof to avoid such cliques, but this requires some of Keevash's deepest results on clique-decompositions of quasirandom graphs. Namely, for any constant k, Keevash's machinery [8] allows one to estimate the number of K_k-decompositions of any dense quasirandom graph satisfying certain divisibility conditions (the number of edges should be divisible by $\binom k2$ and every degree should be divisible by k − 1; say such a graph is K_k-divisible). So, in order to prove a version of Lemma 3.1 in which no clique has exactly two vertices (thereby proving a version of Theorem 1.2 for proper linear spaces), it suffices to prove a suitable lower bound on the number of ways to partition the edges of K_n into a K_3-divisible quasirandom graph with density (√3 − 1)/2 + o(1), a K_4-divisible quasirandom graph with density (2 − √3)/2 + o(1) and a tiny "remainder graph" with O(1) edges, itself decomposable into cliques which have more than two vertices. A suitable lower bound on the number of such graph partitions can be proved with some elementary number theory and the machinery of McKay and Wormald [12] for enumerating graphs with a given dense degree sequence (the remainder graph is just to handle divisibility issues, and it turns out we can always choose it to be either a copy of K_5, a copy of K_7, or a vertex-disjoint union K_5 ∪ K_7).

Lemma 2.1. Fix L ≥ 11 and n sufficiently large as a function of L. Let Γ_{s_2,…,s_L;E} denote the set of clique-decompositions of K_n for which there are E edges covered by cliques with more than L vertices, and there are s_k cliques with k vertices for each 2 ≤ k ≤ L. Then $|\Gamma_{s_2,\dots,s_L;E}| \le n^{|E|/5}\, |\Gamma_{s_2+E,\, s_3,\dots,s_L;\,0}|$.

(1) The graph G^{(i)} := G_{m_i−1}, consisting of those edges not covered by the first m_i − 1 cliques of P, satisfies the conclusion of Lemma 2.3.
(2) For each 1 ≤ i ≤ n^c and 2 ≤ k ≤ L, the number of cliques ranging from the m_i-th to the (m_{i+1} − 1)-th which have exactly k vertices satisfies the conclusion of Lemma 2.4.

2.4. Deducing the upper bound. We now give the short deduction of the upper bound in Theorem 1.2 using Lemmas 2.1, 2.2, and 2.5.
Proof of the upper bound in Theorem 1.2. Let L = 11. The sets Γ_{s_2,…,s_L;E} defined in Lemma 2.1 form a partition of the set of all clique-decompositions of K_n. There are at most $n^{O(1)} = e^{n^{2-\Omega(1)}}$ choices of s_2, …, s_L, E, so it suffices to upper-bound the maximum possible value of |Γ_{s_2,…,s_L;E}|. By Lemma 2.1 it in fact suffices to upper-bound the maximum possible value of $n^{s_2/5}\, |\Gamma_{s_2,\dots,s_L;\,0}|$. This is precisely what is accomplished by Lemmas 2.2 and 2.5.