Abstract
Fano varieties are basic building blocks in geometry – they are ‘atomic pieces’ of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers which gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, one should be able to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to the question: does the quantum period of X know the dimension of X? Note that there is as yet no theoretical understanding of this. We show that a simple feedforward neural network can determine the dimension of X with 98% accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of X from its quantum period. Our results demonstrate that machine learning can pick out structure from complex mathematical data in situations where we lack theoretical understanding. They also give positive evidence for the conjecture that the quantum period of a Fano variety determines that variety.
Introduction
Algebraic geometry describes shapes as the solution sets of systems of polynomial equations, and manipulates or analyses a shape X by manipulating or analysing the equations that define X. This interplay between algebra and geometry has applications across mathematics and science; see e.g., refs. ^{1,2,3,4}. Shapes defined by polynomial equations are called algebraic varieties. Fano varieties are a key class of algebraic varieties. They are, in a precise sense, atomic pieces of mathematical shapes^{5,6}. Fano varieties also play an essential role in string theory. They provide, through their ‘anticanonical sections’, the main construction of the Calabi–Yau manifolds which give geometric models of spacetime^{7,8,9}.
The classification of Fano varieties is a longstanding open problem. The only one-dimensional example is a line; this is classical. The ten smooth two-dimensional Fano varieties were found by del Pezzo in the 1880s^{10}. The classification of smooth Fano varieties in dimension three was a triumph of 20th century mathematics: it combines work by Fano in the 1930s, Iskovskikh in the 1970s, and Mori–Mukai in the 1980s^{11,12,13,14,15,16}. Beyond this, little is known, particularly for the important case of Fano varieties that are not smooth.
A new approach to Fano classification centres around a set of ideas from string theory called Mirror Symmetry^{17,18,19,20}. From this perspective, the key invariant of a Fano variety is its regularised quantum period^{21}
\[{\widehat{G}}_{X}(t)=\mathop{\sum }\limits_{d= 0}^{\infty }{c}_{d}{t}^{d}.\qquad (1)\]
This is a power series with coefficients c_{0} = 1, c_{1} = 0, and c_{d} = r_{d}d!, where r_{d} is a certain Gromov–Witten invariant of X. Intuitively speaking, r_{d} is the number of rational curves in X of degree d that pass through a fixed generic point and have a certain constraint on their complex structure. In general r_{d} can be a rational number, because curves with a symmetry group of order k are counted with weight 1/k, but in all known cases the coefficients c_{d} in (1) are integers.
It is expected that the regularised quantum period \({\widehat{G}}_{X}\) uniquely determines X. This is true (and proven) for smooth Fano varieties in low dimensions, but is unknown in dimensions four and higher, and for Fano varieties that are not smooth.
In this paper we will treat the regularised quantum period as a numerical signature for the Fano variety X, given by the sequence of integers (c_{0}, c_{1}, …). A priori this looks like an infinite amount of data, but in fact there is a differential operator L such that \(L{\widehat{G}}_{X}\equiv 0\); see e.g., [ref. ^{21}, Theorem 4.3]. This gives a recurrence relation that determines all of the coefficients c_{d} from the first few terms, so the regularised quantum period \({\widehat{G}}_{X}\) contains only a finite amount of information. Encoding a Fano variety X by a vector in \({{\mathbb{Z}}}^{m+1}\) given by finitely many coefficients (c_{0}, c_{1}, …, c_{m}) of the regularised quantum period allows us to investigate questions about Fano varieties using machine learning.
In this paper, we ask whether the regularised quantum period of a Fano variety X knows the dimension of X. There is currently no viable theoretical approach to this question. Instead, we use machine learning methods applied to a large dataset to argue that the answer is probably yes, and then prove that the answer is yes for toric Fano varieties of low Picard rank. The use of machine learning was essential to the formulation of our rigorous results (Theorems 5 and 6 below). This work is, therefore, proof-of-concept for a larger programme, demonstrating that machine learning can uncover previously unknown structure in complex mathematical datasets. Thus, the Data Revolution, which has had such impact across the rest of science, also brings important new insights to pure mathematics^{22,23,24,25,26,27}. This is particularly true for large-scale classification questions, e.g., refs. ^{28,29,30,31,32}, where these methods can potentially reveal both the classification itself and structural relationships within it.
Results
Algebraic varieties can be smooth or have singularities
Depending on their equations, algebraic varieties can be smooth (as in Fig. 1a) or have singularities (as in Fig. 1b). In this paper, we consider algebraic varieties over the complex numbers. The equations in Fig. 1a, b, therefore, define complex surfaces; however, for ease of visualisation, we have plotted only the points on these surfaces with coordinates that are real numbers.
Most of the algebraic varieties that we consider below will be singular, but they all have a class of singularities called terminal quotient singularities. This is the most natural class of singularities to allow from the point of view of Fano classification^{6}. Terminal quotient singularities are very mild; indeed, in dimensions one and two, an algebraic variety has terminal quotient singularities if and only if it is smooth.
The Fano varieties that we consider
The fundamental example of a Fano variety is projective space \({{\mathbb{P}}}^{N-1}\). This is a quotient of \({{\mathbb{C}}}^{N}\setminus \{0\}\) by the group \({{\mathbb{C}}}^{\times }\), where the action of \(\lambda \in {{\mathbb{C}}}^{\times }\) identifies the points (x_{1}, x_{2}, …, x_{N}) and (λx_{1}, λx_{2}, …, λx_{N}). The resulting algebraic variety is smooth and has dimension N − 1. We will consider generalisations of projective spaces called weighted projective spaces and toric varieties of Picard rank two. A detailed introduction to these spaces is given in the Supplementary Notes.
To define a weighted projective space, choose positive integers a_{1}, a_{2}, …, a_{N} such that any subset of size N − 1 has no common factor, and consider the quotient
\[{\mathbb{P}}({a}_{1},{a}_{2},\ldots,{a}_{N})=\left({{\mathbb{C}}}^{N}\setminus \{0\}\right)/{{\mathbb{C}}}^{\times }\]
where the action of \(\lambda \in {{\mathbb{C}}}^{\times }\) identifies the points
\[({x}_{1},\,{x}_{2},\ldots,\,{x}_{N})\qquad {{{\rm{and}}}}\qquad ({\lambda }^{{a}_{1}}{x}_{1},\,{\lambda }^{{a}_{2}}{x}_{2},\ldots,\,{\lambda }^{{a}_{N}}{x}_{N})\]
in \({{\mathbb{C}}}^{N}\setminus \{0\}\). The quotient \({\mathbb{P}}({a}_{1},{a}_{2},\ldots,{a}_{N})\) is an algebraic variety of dimension N − 1. A general point of \({\mathbb{P}}({a}_{1},{a}_{2},\ldots,{a}_{N})\) is smooth, but there can be singular points. Indeed, a weighted projective space \({\mathbb{P}}({a}_{1},{a}_{2},\ldots,{a}_{N})\) is smooth if and only if a_{i} = 1 for all i, that is, if and only if it is a projective space.
To define a toric variety of Picard rank two, choose a matrix
\[\left(\begin{array}{cccc}{a}_{1}&{a}_{2}&\cdots &{a}_{N}\\ {b}_{1}&{b}_{2}&\cdots &{b}_{N}\end{array}\right)\qquad (2)\]
with nonnegative integer entries and no zero columns. This defines an action of \({{\mathbb{C}}}^{\times }\times {{\mathbb{C}}}^{\times }\) on \({{\mathbb{C}}}^{N}\), where \((\lambda,\, \mu )\in {{\mathbb{C}}}^{\times }\times {{\mathbb{C}}}^{\times }\) identifies the points
\[({x}_{1},\,{x}_{2},\ldots,\,{x}_{N})\qquad {{{\rm{and}}}}\qquad ({\lambda }^{{a}_{1}}{\mu }^{{b}_{1}}{x}_{1},\,{\lambda }^{{a}_{2}}{\mu }^{{b}_{2}}{x}_{2},\ldots,\,{\lambda }^{{a}_{N}}{\mu }^{{b}_{N}}{x}_{N})\]
in \({{\mathbb{C}}}^{N}\). Set a = a_{1} + a_{2} + ⋯ + a_{N} and b = b_{1} + b_{2} + ⋯ + b_{N}, and suppose that (a, b) is not a scalar multiple of (a_{i}, b_{i}) for any i. This determines linear subspaces
of \({{\mathbb{C}}}^{N}\), and we consider the quotient
\[X=\left({{\mathbb{C}}}^{N}\setminus S\right)\big/{{\mathbb{C}}}^{\times }\times {{\mathbb{C}}}^{\times }\qquad (3)\]
where S = S_{+} ∪ S_{−}. The quotient X is an algebraic variety of dimension N − 2 and second Betti number b_{2}(X) ≤ 2. If, as we assume henceforth, the subspaces S_{+} and S_{−} both have dimension two or more then b_{2}(X) = 2, and thus X has Picard rank two. In general X will have singular points, the precise form of which is determined by the weights in (2).
There are closed formulas for the regularised quantum period of weighted projective spaces and toric varieties^{33}. We have
\[{\widehat{G}}_{{\mathbb{P}}}(t)=\mathop{\sum }\limits_{k= 0}^{\infty }\frac{(ka)!}{(k{a}_{1})!\,(k{a}_{2})!\cdots (k{a}_{N})!}\,{t}^{ka}\qquad (4)\]
where \({\mathbb{P}}= {\mathbb{P}}({a}_{1},\ldots,{a}_{N})\) and a = a_{1} + a_{2} + ⋯ + a_{N}, and
\[{\widehat{G}}_{X}(t)=\sum _{(k,l)\in C\cap {{\mathbb{Z}}}^{2}}\frac{(ka+lb)!}{\mathop{\prod }\nolimits_{i= 1}^{N}(k{a}_{i}+l{b}_{i})!}\,{t}^{ka+lb}\qquad (5)\]
where the weights for X are as in (2), and C is the cone in \({{\mathbb{R}}}^{2}\) defined by the inequalities a_{i}x + b_{i}y ≥ 0, i \(\in\) {1, 2, …, N}. Formula (4) implies that, for weighted projective spaces, the coefficient c_{d} from (1) is zero unless d is divisible by a. Formula (5) implies that, for toric varieties of Picard rank two, c_{d} = 0 unless d is divisible by gcd{a, b}.
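These closed formulas are easy to evaluate in practice. The following Python sketch is our illustration, not the Magma code used to build the paper's datasets: it computes the coefficients c_{d} for a weighted projective space from the multinomial-type expression \(c_{ka}= (ka)!/((k{a}_{1})!\cdots (k{a}_{N})!)\) given by formula (4). For \({{\mathbb{P}}}^{2}= {\mathbb{P}}(1,1,1)\) it recovers the familiar values c_{3} = 6 and c_{6} = 90.

```python
from math import factorial

def wps_period_coefficients(weights, num_terms):
    """Coefficients c_0, ..., c_{num_terms-1} of the regularised quantum
    period of P(a_1, ..., a_N): by formula (4), c_d = 0 unless d = k*a
    with a = a_1 + ... + a_N, in which case
    c_d = (k*a)! / ((k*a_1)! * ... * (k*a_N)!)."""
    a = sum(weights)
    coeffs = [0] * num_terms
    k = 0
    while k * a < num_terms:
        denominator = 1
        for ai in weights:
            denominator *= factorial(k * ai)
        coeffs[k * a] = factorial(k * a) // denominator
        k += 1
    return coeffs

print(wps_period_coefficients((1, 1, 1), 8))  # [1, 0, 0, 6, 0, 0, 90, 0]
```

The coefficients are exact integers, so Python's arbitrary-precision arithmetic avoids overflow; an analogous evaluation of formula (5) would sum over the lattice points of the cone C instead.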
Data generation: weighted projective spaces
The following result characterises weighted projective spaces with terminal quotient singularities; this is [ref. ^{34}, Proposition 2.3].
Proposition 1
Let \(X= {\mathbb{P}}({a}_{1},\,{a}_{2},\ldots,\,{a}_{N})\) be a weighted projective space of dimension at least three. Then X has terminal quotient singularities if and only if
\[\mathop{\sum }\limits_{i= 1}^{N}\left\{\frac{k{a}_{i}}{a}\right\}\in \{2,3,\ldots,\,N-2\}\]
for each k \(\in\) {2, …, a − 2}. Here a = a_{1} + a_{2} + ⋯ + a_{N} and {q} denotes the fractional part q − ⌊q⌋ of \(q\in {\mathbb{Q}}\).
A simpler necessary condition is given by [ref. ^{35}, Theorem 3.5]:
Proposition 2
Let \(X= {\mathbb{P}}({a}_{1},\,{a}_{2},\ldots,\,{a}_{N})\) be a weighted projective space of dimension at least two, with weights ordered a_{1} ≤ a_{2} ≤ … ≤ a_{N}. If X has terminal quotient singularities then a_{i}/a < 1/(N − i + 2) for each i \(\in\) {3, …, N}.
Weighted projective spaces with terminal quotient singularities have been classified in dimensions up to four^{34,36}. Classifications in higher dimensions are hindered by the lack of an effective upper bound on a.
We randomly generated 150,000 distinct weighted projective spaces with terminal quotient singularities, and with dimension up to 10, as follows. We generated random sequences of weights a_{1} ≤ a_{2} ≤ … ≤ a_{N} with a_{N} ≤ 10N and discarded them if they failed to satisfy any one of the following:

1.
for each i \(\in\) {1, …, N}, \(\gcd \{{a}_{1},\ldots,\,{\widehat{a}}_{i},\ldots,\,{a}_{N}\}= 1\), where \({\widehat{a}}_{i}\) indicates that a_{i} is omitted;

2.
a_{i}/a < 1/(N − i + 2) for each i \(\in\) {3, …, N};

3.
\(\mathop{\sum }\nolimits_{i= 1}^{N}\{k{a}_{i}/a\}\in \{2,\ldots,\,N-2\}\) for each k \(\in\) {2, …, a − 2}.
Condition 1 here was part of our definition of weighted projective spaces above; it ensures that the set of singular points in \({\mathbb{P}}({a}_{1},{a}_{2},\ldots,{a}_{N})\) has dimension at most N − 2, and also that weighted projective spaces are isomorphic as algebraic varieties if and only if they have the same weights. Condition 2 is from Proposition 2; it efficiently rules out many nonterminal examples. Condition 3 is the necessary and sufficient condition from Proposition 1. We then deduplicated the sequences. The resulting sample sizes are summarised in Table 1.
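The three-step filter above can be implemented directly with exact rational arithmetic. The function below is our sketch of that filter (the paper's actual implementation is in Magma); the weight sequence is assumed to be sorted in increasing order, as in Proposition 2.

```python
from fractions import Fraction
from math import gcd

def is_candidate(weights):
    """Check conditions 1-3 from the text for a sorted weight sequence
    (a_1 <= ... <= a_N); condition 3 is the terminality criterion of
    Proposition 1, evaluated exactly with rational arithmetic."""
    N = len(weights)
    a = sum(weights)
    # Condition 1: omitting any one weight leaves a coprime set.
    for i in range(N):
        g = 0
        for w in weights[:i] + weights[i + 1:]:
            g = gcd(g, w)
        if g != 1:
            return False
    # Condition 2: a_i / a < 1 / (N - i + 2) for each i in {3, ..., N}.
    for i in range(3, N + 1):
        if Fraction(weights[i - 1], a) >= Fraction(1, N - i + 2):
            return False
    # Condition 3: the sum of fractional parts {k a_i / a} lies in
    # {2, ..., N-2} for every k in {2, ..., a-2}.
    for k in range(2, a - 1):
        s = sum(Fraction(k * w, a) - (k * w // a) for w in weights)
        if not (s.denominator == 1 and 2 <= s <= N - 2):
            return False
    return True
```

A random search would then draw sorted weight sequences with a_{N} ≤ 10N, keep those for which `is_candidate` returns `True`, and deduplicate.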
Data generation: toric varieties
Deduplicating randomly generated toric varieties of Picard rank two is harder than deduplicating randomly generated weighted projective spaces, because different weight matrices in (2) can give rise to the same toric variety. Toric varieties are uniquely determined, up to isomorphism, by a combinatorial object called a fan^{37}. A fan is a collection of cones, and one can determine the singularities of a toric variety X from the geometry of the cones in the corresponding fan.
We randomly generated 200,000 distinct toric varieties of Picard rank two with terminal quotient singularities, and with dimension up to 10, as follows. We randomly generated weight matrices, as in (2), such that 0 ≤ a_{i}, b_{j} ≤ 5. We then discarded the weight matrix if any column was zero, and otherwise formed the corresponding fan F. We discarded the weight matrix unless:

1.
F had N rays;

2.
each cone in F was simplicial (i.e., its number of rays equals its dimension);

3.
the convex hull of the primitive generators of the rays of F contained no lattice points other than the rays and the origin.
Conditions 1 and 2 together guarantee that X has Picard rank two, and are equivalent to the conditions on the weight matrix in (2) given in our definition. Conditions 2 and 3 guarantee that X has terminal quotient singularities. We then deduplicated the weight matrices according to the isomorphism type of F, by putting F in normal form^{38,39}. See Table 1 for a summary of the dataset.
Data analysis: weighted projective spaces
We computed an initial segment (c_{0}, c_{1}, …, c_{m}) of the regularised quantum period for all the examples in the sample of 150,000 terminal weighted projective spaces, with m ≈ 100,000. The nonzero coefficients c_{d} appeared to grow exponentially with d, and so we considered \({\{\log {c}_{d}\}}_{d\in S}\) where \(S= \{d\in {{\mathbb{Z}}}_{\ge 0}:{c}_{d}\,\ne \,0\}\). To reduce dimension, we fitted a linear model to the set \(\{(d,\log {c}_{d}):d\in S\}\) and used the slope and intercept of this model as features; see Fig. 2a for a typical example. Plotting the slope against the y-intercept and colouring datapoints according to the dimension we obtain Fig. 3a: note the clear separation by dimension. A Support Vector Machine (SVM) trained on 10% of the slope and y-intercept data predicted the dimension of the weighted projective space with an accuracy of 99.99%. Full details are given in the Supplementary Methods.
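The feature extraction described here can be sketched in a few lines (our illustration; the paper's pipeline and hyperparameters may differ). We regress \(\log {c}_{d}\) on d over the nonzero coefficients and keep the slope and intercept; a classifier such as scikit-learn's `SVC` can then be trained on these two features.

```python
import math
import numpy as np

def slope_intercept_features(coeffs):
    """Least-squares fit of log(c_d) against d over the nonzero
    coefficients; returns the (slope, intercept) feature pair."""
    pts = [(d, math.log(c)) for d, c in enumerate(coeffs) if c > 0]
    ds = np.array([p[0] for p in pts], dtype=float)
    logs = np.array([p[1] for p in pts], dtype=float)
    slope, intercept = np.polyfit(ds, logs, 1)
    return slope, intercept

# Demo on P^2 = P(1,1,1), where c_{3k} = (3k)!/(k!)^3: the fitted slope
# is close to log 3, the entropy-type constant A appearing in Theorem 5.
coeffs = [0] * 300
k = 0
while 3 * k < 300:
    coeffs[3 * k] = math.factorial(3 * k) // math.factorial(k) ** 3
    k += 1
slope, intercept = slope_intercept_features(coeffs)
```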
Data analysis: toric varieties
As before, the nonzero coefficients c_{d} appeared to grow exponentially with d, so we fitted a linear model to the set \(\{(d,\log {c}_{d}):d\in S\}\) where \(S= \{d\in {{\mathbb{Z}}}_{\ge 0}:{c}_{d}\,\ne \,0\}\). We used the slope and intercept of this linear model as features.
Example 3
In Fig. 2b, we plot a typical example: the logarithm of the regularised quantum period sequence for the nine-dimensional toric variety with weight matrix
along with the linear approximation. We see a periodic deviation from the linear approximation; the magnitude of this deviation decreases as d increases (not shown).
To reduce computational costs, we computed pairs \((d,\log {c}_{d})\) for 1000 ≤ d ≤ 20,000 by sampling every 100th term. We discarded the beginning of the period sequence because of the noise it introduces to the linear regression. In cases where the sampled coefficient c_{d} is zero, we considered instead the next nonzero coefficient. The resulting plot of slope against y-intercept, with datapoints coloured according to dimension, is shown in Fig. 3b.
We analysed the standard errors for the slope and y-intercept of the linear model. The standard errors for the slope are small compared to the range of slopes but, in many cases, the standard error s_{int} for the y-intercept is relatively large. As Fig. 4 illustrates, discarding data points where the standard error s_{int} for the y-intercept exceeds some threshold reduces apparent noise. This suggests that the underlying structure is being obscured by inaccuracies in the linear regression caused by oscillatory behaviour in the initial terms of the quantum period sequence; these inaccuracies are concentrated in the y-intercept of the linear model. Note that restricting attention to those data points where s_{int} is small also greatly decreases the range of y-intercepts that occur. As Example 4 and Fig. 5 suggest, this reflects both transient oscillatory behaviour and also the presence of a subleading term in the asymptotics of \(\log {c}_{d}\) which is missing from our feature set. We discuss this further below.
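The standard error s_{int} used below is the usual OLS standard error of the intercept in a simple linear regression. A minimal sketch of its computation (ours; a package such as statsmodels reports the same quantity as `bse` on a fitted `OLS` model):

```python
import numpy as np

def intercept_std_error(ds, logs):
    """Standard error of the y-intercept in a linear regression of
    log(c_d) on d, via the textbook OLS formula
    s_int = s * sqrt(1/n + xbar^2 / Sxx),
    where s^2 is the residual variance with n - 2 degrees of freedom."""
    x = np.asarray(ds, dtype=float)
    y = np.asarray(logs, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = np.sum(resid ** 2) / (n - 2)
    sxx = np.sum((x - x.mean()) ** 2)
    return float(np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx)))
```

Filtering the dataset then amounts to keeping only those varieties whose fitted `intercept_std_error` falls below the chosen threshold.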
Example 4
Consider the toric variety with Picard rank two and weight matrix
This is one of the outliers in Fig. 3b. The toric variety is five-dimensional, and has slope 1.637 and y-intercept −62.64. The standard errors are 4.246 × 10^{−4} for the slope and 5.021 for the y-intercept. We computed the first 40,000 coefficients c_{d} in (1). As Fig. 5 shows, as d increases the y-intercept of the linear model increases to −28.96 and s_{int} decreases to 0.7877. At the same time, the slope of the linear model remains more or less unchanged, decreasing to 1.635. This supports the idea that computing (many) more coefficients c_{d} would significantly reduce noise in Fig. 3b. In this example, even 40,000 coefficients may not be enough.
Computing many more coefficients c_{d} across the whole dataset would require impractical amounts of computation time. In the example above, which is typical in this regard, increasing the number of coefficients computed from 20,000 to 40,000 increased the computation time by a factor of more than 10. Instead we restrict to those toric varieties of Picard rank two such that the y-intercept standard error s_{int} is less than 0.3; this retains 67,443 of the 200,000 datapoints. We used 70% of the slope and y-intercept data in the restricted dataset for model training, and the rest for validation. An SVM model predicted the dimension of the toric variety with an accuracy of 87.7%, and a Random Forest Classifier (RFC) predicted the dimension with an accuracy of 88.6%.
Neural networks
Neural networks do not handle unbalanced datasets well. Therefore, we removed the toric varieties of dimensions 3, 4, and 5 from our data, leaving 61,164 toric varieties of Picard rank two with terminal quotient singularities and s_{int} < 0.3. This dataset is approximately balanced by dimension.
A Multilayer Perceptron (MLP) with three hidden layers of sizes (10, 30, 10) using the slope and intercept as features predicted the dimension with 89.0% accuracy. Since the slope and intercept give good control over \(\log {c}_{d}\) for d ≫ 0, but not for small d, it is likely that the coefficients c_{d} with d small contain extra information that the slope and intercept do not see. Supplementing the feature set by including the first 100 coefficients c_{d} as well as the slope and intercept increased the accuracy of the prediction to 97.7%. Full details can be found in the Supplementary Methods.
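The architecture just described is easy to reproduce with scikit-learn. In the sketch below (ours), the real features are replaced by synthetic (slope, intercept) clusters, one per dimension label, so the accuracy figure is illustrative only; the hidden-layer sizes (10, 30, 10) match those used in the text.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real features: one Gaussian cluster of
# (slope, intercept) pairs per "dimension" label 6, ..., 10. The cluster
# centres here are invented for illustration.
rng = np.random.default_rng(0)
X, y = [], []
for dim in range(6, 11):
    centre = np.array([0.5 * dim, -2.0 * dim])
    X.append(centre + 0.3 * rng.standard_normal((200, 2)))
    y.append(np.full(200, dim))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10, 30, 10), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Supplementing the two features with the first 100 coefficients, as in the text, only changes the width of the input layer; the rest of the pipeline is identical.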
From machine learning to rigorous analysis
Elementary “out of the box” models (SVM, RFC, and MLP) trained on the slope and intercept data alone already gave a highly accurate prediction for the dimension. Furthermore, even for the manyfeature MLP, which was the most accurate, sensitivity analysis using SHAP values^{40} showed that the slope and intercept were substantially more important to the prediction than any of the coefficients c_{d}: see Fig. 6. This suggested that the dimension of X might be visible from a rigorous estimate of the growth rate of \(\log {c}_{d}\).
In the Methods section, we establish asymptotic results for the regularised quantum period of toric varieties with low Picard rank, as follows. These results apply to any weighted projective space or toric variety of Picard rank two: they do not require a terminality hypothesis. Note, in each case, the presence of a subleading logarithmic term in the asymptotics for \(\log {c}_{d}\).
Theorem 5
Let X denote the weighted projective space \({\mathbb{P}}({a}_{1},\ldots,\,{a}_{N})\), so that the dimension of X is N − 1. Let c_{d} denote the coefficient of t^{d} in the regularised quantum period \({\widehat{G}}_{X}(t)\) given in (4). Let a = a_{1} + ⋯ + a_{N} and p_{i} = a_{i}/a. Then c_{d} = 0 unless d is divisible by a, and nonzero coefficients c_{d} satisfy
\[\log {c}_{d}\sim Ad-\frac{\dim X}{2}\log d+B\]
as d → ∞, where
\[A=-\mathop{\sum }\limits_{i= 1}^{N}{p}_{i}\log {p}_{i}\qquad {{{\rm{and}}}}\qquad B=-\frac{\dim X}{2}\log (2\pi )-\frac{1}{2}\mathop{\sum }\limits_{i= 1}^{N}\log {p}_{i}.\]
Note, although it plays no role in what follows, that A is the Shannon entropy of the discrete random variable Z with distribution (p_{1}, p_{2}, …, p_{N}), and that B is a constant plus half the total self-information of Z.
Theorem 6
Let X denote the toric variety of Picard rank two with weight matrix
\[\left(\begin{array}{cccc}{a}_{1}&{a}_{2}&\cdots &{a}_{N}\\ {b}_{1}&{b}_{2}&\cdots &{b}_{N}\end{array}\right)\]
so that the dimension of X is N − 2. Let a = a_{1} + ⋯ + a_{N}, b = b_{1} + ⋯ + b_{N}, and ℓ = gcd{a, b}. Let \([\mu :\nu ]\in {{\mathbb{P}}}^{1}\) be the unique root of the homogeneous polynomial
\[\prod\limits_{{a}_{i}b \,>\, a{b}_{i}}{({a}_{i}\mu +{b}_{i}\nu )}^{{a}_{i}b-a{b}_{i}}-\prod\limits_{{a}_{i}b \,<\, a{b}_{i}}{({a}_{i}\mu +{b}_{i}\nu )}^{a{b}_{i}-{a}_{i}b}\]
such that a_{i}μ + b_{i}ν ≥ 0 for all i \(\in\) {1, 2, …, N}, and set
\[{p}_{i}=\frac{\mu {a}_{i}+\nu {b}_{i}}{\mu a+\nu b}.\]
Let c_{d} denote the coefficient of t^{d} in the regularised quantum period \({\widehat{G}}_{X}(t)\) given in (5). Then nonzero coefficients c_{d} satisfy
\[\log {c}_{d}\sim Ad-\frac{\dim X}{2}\log d+B\]
as d → ∞, where
\[A=-\mathop{\sum }\limits_{i= 1}^{N}{p}_{i}\log {p}_{i}\qquad {{{\rm{and}}}}\qquad B=-\frac{\dim X}{2}\log (2\pi )-\frac{1}{2}\mathop{\sum }\limits_{i= 1}^{N}\log {p}_{i}-\frac{1}{2}\log \left(\mathop{\sum }\limits_{i= 1}^{N}\frac{{(a{b}_{i}-{a}_{i}b)}^{2}}{{\ell }^{2}{p}_{i}}\right).\]
Theorem 5 is a straightforward application of Stirling’s formula. Theorem 6 is more involved, and relies on a Central Limit-type theorem that generalises the De Moivre–Laplace theorem.
Theoretical analysis
The asymptotics in Theorems 5 and 6 imply that, for X a weighted projective space or toric variety of Picard rank two, the quantum period determines the dimension of X. Let us revisit the clustering analysis from this perspective. Recall the asymptotic expression \(\log {c}_{d}\sim Ad-\frac{\dim X}{2}\log d+B\) and the formulae for A and B from Theorem 5. Figure 7a shows the values of A and B for a sample of weighted projective spaces, coloured by dimension. Note the clusters, which overlap. Broadly speaking, the values of B increase as the dimension of the weighted projective space increases, whereas in Fig. 3a, the y-intercepts decrease as the dimension increases. This reflects the fact that we fitted a linear model to \(\log {c}_{d}\), omitting the subleading \(\log d\) term in the asymptotics. As Fig. 8 shows, the linear model assigns the omitted term to the y-intercept rather than the slope. The slope of the linear model is approximately equal to A. The y-intercept, however, differs from B by a dimension-dependent factor. The omitted \(\log\) term does not vary too much over the range of degrees (d < 100,000) that we considered, and has the effect of reducing the observed y-intercept from B to approximately \(B-\frac{9}{2}\dim X\), distorting the clusters slightly and translating them downwards by a dimension-dependent factor. This separates the clusters. We expect that the same mechanism applies in Picard rank two as well: see Fig. 7b.
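The asymptotic description above can be checked numerically. Taking A to be the Shannon entropy of (p_{1}, …, p_{N}) and \(B= -\frac{\dim X}{2}\log (2\pi )-\frac{1}{2}\sum \log {p}_{i}\), which is what Stirling's formula yields for formula (4) (see Methods), the prediction \(Ad-\frac{\dim X}{2}\log d+B\) already matches the exact \(\log {c}_{d}\) very closely at moderate d. The sketch below is ours:

```python
import math

def theorem5_log_cd(weights, d):
    """Predicted asymptotic log(c_d) ~ A*d - (dim X / 2)*log(d) + B for
    the weighted projective space P(a_1, ..., a_N), with A the Shannon
    entropy of (p_1, ..., p_N) and, as derived from Stirling's formula,
    B = -(dim X / 2) * log(2*pi) - (1/2) * sum(log p_i)."""
    a = sum(weights)
    dim = len(weights) - 1
    ps = [w / a for w in weights]
    A = -sum(p * math.log(p) for p in ps)
    B = -0.5 * dim * math.log(2 * math.pi) - 0.5 * sum(math.log(p) for p in ps)
    return A * d - 0.5 * dim * math.log(d) + B

def exact_log_cd(weights, d):
    """Exact log(c_d) from formula (4), via log-Gamma to avoid
    constructing the huge factorials themselves."""
    a = sum(weights)
    assert d % a == 0
    k = d // a
    return math.lgamma(d + 1) - sum(math.lgamma(k * w + 1) for w in weights)

# P^5 = P(1,1,1,1,1,1): at d = 6000 the two values agree to well
# within 0.01, even though c_d itself has thousands of digits.
err = abs(theorem5_log_cd((1,) * 6, 6000) - exact_log_cd((1,) * 6, 6000))
```

At the same time the convergence is slow in the sense relevant to classification: the \(\log d\) term changes very little across the usable range of d, which is why it is absorbed into the fitted y-intercept.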
We can show that each cluster in Fig. 7a is linearly bounded using constrained optimisation techniques. Consider, for example, the cluster for weighted projective spaces of dimension five, as in Fig. 9.
Proposition 7
Let X be the fivedimensional weighted projective space \({\mathbb{P}}({a}_{1},\ldots,\,{a}_{6})\), and let A, B be as in Theorem 5. Then \(B+\frac{5}{2}A\ge \frac{41}{8}\). If in addition a_{i} ≤ 25 for all i then \(B+5A\le \frac{41}{40}\).
Fix a suitable θ ≥ 0 and consider
with \(\dim X= N1=5\). Solving
on the fivesimplex gives a linear lower bound for the cluster. This bound does not use terminality: it applies to any weighted projective space of dimension five. The expression B + θA is unbounded above on the fivesimplex (because B is) so we cannot obtain an upper bound this way. Instead, consider
for an appropriate small positive ϵ, which we can take to be 1/a where a is the maximum sum of the weights. For Fig. 9, for example, we can take a = 124, and in general, such an a exists because there are only finitely many terminal weighted projective spaces. This gives a linear upper bound for the cluster.
The same methods yield linear bounds on each of the clusters in Fig. 7a. As the Figure shows, however, the clusters are not linearly separable.
Discussion
We developed machine learning models that predict, with high accuracy, the dimension of a Fano variety from its regularised quantum period. These models apply to weighted projective spaces and toric varieties of Picard rank two with terminal quotient singularities. We then established rigorous asymptotics for the regularised quantum period of these Fano varieties. The form of the asymptotics implies that, in these cases, the regularised quantum period of a Fano variety X determines the dimension of X. The asymptotics also give a theoretical underpinning for the success of the machine learning models.
Perversely, because the series involved converge extremely slowly, reading the dimension of a Fano variety directly from the asymptotics of the regularised quantum period is not practical. For the same reason, enhancing the feature set of our machine learning models by including a \(\log d\) term in the linear regression results in less accurate predictions. So although the asymptotics in Theorems 5 and 6 determine the dimension in theory, in practice, the most effective way to determine the dimension of an unknown Fano variety from its quantum period is to apply a machine learning model.
The insights gained from machine learning were the key to our formulation of the rigorous results in Theorems 5 and 6. Indeed, it might be hard to discover these results without a machine learning approach. It is notable that the techniques in the proof of Theorem 6 – the identification of generating functions for Gromov–Witten invariants of toric varieties with certain hypergeometric functions – have been known since the late 1990s and have been studied by many experts in hypergeometric functions since then. For us, the essential step in the discovery of the results was the feature extraction that we performed as part of our ML pipeline.
This work demonstrates that machine learning can uncover previously unknown structure in complex mathematical data, and is a powerful tool for developing rigorous mathematical results; cf.^{22}. It also provides evidence for a fundamental conjecture in the Fano classification programme^{21}: that the regularised quantum period of a Fano variety determines that variety.
Methods
In this section, we prove Theorem 5 and Theorem 6. The following result implies Theorem 5.
Theorem 8
Let X denote the weighted projective space \({\mathbb{P}}({a}_{1},\ldots,\,{a}_{N})\), so that the dimension of X is N − 1. Let c_{d} denote the coefficient of t^{d} in the regularised quantum period \({\widehat{G}}_{X}(t)\) given in (4). Let a = a_{1} + … + a_{N}. Then c_{d} = 0 unless d is divisible by a, and
\[{c}_{ka}\sim \frac{{e}^{Aka}}{{(2\pi ka)}^{\dim X/2}\sqrt{{p}_{1}{p}_{2}\cdots {p}_{N}}}\qquad {{{\rm{as}}}}\ k\to \infty .\]
That is, nonzero coefficients c_{d} satisfy
\[\log {c}_{d}\sim Ad-\frac{\dim X}{2}\log d+B\]
as d → ∞, where
\[A=-\mathop{\sum }\limits_{i= 1}^{N}{p}_{i}\log {p}_{i},\qquad B=-\frac{\dim X}{2}\log (2\pi )-\frac{1}{2}\mathop{\sum }\limits_{i= 1}^{N}\log {p}_{i},\]
and p_{i} = a_{i}/a.
Proof
Combine Stirling’s formula
\[\log n!= n\log n-n+\frac{1}{2}\log (2\pi n)+O(1/n)\]
with the closed formula (4) for c_{ka}. □
Toric varieties of Picard rank 2
Consider a toric variety X of Picard rank two and dimension N − 2 with weight matrix
\[\left(\begin{array}{cccc}{a}_{1}&{a}_{2}&\cdots &{a}_{N}\\ {b}_{1}&{b}_{2}&\cdots &{b}_{N}\end{array}\right)\]
as in (2). Let us move to more invariant notation, writing α_{i} for the linear form on \({{\mathbb{R}}}^{2}\) defined by the transpose of the ith column of the weight matrix, and α = α_{1} + ⋯ + α_{N}. Eq. (5) becomes
\[{\widehat{G}}_{X}(t)=\sum _{k\in C\cap {{\mathbb{Z}}}^{2}}\frac{(\alpha \cdot k)!}{({\alpha }_{1}\cdot k)!\cdots ({\alpha }_{N}\cdot k)!}\,{t}^{\alpha \cdot k}\]
where C is the cone \(C= \{x\in {{\mathbb{R}}}^{2}:{\alpha }_{i}\cdot x\ge 0\ {{{\rm{for}}}}\ i= 1,2,\ldots,\,N\}\). As we will see, for d ≫ 0 the coefficients
\[\frac{(\alpha \cdot k)!}{({\alpha }_{1}\cdot k)!\cdots ({\alpha }_{N}\cdot k)!},\qquad k\in C\cap {{\mathbb{Z}}}^{2},\ \alpha \cdot k= d,\]
are approximated by a rescaled Gaussian. We begin by finding the mean of that Gaussian, that is, by minimising
\[\mathop{\prod }\limits_{i= 1}^{N}{({\alpha }_{i}\cdot x)}^{{\alpha }_{i}\cdot x}\qquad {{{\rm{subject}}}}\ {{{\rm{to}}}}\ x\in C,\ \alpha \cdot x= d.\]
For k in the strict interior of C with α ⋅ k = d, we have that
as d → ∞.
Proposition 9
The constrained optimisation problem
\[{{{\rm{minimise}}}}\ \mathop{\prod }\limits_{i= 1}^{N}{({\alpha }_{i}\cdot x)}^{{\alpha }_{i}\cdot x}\qquad {{{\rm{subject}}}}\ {{{\rm{to}}}}\ x\in C,\ \alpha \cdot x= d\qquad (6)\]
has a unique solution x = x^{*}. Furthermore, setting p_{i} = (α_{i} ⋅ x^{*})/(α ⋅ x^{*}) we have that the monomial
\[\mathop{\prod }\limits_{i= 1}^{N}{p}_{i}^{{\alpha }_{i}\cdot k}\]
depends on \(k\in {{\mathbb{Z}}}^{2}\) only via α ⋅ k.
Proof
Taking logarithms gives the equivalent problem
\[{{{\rm{minimise}}}}\ \mathop{\sum }\limits_{i= 1}^{N}({\alpha }_{i}\cdot x)\log ({\alpha }_{i}\cdot x)\qquad {{{\rm{subject}}}}\ {{{\rm{to}}}}\ x\in C,\ \alpha \cdot x= d.\]
The objective function \(\mathop{\sum }\nolimits_{i=1}^{N}({\alpha }_{i}\cdot x)\log ({\alpha }_{i}\cdot x)\) here is the pullback to \({{\mathbb{R}}}^{2}\) of the function
\[f({x}_{1},\ldots,\,{x}_{N})=\mathop{\sum }\limits_{i= 1}^{N}{x}_{i}\log {x}_{i}\]
along the linear embedding \(\varphi :{{\mathbb{R}}}^{2}\to {{\mathbb{R}}}^{N}\) given by (α_{1}, …, α_{N}). Note that C is the preimage under φ of the positive orthant \({{\mathbb{R}}}_{+}^{N}\), so we need to minimise f on the intersection of the simplex x_{1} + ⋯ + x_{N} = d, \(({x}_{1},\ldots,\,{x}_{N})\in {{\mathbb{R}}}_{+}^{N}\) with the image of φ. The function f is convex and decreases as we move away from the boundary of the simplex, so the minimisation problem in Eq. (6) has a unique solution x^{*} and this lies in the strict interior of C. We can, therefore, find the minimum x^{*} using the method of Lagrange multipliers, by solving
\[\mathop{\sum }\limits_{i= 1}^{N}{\alpha }_{i}\left(1+\log ({\alpha }_{i}\cdot x)\right)=\lambda \alpha \qquad (7)\]
for \(\lambda \in {\mathbb{R}}\) and x in the interior of C with α ⋅ x = d. Thus
\[\mathop{\sum }\limits_{i= 1}^{N}{\alpha }_{i}\log ({\alpha }_{i}\cdot {x}^{*})=(\lambda -1)\,\alpha \]
and, evaluating on \(k\in {{\mathbb{Z}}}^{2}\) and exponentiating, we see that
\[\mathop{\prod }\limits_{i= 1}^{N}{({\alpha }_{i}\cdot {x}^{*})}^{{\alpha }_{i}\cdot k}={e}^{(\lambda -1)\,(\alpha \cdot k)}\]
depends only on α ⋅ k. The result follows. □
Given a solution x^{*} to Eq. (7), any positive scalar multiple of x^{*} also satisfies Eq. (7), with a different value of λ and a different value of d. Thus the solutions x^{*}, as d varies, lie on a half-line through the origin. The direction vector \([\mu :\nu ]\in {{\mathbb{P}}}^{1}\) of this half-line is the unique solution to the system
\[\prod\limits_{{a}_{i}b \,>\, a{b}_{i}}{({a}_{i}\mu +{b}_{i}\nu )}^{{a}_{i}b-a{b}_{i}}=\prod\limits_{{a}_{i}b \,<\, a{b}_{i}}{({a}_{i}\mu +{b}_{i}\nu )}^{a{b}_{i}-{a}_{i}b},\qquad {a}_{i}\mu +{b}_{i}\nu \ge 0\ {{{\rm{for}}}}\ {{{\rm{all}}}}\ i.\qquad (8)\]
Note that the first equation here is homogeneous in μ and ν; it is equivalent to Eq. (7), by exponentiating and then eliminating λ. Any two solutions x^{*}, for different values of d, differ by rescaling, and the quantities p_{i} in Proposition 9 are invariant under this rescaling. They also satisfy p_{1} + ⋯ + p_{N} = 1.
We use the following result, known in the literature as the “Local Theorem”^{41}, to approximate multinomial coefficients.
Local Theorem
For p_{1}, …, p_{n} \(\in\) [0, 1] such that p_{1} + ⋯ + p_{n} = 1, the ratio
\[\frac{d!}{{k}_{1}!\,{k}_{2}!\cdots {k}_{n}!}\,{p}_{1}^{{k}_{1}}{p}_{2}^{{k}_{2}}\cdots {p}_{n}^{{k}_{n}}\bigg/\frac{{e}^{-\frac{1}{2}\left({x}_{1}^{2}/{p}_{1}+\cdots +{x}_{n}^{2}/{p}_{n}\right)}}{{(2\pi d)}^{(n-1)/2}\sqrt{{p}_{1}\cdots {p}_{n}}}\to 1\]
as d → ∞, uniformly in all k_{i}’s, where
\[{k}_{1}+{k}_{2}+\cdots +{k}_{n}= d,\qquad {x}_{i}=\frac{{k}_{i}-d{p}_{i}}{\sqrt{d}},\]
and the x_{i} lie in bounded intervals.
Let B_{r} denote the ball of radius r about \({x}^{*}\in {{\mathbb{R}}}^{2}\). Fix R > 0. We apply the Local Theorem with k_{i} = α_{i} ⋅ k and p_{i} = (α_{i} ⋅ x^{*})/(α ⋅ x^{*}), where \(k\in {{\mathbb{Z}}}^{2}\cap C\) satisfies α ⋅ k = d and \(k\in {B}_{R\sqrt{d}}\). Since
\[{x}_{i}=\frac{{k}_{i}-d{p}_{i}}{\sqrt{d}}=\frac{{\alpha }_{i}\cdot (k-{x}^{*})}{\sqrt{d}},\]
the assumption that \(k\in {B}_{R\sqrt{d}}\) ensures that the x_{i} remain bounded as d → ∞. Note that, by Proposition 9, the monomial \(\mathop{\prod }\nolimits_{i= 1}^{N}{p}_{i}^{{k}_{i}}\) depends on k only via α ⋅ k, and hence here is independent of k:
\[\mathop{\prod }\limits_{i= 1}^{N}{p}_{i}^{{k}_{i}}=\mathop{\prod }\limits_{i= 1}^{N}{p}_{i}^{d{p}_{i}}.\]
Furthermore
\[\frac{1}{2}\mathop{\sum }\limits_{i= 1}^{N}\frac{{x}_{i}^{2}}{{p}_{i}}=\frac{1}{2d}\,{(k-{x}^{*})}^{T}A\,(k-{x}^{*})\]
where A is the positive-definite 2 × 2 matrix given by
\[A=\mathop{\sum }\limits_{i= 1}^{N}\frac{1}{{p}_{i}}\,{\alpha }_{i}^{T}{\alpha }_{i}.\]
Thus, as d → ∞, the ratio
\[\frac{(\alpha \cdot k)!}{({\alpha }_{1}\cdot k)!\cdots ({\alpha }_{N}\cdot k)!}\mathop{\prod }\limits_{i= 1}^{N}{p}_{i}^{d{p}_{i}}\bigg/\frac{{e}^{-\frac{1}{2d}{(k-{x}^{*})}^{T}A(k-{x}^{*})}}{{(2\pi d)}^{(N-1)/2}\sqrt{{p}_{1}\cdots {p}_{N}}}\to 1\qquad (9)\]
for all \(k\in {{\mathbb{Z}}}^{2}\cap C\cap {B}_{R\sqrt{d}}\) such that α ⋅ k = d.
Theorem 10
(This is Theorem 6 in the main text). Let X be a toric variety of Picard rank two and dimension N − 2 with weight matrix
\[\left(\begin{array}{cccc}{a}_{1}&{a}_{2}&\cdots &{a}_{N}\\ {b}_{1}&{b}_{2}&\cdots &{b}_{N}\end{array}\right)\]
Let a = a_{1} + ⋯ + a_{N} and b = b_{1} + ⋯ + b_{N}, let ℓ = gcd{a, b}, and let \([\mu :\nu ]\in {{\mathbb{P}}}^{1}\) be the unique solution to Eq. (8). Let c_{d} denote the coefficient of t^{d} in the regularised quantum period \({\widehat{G}}_{X}(t)\). Then nonzero coefficients c_{d} satisfy
\[\log {c}_{d}\sim Ad-\frac{\dim X}{2}\log d+B\]
as d → ∞, where
\[A=-\mathop{\sum }\limits_{i= 1}^{N}{p}_{i}\log {p}_{i},\qquad B=-\frac{\dim X}{2}\log (2\pi )-\frac{1}{2}\mathop{\sum }\limits_{i= 1}^{N}\log {p}_{i}-\frac{1}{2}\log \left(\mathop{\sum }\limits_{i= 1}^{N}\frac{{(a{b}_{i}-{a}_{i}b)}^{2}}{{\ell }^{2}{p}_{i}}\right),\]
and \({p}_{i}=\frac{\mu {a}_{i}+\nu {b}_{i}}{\mu a+\nu b}\).
Proof
We need to estimate
Consider first the summands with \(k\in {{\mathbb{Z}}}^{2}\cap C\) such that α ⋅ k = d and \(k\notin {B}_{R\sqrt{d}}\). For d sufficiently large, each such summand is bounded by \(c\,{d}^{-\frac{1+\dim X}{2}}\) for some constant c—see Eq. (9). Since the number of such summands grows linearly with d, in the limit d → ∞ the contribution to c_{d} from \(k\notin {B}_{R\sqrt{d}}\) vanishes.
As d → ∞, therefore,
\[{c}_{d}\sim \frac{\mathop{\prod }\nolimits_{i= 1}^{N}{p}_{i}^{-d{p}_{i}}}{{(2\pi d)}^{\frac{N-1}{2}}\sqrt{{p}_{1}\cdots {p}_{N}}}\sum _{k}{e}^{-\frac{1}{2d}{(k-{x}^{*})}^{T}A(k-{x}^{*})},\]
where the sum runs over \(k\in {{\mathbb{Z}}}^{2}\cap C\cap {B}_{R\sqrt{d}}\) with α ⋅ k = d. Writing \({y}_{k}=(k-{x}^{*})/\sqrt{d}\), considering the sum here as a Riemann sum, and letting R → ∞, we see that
\[{c}_{d}\sim \frac{\sqrt{d}\,\mathop{\prod }\nolimits_{i= 1}^{N}{p}_{i}^{-d{p}_{i}}}{{(2\pi d)}^{\frac{N-1}{2}}\sqrt{{p}_{1}\cdots {p}_{N}}}\int_{{L}_{\alpha }}{e}^{-\frac{1}{2}{y}^{T}Ay}\,{{{\rm{d}}}}y\]
where L_{α} is the line through the origin given by \(\ker \alpha\) and dy is the measure on L_{α} given by the integer lattice \({{\mathbb{Z}}}^{2}\cap {L}_{\alpha }\subset {L}_{\alpha }\).
To evaluate the integral, let
\[{\alpha }^{\perp }=(-b/\ell,\ a/\ell )\]
and observe that the pullback of dy along the map \({\mathbb{R}}\to {L}_{\alpha }\) given by t ↦ tα^{⊥} is the standard measure on \({\mathbb{R}}\). Thus
\[\int_{{L}_{\alpha }}{e}^{-\frac{1}{2}{y}^{T}Ay}\,{{{\rm{d}}}}y=\int_{-\infty }^{\infty }{e}^{-\frac{1}{2}{\ell }^{2}\theta {t}^{2}}\,{{{\rm{d}}}}t=\frac{\sqrt{2\pi }}{\ell \sqrt{\theta }}\]
where \(\theta=\mathop{\sum }\nolimits_{i=1}^{N}\frac{1}{{\ell }^{2}{p}_{i}}{({\alpha }_{i}\cdot {\alpha }^{\perp })}^{2}\), and
\[{c}_{d}\sim \frac{\mathop{\prod }\nolimits_{i= 1}^{N}{p}_{i}^{-d{p}_{i}}}{\ell \sqrt{\theta }\,{(2\pi d)}^{\frac{\dim X}{2}}\sqrt{{p}_{1}\cdots {p}_{N}}}=\frac{{e}^{Ad}}{\ell \sqrt{\theta }\,{(2\pi d)}^{\frac{\dim X}{2}}\sqrt{{p}_{1}\cdots {p}_{N}}}.\]
Taking logarithms gives the result. □
Data availability
Our datasets^{42,43} and the code for the Magma computer algebra system^{44} that was used to generate them are available from Zenodo^{45} under a CC0 license. The data was collected using Magma V2.25-4.
Code availability
All code required to replicate the results in this paper is available from Bitbucket under an MIT license^{46}.
References
van Lint, J. H. & van der Geer, G. Introduction to Coding Theory and Algebraic Geometry, DMV Sem., Vol. 12 (Birkhäuser Verlag, 1988).
Niederreiter, H. & Xing, C. Algebraic Geometry in Coding Theory and Cryptography. (Princeton University Press, 2009).
Atiyah, M. F., Hitchin, N. J., Drinfeld, V. G. & Manin, Y. I. Construction of instantons. Phys. Lett. A 65, 185–187 (1978).
Eriksson, N., Ranestad, K., Sturmfels, B. & Sullivant, S. Phylogenetic algebraic geometry. In Projective Varieties with Unexpected Properties, 237–255 (Walter de Gruyter, Berlin, 2005).
Kollár, J. The structure of algebraic threefolds: an introduction to Mori’s program. Bull. Amer. Math. Soc. (N.S.) 17, 211–273 (1987).
Kollár, J. & Mori, S. Birational geometry of algebraic varieties. Cambridge Tracts in Mathematics, Vol. 134 (Cambridge University Press, 1998).
Candelas, P., Horowitz, G. T., Strominger, A. & Witten, E. Vacuum configurations for superstrings. Nuclear Phys. B 258, 46–74 (1985).
Greene, B. R. String theory on Calabi–Yau manifolds. In Fields, strings and duality (Boulder, CO, 1996), 543–726 (World Sci. Publ., 1997).
Polchinski, J. String theory. Vol. II. Cambridge Monographs on Mathematical Physics (Cambridge University Press, 2005). Superstring theory and beyond, reprint of 2003 edition.
Del Pezzo, P. Sulle superficie dell’n^{mo} ordine immerse nello spazio ad n dimensioni. Rend. del Circolo Mat. di Palermo 1, 241–255 (1887).
Fano, G. Nuove ricerche sulle varietà algebriche a tre dimensioni a curve-sezioni canoniche. Pont. Acad. Sci. Comment. 11, 635–720 (1947).
Iskovskih, V. A. Fano threefolds. I. Izv. Akad. Nauk SSSR Ser. Mat. 41, 516–562, 717 (1977).
Iskovskih, V. A. Fano threefolds. II. Izv. Akad. Nauk SSSR Ser. Mat. 42, 506–549 (1978).
Iskovskih, V. A. Anticanonical models of three-dimensional algebraic varieties. In Current Problems in Mathematics, Vol. 12 (Russian), 59–157, 239 (loose errata) (VINITI, 1979).
Mori, S. & Mukai, S. Classification of Fano 3-folds with B_{2}≥2. Manuscr. Math. 36, 147–162 (1981).
Mori, S. & Mukai, S. Erratum: “Classification of Fano 3-folds with B_{2}≥2”. Manuscr. Math. 110, 407 (2003).
Candelas, P., de la Ossa, X. C., Green, P. S. & Parkes, L. A pair of Calabi–Yau manifolds as an exactly soluble superconformal theory. Nuclear Phys. B 359, 21–74 (1991).
Greene, B. R. & Plesser, M. R. Duality in Calabi–Yau moduli space. Nuclear Phys. B 338, 15–37 (1990).
Hori, K. & Vafa, C. Mirror symmetry. Preprint at https://arxiv.org/abs/hep-th/0002222 (2000).
Cox, D. A. & Katz, S. Mirror symmetry and algebraic geometry. Mathematical Surveys and Monographs Vol. 68 (American Mathematical Society, 1999).
Coates, T., Corti, A., Galkin, S., Golyshev, V. & Kasprzyk, A. M. Mirror symmetry and Fano manifolds. In European Congress of Mathematics, 285–300 (Eur. Math. Soc., 2013).
Davies, A. et al. Advancing mathematics by guiding human intuition with AI. Nature 600, 70–74 (2021).
He, Y.-H. Machine-learning mathematical structures. Int. J. Data Sci. Math. Sci. 1, 23–47 (2023).
Wagner, A. Z. Constructions in combinatorics via neural networks. Preprint at https://arxiv.org/abs/2104.14516 (2021).
Erbin, H. & Finotello, R. Inception neural network for complete intersection Calabi–Yau 3-folds. Mach. Learn. Sci. Technol. 2, 02LT03 (2021).
Levitt, J. S., Hajij, M. & Sazdanovic, R. Big data approaches to knot theory: understanding the structure of the Jones polynomial. J. Knot Theory Ramif 31, 2250095 (2022).
Wu, Y. & De Loera, J. A. Turning mathematics problems into games: reinforcement learning and Gröbner bases together solve integer feasibility problems. Preprint at https://arxiv.org/abs/2208.12191 (2022).
Kreuzer, M. & Skarke, H. Complete classification of reflexive polyhedra in four dimensions. Adv. Theor. Math. Phys. 4, 1209–1230 (2000).
Conway, J. H., Curtis, R. T., Norton, S. P., Parker, R. A. & Wilson, R. A. \({\mathbb{ATLAS}}\) of finite groups (Oxford University Press, Eynsham, 1985). Maximal subgroups and ordinary characters for simple groups. With computational assistance from J. G. Thackray.
Cremona, J. The L-functions and modular forms database project. Found. Comput. Math. 16, 1541–1553 (2016).
Adams, J. et al. Atlas of Lie groups and representations. online http://www.liegroups.org (2016).
Coates, T. & Kasprzyk, A. M. Databases of quantum periods for Fano manifolds. Sci. Data 9, 163 (2022).
Coates, T., Corti, A., Galkin, S. & Kasprzyk, A. M. Quantum periods for 3-dimensional Fano manifolds. Geom. Topol. 20, 103–256 (2016).
Kasprzyk, A. M. Classifying terminal weighted projective space. Preprint at https://arxiv.org/abs/1304.3029 (2013).
Kasprzyk, A. M. Bounds on fake weighted projective space. Kodai Math. J. 32, 197–208 (2009).
Kasprzyk, A. M. Toric Fano three-folds with terminal singularities. Tohoku Math. J. (2) 58, 101–121 (2006).
Fulton, W. Introduction to toric varieties. Annals of Mathematics Studies Vol. 131 (Princeton University Press, 1993).
Kreuzer, M. & Skarke, H. PALP: a package for analysing lattice polytopes with applications to toric geometry. Comput. Phys. Comm. 157, 87–106 (2004).
Grinis, R. & Kasprzyk, A. M. Normal forms of convex lattice polytopes. Preprint at https://arxiv.org/abs/1301.6641 (2013).
Lundberg, S. M. & Lee, S.-I. A unified approach to interpreting model predictions. Adv. Neural Inform. Process. Syst. 30, 4765–4774 (2017).
Gnedenko, B. V. Theory of Probability (Routledge, 2018).
Coates, T., Kasprzyk, A. M. & Veneziale, S. A dataset of 150000 terminal weighted projective spaces. Zenodo https://doi.org/10.5281/zenodo.5790079 (2022).
Coates, T., Kasprzyk, A. M. & Veneziale, S. A dataset of 200000 terminal toric varieties of Picard rank 2. Zenodo https://doi.org/10.5281/zenodo.5790096 (2022).
Bosma, W., Cannon, J. & Playoust, C. The Magma algebra system. I. The user language. J. Symb. Comput. 24, 235–265 (1997).
European Organization For Nuclear Research & OpenAIRE. Zenodo (2013).
Coates, T., Kasprzyk, A. M. & Veneziale, S. Supporting code. https://bitbucket.org/fanosearch/mldim (2022).
Acknowledgements
T.C. is funded by ERC Consolidator Grant 682603 and EPSRC Programme Grant EP/N03189X/1. A.M.K. is funded by EPSRC Fellowship EP/N022513/1. S.V. is funded by the EPSRC Centre for Doctoral Training in Geometry and Number Theory at the Interface, grant number EP/L015234/1. We thank Giuseppe Pitton for conversations and experiments that began this project, and thank John Aston and Louis Christie for insightful conversations and feedback.
Author information
Authors and Affiliations
Contributions
T.C., A.M.K., and S.V. contributed equally to this work.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Coates, T., Kasprzyk, A.M. & Veneziale, S. Machine learning the dimension of a Fano variety. Nat Commun 14, 5526 (2023). https://doi.org/10.1038/s41467-023-41157-1
DOI: https://doi.org/10.1038/s41467-023-41157-1
This article is cited by
AI-driven research in pure mathematics and theoretical physics
Nature Reviews Physics (2024)