Let α = (α₁, …, αₙ) be an n-dimensional vector of natural numbers (a multi-index) with ‖α‖ := α₁ + ⋯ + αₙ. Then f(z) converges with radius of convergence ρ = (ρ₁, …, ρₙ), where ρ^α := ρ₁^{α₁} ⋯ ρₙ^{αₙ}, if and only if
![{\displaystyle \limsup _{||\alpha ||\to \infty }{\sqrt[{||\alpha ||}]{|c_{\alpha }|\rho ^{\alpha }}}=1}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f9953ad5cbad797a22131d99fc0e34c3ec1514ca)
where
![{\displaystyle f(z)=\sum _{\alpha \geq 0}c_{\alpha }(z-a)^{\alpha }:=\sum _{\alpha _{1}\geq 0,\ldots ,\alpha _{n}\geq 0}c_{\alpha _{1},\ldots ,\alpha _{n}}(z_{1}-a_{1})^{\alpha _{1}}\cdots (z_{n}-a_{n})^{\alpha _{n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/87139954c4c2d0cd64ac7a3b4d1807e5ffe97380)
Set z = a + ρt (that is, zⱼ = aⱼ + ρⱼt for each j), then[1]
![{\displaystyle \sum _{\alpha \geq 0}c_{\alpha }(z-a)^{\alpha }=\sum _{\alpha \geq 0}c_{\alpha }\rho ^{\alpha }t^{||\alpha ||}=\sum _{\mu \geq 0}\left(\sum _{||\alpha ||=\mu }|c_{\alpha }|\rho ^{\alpha }\right)t^{\mu }}](https://wikimedia.org/api/rest_v1/media/math/render/svg/10f8cad65a553cf2b2baca310dabe775da64ea15)
This is a power series in the single variable t, which converges for |t| < 1 and diverges for |t| > 1. Therefore, by the Cauchy–Hadamard theorem for one variable,
![{\displaystyle \limsup _{\mu \to \infty }{\sqrt[{\mu }]{\sum _{||\alpha ||=\mu }|c_{\alpha }|\rho ^{\alpha }}}=1}](https://wikimedia.org/api/rest_v1/media/math/render/svg/902c0cf2b74280d0995d40bc95e57175733883df)
Setting |c_m|ρ^m = max_{‖α‖=μ} |c_α|ρ^α gives us the estimate
![{\displaystyle |c_{m}|\rho ^{m}\leq \sum _{||\alpha ||=\mu }|c_{\alpha }|\rho ^{\alpha }\leq (\mu +1)^{n}|c_{m}|\rho ^{m}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/61c94b8a139504799d2bfbf8c2800c5a08e13d76)
Because (μ+1)^{n/μ} → 1 as μ → ∞,
![{\displaystyle {\sqrt[{\mu }]{|c_{m}|\rho ^{m}}}\leq {\sqrt[{\mu }]{\sum _{||\alpha ||=\mu }|c_{\alpha }|\rho ^{\alpha }}}\leq {\sqrt[{\mu }]{|c_{m}|\rho ^{m}}}\implies {\sqrt[{\mu }]{\sum _{||\alpha ||=\mu }|c_{\alpha }|\rho ^{\alpha }}}={\sqrt[{\mu }]{|c_{m}|\rho ^{m}}}\qquad (\mu \to \infty )}](https://wikimedia.org/api/rest_v1/media/math/render/svg/104bce860c902bf14f734c0369b27254e4cf1d9a)
Therefore
![{\displaystyle \limsup _{||\alpha ||\to \infty }{\sqrt[{||\alpha ||}]{|c_{\alpha }|\rho ^{\alpha }}}=\limsup _{\mu \to \infty }{\sqrt[{\mu }]{|c_{m}|\rho ^{m}}}=1}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ff8d149af14cbba57811ba517e50e151be55d483)
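The sandwich estimate above can be checked numerically. Here is a small sketch using an assumed concrete series, the coefficients c_α = C(α₁+α₂, α₁) of 1/(1−z₁−z₂) with ρ = (1/2, 1/2) on the boundary of convergence:

```python
from math import comb

rho = 0.5  # rho_1 = rho_2 = 1/2, so that rho_1 + rho_2 = 1
for mu in (10, 50, 200):
    # all terms |c_alpha| * rho^alpha with ||alpha|| = mu, where c_alpha = C(mu, k)
    terms = [comb(mu, k) * rho**mu for k in range(mu + 1)]
    s = sum(terms)  # equals 1 exactly, since sum_k C(mu, k) = 2^mu
    m = max(terms)  # the dominant term |c_m| * rho^m
    assert m <= s <= (mu + 1) ** 2 * m  # the estimate above, with n = 2
    # both mu-th roots tend to 1, as the limsup condition requires
    print(mu, s ** (1 / mu), m ** (1 / mu))
```

Both printed roots approach 1, matching the limsup criterion.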
For the central diagonal of our example, r = (1, 1):
![{\displaystyle \limsup _{n\to \infty }{\sqrt[{n}]{|f_{n,n}|x^{n}y^{n}}}=1\implies \limsup _{n\to \infty }|f_{n,n}|={\frac {1}{(xy)^{n}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4119b2e87e0d61d0f4130797404f23cb2f77a114)
The estimate is sharpest where xy is at its largest on the domain of convergence, which happens at x = y = 1/2, so that |f_{n,n}| ≈ 4ⁿ.
We know by Stirling's approximation that this is a good estimate.
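A quick numerical check, on the assumption that f_{n,n} is the central binomial coefficient C(2n, n) of the running example:

```python
from math import comb, pi, sqrt

# Compare C(2n, n) with the crude bound 4^n and with the refined
# Stirling form 4^n / sqrt(pi * n); the second printed column tends to 1.
for n in (10, 100, 1000):
    ratio = comb(2 * n, n) / 4**n  # exact big-integer division -> float
    print(n, ratio, ratio * sqrt(pi * n))
```

The ratio to 4ⁿ decays only polynomially (like 1/√(πn)), confirming that 4ⁿ captures the exponential growth exactly.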
But what about a diagonal along an arbitrary ray, like the above example r = (2, 1)?
![{\displaystyle \limsup _{|n{\textbf {r}}|\to \infty }{\sqrt[{|n{\textbf {r}}|}]{|f_{2n,n}|x^{2n}y^{n}}}=1\implies \limsup _{|n{\textbf {r}}|\to \infty }|f_{2n,n}|={\frac {1}{(x^{2}y)^{n}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/62bec93f776350efbc2ec837b73cf2fe6e9d41d6)
If we keep x = y = 1/2, then x²y = 1/8 and the estimate is |f_{2n,n}| ≈ 8ⁿ. This isn't a good estimate. Better to use the point that maximises x²y, namely x = 2/3, y = 1/3, with x²y = 4/27, which gives |f_{2n,n}| ≈ (27/4)ⁿ.
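Numerically (again assuming f_{2n,n} = C(3n, n) from the running example), the n-th root of f_{2n,n} approaches 27/4 = 6.75 rather than 8:

```python
from math import exp, lgamma

def log_comb(a, b):
    # log of the binomial coefficient C(a, b), via log-gamma to avoid overflow
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

for n in (10, 100, 1000):
    rate = exp(log_comb(3 * n, n) / n)  # n-th root of C(3n, n)
    print(n, rate)                      # approaches 27/4 = 6.75, well below 8
```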
In the below, the function we are interested in is F(z) = G(z)/H(z). We therefore want to find the point z on the domain of convergence of F that minimises the estimate |z^{−r}|. The subject of convex optimisation already has the tools for this, but in order to use them we need to transform the domain of convergence into a convex set and the objective into a convex function.
Convexity buys us a great deal: for a convex function over a convex set, every local minimum is a global minimum, and standard tools such as supporting hyperplanes and Lagrange multipliers locate it.
Fortunately, the logarithmic image of the domain of convergence of a power series of a complex function is convex.[2]
Therefore, we define[3]
![{\displaystyle Relog({\textbf {z}})=(\log |z_{1}|,\cdots ,\log |z_{d}|)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8e8807dc2d1938ae12eb130750007c5e8de72824)
![{\displaystyle amoeba(H)=\{Relog({\textbf {z}}):H({\textbf {z}})=0\}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b4823b13def6439dee0d7f92c375c4bcd692d49b)
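A minimal sketch of these definitions in code, using the assumed example H(x, y) = 1 − x − y, whose zero set is parametrised by y = 1 − x:

```python
import cmath
import math

def relog(z):
    # Relog(z) = (log|z_1|, ..., log|z_d|)
    return tuple(math.log(abs(zj)) for zj in z)

def amoeba_samples(n=12, radii=(0.4, 0.8, 1.6)):
    # Sample points of amoeba(H) for H(x, y) = 1 - x - y: take x on
    # circles of several radii and pair it with the zero y = 1 - x.
    pts = []
    for r in radii:
        for k in range(n):
            x = r * cmath.exp(2j * math.pi * k / n)
            y = 1 - x
            if abs(x) > 0 and abs(y) > 0:
                pts.append(relog((x, y)))
    return pts

print(amoeba_samples()[:3])
```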
The domain of convergence of our function can now be defined as the complement of this amoeba[4]
![{\displaystyle amoeba(H)^{c}=\mathbb {R} ^{d}\setminus amoeba(H)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b76268a4204b5066ce418500041e0e4b2b0240ef)
This may leave us with multiple unconnected components, each one corresponding to a different Laurent series expansion. Denote the component we are interested in by B, so that
![{\displaystyle {\mathcal {D}}=Relog^{-1}(B)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3e3bae0e98ae5dd72564486cf36d8ba078b8deb6)
The logarithmic image of 𝒟 is B. Under the change of variables x = Relog(z), the objective |z^{−r}| becomes e^{−r·x}; since −r·x is linear, this is a convex function of x, and B itself is convex.
So we now have a problem of minimising a convex function over a convex set.
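For instance, with the assumed example H = 1 − x − y and direction r = (2, 1), the convex problem is to maximise r·(a, b) over B = {(a, b) : eᵃ + eᵇ ≤ 1}. A brute-force sketch:

```python
import math

def maximise_on_boundary(r1, r2, steps=100000):
    # The maximum of r1*a + r2*b over {exp(a) + exp(b) <= 1} is attained
    # on the boundary exp(a) + exp(b) = 1; parametrise x = exp(a) in (0, 1),
    # so that exp(b) = 1 - x, and grid-search the log-linear objective.
    best_val, best_x = -math.inf, None
    for i in range(1, steps):
        x = i / steps
        val = r1 * math.log(x) + r2 * math.log(1 - x)
        if val > best_val:
            best_val, best_x = val, x
    return best_x

x = maximise_on_boundary(2, 1)
print(x, 1 - x)  # close to (2/3, 1/3)
```

A Lagrange-multiplier calculation gives the same answer in closed form: the maximiser is x = r₁/(r₁+r₂), y = r₂/(r₁+r₂).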
We want to find the supporting hyperplane to B with outward-facing normal r.
This happens at a point w on the boundary where the supporting hyperplane coincides with the tangent plane to the variety {H = 0}, whose normal is (∂H/∂z₁(w), …, ∂H/∂z_d(w)); in the z-coordinates, the hyperplane with log-space normal r has normal (r₁/w₁, …, r_d/w_d). These two normal vectors must be parallel rather than linearly independent, and therefore the matrix
![{\displaystyle {\begin{pmatrix}{\frac {\partial H}{\partial z_{1}}}({\textbf {w}})&\cdots &{\frac {\partial H}{\partial z_{d}}}({\textbf {w}})\\r_{1}/w_{1}&\cdots &r_{d}/w_{d}\end{pmatrix}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/771a22b283d20258f149d95eb94c8ca5a0470710)
is rank deficient; equivalently, all of its 2 × 2 minors have zero determinants. This is equivalent to a system of equations referred to as the critical point equations[5][6]
![{\displaystyle H({\textbf {w}})=0\quad r_{j}w_{1}{\frac {\partial H}{\partial z_{1}}}({\textbf {w}})-r_{1}w_{j}{\frac {\partial H}{\partial z_{j}}}({\textbf {w}})=0\quad (2\leq j\leq d).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1e543738b068cf4472e45bc1ee7507b49aaa77f6)
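As a worked check of the critical point equations, take the assumed example H(x, y) = 1 − x − y with r = (2, 1); since ∂H/∂x = ∂H/∂y = −1, the system becomes linear:

```python
# H(w) = 0 and r_2*w_1*H_x(w) - r_1*w_2*H_y(w) = 0 for H = 1 - x - y
# reduce to w_1 + w_2 = 1 and -w_1 + 2*w_2 = 0, giving w = (2/3, 1/3).
def H(w1, w2):  return 1 - w1 - w2
def Hx(w1, w2): return -1.0
def Hy(w1, w2): return -1.0

r1, r2 = 2, 1
w1, w2 = r1 / (r1 + r2), r2 / (r1 + r2)  # closed-form solution of the linear system
assert abs(H(w1, w2)) < 1e-12
assert abs(r2 * w1 * Hx(w1, w2) - r1 * w2 * Hy(w1, w2)) < 1e-12
print(w1, w2)
```

This is the same point (2/3, 1/3) that maximises x²y on the boundary x + y = 1, as expected.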
- ↑ Shabat 1992, pp. 32–33.
- ↑ Shabat 1992, p. 31.
- ↑ Pemantle, Wilson and Melczer 2024, pp. 151, 157.
- ↑ Melczer 2021, p. 116.
- ↑ Melczer 2021, p. 203.
- ↑ Pemantle, Wilson and Melczer 2024, p. 200.
As of 29th June 2024, this article is derived in whole or in part from Wikipedia. The copyright holder has licensed the content in a manner that permits reuse under CC BY-SA 3.0 and GFDL. All relevant terms must be followed.