Does chaos imply period 3?

Sharkovskii’s theorem says that if a continuous map f from an interval I to itself has a point with period 3, then it has a point with period 5. And if it has a point with period 5, then it has points with period 7, etc. The theorem gives a chain of implications that begins with 3: if you have points with period 3, you have points of all periods.

Now suppose you want to go “upstream” in Sharkovskii’s chain of implications. If you have a point with period 5, do you have a point with period 3? The answer is no: a map can have points with period 5 without having points of period 3, though it necessarily has points of every other positive integer period.

There’s an example illustrating the claim above that I’ve seen in multiple places, but I haven’t seen it presented graphically. You could work through the example analytically, but here I present it graphically.

This is the example function written in Python.

    def f(x):
        # Piecewise linear map from the interval [1, 5] to itself
        assert 1 <= x <= 5

        if x < 2:
            return 1 + 2*x
        if x < 3:
            return 7 - x
        if x < 4:
            return 10 - 2*x
        if x <= 5:
            return 6 - x

Here’s a graphical demonstration that f has a fixed point, but no points of period 3.

The only point fixed under applying f three times is the point that was already fixed under applying f once.
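Since the images aren’t reproduced here, here’s a minimal Python sketch (assuming NumPy and Matplotlib) of the kind of graph involved: plot f(f(f(x))) against the line y = x and look for crossings.

    import numpy as np
    import matplotlib.pyplot as plt

    xs = np.linspace(1, 5, 1001)
    f3 = [f(f(f(x))) for x in xs]   # f as defined above

    plt.plot(xs, f3, label="f(f(f(x)))")
    plt.plot(xs, xs, label="y = x")
    plt.legend()
    plt.show()

The two curves cross only once, at x = 10/3, the fixed point of f.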

This graph shows that f has points with period 5:
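The graph isn’t reproduced here, but you can check a period-5 orbit directly by iterating f starting at x = 1:

    x = 1
    for _ in range(5):
        print(x, end=" -> ")
        x = f(x)
    print(x)   # prints 1 -> 3 -> 4 -> 2 -> 5 -> 1

The orbit 1 → 3 → 4 → 2 → 5 returns to 1 after five steps.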

By Sharkovskii’s theorem f must have points with all other periods, except 3. Here’s a demonstration that it has points with period 6.

The map f is chaotic, but it does not have a point with period 3.

Logistic map

Let’s look at the most famous chaotic map, the logistic map.

f(x) = rx (1 – x)

where x is in [0, 1] and r is in [0, 4].

logistic bifurcation diagram

The image above shows orbits as r ranges over [0, 4]. Clearly f has points with period 2: there’s a whole interval of values of r that lead to points with period 2, roughly for r between 3 and 3.5. And we can see that for slightly larger r there are points of period 4. But is there any point with period 3?
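Incidentally, the bifurcation diagram above can be reproduced with a few lines of Python. This is a rough sketch, not necessarily how the original image was made.

    import numpy as np
    import matplotlib.pyplot as plt

    for r in np.linspace(0, 4, 1000):
        x = 0.5
        for _ in range(500):      # discard transients
            x = r*x*(1 - x)
        orbit = []
        for _ in range(100):      # then sample the attractor
            x = r*x*(1 - x)
            orbit.append(x)
        plt.plot([r]*100, orbit, 'k,')

    plt.xlabel("r")
    plt.show()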

We can look for points of period 3 at the end of the plot, where r = 4, using Mathematica.

Define

    f[x_] := 4 x (1 - x)

and look for points where f³(x) = x using

    Factor[f[f[f[x]]] - x]

This shows that the solutions are the roots of

x (-3 + 4 x) (-7 + 56x – 112x² + 64 x³) (-3 + 36x – 96x² + 64x³)

The first two roots are fixed points, points of period 1, but the roots of the two cubic factors are points with period 3.

The cubics clearly have all their roots in the interval [0,1] and we could find their numerical values with

    NSolve[f[f[f[x]]] == x, x]
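As a sanity check, here’s a small Python verification (a sketch, assuming NumPy) that the roots of one of the cubic factors really do have period 3 under f.

    import numpy as np

    f = lambda x: 4*x*(1 - x)
    roots = np.roots([64, -112, 56, -7])   # 64x³ − 112x² + 56x − 7
    for x in np.sort(roots.real):          # roots are real; drop tiny imaginary parts
        print(x, f(f(f(x))))               # f³ returns each root to itself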

Although these roots are fixed points of f³, they are unstable fixed points, as demonstrated at the bottom of the next post.


Fourier uncertainty principle

Heisenberg’s uncertainty principle says there is a limit to how well you can know both the position and momentum of anything at the same time.  The product of the uncertainties in the two quantities has a lower bound.

There is a closely related principle in Fourier analysis that says a function and its Fourier transform cannot both be localized. The more concentrated a signal is in the time domain, the more spread out it is in the frequency domain.

There are many ways to quantify how localized or spread out a function is, and corresponding uncertainty theorems. This post will look at the form closest to the physical uncertainty principle of Heisenberg, measuring the uncertainty of a function in terms of its variance. The Fourier uncertainty principle gives a lower bound on the product of the variance of a function and the variance of its Fourier transform.

Variance

When you read “variance” above you might immediately think of the variance of a random variable. Variance in Fourier analysis is related to variance in probability, but there’s a twist.

If f is a real-valued function of a real variable, its variance is defined to be

V(f) = \int_{-\infty}^\infty x^2 \, f(x)^2 \, dx

This is the variance of a random variable with mean 0 and probability density |f(x)|². The twist alluded to above is that f is not a probability density, but |f(x)|² is.

Since we said f is a real-valued function, we could leave out the absolute value signs and speak of f(x)² being a probability density. In quantum mechanics applications, however, f is complex-valued and |f(x)|² is a probability density. In other words, we multiply f by its complex conjugate, not by itself.

The Fourier variance defined above applies to any f for which the integral converges. It is not limited to the case of |f(x)|² being a probability density, i.e. when |f(x)|² integrates to 1.

Uncertainty principle

The Fourier uncertainty principle is the inequality

V(f) \, \, V(\hat{f}) \geq C\, ||f||_2^2 \,\,||\hat{f}||_2^2

where the constant C depends on your convention for defining the Fourier transform [1]. Here ||f||₂² is the integral of f², the square of the L² norm.

Perhaps a better way to write the inequality is

\frac{V(f)}{||f\,||_2^2} \; \frac{V(\hat{f})}{||\hat{f}\,||_2^2} \geq C

for non-zero f. Rather than look at the variances per se, we look at the variances relative to the size of f. This form is scale invariant: if we multiply f by a constant, the numerators and denominators are multiplied by that constant squared.

The inequality is exact when f is proportional to a Gaussian probability density. And in this case the uncertainty is easy to interpret. If f is proportional to the density of a normal distribution with standard deviation σ, then its Fourier transform is proportional to the density of a normal distribution with standard deviation 1/σ, if you use the radian convention described in [1].

Example

We will evaluate both sides of the Fourier uncertainty principle with

    h[x_] := 1/(x^4 + 1)

and its Fourier transform

    g[w_] := FourierTransform[h[x], x, w]

We compute the variances and the squared norms with

    v0 = Integrate[x^2 h[x]^2, {x, -Infinity, Infinity}] 
    v1 = Integrate[w^2 g[w]^2, {w, -Infinity, Infinity}]
    n0 = Integrate[    h[x]^2, {x, -Infinity, Infinity}]
    n1 = Integrate[    g[w]^2, {w, -Infinity, Infinity}]

The results in mathematical notation are

\begin{align*} V(h) &= \frac{\pi}{4 \sqrt{2}} \\ V(\hat{h}) &= \frac{5\pi}{8\sqrt{2}} \\ || h ||_2^2 &= \frac{3\pi}{4 \sqrt{2}} \\ ||\hat{h}||_2^2 &= \frac{3\pi}{4 \sqrt{2}} \end{align*}

From here we can calculate that the ratio of the left side to the right side of the uncertainty principle is 5/18, which is larger than the lower bound C = 1/4.
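Here’s a numerical cross-check in Python (a sketch, assuming SciPy). It avoids computing the transform explicitly: with the unitary convention used here, Parseval’s identity and the derivative theorem give ∫ ω² ĥ(ω)² dω = ∫ h′(x)² dx.

    import numpy as np
    from scipy.integrate import quad

    h  = lambda x: 1/(x**4 + 1)
    hp = lambda x: -4*x**3/(x**4 + 1)**2    # h'(x)

    v0 = quad(lambda x: x**2 * h(x)**2, -np.inf, np.inf)[0]  # V(h)
    v1 = quad(lambda x: hp(x)**2, -np.inf, np.inf)[0]        # V(h-hat)
    n0 = quad(lambda x: h(x)**2, -np.inf, np.inf)[0]         # squared norm

    print(v0*v1/n0**2)   # 0.2777..., i.e. 5/18, above the bound 1/4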

By the way, it’s not a coincidence that h and its Fourier transform have the same norm. That’s always the case. Or rather, that is always the case with the Fourier convention we are using here. In general, the L² norm of the Fourier transform is proportional to the L² norm of the function, where the proportionality constant depends on your definition of Fourier transform but not on the function.

Here is a page that lists the basic theorems of Fourier transforms under a variety of conventions.


[1] In the notation of the previous post C = 1/(4b²). That is, C = 1/4 if your definition has an exp(± i ω t) term and C = 1/(16π²) if your definition has an exp(± 2π i ω t) term.

Said another way, if you express frequency in radians, as is common in pure math, C = 1/4. But if you express frequency in Hertz, as is common in signal processing, C = 1/(16π²).

Fourier transform definitions also have varying constants outside the integral, e.g. possibly dividing by √(2π), but this factor affects both sides of the Fourier uncertainty principle equally.

Fourier transforms in Mathematica

Unfortunately there are many slightly different ways to define the Fourier transform. So the first two questions when using Mathematica (or any other software) to compute Fourier transforms are: what definition of the Fourier transform does it use, and what do you do if you want to use a different definition?

The answer to the first question is that Mathematica defines the Fourier transform of f as

\hat{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp(i\omega x)\, f(x) \, dx

The answer to the second question is that Mathematica defines a parameterized Fourier transform by

\hat{f}(\omega) = \sqrt{\frac{|b|}{(2\pi)^{1-a}}} \, \int_{-\infty}^\infty \exp(ib\omega x)\, f(x) \, dx

where a defaults to 0 and b defaults to 1.

Examples

For example, if φ(x) = exp(-x²/2), then we can compute Mathematica’s default Fourier transform with

    φ[x_] := Exp[-x^2/2]
    FourierTransform[φ[x], x, ω]

This computes

 \hat{\varphi}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp(i\omega x)\, \varphi(x) \, dx = \exp(-\omega^2/2)
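As a cross-check, the same transform can be computed in Python with SymPy by evaluating the defining integral directly (a sketch):

    from sympy import I, exp, integrate, oo, pi, simplify, sqrt, symbols

    x, w = symbols('x omega', real=True)
    ft = integrate(exp(I*w*x)*exp(-x**2/2), (x, -oo, oo))/sqrt(2*pi)
    print(simplify(ft))   # exp(-omega**2/2)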

But if we set (a, b) to (0, 2π) with

    FourierTransform[φ[x], x, ω, FourierParameters -> {0, 2 Pi}]

we compute

\hat{\varphi}(\omega) = \int_{-\infty}^\infty \exp(2\pi i\omega x)\, \varphi(x) \, dx = \sqrt{2\pi} \exp(-2\pi^2\omega^2)

Theorems under various conventions

Several years ago I got frustrated enough with the multitude of Fourier transform conventions that I made a sort of Rosetta stone, giving several of the most basic theorems under each of eight definitions. I used the parameterization

\hat{f}(\omega) = \frac{1}{\sqrt{m}} \int_{-\infty}^\infty \exp(\sigma q i\omega x)\, f(x) \, dx

where m is either 1 or 2π, σ is either 1 or -1, and q is either 1 or 2π.

My σ and q are the sign and absolute value of Mathematica’s b. The relation between my m and Mathematica’s a is slightly more complicated and given in the table below.

    σ    m    q    a    b
    ±1   1    1    1    ±1
    ±1   1    2π   0    ±2π
    ±1   2π   1    0    ±1
    ±1   2π   2π   −1   ±2π
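An observation from the table, not from the original post: the correspondence can be captured in a small, hypothetical Python helper, using a = 1 − log(mq)/log(2π) and b = σq.

    from math import log, pi

    def fourier_parameters(sigma, m, q):
        """Map (sigma, m, q) in the parameterization above to Mathematica's (a, b)."""
        a = 1 - log(m*q)/log(2*pi)
        return round(a), sigma*q

    print(fourier_parameters(1, 2*pi, 2*pi))   # (-1, 6.283...), i.e. a = -1, b = 2π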

Adding tubes to knots

Several months ago I wrote a blog post about Lissajous curves and knots that included the image below.

Lissajous knot

Here’s an improved version of the same knot.

Lissajous knot plotted in Mathematica with Tube function

The original image was like tying the knot in thread. The new image is like tying it in rope, which makes it easier to see. The key was to use Mathematica’s Tube function to thicken the curve and to see the crossings. I also removed the bounding box in the image.

This was the original code:

    x[t_] := Cos[3 t]
    y[t_] := Cos[4 t + Sqrt[2]]
    z[t_] := Cos[5 t + Sqrt[3]]
   
    ParametricPlot3D[{x[t], y[t], z[t]}, {t, 0, 2 Pi}]

The new code keeps the definitions of x, y, and z but creates a plot using Tube.

    
    Graphics3D[
        Tube[
             Table[{x[i], y[i], z[i]}, {i, 0, 2 Pi, 0.01}], 
             0.05
        ], 
        Boxed -> False
    ]
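For comparison, here’s a rough Python sketch of the same curve. Matplotlib has no equivalent of Tube, so a thick line stands in for the rope.

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 2*np.pi, 1000)
    x = np.cos(3*t)
    y = np.cos(4*t + np.sqrt(2))
    z = np.cos(5*t + np.sqrt(3))

    ax = plt.figure().add_subplot(projection='3d')
    ax.plot(x, y, z, linewidth=4)
    ax.set_axis_off()   # roughly analogous to Boxed -> False
    plt.show()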

I went back and changed out the plots in my post Total curvature of a knot with plots similar to the one above.

Pythagorean dates

The numbers that make up today’s date—12, 16, and 20—form a Pythagorean triple. That is, 12² + 16² = 20².

There won’t be any such dates next year. You could confirm this by brute force since there are only 365 days in 2021, but I hope to do something a little more interesting here.
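For the record, the brute-force check is only a few lines of Python (a sketch):

    from calendar import monthrange
    from itertools import permutations

    hits = [(m, d) for m in range(1, 13)
            for d in range(1, monthrange(2021, m)[1] + 1)
            if any(a*a + b*b == c*c for a, b, c in permutations((m, d, 21)))]
    print(hits)   # [] -- no Pythagorean dates in 2021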

The Mathematica function

    PowersRepresentations[n, k, p]

lists the ways to represent a number n as the sum of k pth powers. Calling PowersRepresentations[21^2, 2, 2] shows that there is no non-trivial way to write 21² as the sum of two squares; the only way is 0² + 21². You might try using the whole year rather than the last two digits, but there’s no non-trivial way to write 2021² as the sum of two squares either.

The paragraph above says that no Pythagorean triangle can have hypotenuse 21 or 2021. But could there be a Pythagorean date next year with 21 on one of the sides?

If the components of a date correspond to the sides of a Pythagorean triangle, and one of those sides is 21, then the day must be the hypotenuse, because all the month numbers are less than 21, and so the day must be greater than 21.

Let’s look for Pythagorean triangles with hypotenuse equal to 22, 23, …, 31 and see whether any of them have 21 as a side.

The Mathematica command

    For[d = 22, d <= 31, d++, Print[PowersRepresentations[d*d, 2, 2]]]

returns

    {{0,22}}
    {{0,23}}
    {{0,24}}
    {{0,25},{7,24},{15,20}}
    {{0,26},{10,24}}
    {{0,27}}
    {{0,28}}
    {{0,29},{20,21}}
    {{0,30},{18,24}}
    {{0,31}}

This tells us that the only numbers from 22 to 31 that can be the hypotenuse of a Pythagorean triangle are 25, 26, 29, and 30.

The number 21 does appear in the list: (20, 21, 29) is a Pythagorean triple. But if the 29th of a month were a Pythagorean date next year, it would have to be the 20th month. Since we only have 12 months, there won’t be any Pythagorean dates next year.

The numbers 22, 23, and 24 can’t be written as sums of two squares in a non-trivial way, so if there’s a Pythagorean date in the next few years, the day of the month will have to be the hypotenuse. We can see from the output above that (7, 24, 25) and (10, 24, 26) are Pythagorean triples, so the next Pythagorean date is 7/25/24, followed by 10/26/24. The next after that is 7/24/25, and the next after that will be 10/24/26.
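A quick Python search (a sketch, assuming month/day/two-digit-year dates) confirms this list of upcoming Pythagorean dates:

    from calendar import monthrange
    from itertools import permutations

    for y in range(22, 27):
        for m in range(1, 13):
            for d in range(1, monthrange(2000 + y, m)[1] + 1):
                if any(a*a + b*b == c*c for a, b, c in permutations((m, d, y))):
                    print(f"{m}/{d}/{y}")   # 7/25/24, 10/26/24, 7/24/25, 10/24/26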

Incidentally, there is an algorithm for counting how many ways a number can be written as the sum of k squares, and it is implemented in Mathematica as SquaresR[k, n]. If you run SquaresR[2, 441] it will tell you that there are four ways to write 441 as the sum of two squares. But all four of these are trivial: (0, 21), (21, 0), (0, -21), and (-21, 0). Every perfect square has these four trivial representations, so a return value of 4 means there are no non-trivial solutions.

New Asymptotic function in Mathematica 12.1

One of the new features in Mathematica 12.1 is the function Asymptotic. Here’s a quick example of using it.

Here’s an asymptotic series for the log of the gamma function I wrote about here.

\log \Gamma(z) \sim (z - \frac{1}{2}) \log z - z + \frac{1}{2} \log(2\pi) + \frac{1}{12z} - \frac{1}{360z^3} + \cdots

If we ask Mathematica

    Asymptotic[LogGamma[z], z -> Infinity]

we get simply the first term:

    z Log[z]

But we can use the option SeriesTermGoal to tell it we’d like more terms. For example

    Asymptotic[LogGamma[z], z -> Infinity, SeriesTermGoal -> 4]

yields

    -1/(360*z^3) + 1/(12*z) - z + Log[2*Pi]/2 - Log[z]/2 + z*Log[z]

This doesn’t contain a term 1/z⁴, but it doesn’t need to: there is no such term in the asymptotic expansion. We are getting the terms up to order 4; it’s just that the coefficient of the 1/z⁴ term is zero.

If we ask for terms up to order 5

    Asymptotic[LogGamma[z], z -> Infinity, SeriesTermGoal -> 5]

we do get a term 1/z⁵, but notice there is no 4th order term.

    1/(1260*z^5) - 1/(360*z^3) + 1/(12*z) - z + Log[2*Pi]/2 - Log[z]/2 + z*Log[z]
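We can check the expansion numerically in Python with mpmath (a sketch):

    from mpmath import log, loggamma, pi

    z = 10
    series = ((z - 0.5)*log(z) - z + 0.5*log(2*pi)
              + 1/(12*z) - 1/(360*z**3) + 1/(1260*z**5))
    print(loggamma(z), series)   # the two agree to about 10 decimal places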

A note on output forms

The Mathematica output displayed above was created by using

    Export[filename, expression]

to save images as SVG files. The alt text for the images was created using

    InputForm[expression]


Chebyshev approximation

In the previous post I mentioned that the Remez algorithm computes the best polynomial approximation to a given function f as measured by the maximum norm. That is, for a given n, it finds the polynomial p of degree n that minimizes the absolute error

|| f – p ||.

The Mathematica function MiniMaxApproximation minimizes the relative error by minimizing

|| (f – p) / f ||.

As was pointed out in the comments to the previous post, Chebyshev approximation produces a nearly optimal approximation, coming close to minimizing the absolute error. The Chebyshev approximation can be computed more easily and the results are easier to understand.

To form a Chebyshev approximation, we expand a function in a series of Chebyshev polynomials, analogous to expanding a function in a Fourier series, and keep the first few terms. Like sines and cosines, Chebyshev polynomials are orthogonal functions, and so Chebyshev series are analogous to Fourier series. (If you find it puzzling to hear of functions being orthogonal to each other, see this post.)

Here is Mathematica code to find and plot the Chebyshev approximation to eˣ over [-1, 1]. First, here are the coefficients.

    weight[x_] := 2/(Pi Sqrt[1 - x^2])
    c = Table[ 
        Integrate[ Exp[x] ChebyshevT[n, x] weight[x], {x, -1, 1}], 
        {n, 0, 5}]

The coefficients turn out to be exactly expressible in terms of Bessel functions, but typically you’d need to do a numerical integration with NIntegrate.

Now we use the Chebyshev coefficients to evaluate the Chebyshev approximation.

    p[x_] := Evaluate[c . Table[ChebyshevT[n - 1, x], {n, Length[c]}]] 
             - First[c]/2

You could see the approximating polynomial with

    Simplify[N[p[x]]]

which displays

    1.00004 + 1.00002 x + 0.499197 x^2 + 0.166489 x^3 + 0.0437939 x^4 + 0.00868682 x^5

The code

    Plot[Exp[x] - p[x], {x, -1, 1}]

shows the error in approximating the exponential function with the polynomial above.

Note that the plot has nearly equal ripples; the optimal approximation would have exactly equal ripples. The Chebyshev approximation is not optimal, but it is close. The absolute error is smaller than that of MiniMaxApproximation by a factor of about e.
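NumPy can do something similar in Python: chebinterpolate interpolates at Chebyshev points rather than truncating the Chebyshev series, but for a smooth function like eˣ the two are nearly the same (a sketch):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    coef = C.chebinterpolate(np.exp, 5)   # degree-5 Chebyshev interpolant
    x = np.linspace(-1, 1, 1001)
    err = np.exp(x) - C.chebval(x, coef)
    print(err.max(), err.min())           # extrema nearly equal in magnitude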

There are bounds on how different the error can be between the best polynomial approximation and the Chebyshev series approximation. For polynomials of degree n, the Chebyshev series approximation error is never more than

4 + 4 log(n + 1)/π²

times the best approximation error. See Theorem 16.1 in Approximation Theory and Approximation Practice by Lloyd N. Trefethen.


Remez algorithm and best polynomial approximation

The best polynomial approximation, in the sense of minimizing the maximum error, can be found by the Remez algorithm. I expected Mathematica to have a function implementing this algorithm, but apparently it does not have one. (But see update below.)

It has a function named MiniMaxApproximation, which sounds like the Remez algorithm, and it’s close, but it’s not quite it.

To use this function you first have to load the FunctionApproximations package.

    << FunctionApproximations`

Then we can use it, for example, to find a polynomial approximation to eˣ on the interval [-1, 1].

    MiniMaxApproximation[Exp[x], {x, {-1, 1}, 5, 0}]

This returns the polynomial

    1.00003 + 0.999837 x + 0.499342 x^2 + 0.167274 x^3 + 0.0436463 x^4 + 0.00804051 x^5

And if we plot the error, the difference between eˣ and this polynomial, we see that we get a good fit.

minimax error

But we know this isn’t optimal because there is a theorem that says the optimal approximation has equal ripple error. That is, the absolute value of the error at all its extrema should be the same. In the graph above, the error is quite a bit larger on the right end than on the left end.

Still, the error is not much larger than the smallest possible using 5th degree polynomials. And the error is about 10x smaller than using a Taylor series approximation.

    Plot[Exp[x] - (1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120), {x, -1, 1}]

minimax error
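To put a number on that comparison, here’s a quick Python check of the Taylor error (a sketch):

    import numpy as np
    from math import factorial

    x = np.linspace(-1, 1, 1001)
    taylor = sum(x**k/factorial(k) for k in range(6))
    print(np.abs(np.exp(x) - taylor).max())   # about 0.0016, roughly 10x the minimax error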

Update: Jason Merrill pointed out in a comment what I was missing. It turns out MiniMaxApproximation finds an approximation that minimizes relative error. Since eˣ doesn’t change that much over [-1, 1], the absolute error and relative error aren’t radically different. There is no option to minimize absolute error.

When you look at the approximation error divided by eˣ you get the ripples you’d expect.

minimax error

See the next post for a way to construct near-optimal polynomial approximations using Chebyshev approximation.

Analogies between Weierstrass functions and trig functions

If you look at the Wikipedia article on Weierstrass functions, you’ll find a line that says “the relation between the sigma, zeta, and ℘ functions is analogous to that between the sine, cotangent, and squared cosecant functions.” This post unpacks that sentence.

Weierstrass p function

First of all, what is ℘? It’s the Weierstrass elliptic function, which is the mother of all elliptic functions in some sense. All other elliptic functions can be constructed from this function and its derivatives. As for the symbol itself, ℘ is the traditional symbol for the Weierstrass function. It’s U+2118 in Unicode, &weierp; in HTML, and \wp in LaTeX.

The line above suggests that ℘(x) is analogous to csc²(x). Indeed, the plots of the two functions are nearly identical.

Here’s Weierstrass’ function:

Weierstrass p plot

And here’s csc²:

plot of csc^2

The two plots basically agree to within the thickness of a line.

The Weierstrass function ℘ has two parameters that I haven’t mentioned. Elliptic functions are periodic in two directions in the complex plane, and so their values everywhere are determined by their values over a parallelogram. The two parameters specify the fundamental parallelogram, or at least that’s one way of parametrizing the function. The WeierstrassP function in Mathematica takes two other parameters, called the invariants, and these invariants were chosen in the plot above to match the period of the cosecant function.

    Plot[Sin[x]^-2, {x, -10, 10}]
    Plot[WeierstrassP[
            x, 
            WeierstrassInvariants[{Pi/2, Pi I/2}]
         ], 
        {x, -10, 10}
    ]

The fundamental parallelogram is defined in terms of half periods, usually denoted with ω’s, and the invariants are denoted with g’s. The function WeierstrassInvariants converts from half periods to invariants, from ω’s to g’s.

Note that ℘ and cosecant squared are similar along the real axis, but they’re very different in the full complex plane. Trig functions are periodic along the real axis but grow exponentially along the complex axis. Elliptic functions are periodic along both axes.

Weierstrass zeta function

The Weierstrass zeta function is not an elliptic function. It is not periodic but rather quasiperiodic. The derivative of the Weierstrass zeta function is the negative of the Weierstrass elliptic function, i.e.

ζ′(x) = -℘(x)

which is analogous to the fact that the derivative of cot(x) is -csc²(x). So in that sense ζ is to ℘ as cotangent is to cosecant squared.
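The trig half of that analogy is easy to verify in Python with SymPy (a sketch):

    from sympy import cot, csc, diff, simplify, symbols

    x = symbols('x')
    print(simplify(diff(cot(x), x) + csc(x)**2))   # 0, i.e. cot'(x) = -csc²(x)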

The plots of ζ(x) and cot(x) are similar as shown below.

Plot of Weierstrass zeta and cotangent

The Mathematica code to make the plot above was

    Plot[
        {WeierstrassZeta[x, WeierstrassInvariants[{Pi/2, Pi I/2}]], 
         Cot[x]}, 
        {x, -10, 10}, 
        PlotLabels -> {"zeta", "cot"}
    ]

Weierstrass sigma function

The Weierstrass sigma function is also not an elliptic function. It is analogous to the sine function as follows. The logarithmic derivative of the Weierstrass sigma function is the Weierstrass zeta function, just as the logarithmic derivative of sine is cotangent. That is,

(log σ(x))′ = ζ(x).

The logarithmic derivative of a function is the derivative of its log, and so the logarithmic derivative of a function f is f′/f.

However, the plot of the sigma function, WeierstrassSigma in Mathematica, hardly looks like sine.

Plot of Weierstrass sigma and sine

So in summary, the logarithmic derivative takes Weierstrass sigma to Weierstrass zeta just as it takes sine to cotangent. The negative derivative takes Weierstrass zeta to Weierstrass ℘ just as it takes cotangent to cosecant squared.


Total curvature of a knot

Tie a knot in a rope and join the ends together. At each point in the rope, compute the curvature, i.e. how much the rope bends, and integrate this over the length of the rope. The Fary-Milnor theorem says the result must be greater than 4π. This post will illustrate this theorem by numerically integrating the curvature of a knot.

If p and q are relatively prime integers, then the following equations parameterize a knot.

x(t) = cos(pt) (cos(qt) + 2)
y(t) = sin(pt) (cos(qt) + 2)
z(t) = -sin(qt)

This is in fact a torus knot because the curve stays on the surface of a torus (doughnut) [1]. We can get a trefoil knot, for example, by setting p = 2 and q = 3.

Trefoil knot

We’ll use Mathematica to compute the total curvature of this knot and other examples. First the parameterization:

    x[t_, p_, q_] := Cos[p t] (Cos[q t] + 2)
    y[t_, p_, q_] := Sin[p t] (Cos[q t] + 2) 
    z[t_, p_, q_] := -Sin[q t]

We can plot torus knots as follows.

    k[t_, p_, q_] := { 
        x[t, p, q], 
        y[t, p, q], 
        z[t, p, q]
    }
    Graphics3D[Tube[Table[k[i, p, q], {i, 0, 2 Pi, 0.01}], 
        0.08], Boxed -> False]

Here’s a more complicated knot with p = 3 and q = 7.

(3, 7) knot

Before we can integrate the curvature with respect to arc length, we need an expression for an element of arc length as a function of the parameter t.

    ds[t_, p_, q_] :=  Sqrt[ 
        D[x[t, p, q], t]^2 + 
        D[y[t, p, q], t]^2 + 
        D[z[t, p, q], t]^2
    ]

Now we can compute the total curvature.

    total[p_, q_] := NIntegrate[
        ArcCurvature[k[t, p, q], t] ds[t, p, q], 
        {t, 0, 2 Pi}
    ]

We can use this to find that the total curvature of the (2,3) torus knot, the trefoil, is 17.8224, whereas 4π is 12.5664. So the Fary-Milnor theorem holds.
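The same computation can be reproduced in Python with finite differences (a sketch, assuming NumPy):

    import numpy as np

    p, q = 2, 3
    t = np.linspace(0, 2*np.pi, 20001)
    r = np.stack([np.cos(p*t)*(np.cos(q*t) + 2),
                  np.sin(p*t)*(np.cos(q*t) + 2),
                  -np.sin(q*t)], axis=1)

    dt = t[1] - t[0]
    r1 = np.gradient(r, dt, axis=0)               # r'(t)
    r2 = np.gradient(r1, dt, axis=0)              # r''(t)
    speed = np.linalg.norm(r1, axis=1)
    kappa = np.linalg.norm(np.cross(r1, r2), axis=1)/speed**3

    print(np.trapz(kappa*speed, t), 4*np.pi)      # about 17.82 vs 12.57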

Let’s do one more example, this time a (1,4) knot.

unknot

You can see that this is not actually a knot. This doesn’t contradict what we said above: when p or q equals 1, we get an unknot, which is a trivial knot.

When we compute its total curvature we get 24.2737, more than 4π. The Fary-Milnor theorem doesn’t say that total curvature in excess of 4π is a sufficient condition for a loop to be knotted; it says it’s necessary. Total curvature less than 4π proves that something isn’t a knot, but curvature greater than 4π doesn’t prove anything.


[1] If you replace the 2s in the parameterization with a larger number, it’s easier to see from the graphs that the curves are on a torus. For example, if we plot the (3,7) knot again, replacing the 2’s in the definition of x(t) and y(t) with 5’s, we can kinda see a torus inside the knot.

(3, 7) knot torus