The $p$-adic Number System and the Artin-Hasse Exponential


Achyut Bharadwaj, Lex Harie Pisco, Krittika Garg, Swayam Chaulagain, Counsellor: Sanskar Agrawal, Mentor: Nischay Reddy
June 2023
Keywords: $p$-adic numbers, Artin-Hasse exponential, radius of convergence, power series, integral power series, discs, leftist numbers

Introduction #

In this paper, we explore the $p$-adic number system, defining it in two ways: as an extension of the $p$-adic integers, and as an extension of the rationals. We then perform analysis in the $p$-adics by defining convergence, continuity, and discs. Next, we describe exponential and logarithmic functions over the $p$-adics as functions derived from power series, and we explore the radius of convergence and other properties of these functions. Finally, we study the Artin-Hasse exponential, which, though seemingly arbitrary, turns out to be an integral power series.

Formal Power Series #

The Ring $R[[x]]$ #

Prove that $R[[x]]$ is a ring under the natural operations of addition and multiplication.

Addition : Let $f,g \in R[[x]]$ with $f(x)= \sum_{i=0}^\infty a_i x^i$ and $g(x)= \sum_{i=0}^\infty b_i x^i$. Then $f(x) + g(x) = \sum_{i=0}^\infty a_i x^i + \sum_{i=0}^\infty b_i x^i = \sum_{i=0}^\infty (a_i + b_i) x^i$. Since $a_i, b_i \in R$ and $R$ is closed under addition, each coefficient $a_i + b_i$ lies in $R$, so $R[[x]]$ is closed under addition.

Multiplication : We now show that $R[[x]]$ is closed under multiplication. Let $f,g \in R[[x]]$ with $f(x)= \sum_{i=0}^\infty a_i x^i$ and $g(x)= \sum_{j=0}^\infty b_j x^j$. Then $$f(x)g(x) = \left(\sum_{i=0}^\infty a_i x^i\right) \times \left(\sum_{j=0}^\infty b_j x^j\right) = \sum_{k=0}^\infty\left(\sum_{i = 0}^ka_ib_{k-i}\right)x^k$$

Since the sum $\sum_{i=0}^ka_ib_{k-i}$ is a finite sum of products of elements of $R$, the sum itself is in $R$, as $R$ is closed under addition and multiplication. Therefore, the above is a power series over $R$; that is, the product $fg \in R[[x]]$.
 
Now, since the ring $R$ is commutative under both $+$ and $\cdot$, it follows that $R[[x]]$ is also commutative. Moreover, it is clearly associative under $+$ since $R$ is itself associative under $+$. Next, the ring has a zero element, i.e. $0 \in R[[x]]$, since for any $f\in R[[x]]$ we have $f + 0 = f$. Similarly, it also has a multiplicative identity, i.e. $1$. The additive inverse of $f$ is the power series whose coefficients are the additive inverses of the coefficients of $f$; thus every element of $R[[x]]$ has an additive inverse. Distributivity also follows readily from the fact that $R$ is itself distributive.
 
We will now prove associativity of multiplication. Let $$a = \sum_{i=0}^\infty a_ix^i, \quad b = \sum_{i=0}^\infty b_ix^i, \quad c = \sum_{i=0}^\infty c_ix^i$$ Now, $$ab = \sum_{k=0}^\infty\left(\sum_{i=0}^ka_ib_{k-i}\right)x^k = \sum_{i=0}^\infty p_ix^i$$ So, $$(ab)c = \sum_{k=0}^\infty \left(\sum_{i=0}^k p_ic_{k-i}\right)x^k = \sum_{k=0}^\infty\left(\sum_{i=0}^k \left(\sum_{j=0}^ia_jb_{i-j}\right)c_{k-i}\right)x^k$$ The coefficient of $x^k$ above expands as $$c_k(a_0b_0) + c_{k-1}(a_0b_1 + a_1b_0) + \cdots + c_0(a_0b_k + \cdots + a_kb_0),$$ which is precisely the sum of all products $a_ib_jc_l$ with $i+j+l = k$. Now, $a(bc) = (bc)a$ by commutativity, so the same computation shows that the coefficient of $x^k$ in $a(bc)$ is $$a_k(b_0c_0) + a_{k-1}(b_0c_1 + b_1c_0) + \cdots + a_0(b_0c_k + \cdots + b_kc_0),$$ which is again the sum of all products $a_ib_jc_l$ with $i+j+l = k$. Since the coefficients agree for every $k$, we get $(ab)c = a(bc)$. This proves associativity of multiplication.
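The ring operations above can be checked concretely on truncated power series. Below is a minimal Python sketch (the truncation bound `N` and the helper names `add` and `mul` are ours, purely for illustration) verifying commutativity and associativity of the Cauchy product on sample series:

```python
# Truncated formal power series over Z: a series is a list [a_0, ..., a_{N-1}],
# i.e. we work modulo x^N.  All names here are illustrative helpers.
N = 8

def add(f, g):
    # coefficientwise addition: (f + g)_i = a_i + b_i
    return [f[i] + g[i] for i in range(N)]

def mul(f, g):
    # Cauchy product: coefficient of x^k is sum_{i=0}^{k} a_i * b_{k-i}
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(N)]

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [2, 0, 1, 0, 0, 0, 0, 0]
c = [5, 1, 0, 3, 0, 0, 0, 0]

# commutativity and associativity of multiplication, coefficient by coefficient
assert mul(a, b) == mul(b, a)
assert mul(mul(a, b), c) == mul(a, mul(b, c))
```

Truncating at $x^N$ is harmless here because the coefficient of $x^k$ in a product only involves coefficients of degree at most $k$.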

Units in $R[[x]]$ #

Definition 1. For $f(x) = \sum_{i=0}^\infty a_ix^i \in R[[x]]$, we define $f(0) = a_0$.

Definition 2. A unit in $R[[x]]$ is an $f \in R[[x]]$ for which there exists some $g \in R[[x]]$ such that $[f \cdot g](x) = 1$.

Examples of these units are:

  1. $f(x) = -(x-1)$

  2. $f(x) = \sum_{i=0}^{\infty} x^i$

  3. $f(x) = 1 + \sum_{i=1}^n a_ix^i$ for $a_i \in R$

  4. $f(x) = (\pm (x-1) \cdot \sum_{i=0}^{\infty} x^i)^n$ for $n \in \mathbb{N}$

Proposition 1. $f(x) \in R[[x]]$ is a unit in $R[[x]]$ if and only if $f(0)$ is a unit in $R$.

Proof. Let $f(x), g(x)$ be in $R[[x]]$ such that $f \cdot g = 1$. We need to prove that such a power series $g$ exists if and only if $f(0)$ is a unit in $R$. Let $$f(x) = a_0 + a_1x + \cdots$$ and $$g(x) = b_0 + b_1x + \cdots.$$ Thus, proving the existence of $g$ is equivalent to proving that there exists a sequence $\lbrace b_i \rbrace$ such that $$(a_0 + a_1x + \cdots)(b_0 + b_1x + \cdots) = 1.$$ Expanding this product, and combining the terms of equal degree, we get: $$a_0b_0 + (a_0b_1 + a_1b_0)x + \cdots + (a_0b_n + \cdots + a_nb_0)x^n + \cdots = 1$$ Since the RHS is simply $1$, we need $a_0b_0 = 1$ and $(a_0b_n + \cdots + a_nb_0) = 0$ for all $n\geq1$.
 
Now, if $f(0) = a_0$ is not a unit, then we cannot find a $b_0$ in $R$ such that $a_0b_0 = 1$. This shows that we cannot find the desired sequence $\lbrace b_i \rbrace$. Thus, if $f(x)$ is a unit in $R[[x]]$, it follows that $f(0)$ is a unit in $R$.
 
We now prove the converse. That is, if $f(0)$ is a unit in $R$, then there exists some sequence $\lbrace b_i\rbrace$ so that $$g(x)=\sum_{i\geq0}b_ix^i$$ satisfies $f(x)g(x) = 1$. We use induction in order to do so. For the base case, since $a_0$ is a unit, we may take $b_0 = a_0^{-1}$, so that $a_0b_0 = 1$. Now, assume that there exists some sequence $\lbrace b_i\rbrace_{i=0}^n$ so that $a_0b_0 = 1$ and for all $0 < k \leq n$ we have $$a_0b_k + \cdots + a_kb_0 = 0.$$ We prove that there exists $b_{n+1}$ so that the same holds for $k = n+1$ as well. We need $$a_{n+1}b_0 + \cdots + a_0b_{n+1} = 0.$$ Rearranging, $$b_{n+1} = -a_0^{-1}\left(a_{n+1}b_0 + \cdots + a_1b_n\right)$$ Since $b_i$ exists for all $i \leq n$ and $a_0^{-1}$ exists, $b_{n+1}$ also exists, completing our inductive step. Thus, if $f(0)$ is a unit, then $f(x)$ has an inverse. ◻
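The inductive construction in this proof can be run directly. Here is a small Python sketch over $\mathbb{Q}$ (the helper name `inverse` and the truncation bound `N` are ours), recovering the geometric series as the inverse of $1-x$:

```python
from fractions import Fraction

def inverse(f, N):
    """First N coefficients of f^{-1}, following the proof:
    b_0 = a_0^{-1},  b_n = -a_0^{-1} (a_1 b_{n-1} + ... + a_n b_0)."""
    b = [Fraction(1) / f[0]]                 # base case: b_0 = a_0^{-1}
    for n in range(1, N):
        s = sum(Fraction(f[i]) * b[n - i] for i in range(1, n + 1))
        b.append(-b[0] * s)                  # inductive step
    return b

f = [1, -1, 0, 0, 0, 0]                      # f(x) = 1 - x, so f(0) = 1 is a unit
g = inverse(f, 6)
assert g == [1, 1, 1, 1, 1, 1]               # 1/(1 - x) = 1 + x + x^2 + ...
```

This matches example 2 in the list of units above: $\sum_{i\geq0}x^i$ is the inverse of $1-x$.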

Compositions of formal power series #

We will now determine when $f(g(x))$, for $g(x) \in R[[x]]$, is itself an element of $R[[x]]$.

Proposition 2. Let $g \in R[[x]]$. Then $f(g(x)) \in R[[x]]$ for every $f \in R[[x]]$ if and only if $g(0) = 0$.

Proof. Suppose $g(0) = 0$. Then, we can write $g(x) = b_kx^k + h(x)$, where $x^k$ is the smallest power of $x$ appearing in $g(x)$; note that $k \neq 0$ since $g(0)=0$, and every power of $x$ in $h(x)$ is greater than $k$. Also, let $$f(x) = \sum_{i=0}^\infty a_ix^i$$ Therefore, $$f(g(x)) = a_0 + a_1g(x) + a_2g(x)^2 + \cdots = a_0 + a_1(b_kx^k + h(x)) + a_2(b_kx^k + h(x))^2 + \cdots$$ Also let $$f(g(x)) = c_0 + c_1x + c_2x^2 + \cdots$$ In order to prove that $f(g(x)) \in R[[x]]$, it suffices to prove that $c_n \in R$ for all $n$. So, we will work on finding the coefficient of the $x^n$ term, i.e. $c_n$.
 
Consider the term $a_{n+1}(b_kx^k + h(x))^{n+1}$ in the expansion of $f(g(x))$. The lowest power of $x$ in this term is $x^{(n+1)k}$. Since $k \geq 1$, it follows that $(n+1)k > n$. Therefore, the coefficient $c_n$ of $x^n$ is independent of $a_{n+1}$. By a similar argument, $c_n$ is independent of $a_j$ for all $j > n$. In other words, it follows that $$c_n = \sum_{i = 0}^n a_it_i$$ where $t_i \in R$. Note that no higher powers of the $a_i$ appear, since the coefficients $a_i$ are not raised to powers in the expansion of $f(g(x))$. Since $a_i, t_i \in R$, it follows that $a_it_i \in R$. Thus, $\sum_{i=0}^na_it_i \in R$. Therefore, for all $n$, we have $c_n \in R$, where $$f(g(x)) = c_0 + c_1x + \cdots$$ Therefore, by definition of a formal power series, $f(g(x)) \in R[[x]]$.
 
Now, suppose $g(0) = c \neq 0$. Let $g(x) = c + h(x)$. Then, $$f(g(x)) = a_0 + a_1(c + h(x)) + a_2(c+h(x))^2 + \cdots = c_0 + c_1x + \cdots$$ The $n$th term of the above expansion contributes the constant $a_nc^n$. Since there are infinitely many terms in the expansion, $c_0$ is the infinite sum $a_0 + a_1c + a_2c^2 + \cdots$, which is not defined in $R$. Therefore, $c_0 \notin R$. It follows that $f(g(x)) \notin R[[x]].$ ◻
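Truncated composition also illustrates the role of $g(0) = 0$: the powers $g(x)^i$ with $i \geq N$ then contribute nothing below $x^N$, so each coefficient is a finite sum. A Python sketch (helper names and truncation bound are ours):

```python
N = 6

def mul(f, g):
    # Cauchy product modulo x^N
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(N)]

def compose(f, g):
    # f(g(x)) mod x^N; requires g(0) = 0 so that g^i has lowest degree >= i
    assert g[0] == 0, "composition needs g(0) = 0"
    out = [0] * N
    power = [1] + [0] * (N - 1)        # g(x)^0 = 1
    for a_i in f[:N]:                  # a_i with i >= N cannot affect c_0..c_{N-1}
        out = [o + a_i * q for o, q in zip(out, power)]
        power = mul(power, g)
    return out

f = [1] * N                            # truncation of 1 + x + x^2 + ...
g = [0, 1, 1, 0, 0, 0]                 # g(x) = x + x^2, with g(0) = 0
assert compose(f, g) == [1, 1, 2, 3, 5, 8]   # 1/(1 - x - x^2): Fibonacci numbers
```

With $g(0) \neq 0$ the loop would instead have to accumulate the infinite sum $a_0 + a_1c + a_2c^2 + \cdots$ into the constant term, which is exactly the obstruction in the proof.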

Multivariate Power Series #

We define $R[[x,y]]$ to be $(R[[x]])[[y]]$. Inductively, we may define $$R[[x_1, \dots, x_k]] = (R[[x_1, \dots, x_{k-1}]])[[x_k]]$$ Since $R$ is a ring, $R[[x_1]]$ is a ring (base case). Now, assume that $R[[x_1, x_2, \dots, x_k]]$ is a ring. Then $R[[x_1, x_2, \dots, x_{k+1}]] = (R[[x_1, x_2, \dots, x_k]])[[x_{k+1}]]$ is again a ring (inductive step). Thus, by induction, $R[[x_1, x_2, \dots, x_n]]$ is a ring for every $n$.

We will now generalize. For which power series $g \in R[[x,y]]$ can we define $f(g(x,y))$ for any $f \in R[[t]]$?

Proposition 3. Let $g \in R[[x, y]]$. Then $f(g(x, y))$ is defined in $R[[x, y]]$ for every $f \in R[[t]]$ if and only if $g(0, 0) = 0$.

Proof. Suppose $g(0, 0) = 0$. Then the lowest-degree term of $g$ can be written as $b_{mn}x^my^n$, where $m$ and $n$ are not both $0$. So, we let $g(x, y) = b_{mn}x^my^n + h(x, y)$, where every term of $h$ has higher degree. Thus, $$ \begin{aligned} f(g(x, y)) &= f(b_{mn}x^my^n + h(x, y)) \\ &= a_0 + a_1(b_{mn}x^my^n + h(x, y))+a_2(b_{mn}x^my^n + h(x, y))^2+\cdots \\ &= c_{00} + c_{10}x + c_{01}y + \cdots \end{aligned} $$ Now, consider the coefficient $c_{pq}$. The term $$a_{p+q+1}(b_{mn}x^my^n + h(x, y))^{p+q+1}$$ has lowest-degree part $x^{m(p+q+1)}y^{n(p+q+1)}$, whose total degree $(m+n)(p+q+1) \geq p+q+1$ exceeds that of $x^py^q$. Hence, $c_{pq}$ is independent of $a_k$ for all $k> p+q$. Therefore, we have that $$c_{pq} = \sum_{i=0}^{p+q}a_id_i$$ for some $d_i \in R$, which is clearly an element of $R$. Therefore, $f(g(x, y))$ makes sense in $R[[x, y]]$.
 
Now, if $g(0, 0) = c \neq 0$, then let $g(x, y) = c + h(x, y)$. Therefore, we have $$f(g(x, y)) = a_0 + a_1(c+h(x, y)) + a_2(c+h(x, y))^2 + \cdots$$ The constant term of the above expansion is $a_0 + a_1c + a_2c^2 + \cdots$, which is an infinite sum. This does not make sense in $R$, since we can only compute finite sums in $R$. Therefore, if $c\neq 0$, then there is no way to make sense of $f(g(x, y))$ as an element of $R[[x, y]]$. ◻

Polynomial Fields and Rational Functions #

For some field $k$, we write $k[x]$ to denote the set of polynomials with coefficients in $k$. We write $k(x)$ to denote the set $$\left\lbrace \left .\frac{g(x)}{h(x)} \right \rvert g(x), h(x) \in k[x],\ h(x) \neq 0\right \rbrace$$ We will now explore the relation between $k(x)$, $k[x]$ and $k[[x]]$. Firstly, note that $k[x] \subseteq k(x)$, by letting $h(x) = 1$. Also, clearly $k[x] \subseteq k[[x]]$. This is because a polynomial is a formal power series with the coefficients equal to $0$ for all $x^n$ where $n$ is greater than the degree of the polynomial. We now check when an element of $k(x)$ can be written as a formal power series. Consider, as an example, the power series expansion $t(x)$ of a rational function $g(x)/h(x)$. Let

$g(x) = 2x+1$ and $h(x) = x^2 + 1$. Then, $$\frac{g(x)}{h(x)} = \frac{2x+1}{x^2 + 1} = 1 + 2x - x^2 - 2x^3 + x^4 + 2x^5 - \cdots$$ Notice how the terms of the power series are recursive: the $n$th coefficient of $t(x)$ can be found in terms of the previous coefficients of $t(x)$. In fact, this holds in general too!

Proposition 4. A rational function $g(x)/h(x)$, i.e. an element of $k(x)$, with $h(0) \neq 0$ can be written as a power series $t(x) \in k[[x]]$.

Proof. If $h(0) \neq 0$, then $h(0)$ is a unit in the field $k$, so by our earlier result $h(x)$ is a unit in $k[[x]]$. Thus, $1/h(x) \in k[[x]]$. Since formal power series are closed under multiplication, $g(x)\cdot (1/h(x)) \in k[[x]]$. Thus, $g(x)/h(x)$ can be written as a power series $t(x)$. If $h(0)=0$, then $h(x)$ is not a unit, which implies $1/h(x) \notin k[[x]]$, and thus $g(x)/h(x)$ cannot be expressed as a formal power series. ◻

Proposition 5. Let $t(x) = g(x)/h(x)$ be the power series expansion of a rational function. Then, the coefficients $a_n$ of $t(x)$ satisfy a linear recursion. Formally, this means that there exists a number $m\geq 1$ and constants $c_1, \dots, c_m \in k$ such that for all sufficiently large $n$, $$a_n = \sum_{i=1}^mc_ia_{n-i}$$

Proof. Firstly, let the degree of $h(x)$ be $m$ and the degree of $g(x)$ be $k$. Let $h(x) = b_0 + b_1x + \cdots + b_mx^m$ and $t(x) = a_0 + a_1x + \cdots$. Now, since $t(x) = g(x)/h(x)$, we may write $g(x) = t(x)h(x)$.
 
Since the degree of $g(x)$ is $k$, the coefficient of $x^n$ in the expansion of the product on the RHS is $0$ for all $n$ greater than $k$.
 
Now, consider some $n > \max(m, k)$. Clearly, $n>k$, so the coefficient of $x^n$ in the expansion will be $0$. Let us find this coefficient using the fact that $t(x) = a_0 + a_1x + \cdots$ and $h(x) = b_0 + b_1x + \cdots + b_mx^m$. It is $$a_nb_0 + a_{n-1}b_1 + \cdots + a_{n-m}b_m$$ Since $n>k$, the above must be $0$. Rearranging the terms, we get $$a_nb_0 = -(a_{n-1}b_1 + \cdots + a_{n-m}b_m)$$ Since $k$ is a field and $b_0 = h(0) \neq 0$, we have $$a_n = -b_0^{-1}\left(a_{n-1}b_1 + \cdots + a_{n-m}b_m\right) = \sum_{i=1}^mc_ia_{n-i}$$ where $c_i = -b_0^{-1}b_i$. This proves that the coefficients of the power series $t(x)$ satisfy a linear recursion. ◻
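Both the expansion and the recursion can be checked in a few lines of Python on the earlier example $(2x+1)/(x^2+1)$. The helper `series` (our name) performs the long division $t(x)h(x) = g(x)$ coefficient by coefficient:

```python
from fractions import Fraction

def series(g, h, N):
    # solve t(x) h(x) = g(x) for the first N coefficients of t, assuming h[0] != 0
    g = list(g) + [0] * N
    a = []
    for n in range(N):
        s = sum(Fraction(h[i]) * a[n - i] for i in range(1, min(n, len(h) - 1) + 1))
        a.append((Fraction(g[n]) - s) / h[0])
    return a

g, h = [1, 2], [1, 0, 1]               # t(x) = (1 + 2x)/(1 + x^2)
a = series(g, h, 10)
assert a[:6] == [1, 2, -1, -2, 1, 2]
# the recursion from the proof: a_n = -b_0^{-1}(b_1 a_{n-1} + b_2 a_{n-2}) = -a_{n-2}
assert all(a[n] == -a[n - 2] for n in range(2, 10))
```

Here $m = 2$, $b_0 = 1$, $b_1 = 0$, $b_2 = 1$, so the recursion collapses to $a_n = -a_{n-2}$, visible in the alternating pattern of the coefficients.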

Proposition 6. The converse of the previous lemma also holds true. That is, if $t(x) = \sum_{n\geq0}a_nx^n\in k[[x]]$ is such that the coefficients $a_n$ satisfy a linear recursion, then there exists some rational function $g(x)/h(x)$ whose power series expansion is $t(x)$.

Proof. Since we are given that the coefficients of $t$ satisfy a linear recursion, we have $$a_n = c_1a_{n-1} + c_2a_{n-2} + \cdots + c_ma_{n-m}$$ for all sufficiently large $n$. Therefore, we have $$a_n - c_1a_{n-1} - \cdots - c_ma_{n-m} = 0$$ Let $b_0 = 1$ and $b_i = -c_i$ for $i\geq1$. Therefore, we may write $$a_nb_0 + a_{n-1}b_1 + \cdots + a_{n-m}b_m = 0$$ Now, consider $h(x) = b_0 + b_1x + \cdots + b_mx^m$ and the product $t(x)h(x)$. The coefficient of $x^n$ in this product is $$b_0a_n + b_1a_{n-1} + \cdots + b_ma_{n-m}$$ which is $0$ for all sufficiently large $n$ by our assumption. Suppose the coefficient of $x^n$ in $t(x)h(x)$ is $0$ for all $n>k$ (and not $0$ for $n = k$). Then $t(x)h(x) = g(x)$ is a polynomial of degree $k$. Therefore, $g(x)/h(x)$ is a rational function that is represented by $t(x)$. Therefore, given that the coefficients of $t(x)$ satisfy a linear recursion, we have found some rational function represented by $t(x)$. This completes our proof. ◻

$p$-adic numbers #

The $p$-adic integers are defined as follows:

Definition 3. The set of $p$-adic integers $\mathbb{Z_p}$ is defined as:

$$\mathbb{Z_p} = \lbrace (a_1, a_2, \dots) \mid a_i \in \mathbb{Z}/p^i \mathbb{Z} \text{ and } a_{i+1} \equiv a_i \pmod{p^i} \rbrace$$

One can prove that the set of $p$-adic integers $\mathbb{Z_p}$ forms a ring under termwise addition and multiplication.
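These termwise operations can be modeled on finite truncations, keeping only the first `N` components. A Python sketch (the helper names and the bound `N` are ours), with $p = 3$ and the element $(1, 4, 13, \dots)$ that reappears in the examples below:

```python
p, N = 3, 6

def from_int(n):
    # the p-adic integer represented by n: (n mod p, n mod p^2, ...)
    return [n % p**i for i in range(1, N + 1)]

def is_coherent(a):
    # the defining condition: a_{i+1} = a_i (mod p^i)
    return all(a[i + 1] % p**(i + 1) == a[i] for i in range(N - 1))

def add(a, b):
    return [(x + y) % p**(i + 1) for i, (x, y) in enumerate(zip(a, b))]

def mul(a, b):
    return [(x * y) % p**(i + 1) for i, (x, y) in enumerate(zip(a, b))]

a, b = from_int(13), from_int(40)
assert a[:3] == [1, 4, 13]        # the 3-adic expansion of 13 begins (1, 4, 13, ...)
assert is_coherent(add(a, b)) and is_coherent(mul(a, b))
assert mul(a, b) == from_int(13 * 40)   # termwise operations agree with Z
```

The last assertion illustrates why the ring axioms hold: each component is just arithmetic in $\mathbb{Z}/p^i\mathbb{Z}$, and the coherence condition is preserved by both operations.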

Proposition 7. The ring $\mathbb{Z_p}$ is an integral domain, i.e., if for some $a, b \in \mathbb{Z_p}$ we have $ab = 0$, then either $a=0$ or $b=0$.

Proof. In $\mathbb{Z_p}$, we have $0 = (0, 0, \dots)$. Thus, we need to prove that either $a = (0, 0, \dots)$ or $b = (0, 0, \dots)$ given that $ab = (0, 0, \dots)$. First, let $a = (a_1, a_2, \dots)$ and $b = (b_1, b_2, \dots)$. Then, $$ab = (a_1b_1, a_2b_2, \dots) = (0, 0, \dots)$$ This implies that we must have $a_ib_i \equiv 0 \pmod{p^i}$ for all $i$. We will prove that either $a_i = 0$ for all $i$ or $b_i = 0$ for all $i$.
 
We will prove this by contradiction. Suppose that $a \neq 0$ and $b \neq 0$. Then there exists a smallest index $k$ such that $a_k \not\equiv 0 \pmod{p^k}$. Note that if $a_k \not\equiv 0 \pmod{p^k}$, then since $a_{k+1} \equiv a_k \pmod{p^k}$ by the definition of a $p$-adic integer, we also have $a_{k+1} \not\equiv 0 \pmod{p^{k+1}}$. It follows that $a_i \neq 0$ for all $i \geq k$, and the same reasoning applies to $b$.
 
Now, since $ab = 0$, we must have $a_ib_i \equiv 0 \pmod{p^i}$ for all $i$. By the minimality of $k$, we have $a_{k-1} \equiv 0 \pmod{p^{k-1}}$, and by definition $a_k \equiv a_{k-1} \pmod{p^{k-1}}$ while $a_k \not\equiv 0 \pmod{p^k}$. Therefore, it follows that $$a_k \equiv p^{k-1}m_1 \pmod{p^k}$$ where $0 < m_1 < p$. More generally, for every $i \geq 0$ we have $$a_{k+i} \equiv p^{k-1}m_1 + p^km_2 + \cdots + p^{k+i-1}m_{i+1} \pmod{p^{k+i}}$$ where $0 \leq m_j < p$ for $j \geq 2$. In particular, the smallest power of $p$ appearing in (a representative of) $a_{k+i}$ is $p^{k-1}$, so $v_p(a_{k+i}) = k-1$ for all $i \geq 0$. Let $r$ be the smallest index for which $b_r \not\equiv 0 \pmod{p^r}$; by the same argument, $v_p(b_{r+j}) = r-1$ for all $j \geq 0$. Now consider the index $n = k + r - 1$, which satisfies $n \geq k$ and $n \geq r$. Then $$v_p(a_nb_n) = v_p(a_n) + v_p(b_n) = (k-1) + (r-1) = k+r-2 < n$$ This implies that $$a_nb_n \not \equiv 0 \pmod{p^n}$$ However, since $ab = 0$, we must have $a_nb_n \equiv 0 \pmod{p^n}$. A contradiction. Thus, we cannot have both $a \neq 0$ as well as $b \neq 0$. ◻

Let us explore what the units in $\mathbb{Z_p}$ are. We first take an example in $\mathbb{Z_3}$. Consider $a = (1, 4, 13, 40, \dots)\in \mathbb{Z_3}$, and let $b = (1, 7, 25, 79, \dots)$. Clearly, $b$ is also an element of $\mathbb{Z_3}$. Moreover, $$ab = (1\cdot1, 4\cdot7, 13\cdot25, 40\cdot 79, \dots) = (1, 1, 1, 1, \dots)$$ Thus, $b$ is the inverse of $a$ in $\mathbb{Z_3}$. Since $a$ has an inverse, it follows that $a$ is a unit in $\mathbb{Z_3}$. Note, however, that $a = (0, 3, 12, \dots)$ is not a unit in $\mathbb{Z_3}$. It seems that in general, $a\in \mathbb{Z_p}$ is a unit if and only if the first term of $a$ is non-zero. In other words, we claim that $a \in \mathbb{Z_p}$ is a unit if and only if $a \not\equiv 0 \pmod{p}$.

Proposition 8. In $\mathbb{Z_p}$, a $p$-adic number $u$ is a unit if and only if $u_1 \not \equiv 0 \pmod{p}$

Proof. Let $u = (u_1, u_2, \dots)$. Suppose $u_1 = 0$. Then, there is clearly no inverse for $u$ since there is no $b_1 \in \mathbb{Z}/p\mathbb{Z}$ such that $u_1b_1 \equiv 1 \pmod{p}$ ($0$ is not a unit modulo $p$). Hence, if $u$ is a unit, then it follows that $u_1 \not \equiv 0 \pmod p$, or equivalently, $u \not\equiv 0 \pmod{p}$.
 
We now prove the other direction. Suppose $u \not \equiv 0 \pmod{p}$. Then, we must prove that $u$ is a unit in $\mathbb{Z_p}$. By definition, $u \not\equiv 0 \pmod{p}$ means that $u_1 \not \equiv 0 \pmod{p}$. Since $p$ is a prime, it follows that $(u_1, p) = 1$. Thus, $u_1$ is a unit in $\mathbb{Z}/p\mathbb{Z}$.
 
Now, suppose that $(u_k, p) = 1$. Now, by definition of a $p$-adic number, we have $u_{k+1} \equiv u_k \pmod{p^k}$. This in turn implies that $u_{k+1} \equiv u_k \pmod{p}$. Thus, $(u_{k+1}, p) = 1$. Hence, $(u_{k+1}, p^{k+1}) = 1$. Thus, $u_{k+1}$ is a unit in $\mathbb{Z}/p^{k+1}\mathbb{Z}$.
 
By induction, this implies that for all $k$, we have $u_k$ is a unit in $\mathbb{Z}/p^k\mathbb{Z}$. Moreover, the inverses form a coherent sequence: since $u_{k+1} \equiv u_k \pmod{p^k}$, we also have $u_{k+1}^{-1} \equiv u_k^{-1} \pmod{p^k}$. Since the multiplication operation is termwise in $\mathbb{Z_p}$, the sequence $(u_1^{-1}, u_2^{-1}, \dots)$ is an inverse of $u$, so $u$ is a unit in $\mathbb{Z_p}$. ◻
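At each finite level the inverse is just a modular inverse, and computing it recovers the example above. A Python sketch (requires Python 3.8+ for `pow(x, -1, m)`):

```python
p = 3
a = [1, 4, 13, 40]                          # the unit (1, 4, 13, 40, ...) in Z_3
b = [pow(a[k], -1, p**(k + 1)) for k in range(len(a))]
assert b == [1, 7, 25, 79]                  # its inverse (1, 7, 25, 79, ...)
# the componentwise inverses are automatically a coherent sequence
assert all(b[k + 1] % p**(k + 1) == b[k] for k in range(len(a) - 1))
```

The second assertion checks exactly the coherence property used at the end of the proof.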

This leads to another important result:

Proposition 9. Let $a\neq 0$ be a $p$-adic number. Then, there exists a unique pair $(u, k)$ such that $a = p^ku$ and $u$ is a unit.

Proof. We will first prove existence. Let $a = (a_1, a_2, \dots)$. If $a_1 \neq 0$, then by the previous proposition, we have that $a$ itself is a unit. Thus, $a = p^0u$ where $u=a$.
 
Now, suppose that $a_k = 0$ and $a_{k+1} \neq 0$ for some $k$ (note that $a_k = 0$ automatically implies $a_i = 0$ for all $i \leq k$, by the construction of $p$-adic integers). Now, consider the sequence $$b = (a_{k+1}/p^k, a_{k+2}/p^k, \dots)$$ We must first prove that $b$ is indeed a $p$-adic integer.
 
For any $i>k$, by the definition of $p$-adics, we know that $a_i \equiv a_k \equiv 0 \pmod{p^k}$. Thus, $a_i/p^k$ is an integer for all $i > k$.
 
Now, for any $i$, we have $a_{k+i+1} \equiv a_{k+i} \pmod{p^{k+i}}$ by the definition of $p$-adics. Thus, $a_{k+i+1} = p^{k+i}n + a_{k+i}$ where $0\leq n<p$. Dividing both sides by $p^k$, we get $$\frac{a_{k+i+1}}{p^k} = p^{i}n + \frac{a_{k+i}}{p^k}$$ Hence, taking mod $p^i$ on both sides, we get $$\frac{a_{k+i+1}}{p^k} \equiv \frac{a_{k+i}}{p^k} \pmod{p^i}$$ Hence, we conclude that $b\in \mathbb{Z_p}$. Now, consider the $p$-adic integer $p^kb$. We get $$\begin{aligned} p^kb &= p^k(a_{k+1}/p^k, a_{k+2}/p^k, \dots, a_{k+k}/p^k, a_{k+k+1}/p^k, \dots) \\ &= (a_{k+1} \bmod p, a_{k+2} \bmod p^2, \dots) \end{aligned}$$ Now, by repeatedly applying the coherence condition (induction on $k$), we have $a_{k+i} \equiv a_i \pmod{p^i}$. Thus, $$p^kb = (a_1, a_2, \dots) = a$$ Now, clearly, $a_{k+1}/p^k \not \equiv 0 \pmod{p}$, since $a_{k+1}$ is a nonzero element of $\mathbb{Z}/p^{k+1}\mathbb{Z}$ divisible by $p^k$. Thus, $b$ is a unit. Thus, for arbitrary $a$, we have found a specific unit $b$ and power $k$ such that $a = p^kb$, which completes our proof of existence.
 
Now, we prove uniqueness. To do so, suppose that $a = p^{k_1}u_1 = p^{k_2}u_2$. Assume WLOG that $k_1 \geq k_2$. Then, $$p^{k_1}u_1 - p^{k_2}u_2 = p^{k_2}(p^{k_1-k_2}u_1 - u_2) = 0$$ Since $\mathbb{Z_p}$ is an integral domain, it follows that either $p^{k_2} = 0$ or $p^{k_1-k_2}u_1 - u_2 = 0$. Since $p^{k_2}$ is clearly not $0$, it follows that $$p^{k_1-k_2}u_1 = u_2$$ If $k_1>k_2$, then the first term of $u_2$ would be $0$, which is a contradiction since $u_2$ is a unit. Therefore, $k_1 = k_2$. Thus, $u_2 = p^0u_1 = u_1$. Hence, the pair $(u, k)$ is unique. ◻
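For an ordinary non-zero integer viewed inside $\mathbb{Z_p}$, the decomposition $a = p^ku$ amounts to dividing out all powers of $p$. A Python sketch (the helper name `unit_decompose` and the choice $p = 5$ are ours):

```python
p = 5

def unit_decompose(a):
    # write the non-zero integer a as p^k * u with u coprime to p
    k = 0
    while a % p == 0:
        a //= p
        k += 1
    return k, a

k, u = unit_decompose(250)          # 250 = 5^3 * 2
assert (k, u) == (3, 2)
assert u % p != 0 and p**k * u == 250
```

The loop terminates because $a \neq 0$, mirroring the hypothesis $a \neq 0$ in the proposition.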

Now, we will explore some properties related to $\mathbb{Z_p}[x]$, i.e. polynomials with coefficients in $\mathbb{Z_p}$.

Theorem 1. Let $f(x) \in \mathbb{Z_p}[x]$, and suppose there exists some $a_1$ such that $f(a_1) \equiv 0 \pmod{p}$ and $f'(a_1) \not \equiv 0 \pmod{p}$. Then, there exists a unique $a\in \mathbb{Z_p}$ such that $f(a) = 0$ and $a \equiv a_1 \pmod{p}$.

Proof. Suppose we can show the following: if $a_k$ is a root of $f(x)$ in $\mathbb{Z}/p^k\mathbb{Z}$ with $f'(a_k) \not\equiv 0 \pmod{p}$, then there exists a unique $a_{k+1}$ with $a_{k+1} \equiv a_k \pmod{p^k}$ such that $f(a_{k+1}) = 0$ in $\mathbb{Z}/p^{k+1}\mathbb{Z}$. By induction, this will imply that there exists a unique $p$-adic integer $a=(a_1, a_2, \dots)$ such that $f(a) = 0$ in $\mathbb{Z_p}$. Let us first prove this result for $k=1$.
 
Suppose $f(a_1) \equiv 0 \pmod{p}$ and $f'(a_1) \not \equiv 0 \pmod{p}$. We will try to find some $a_2$ such that $f(a_2) \equiv 0 \pmod{p^2}$ and $a_2 \equiv a_1 \pmod{p}$. Let $$f(x) = c_0 + c_1x + \cdots + c_nx^n$$ Then, we have $f(a_1) = c_0 + c_1a_1 + \cdots + c_na_1^n$. Consider $a_2 = a_1 + mp$ for some integer $0\leq m<p$. Then, we have $$\begin{aligned} f(a_2) &= f(a_1 + mp) = c_0 + c_1(a_1+mp) + \cdots + c_n(a_1 + mp)^n \\ &= c_0 + c_1(a_1 + mp) + \cdots + c_n(a_1^n + na_1^{n-1}mp + \cdots + (mp)^n) \end{aligned}$$ Now, if we take mod $p^2$ on both sides of the above, all terms with powers of $p$ of at least $2$ vanish. Thus, we get: $$\begin{aligned} f(a_2) &\equiv c_0 + c_1(a_1 + mp) + c_2(a_1^2 + 2a_1mp) + \cdots + c_n(a_1^n + na_1^{n-1}mp) \\ &\equiv (c_0 + c_1a_1 + \cdots + c_na_1^n) + mp(c_1 + 2c_2a_1 + \cdots + nc_na_1^{n-1}) \\ & \equiv f(a_1) + mpf'(a_1) \pmod{p^2} \end{aligned}$$ Note that $a_2 \equiv a_1 \pmod{p}$ since $a_2 = mp + a_1$. If $a_2$ is a root modulo $p^2$, then $f(a_2) \equiv 0 \pmod{p^2}$, i.e. $f(a_1) + mpf'(a_1) \equiv 0 \pmod{p^2}$. Since $f(a_1) \equiv 0 \pmod{p}$, let $f(a_1) = kp$. So, we must have $kp + mpf'(a_1)\equiv 0 \pmod{p^2}$, i.e. $k + mf'(a_1) \equiv 0 \pmod{p}$. So, $m \equiv -k(f'(a_1))^{-1} \pmod{p}$. We can do this step since $f'(a_1) \not \equiv 0 \pmod{p}$. Thus, we have found a unique $m$ such that $f(a_1 + mp) \equiv 0 \pmod{p^2}$. In other words, given a value $a_1$, we have proved that there exists a unique $a_2$ such that $f(a_2) \equiv 0 \pmod{p^2}$ and $a_2 \equiv a_1 \pmod{p}$.
 
Now, suppose we are given some $a_k$ such that $f(a_k) \equiv 0 \pmod{p^k}$ and $f'(a_k) \not\equiv 0 \pmod{p}$. We will prove that there exists a unique $a_{k+1}$ such that $f(a_{k+1}) \equiv 0 \pmod{p^{k+1}}$ and $a_{k+1} \equiv a_k \pmod{p^k}$.
 
Let $f(x) = c_0 + \cdots + c_nx^n$. Then, $$f(a_k) \equiv c_0 + c_1a_k + \cdots + c_na_k^n \equiv0 \pmod{p^k}$$ Now, since $a_{k+1} \equiv a_k \pmod{p^k}$, we have $a_{k+1} = p^km + a_k$ where $0\leq m<p$. Therefore, $$\begin{aligned} f(a_{k+1}) &= c_0 + c_1a_{k+1} + \cdots + c_na_{k+1}^{n} \\ &= c_0 + c_1(a_k + p^km) + \cdots + c_n(a_k + p^km)^n \\ &= c_0 + c_1(a_k + p^km) + \cdots + c_n(a_k^n + na_k^{n-1}p^km + \cdots + (p^km)^n) \end{aligned}$$ If we take mod $p^{k+1}$ on both sides, all terms other than the $p^km$ and constant terms vanish, since $2k \geq k+1$ for all positive $k$. Thus, $$\begin{aligned} f(a_{k+1}) &\equiv c_0 + c_1(a_k + p^km) + c_2(a_k^2 + 2a_kp^km) + \cdots + c_n(a_k^{n} + na_k^{n-1}p^km) \\ &\equiv (c_0 + c_1a_k + \cdots + c_na_k^n) + p^km(c_1 + 2c_2a_k + \cdots + nc_na_k^{n-1}) \\ &\equiv f(a_k) + p^kmf'(a_k) \equiv 0 \pmod{p^{k+1}} \end{aligned}$$ Since $f(a_k) \equiv 0 \pmod{p^k}$, we have $f(a_k) = p^kt$. So, $p^kt + p^kmf'(a_k) \equiv 0 \pmod{p^{k+1}}$. Hence, $$t + mf'(a_k) \equiv 0 \pmod{p}$$ Hence, $m \equiv -t(f'(a_k))^{-1} \pmod{p}$. Since $f'(a_k) \not \equiv 0 \pmod{p}$, its inverse exists and is unique. Thus, there exists a unique $m$, given as above, such that $a_{k+1} = mp^k + a_k$ is a root of $f(x)$ in $\mathbb{Z}/p^{k+1}\mathbb{Z}$ and $a_{k+1} \equiv a_k \pmod{p^{k}}$.
 
We are given the value $a_1$ that is a root of $f(x)$ in $\mathbb{Z}/p\mathbb{Z}$. Thus, we can find the corresponding value of $a_2$, hence $a_3$, and so on. Note also that $f'(a_{k+1}) \equiv f'(a_k) \pmod{p}$, so the hypothesis on the derivative persists at every step. Thus, by induction, there exists a unique sequence $a=(a_1, a_2, \dots)$ that satisfies $f(a_k) \equiv 0 \pmod{p^k}$ and $a_{k+1} \equiv a_k \pmod{p^k}$. By definition of a $p$-adic number, $a\in \mathbb{Z_p}$. Moreover, since $f(a_k) \equiv 0 \pmod{p^k}$ for every $k$, it follows that $f(a) = 0$ in $\mathbb{Z_p}$. Thus, there exists a unique $p$-adic number $a\in \mathbb{Z_p}$ such that $f(a) = 0$ given $a_1$. ◻
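The proof is constructive, and the lifting step can be run directly. Below is a Python sketch lifting a root of $f(x) = x^2 - 2$ from $\mathbb{Z}/7\mathbb{Z}$ up to $\mathbb{Z}/7^6\mathbb{Z}$ (the choices $p = 7$ and $a_1 = 3$, and the function names, are ours):

```python
p = 7

def f(x):  return x * x - 2
def df(x): return 2 * x            # formal derivative f'(x) = 2x

a = 3                              # a_1 = 3: f(3) = 7 = 0 mod 7, f'(3) = 6 != 0 mod 7
for k in range(1, 6):
    t = f(a) // p**k               # f(a_k) = p^k t, exactly divisible by p^k
    m = (-t * pow(df(a), -1, p)) % p   # m = -t (f'(a_k))^{-1} mod p, as in the proof
    a += m * p**k                  # a_{k+1} = a_k + m p^k
assert f(a) % p**6 == 0            # a is a square root of 2 modulo 7^6
assert a % p == 3                  # and it lifts the original root a_1 = 3
```

The loop invariant is exactly the statement of the lifting step: entering iteration $k$, $f(a) \equiv 0 \pmod{p^k}$, and the unique choice of $m$ extends this to $p^{k+1}$.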

The $p$-adic Numbers $\mathbb{Q_p}$ #

As an Extension of $\mathbb{Z_p}$ #

Definition 4. We define $\mathbb{Q_p} = \mathbb{Z_p}[\frac{1}{p}]$.

For example, $$\frac{(1, 4, 13, \dots)}{1} + \frac{(3, 3, 3, \dots)}{3} + \frac{(2, 5, 14, \dots)}{9}\in\mathbb{Q_3}$$ Note that the above is NOT equal to $$\left(1, 4, 13, \dots\right) + \left(1, 1, 1, \dots\right) + \left(\frac{2}{9}, \frac{5}{9}, \frac{14}{9}, \dots\right)$$ The division symbol is merely used as notation and does not translate to the above. More generally, any element of $\mathbb{Q_p}$ has the form $$a_0 + \frac{a_1}{p} + \cdots + \frac{a_k}{p^k}$$ By taking $p^k$ as the common denominator, the above can be rewritten as: $$\frac{a_0p^k + a_1p^{k-1} + \cdots + a_k}{p^k} = \frac{a}{p^k}$$ where $a_i, a \in \mathbb{Z_p}$. Moreover, two such expressions $\frac{a}{p^k}$ and $\frac{b}{p^m}$ are equal if and only if $ap^m = bp^k$.

Proposition 10. $\mathbb{Q_p}$ is a field. Moreover, $\mathbb{Q}\subseteq \mathbb{Q_p}$.

Proof. Consider some $p$-adic number $\alpha = \frac{a}{p^k}$. We will prove that there exists some $p$-adic number $\beta = \frac{b}{p^m}$ such that $\alpha\beta = 1$ for every $\alpha \in \mathbb{Q_p}$ that is non-zero, i.e. $a\neq 0$.
 
Firstly, since $a\in \mathbb{Z_p}$ is non-zero, there exists some unit $a'$ in $\mathbb{Z_p}$ and a non-negative integer $n_1$ such that $a = a'p^{n_1}$. Let $\beta\in \mathbb{Q_p}$ be $$\beta = \frac{(a')^{-1}p^{k}}{p^{n_1}}$$ The inverse of $a'$ exists since $a'$ is a unit, so the numerator $(a')^{-1}p^k$ lies in $\mathbb{Z_p}$. Then, $$\alpha\beta = \frac{a'p^{n_1}}{p^k}\cdot \frac{(a')^{-1}p^{k}}{p^{n_1}} = \frac{p^{k+n_1}}{p^{k+n_1}}$$ Now, since $a/p^k = b/p^m \iff ap^m = bp^k$, the above equals $1$. Thus, given any $\alpha \in \mathbb{Q_p}\setminus\lbrace 0 \rbrace$, we have found a $\beta\in \mathbb{Q_p}$ such that $\alpha\beta = 1$. Hence, $\mathbb{Q_p}$ is a field.
 
We will now prove that $\mathbb{Q}\subseteq \mathbb{Q_p}$. Consider an element of $\mathbb{Q}$, say $a/b$. Let $a = p^{k_1}a'$ and $b = p^{k_2}b'$, where $a'$ and $b'$ are co-prime to $p$. Since $b'$ is co-prime to $p$, it is a unit in $\mathbb{Z_p}$. Thus, $$\frac{a}{b} = \frac{p^{k_1}a'(b')^{-1}}{p^{k_2}}$$ where the numerator $p^{k_1}a'(b')^{-1} \in \mathbb{Z_p}$. Thus, any rational number is also a $p$-adic number. Hence $\mathbb{Q_p}$ contains $\mathbb{Q}$. ◻
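When $b$ is co-prime to $p$, the embedding of $a/b$ is computable componentwise as $a \cdot b^{-1} \bmod p^k$. A Python sketch with the (hypothetical) example $-1/6 \in \mathbb{Z_7}$:

```python
p, N = 7, 5
a, b = -1, 6                        # -1/6, with 6 coprime to 7

x = [a * pow(b, -1, p**k) % p**k for k in range(1, N + 1)]
# the components form a coherent sequence ...
assert all(x[k] % p**k == x[k - 1] for k in range(1, N))
# ... and each component really satisfies 6 * x_k = -1 (mod p^k)
assert all((b * x[k - 1] - a) % p**k == 0 for k in range(1, N + 1))
```

A rational with $p$ dividing the denominator, such as $1/7$ in $\mathbb{Q_7}$, is then handled by the $a/p^k$ notation above rather than componentwise.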

$\mathbb{Q_p}$ as Completion of $\mathbb{Q}$ #

We will first define metric spaces.

Definition 5. We call a set $X$ along with a function (metric) $d\colon X\times X \to \mathbb{R}$, $(X, d)$, a metric space if the following properties are satisfied for any $x, y, z \in X$:

  1. $d(x, x) = 0$

  2. If $x \neq y$ then $d(x, y) > 0$

  3. $d(x, z) \leq d(x, y) + d(y, z)$

  4. $d(x, y) = d(y, x)$

For example, $(\mathbb{Q}, d)$ with $d(x, y) = |x - y|$ is a metric space.

Definition 6. For a metric space $(X, d)$, we define a Cauchy sequence of $X$ with respect to $d$ as follows: a sequence $(x_n)_{n\geq0}$ where $x_n \in X$ is called a Cauchy sequence if for all $\epsilon\in \mathbb{R}^+$, there exists some $N_\epsilon \in \mathbb{N}$ such that for all integers $m, n\geq N_\epsilon$, we have $d(x_n, x_m) < \epsilon$.

In simple terms, any sequence that should converge under the given metric is called a Cauchy sequence. We now define what a completion with respect to a metric is.

Definition 7. Let $S_{(X, d)}$ be the set of Cauchy sequences in $X$ with respect to the metric $d$. Then, given two elements $(x_n), (y_n)$ of $S_{(X, d)}$, we say that $(x_n)$ is equivalent to $(y_n)$ iff for every positive real number $\epsilon$, there exists a natural number $N_\epsilon$ such that for all integers $n\geq N_\epsilon$, we have $d(x_n, y_n)<\epsilon$. We denote this equivalence by $x_n \sim y_n$.

Definition 8. We define the completion of $X$ with respect to $d$ as the set $S_{(X, d)}/\sim$.

In more intuitive terms, the completion of a metric space $X$ with respect to $d$ is the set of limits of the Cauchy sequences in $X$ with respect to $d$.
 
As an example, $\mathbb{R}$ is a completion of $\mathbb{Q}$ with respect to the usual metric, the absolute value.
 
Now, we will define $\mathbb{Q_p}$ as a completion of $\mathbb{Q}$. In order to do so, we must first define the metric on $\mathbb{Q}$ that yields $\mathbb{Q_p}$.

Definition 9. Let $p$ be a prime and $a$ an integer. Then, we define $v_p(a)$ to be the largest $n$ such that $p^n\mid a$ when $a\neq 0$, and to be $\infty$ when $a=0$. Now, we extend this definition to $\mathbb{Q}$. Let $q = a/b\in \mathbb{Q}$ where $a$ and $b$ are integers with $b\neq 0$. Then, we define $v_p(q) = v_p(a)-v_p(b)$.

Lemma 1. The above definition of $v_p$ is well-defined. That is, if $a/b = c/d = q$, then we have $v_p(a/b) = v_p(c/d)$.

Proof. Suppose $a/b = c/d$ with $v_p(a/b) = v_p(c/d)+m$; we will show that $m = 0$. By definition, $v_p(a) -v_p(b) = v_p(c) - v_p(d)+m$. Let $a = p^{k_1}a', b = p^{k_2}b', c = p^{k_3}c', d = p^{k_4}d'$ with each of $a', b', c', d'$ relatively prime to $p$. Therefore, $k_1 - k_2 = k_3 - k_4+m$. Moreover, we have $a/b = p^{k_1-k_2}a'/b'$ and $c/d = p^{k_3-k_4}c'/d'$. Since $a/b = c/d$, we have $$p^{k_1-k_2}\frac{a'}{b'} = p^{k_3-k_4}\frac{c'}{d'}$$ We can rewrite this as $$p^{k_1-k_2}a'd' = p^{k_3-k_4}b'c'$$ Let $k_3-k_4 = n$, so that $k_1-k_2 = n+m$. Then $$p^{n+m}a'd' = p^nb'c'$$ which implies that $p^ma'd' = b'c'$. Since $a'd'$ and $b'c'$ are relatively prime to $p$, taking $v_p$ of both sides gives $m = 0$. Thus $v_p$ is well-defined. ◻

Definition 10. We define the $p$-adic absolute value to be as follows. Let $q\in \mathbb{Q}$. Then, we define $\lvert q \rvert_p = p^{-v_p(q)}$ when $q\neq0$ and $0$ when $q=0$.
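Since everything that follows is computed from $v_p$ and $\left\lvert \cdot \right\rvert_p$, it may help to see Definitions 9 and 10 in code. The following Python sketch (the helper names `vp` and `pabs` are ours, not standard) uses exact rational arithmetic:

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation v_p(q) of a rational q; float('inf') for q = 0."""
    q = Fraction(q)
    if q == 0:
        return float("inf")
    n, num, den = 0, q.numerator, q.denominator
    while num % p == 0:   # count powers of p in the numerator
        num //= p
        n += 1
    while den % p == 0:   # subtract powers of p in the denominator
        den //= p
        n -= 1
    return n

def pabs(q, p):
    """p-adic absolute value |q|_p = p^(-v_p(q)), with |0|_p = 0."""
    return 0.0 if Fraction(q) == 0 else p ** -vp(q, p)

print(vp(72, 2))                # v_2(72) = 3 since 72 = 2^3 * 9
print(vp(Fraction(9, 4), 2))    # v_2(9/4) = 0 - 2 = -2
print(pabs(Fraction(9, 4), 2))  # |9/4|_2 = 2^2 = 4
```

Note that large powers of $p$ are $p$-adically small: $\left\lvert p^{100} \right\rvert_p = p^{-100}$.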

We will first explore some properties of $v_p(a)$ which will help us find a metric on $\mathbb{Q}$ that gives $\mathbb{Q_p}$.

Proposition 11. For any $a, b \in \mathbb{Z}$, the following are true:

  1. $v_p(ab) = v_p(a) + v_p(b)$

  2. $v_p(a+b) \geq \min(v_p(a), v_p(b))$

  3. If $v_p(a) \neq v_p(b)$, then $v_p(a+b) = \min(v_p(a), v_p(b))$

Proof. Let $v_p(a) = k_1$ and $v_p(b) = k_2$. WLOG, assume that $k_1 \geq k_2$. Then, clearly $a = p^{k_1}a’$ and $b = p^{k_2}b’$ where $(a’, p) = 1$ and $(b’, p) = 1$. Then:

  1. We have $ab = a’b’p^{k_1+k_2}$ where $(a’b’, p) = 1$ since each of $a’$ and $b’$ is co-prime to $p$. Hence, $v_p(ab) = k_1 + k_2 = v_p(a) + v_p(b)$.

  2. We have $a+b = a’p^{k_1} + b’p^{k_2}$. Since $k_1 \geq k_2$, we can write $$a+b = p^{k_2}(a’p^{k_1-k_2} + b’)$$ Hence, $$v_p(a+b) = k_2 + v_p(a’p^{k_1-k_2} + b’)$$ Clearly, the above is at least $k_2$ since $v_p(a’p^{k_1-k_2} + b’) \geq 0$. Hence, $v_p(a+b)\geq k_2 = \min(v_p(a), v_p(b))$ since we assumed that $v_p(a) \geq v_p(b)$.

  3. Since we have that $v_p(a) \neq v_p(b)$, we know $k_1 > k_2$. Thus, $a’p^{k_1-k_2} + b’ \equiv 0 + b’ \equiv b’ \pmod{p}$. Since $b’$ is co-prime with $p$, we have $a’p^{k_1-k_2} + b’ \not \equiv 0 \pmod{p}$. Hence, $p\nmid(p^{k_1-k_2}a’ + b’)$. Thus, $v_p(a’p^{k_1-k_2} + b’) = 0$. Hence, $v_p(a+b) = k_2 = v_p(b) = \min(v_p(a), v_p(b))$ since we assumed that $v_p(a) \geq v_p(b)$.

 ◻
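Proposition 11 is easy to sanity-check numerically. The sketch below (the `vp` helper is ours, repeating the earlier definition) verifies all three properties exhaustively on a small range of integers:

```python
from fractions import Fraction

def vp(q, p):
    # p-adic valuation of a nonzero rational
    q = Fraction(q)
    n, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

p = 3
for a in range(1, 150):
    for b in range(1, 150):
        # 1. v_p is additive on products
        assert vp(a * b, p) == vp(a, p) + vp(b, p)
        # 2. ultrametric inequality for sums
        assert vp(a + b, p) >= min(vp(a, p), vp(b, p))
        # 3. equality when the valuations differ
        if vp(a, p) != vp(b, p):
            assert vp(a + b, p) == min(vp(a, p), vp(b, p))
print("all three properties hold")
```

The strict inequality in property 2 can occur only when $v_p(a) = v_p(b)$, e.g. $v_3(3 + 6) = 2 > 1$.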

Proposition 12. The above properties all hold for any $r, s \in \mathbb{Q}$.

Proof. Let $r = a_1/b_1$ and $s = a_2/b_2$ where $a_i, b_i$ are integers with $b_i \neq 0$. We know by definition that $v_p(r) = v_p(a_1)-v_p(b_1)$ and similarly $v_p(s) = v_p(a_2) - v_p(b_2)$. Hence:

  1. We have $$v_p(rs) = v_p\left(\frac{a_1a_2}{b_1b_2}\right) = v_p(a_1a_2)-v_p(b_1b_2)$$ Since $a_i\in \mathbb{Z}$ we have $v_p(a_1a_2) = v_p(a_1) + v_p(a_2)$ and similarly $v_p(b_1b_2) = v_p(b_1) + v_p(b_2)$. Thus, $$v_p(rs) = (v_p(a_1)-v_p(b_1)) + (v_p(a_2)-v_p(b_2)) = v_p(r)+v_p(s)$$

  2. We can write $v_p(r+s)$ as $$v_p\left(\frac{a_1}{b_1}+\frac{a_2}{b_2}\right)= v_p\left(\frac{a_1b_2+a_2b_1}{b_1b_2}\right)=v_p(a_1b_2+a_2b_1)-v_p(b_1b_2)$$ Now, $a_1b_2$ and $a_2b_1$ are integers. So, $$v_p\left(a_1b_2 + a_2b_1\right)\geq \min(v_p(a_1b_2), v_p(a_2b_1))$$ Thus, $$ \begin{aligned} v_p(r+s) &\geq \min(v_p(a_1)+v_p(b_2), v_p(a_2)+v_p(b_1)) - (v_p(b_1)+v_p(b_2)) \\ &=\min(v_p(a_1)-v_p(b_1), v_p(a_2)-v_p(b_2)) = \min(v_p(r), v_p(s)) \end{aligned} $$ We can legally perform the above step since $v_p(b_1b_2)$ is a constant.

  3. If $v_p(a_1b_2) = v_p(a_2b_1)$, then we have $$v_p(a_1) + v_p(b_2) = v_p(a_2) + v_p(b_1).$$ Thus, by rearranging the terms, we have $v_p(r) = v_p(s)$. Thus, if $v_p(r) \neq v_p(s)$, it follows that $$v_p(a_1b_2) \neq v_p(a_2b_1).$$ Hence, $$v_p(a_1b_2 + a_2b_1) = \min(v_p(a_1b_2), v_p(a_2b_1)).$$ It follows directly that $$v_p(r+s) = \min(v_p(r), v_p(s)).$$

We have thus extended all the properties that hold for integers under $v_p$ to rationals. ◻

We will now use these properties to prove some properties regarding $\left\lvert \cdot \right\rvert_p$, which will in turn prove that $(\mathbb{Q}, \left\lvert \cdot \right\rvert_p)$ is a metric space.

Proposition 13. The following properties are true regarding $\left\lvert \cdot \right\rvert_p$:

  1. $\left\lvert q \right\rvert_p \geq 0$ and equality holds iff $q=0$

  2. $\left\lvert q+r \right\rvert_p \leq \max(\left\lvert q \right\rvert_p, \left\lvert r \right\rvert_p)$

  3. $\left\lvert qr \right\rvert_p = \left\lvert q \right\rvert_p \left\lvert r \right\rvert_p$

Proof.

  1. We have $\left\lvert q \right\rvert_p = p^{-v_p(q)}$. This is positive for all $q \neq 0$ since $v_p(q) \in \mathbb{Z}$. We have $p^{-v_p(q)} = 0$ iff $v_p(q) = \infty$, which happens only when $q=0$.

  2. We have $$\left\lvert q+r \right\rvert_p = p^{-v_p(q+r)} \leq p^{-\min(v_p(q), v_p(r))} = p^{\max(-v_p(q), -v_p(r))}$$ since $v_p(q+r) \geq \min(v_p(q), v_p(r))$. Thus, $$\left\lvert q+r \right\rvert_p \leq \max(\left\lvert q \right\rvert_p, \left\lvert r \right\rvert_p)$$ Clearly, equality holds when $\left\lvert q \right\rvert_p \neq \left\lvert r \right\rvert_p$, which follows from the property of $v_p$.

  3. We have $$\left\lvert qr \right\rvert_p = p^{-v_p(qr)} = p^{-v_p(q) - v_p(r)} = p^{-v_p(q)}p^{-v_p(r)} = \left\lvert q \right\rvert_p\left\lvert r \right\rvert_p$$

 ◻
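Proposition 13 can likewise be spot-checked on random rationals. The sketch below (the `vp` helper is ours) compares valuations rather than floating-point absolute values, using the equivalences $\left\lvert q+r \right\rvert_p \leq \max(\left\lvert q \right\rvert_p, \left\lvert r \right\rvert_p) \iff v_p(q+r) \geq \min(v_p(q), v_p(r))$ and $\left\lvert qr \right\rvert_p = \left\lvert q \right\rvert_p\left\lvert r \right\rvert_p \iff v_p(qr) = v_p(q)+v_p(r)$:

```python
from fractions import Fraction
import random

def vp(q, p):
    # p-adic valuation; float('inf') for 0 so comparisons still work
    q = Fraction(q)
    if q == 0:
        return float("inf")
    n, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

random.seed(1)
p = 5
for _ in range(2000):
    q = Fraction(random.randint(-60, 60), random.randint(1, 60))
    r = Fraction(random.randint(-60, 60), random.randint(1, 60))
    # |q+r|_p <= max(|q|_p, |r|_p)  <=>  v_p(q+r) >= min(v_p(q), v_p(r))
    assert vp(q + r, p) >= min(vp(q, p), vp(r, p))
    # |qr|_p = |q|_p |r|_p  <=>  v_p(qr) = v_p(q) + v_p(r)
    if q != 0 and r != 0:
        assert vp(q * r, p) == vp(q, p) + vp(r, p)
```

Working with valuations keeps every comparison exact, which is why the check needs no tolerance.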

With these properties, we may now define a new metric on $\mathbb{Q}$. Consider the metric $d_p$ which is defined as follows:

Definition 11. We define the function $d_p \colon \mathbb{Q}\times \mathbb{Q}\to \mathbb{R}$ as follows: $d_p(x, y) = \left\lvert x-y \right\rvert_p$.

Proposition 14. $\mathbb{Q}$ is a metric space over $d_p$.

Proof. In order to prove this, we must prove all properties listed in the definition of a metric space.

  1. $d_p(x, x) = \left\lvert x-x \right\rvert_p = \left\lvert 0 \right\rvert_p = 0$

  2. $d_p(x, y) = \left\lvert x-y \right\rvert_p > 0$ when $x\neq y$ by the previous proposition.

  3. $d_p(x, z) = \left\lvert x-z \right\rvert_p = \left\lvert (x-y)+(y-z) \right\rvert_p$. By property $2$ above, $\left\lvert (x-y)+(y-z) \right\rvert_p \leq \max(\left\lvert x-y \right\rvert_p, \left\lvert y-z \right\rvert_p) \leq \left\lvert x-y \right\rvert_p + \left\lvert y-z \right\rvert_p$ since $\left\lvert q \right\rvert_p \geq 0$. Hence, $d_p(x, z) \leq d_p(x, y) + d_p(y, z)$.

  4. $d_p(x, y) = \left\lvert x-y \right\rvert_p = \left\lvert y-x \right\rvert_p = d_p(y, x)$

Thus, $\mathbb{Q}$ is a metric space over $d_p$. ◻

We can thus finally define $\mathbb{Q_p}$ in terms of $\mathbb{Q}$ as follows:

Definition 12. We define $\mathbb{Q_p}$ to be the completion of $\mathbb{Q}$ with respect to $d_p$. Thus, $\mathbb{Q_p} = \lbrace [(a_n)]\mid (a_n) \in S_p \rbrace$ where $S_p$ denotes the set of Cauchy sequences in $\mathbb{Q}$ with respect to $d_p$.

Operations in $\mathbb{Q_p}$ #

Now, we know that $\mathbb{Q}$ is a field with operations $(+, \cdot)$. We will now try to find a pair of binary operations $(+, \cdot)$ on $\mathbb{Q_p}$ such that $(\mathbb{Q_p}, +, \cdot)$ is a field.
 
Let $\alpha \in \mathbb{Q_p}$. Therefore, we can write $\alpha = [a]$ for some $a\in S_p$ by definition of $\mathbb{Q_p}$. Similarly, let $\beta = [b] \in \mathbb{Q_p}$. We define the addition of two elements in the $p$-adics as $$\alpha + \beta = [a] + [b] = [a+b]$$ and their product as $$\alpha\cdot \beta = [a]\cdot[b] = [a\cdot b]$$ However, we have not yet defined what addition and multiplication are in $S_p$ ($a$ and $b$ are elements of $S_p$). We define the sum of two sequences $a=(a_i)$ and $b = (b_i)$ to be $a+b = (a_i+b_i)$, i.e. the termwise sum of the terms of the sequences. Similarly, we define their product to be $a\cdot b = (a_ib_i)$. Since both $a$ and $b$ are Cauchy, it follows that both $a+b$ and $a\cdot b$ are Cauchy. Thus, $S_p$ is closed under both addition and multiplication.

Definition 13. Let $S$ be the set of all Cauchy sequences of $\mathbb{Q}$. We define a set of equivalence classes in $S$ such that $$S/\sim \hspace{0.5 mm}= \lbrace [a] \mid a \in S \rbrace$$ $$[a] = \lbrace (y_i) \in S \mid \lim\limits_{i \to \infty}{|a_i - y_i|_p} = 0 \rbrace$$

Lemma 2. The set $S/\sim$ has a well-defined addition and multiplication given by $$ \begin{aligned} [a] + [b] &= [a + b] \\ [a][b] &= [ab] \end{aligned} $$

Proof. We need to show that if $(a_i) \sim (a^{’}_i)$ and $(b_i) \sim (b^{’}_i)$, then $\lim\limits_{i\to\infty}|(a_i + b_i) - (a^{’}_i + b^{’}_i)|_p = 0$. To prove this, we can observe the following,

$$|(a_i + b_i) - (a^{’}_i + b^{’}_i)|_p = |a_i - a^{’}_i + b_i - b^{’}_i|_p$$

It follows that, $$|a_i - a^{’}_i + b_i - b^{’}_i|_p \le |a_i - a^{’}_i|_p + |b_i - b^{’}_i|_p$$ and the RHS converges to $0$ as $i \rightarrow \infty$. For multiplication, we have, $$\begin{aligned} |a_ib_i - a^{’}_ib^{’}_i|_p &= |a_ib_i - a_ib^{’}_i + a_ib^{’}_i - a^{’}_ib^{’}_i|_p \\ &= |a_i(b_i - b^{’}_i) + b^{’}_i(a_i - a^{’}_i)|_p \le |a_i(b_i - b^{’}_i)|_p + |b^{’}_i(a_i - a^{’}_i)|_p \\ &= |a_i|_p|b_i - b^{’}_i|_p + |b^{’}_i|_p|a_i - a^{’}_i|_p \end{aligned}$$ Since every Cauchy sequence is bounded, there is some constant $C$ with $|a_i|_p, |b^{’}_i|_p \le C$ for all $i$. Then $$|a_i|_p|b_i - b^{’}_i|_p + |b^{’}_i|_p|a_i - a^{’}_i|_p \le C\left(|b_i - b^{’}_i|_p + |a_i - a^{’}_i|_p\right) \longrightarrow 0$$ as $i \to \infty$. ◻
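To make Lemma 2 concrete, here is a small sketch (the `pabs` helper is ours): two pairs of equivalent Cauchy sequences whose termwise sums are again equivalent, so the class of the sum does not depend on the chosen representatives:

```python
from fractions import Fraction

def pabs(q, p):
    # p-adic absolute value of a rational
    q = Fraction(q)
    if q == 0:
        return 0.0
    n, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return p ** -n

p = 5
# a and a2 differ by 2*p^i at step i, so |a_i - a2_i|_p -> 0: they are equivalent
a  = [sum(p**k for k in range(i)) for i in range(1, 12)]
a2 = [x + 2 * p**i for i, x in enumerate(a, start=1)]
# b converges to 1; b2 is the constant sequence 1, equivalent to b
b  = [1 + p**i for i in range(1, 12)]
b2 = [1] * 11
for i in range(11):
    assert pabs(a[i] - a2[i], p) == p ** -(i + 1)
    # the termwise sums are equivalent as well, so [a] + [b] is well defined
    assert pabs((a[i] + b[i]) - (a2[i] + b2[i]), p) <= p ** -(i + 1)
```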

Now, we will prove that $\mathbb{Q_p}$ is a field.

Definition 14. If $\lambda$ is an element of $\mathbb{Q_p}$ and $(x_n) \in S/\sim$ is any Cauchy sequence representing $\lambda$, we define $$|\lambda|_p = \lim\limits_{n \to \infty}|x_n|_p$$

Proposition 15. $\mathbb{Q_p} = S/\sim$ is a field.

Proof. The ring axioms follow from the corresponding termwise operations on representatives, so it remains to produce multiplicative inverses. Let $\lambda = [(x_n)] \neq 0$. Since $(x_n)$ does not tend to $0$, there exist $c > 0$ and $N$ such that $|x_n|_p \geq c$ for all $n > N$; modifying finitely many terms (which does not change the class), we may assume $x_n \neq 0$ and $|x_n|_p \geq c$ for all $n$. Then $$\left |\frac{1}{x_i} - \frac{1}{x_j} \right|_p = \frac{|x_j - x_i|_p}{|x_i|_p|x_j|_p} \leq \frac{|x_j - x_i|_p}{c^2}$$ Since $(x_n)$ is Cauchy, for every $\epsilon > 0$ there is some $N_\epsilon$ such that $|x_j - x_i|_p < c^2\epsilon$ for all $i, j > N_\epsilon$. It follows that $$\left |\frac{1}{x_i} - \frac{1}{x_j} \right|_p < \epsilon$$ so $(1/x_n)$ is Cauchy, and $[(1/x_n)]$ is the multiplicative inverse of $\lambda$. ◻

Sequences and Series in $\mathbb{Q_p}$ #

Till now, we have only talked about sequences in $\mathbb{Q}$, and used them to define the system $\mathbb{Q_p}$. We will now talk about sequences in $\mathbb{Q_p}$ itself. They can be thought of as sequences of sequences, since each element of $\mathbb{Q_p}$ is itself (an equivalence class of) a sequence of elements in $\mathbb{Q}$. We will now extend the $p$-adic absolute value to $p$-adic numbers themselves. There are two things we need to take care of here. Firstly, we have only defined $\left\lvert q \right\rvert_p$ for a rational number $q$, not for a sequence of rational numbers. So, we need to define the notion of $p$-adic absolute value for Cauchy sequences. Next, $[a]$ is not a single sequence; it is a set of sequences. Therefore, we need to show that for any sequence in $[a]$, the result of $\left\lvert [a] \right\rvert_p$ is the same. In other words, if $x\sim y$ are two sequences, then we must show that $\left\lvert x \right\rvert_p = \left\lvert y \right\rvert_p$.

Definition 15. Let $(x_n)$ be a Cauchy sequence. Then, the sequence $(\left\lvert x_n \right\rvert_p)$ converges to some real number, say $y \in \mathbb{R}$. Then, we define $\left\lvert (x_n) \right\rvert_p$ to be $y$. In other words, $\left\lvert (x_n) \right\rvert_p = \lim_{n\to\infty}\left\lvert x_n \right\rvert_p$.

Proof. We will prove that if $(x_n)$ is a Cauchy sequence, then $(\left\lvert x_n \right\rvert_p)$ converges in $\mathbb{R}$. So, we must prove that for all $\epsilon \in \mathbb{R}^+$, we have that there exists some $N_\epsilon \in \mathbb{N}$ such that for all $m,n\geq N_\epsilon$ we have $$\lvert \left\lvert x_n \right\rvert_p-\left\lvert x_m \right\rvert_p \rvert<\epsilon$$ By the definition of a Cauchy sequence with respect to $\left\lvert \cdot \right\rvert_p$, we have that for every $\epsilon \in \mathbb{R}^+$, there exists some natural number $N_\epsilon$ such that for all $m,n\geq N_\epsilon$, we have $\left\lvert x_n - x_m \right\rvert_p < \epsilon$. Let us now keep $\epsilon$ and $N_\epsilon$ fixed. Now, consider some integers $m, n \geq N_\epsilon$ such that $\left\lvert x_n \right\rvert_p \neq \left\lvert x_m \right\rvert_p$. Then, we have $$\left\lvert x_n - x_m \right\rvert_p = \max(\left\lvert x_n \right\rvert_p, \left\lvert x_m \right\rvert_p)< \epsilon$$ Therefore, we have $0<\left\lvert x_m \right\rvert_p<\left\lvert x_n \right\rvert_p<\epsilon$ (WLOG). Thus, $\left\lvert x_n \right\rvert_p-\left\lvert x_m \right\rvert_p<\epsilon$. In general, $\lvert \left\lvert x_n \right\rvert_p-\left\lvert x_m \right\rvert_p \rvert < \epsilon$.
 
Now, if $\left\lvert x_n \right\rvert_p = \left\lvert x_m \right\rvert_p$, this implies that $\lvert \left\lvert x_n \right\rvert_p-\left\lvert x_m \right\rvert_p \rvert = 0 < \epsilon$ since $\epsilon \in \mathbb{R}^+$. Thus, for all $m,n \geq N_\epsilon$ we have $\lvert \left\lvert x_m \right\rvert_p-\left\lvert x_n \right\rvert_p \rvert<\epsilon$. This holds true for every value of $\epsilon$ that is a positive real and its corresponding $N_\epsilon$. Therefore, the sequence $(\left\lvert x_n \right\rvert_p)$ converges. ◻
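As a concrete illustration of Definition 15 (with our own `pabs` helper), take the partial sums of $1 + p + p^2 + \cdots$: every partial sum has $p$-adic absolute value $1$, so the sequence $(\left\lvert x_n \right\rvert_p)$ is constant and the limit defining $\left\lvert (x_n) \right\rvert_p$ exists trivially:

```python
from fractions import Fraction

def pabs(q, p):
    # p-adic absolute value of a rational
    q = Fraction(q)
    if q == 0:
        return 0.0
    n, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return p ** -n

p = 7
xs = [sum(p**k for k in range(n)) for n in range(1, 20)]  # 1, 1+7, 1+7+49, ...
# |x_n|_p = 1 for every n (each partial sum is congruent to 1 mod 7),
# so |(x_n)|_p = lim |x_n|_p = 1
assert all(pabs(x, p) == 1 for x in xs)
# classically this geometric series sums to 1/(1-p), whose absolute value agrees
assert pabs(Fraction(1, 1 - p), p) == 1
```

This also previews a general phenomenon: an ultrametric Cauchy sequence that does not tend to $0$ has eventually constant absolute value.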

We will now show that $\left\lvert [a] \right\rvert_p$ is well defined where $[a]$ is the equivalence class of a Cauchy sequence $a$ of $\mathbb{Q}$. Recall that this is equivalent to proving that if $x_n \sim y_n$ then $\left\lvert x_n \right\rvert_p = \left\lvert y_n \right\rvert_p$.

Proposition 16. If $(x_n), (y_n) \in S_p$ are equivalent (where equivalence is as defined before), then $\left\lvert (x_n) \right\rvert_p = \left\lvert (y_n) \right\rvert_p$. In other words, $$\lim_{n\to\infty}\left\lvert x_n \right\rvert_p = \lim_{n\to \infty}\left\lvert y_n \right\rvert_p$$

Proof. Given that $(x_n)\sim (y_n)$, by definition, we have that for all $\epsilon \in \mathbb{R}^+$, there exists some $N_\epsilon \in \mathbb{N}$ such that for all $n\geq N_\epsilon$, we have $\left\lvert x_n-y_n \right\rvert_p<\epsilon$. In other words, if $(x_n) \sim (y_n)$, then $$\lim_{n\to\infty}\left\lvert x_n-y_n \right\rvert_p = 0$$ Let us assume for the sake of contradiction that $\lim_{n\to\infty}{\left\lvert x_n \right\rvert_p} \neq \lim_{n\to\infty}\left\lvert y_n \right\rvert_p$. Also, let us assume WLOG that $\lim_{n\to\infty}{\left\lvert x_n \right\rvert_p} > \lim_{n\to\infty}\left\lvert y_n \right\rvert_p$. Since $\left\lvert x_n - y_n \right\rvert_p = \max(\left\lvert x_n \right\rvert_p, \left\lvert y_n \right\rvert_p)$ when $\left\lvert x_n \right\rvert_p\neq \left\lvert y_n \right\rvert_p$ (which holds for all sufficiently large $n$, as the limits differ), it follows that $$\lim_{n\to\infty}\left\lvert x_n-y_n \right\rvert_p = \lim_{n\to\infty}\left\lvert x_n \right\rvert_p = 0$$ However, we assumed that $\lim_{n\to \infty}\left\lvert y_n \right\rvert_p<\lim_{n\to\infty}\left\lvert x_n \right\rvert_p = 0$. A contradiction, since the absolute value is always nonnegative, so its limit cannot be negative. It follows that $\lim_{n\to\infty}\left\lvert x_n \right\rvert_p = \lim_{n\to \infty}\left\lvert y_n \right\rvert_p$. Hence, $\left\lvert (x_n) \right\rvert_p = \left\lvert (y_n) \right\rvert_p$. ◻

From this, one may conclude that $\left\lvert \alpha \right\rvert_p$ is well defined for all $\alpha \in \mathbb{Q_p}$. Moreover, the same properties hold for $\left\lvert \alpha \right\rvert_p$ for $\alpha \in \mathbb{Q_p}$ as in $\mathbb{Q}$, which can be easily seen from the fact that $\left\lvert (x_n) \right\rvert_p = \lim_{n\to\infty}\left\lvert x_n \right\rvert_p$.
 
With the new extension of the $p$-adic absolute value in the $p$-adic numbers, we may now extend the notion of convergence to $\mathbb{Q_p}$ as well.

Definition 16. Consider a sequence $(x_n)$ where $x_n \in \mathbb{Q_p}$. We say that $(x_n)$ converges to a limit $L\in \mathbb{Q_p}$, iff for all $\epsilon \in\mathbb{R}^+$, there is some natural number $N_\epsilon$ such that for all $n \geq N_\epsilon$ we have $\left\lvert x_n- L \right\rvert_p<\epsilon$. Note that here, each $x_k$ is itself a $p$-adic number. So, $(x_n)$ is essentially a sequence of sequences of rationals.

We now give the condition of convergence for a sequence in $\mathbb{Q_p}$.

Definition 17. A sequence $(a_n)$ converges iff there exists an element of $\mathbb{Q_p}$ that it converges to.

Proposition 17. If a sequence $(x_n)$ converges in $\mathbb{Q_p}$, then $(\left\lvert x_n \right\rvert_p)$ converges in $\mathbb{R}$. Moreover, $\lim_{n\to\infty}{\left\lvert x_n \right\rvert_p} = \left\lvert \lim_{n\to\infty}x_n \right\rvert_p$.

Proof. Suppose $(x_n)$ converges to $L\in \mathbb{Q_p}$. Then, by definition, we have that $$\forall\epsilon \in \mathbb{R}^+\exists N_\epsilon \in \mathbb{N}\text{ st } \forall n \geq N_\epsilon, \left\lvert x_n-L \right\rvert_p<\epsilon$$ Now, $\left\lvert x_n-L \right\rvert_p = \max(\left\lvert x_n \right\rvert_p, \left\lvert L \right\rvert_p)<\epsilon$ if $\left\lvert x_n \right\rvert_p\neq \left\lvert L \right\rvert_p$. Therefore, we have $0 \leq \left\lvert x_n \right\rvert_p<\epsilon$ and $0 \leq \left\lvert L \right\rvert_p < \epsilon$ for all real numbers $\epsilon > 0$. Thus, we have $\lvert \left\lvert x_n \right\rvert_p-\left\lvert L \right\rvert_p \rvert<\epsilon$. If $\left\lvert x_n \right\rvert_p= \left\lvert L \right\rvert_p$ the previous statement still holds. Therefore, $$\lim_{n\to\infty}\left\lvert x_n \right\rvert_p = \left\lvert L \right\rvert_p=\left\lvert \lim_{n\to\infty}x_n \right\rvert_p$$ which proves the desired result. ◻

Power series in $\mathbb{Q_p}$ #

Definition 18. A series $\sum_{i\geq0}a_i$ is said to converge if and only if the sequence $(S_n)$ given by $S_n = \sum_{i=0}^na_i$ converges.

We will now give the condition of convergence for a series in $\mathbb{Q_p}$. The condition turns out to be much simpler than that in $\mathbb{R}$, in which there are multiple convergence tests for series.

Proposition 18. The series $\sum_{n\geq0}a_n$ converges in $\mathbb{Q_p}$ $\iff$ the sequence $(a_n)_{n\geq0}$ converges to $0$ in $\mathbb{Q_p}$.

Proof. Consider the sequence $(S_n)$ given by $S_n = \sum_{i=0}^na_i$. We need to show that $(S_n)$ converges if and only if $(a_n)$ converges to $0$.
 
Suppose $(S_n)$ converges. We know that, by definition, $(S_n)$ converges if and only if for all $\epsilon \in \mathbb{R}^+$ we have that there exists some natural number $N_\epsilon$ such that for all $m,n\geq N_\epsilon$ we have $\left\lvert S_m-S_n \right\rvert_p < \epsilon$. Consider some $n\geq N_\epsilon$. Then, we clearly have that $$\left\lvert S_{n+1}-{S_n} \right\rvert_p < \epsilon$$ Now, by definition of $S_n$, we have $S_{n+1}-S_n = a_{n+1}$. Thus, $\left\lvert a_{n+1} \right\rvert_p<\epsilon$ for all $n\geq N_\epsilon$. Therefore, in general, we have that for every $\epsilon \in \mathbb{R}^+$, there is some $M_\epsilon = N_\epsilon + 1$ such that for all $n \geq M_\epsilon$ we have $\left\lvert a_n \right\rvert_p<\epsilon$. It follows that $(a_n)$ converges to $0$, by definition. Note that this direction of the proof holds in $\mathbb{R}$ as well. The opposite direction is what makes convergence in $\mathbb{R}$ more difficult.
 
Now, we will prove that if $(a_n)$ converges to $0$, then $(S_n)$ converges. By definition, we have that for every $\epsilon \in \mathbb{R}^+$, there exists some natural number $N_\epsilon$ such that for all $n\geq N_\epsilon$ we have $\left\lvert a_n \right\rvert_p < \epsilon$. Now, consider some integers $m, n$ such that $m>n\geq N_\epsilon$. We have $$S_m - S_n = a_{n+1} + \cdots + a_m$$ Thus, $$\begin{aligned} \left\lvert S_m - S_n \right\rvert_p = \left\lvert a_{n+1} + \cdots + a_m \right\rvert_p \leq \max(\left\lvert a_{n+1} \right\rvert_p, \dots, \left\lvert a_m \right\rvert_p) \end{aligned}$$ Since for all $n\geq N_\epsilon$, we have that $\left\lvert a_n \right\rvert_p < \epsilon$, it follows that $\max(\left\lvert a_{n+1} \right\rvert_p, \dots, \left\lvert a_m \right\rvert_p)<\epsilon$. Hence $\left\lvert S_m - S_n \right\rvert_p < \epsilon$ for all $m, n > N_\epsilon$.
 
Thus, in general, for all $\epsilon \in \mathbb{R}^+$, we have that there exists some $N_\epsilon \in \mathbb{N}$ such that for all $m, n\geq N_\epsilon$, $\left\lvert S_m - S_n \right\rvert_p < \epsilon$. Hence, $(S_n)$ is Cauchy and, by completeness of $\mathbb{Q_p}$, converges. ◻

Note that the above proof holds entirely because of the fact that $\left\lvert a+b \right\rvert_p\leq \max(\left\lvert a \right\rvert_p, \left\lvert b \right\rvert_p)$.
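Proposition 18 can be watched in action. The following sketch (the `vp_int` helper is ours) shows the partial sums of $\sum_n p^n$ forming a Cauchy sequence, with the tail dominated by the first omitted term, and contrasts this with the fact that even $\sum_n n!$ converges $p$-adically:

```python
from math import factorial

def vp_int(n, p):
    # p-adic valuation of a positive integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 3
# partial sums of sum p^n; by Proposition 18 it converges since |p^n|_p -> 0
S = [sum(p**k for k in range(n + 1)) for n in range(25)]
for n in range(24):
    for m in range(n + 1, 25):
        # |S_m - S_n|_p = p^{-(n+1)}: the first omitted term dominates
        assert vp_int(S[m] - S[n], p) == n + 1
# unlike in R, sum n! converges p-adically: |n!|_p = p^{-v_p(n!)} -> 0
assert vp_int(factorial(60), p) >= 25
```

In $\mathbb{R}$, terms tending to $0$ is necessary but not sufficient (e.g. the harmonic series); the ultrametric inequality is what upgrades it to sufficient here.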

Radius of Convergence #

We define the radius of convergence of a power series $\sum_{n\geq0}a_nx^n$ to be the value $r$ such that the sequence $\left\lvert a_n \right\rvert_pc^n$ converges to $0$ for all $c<r$ and does not converge to $0$ for $c>r$. The following result is fundamental:

Proposition 19. The radius of convergence of $\sum_{n\geq0}a_nx^n$ is given by $r = \left(\lim\sup\left\lvert a_n \right\rvert_p^{1/n}\right)^{-1}$

Discs #

Definition 19. For any $a\in \mathbb{Q_p}$ and $r\in \mathbb{R}^+$, we define a closed disc of radius $r$, centered at $a$ to be the set $D(a;r) \vcentcolon=\lbrace z\in \mathbb{Q_p}\colon \left\lvert z-a \right\rvert_p\leq r \rbrace$ and an open disc of radius $r$, centered at $a$ to be the set $D(a; r^-)\vcentcolon=\lbrace z\in \mathbb{Q_p} \colon \left\lvert z-a \right\rvert_p<r \rbrace$.

Now, consider $f(x) = \sum_{n\geq 0}a_nx^n \in \mathbb{Q_p}[[x]]$, a power series, and suppose that its radius of convergence is $r$. Therefore, we can define a function $f\colon D(0; r^-) \to \mathbb{Q_p}$ so that for any $t\in D(0; r^-)$, we have $$f(t) = \lim_{n\to\infty}\left(\sum_{k=0}^n a_kt^k\right)$$ Since $t\in D(0; r^-)$, the above sum indeed converges, and therefore, $f$ is well defined.
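As a sketch of such an evaluation (the `pabs` helper is ours): the geometric series $f(x) = \sum_{n\geq0} x^n$ has radius of convergence $1$, and $t = p$ lies in $D(0; 1^-)$ since $\left\lvert p \right\rvert_p = 1/p < 1$; its partial sums at $t = p$ approach the rational number $1/(1-p)$ in the $p$-adic metric:

```python
from fractions import Fraction

def pabs(q, p):
    # p-adic absolute value of a rational
    q = Fraction(q)
    if q == 0:
        return 0.0
    n, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return p ** -n

p = 5
t = Fraction(p)              # |p|_p = 1/p < 1, so t lies in D(0; 1^-)
limit = Fraction(1, 1 - p)   # classically, the geometric series sums to 1/(1-t)
for n in range(1, 15):
    partial = sum(t**k for k in range(n))
    # the distance to the limit is exactly p^{-n}: the first omitted term
    assert pabs(partial - limit, p) == p ** -n
```

So in $\mathbb{Q}_5$ the familiar identity $1 + 5 + 25 + \cdots = -1/4$ holds literally.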
 
We now define continuity in $\mathbb{Q_p}$.

Definition 20. We say that a function $f\colon S\to \mathbb{Q_p}$ is continuous at a point $x \in S$ if for all $\epsilon \in \mathbb{R}^+$, there exists some positive real $\delta$ such that $\left\lvert x-y \right\rvert_p<\delta$ implies $\left\lvert f(x)-f(y) \right\rvert_p<\epsilon$.

Definition 21. We say that a function $f\colon S\to\mathbb{Q_p}$ is continuous, if it is continuous at every point in $S$.

Proposition 20. Every function $f\colon D(0; r^-) \to \mathbb{Q_p}$ such that $$f(x) = \lim_{n\to\infty}\sum_{k=0}^n a_kx^k$$ for all $x \in D(0; r^-)$ is continuous in $\mathbb{Q_p}$.

Proof. We first prove a lemma:

Lemma 3. Let $S_n = \sum_{k=0}^na_k$ and let $S = \lim_{n\to\infty}S_n$. Then, $\left\lvert S \right\rvert_p \leq \lim_{n\to\infty}\max(\left\lvert a_0 \right\rvert_p, \dots, \left\lvert a_n \right\rvert_p)$. This can also be written as $\max(\left\lvert a_0 \right\rvert_p, \left\lvert a_1 \right\rvert_p, \dots)$. Notationally, we shall express this as $\max(\left\lvert a_k \right\rvert_p)_{k\geq0}$.

Proof. We know that $$S = \lim_{n\to \infty}S_n$$ Thus, $$\left\lvert S \right\rvert_p = \left\lvert \lim_{n\to\infty}S_n \right\rvert_p$$ As we saw earlier, the above is equal to $\lim_{n\to\infty}\left\lvert S_n \right\rvert_p$, which is at most $\lim_{n\to\infty}\max(\left\lvert a_0 \right\rvert_p, \dots, \left\lvert a_n \right\rvert_p)$. ◻

Let $x\in D(0; r^-)$ be any point in $D(0; r^-)$ and let $y$ be another point in $D(0; r^-)$. Therefore, we have $\left\lvert x \right\rvert_p<r$ and $\left\lvert y \right\rvert_p < r$. Consider some positive real $\epsilon$, and suppose that there is some $\delta \in \mathbb{R}^+$ such that $\left\lvert x-y \right\rvert_p<\delta$. Now, we have $$\begin{aligned} \left\lvert f(x)-f(y) \right\rvert_p &= \left\lvert \lim_{n\to\infty}\left(\sum_{k=0}^n a_kx^k\right)-\lim_{n\to\infty}\left(\sum_{k=0}^n a_ky^k\right) \right\rvert_p = \left\lvert \lim_{n\to\infty}\left(\sum_{k=0}^na_k(x^k-y^k)\right) \right\rvert_p \\ &= \left\lvert \lim_{n\to\infty}\left((x-y)\sum_{k=0}^na_k(x^{k-1}+x^{k-2}y+\cdots+ xy^{k-2}+y^{k-1})\right) \right\rvert_p \\ &= \left\lvert x-y \right\rvert_p\left\lvert \lim_{n\to\infty}\left(\sum_{k=0}^na_k(x^{k-1}+x^{k-2}y + \cdots + xy^{k-2}+ y^{k-1})\right) \right\rvert_p \end{aligned}$$ By the Lemma, we may simplify the above to get: $$\left\lvert f(x) - f(y) \right\rvert_p\leq \left\lvert x-y \right\rvert_p\max\left(\left\lvert a_n(x^{n-1}+x^{n-2}y + \cdots + xy^{n-2}+y^{n-1}) \right\rvert_p\right)_{n\geq0}$$ Now, $$\left\lvert x^{n-1}+x^{n-2}y + \cdots + xy^{n-2} + y^{n-1} \right\rvert_p\leq \max\left(\left\lvert x^{n-k}y^{k-1} \right\rvert_p\right)_{k=1}^{n}$$ Since $\left\lvert x \right\rvert_p<r$ and $\left\lvert y \right\rvert_p<r$, it follows that $\left\lvert x^{n-k}y^{k-1} \right\rvert_p<r^{n-k}r^{k-1} = r^{n-1}$. Therefore, $$\left\lvert x^{n-1}+x^{n-2}y + \cdots + xy^{n-2} + y^{n-1} \right\rvert_p<r^{n-1}$$ Thus, $$\left\lvert f(x)-f(y) \right\rvert_p< \left\lvert x-y \right\rvert_p\max\left(\left\lvert a_n \right\rvert_pr^{n-1}\right)_{n\geq0}$$ Now, by the definition of continuity, we must prove that for all $\epsilon \in \mathbb{R}^+$, there exists some $\delta \in \mathbb{R}^+$ such that $\left\lvert x-y \right\rvert_p<\delta$ implies $\left\lvert f(x)-f(y) \right\rvert_p<\epsilon$.
We can let $$\delta = \frac{\epsilon}{\max\left(\left\lvert a_n \right\rvert_pr^{n-1}\right)_{n\geq0}}$$ for any given positive real $\epsilon$. Then, $\left\lvert x-y \right\rvert_p<\delta$ implies $\left\lvert f(x)-f(y) \right\rvert_p<\epsilon$. Therefore, such a $\delta$ always exists, and hence $f$ is continuous. ◻

For an alternative proof, we will use the notion of continuity between the mapping of two topological spaces.

Definition 22. Let $X$ and $Y$ be topological spaces. The map $f\colon X \to Y$ is continuous $\iff$ the preimage of every open set is open.

In other words, if you have a function $f$ mapping from one topological space $X$ to another topological space $Y$, and for any open set $U \subset Y$, the set of all points in $X$ that map to points in $U$ (i.e., the preimage of $U$) is open in $X$, then $f$ is a continuous map.

Proof. To begin, let us first prove the following lemma:

Lemma 4. Let $a, b \in \mathbb{Q_p}$ and $r, s \in \mathbb{R}^+$, we have the following properties of $D(a;r^{-})$:

  1. If $b \in D(a; r^{-})$, then $D(a; r^{-}) = D(b; r^{-})$.

  2. The open disc $D(a; r^{-})$ is also a closed set.

  3. $D(a;r^{-}) \cap D(b; s^{-}) \neq \emptyset \iff D(a; r^{-}) \subset D(b; s^{-})$ or $D(a; r^{-}) \supset D(b; s^{-})$

Proof.

  1. Let $x \in D(a; r^{-})$, so $|x - a|_p < r$. Since $b \in D(a; r^{-})$, we also have $|b - a|_p < r$. Then $$|x - b|_p = |(x - a) + (a - b)|_p \le \max(|x-a|_p, |a-b|_p) < r$$ so $x \in D(b; r^{-})$, and hence $D(a; r^{-}) \subset D(b; r^{-})$. By symmetry (note that $a \in D(b; r^{-})$), we also have $D(b; r^{-}) \subset D(a; r^{-})$. Hence, $D(a; r^{-}) = D(b; r^{-})$ as claimed.

  2. By definition, $D(a; r^{-})$ is an open set. We will show that it is also closed. Let $x$ be a boundary point of $D(a; r^{-})$, and choose $s \le r$. Since $x$ is a boundary point, we have $D(a; r^{-}) \cap D(x; s^{-}) \neq \emptyset$, so there exists $y \in D(a; r^{-}) \cap D(x; s^{-})$; this means that $|y - a|_p < r$ and $|y - x|_p < s \le r$. Using the non-archimedean inequality, we have: $$|x - a|_p \le \max(|x - y|_p, |y - a|_p) < r$$ Thus $x \in D(a; r^{-})$, so $D(a; r^{-})$ contains each of its boundary points, making $D(a; r^{-})$ a closed set by definition.

  3. Assume WLOG that $r \le s$. If the intersection is non-empty, then there exists some $c \in D(a; r^{-}) \cap D(b; s^{-})$. Then we know, from $(1)$, that $D(a; r^{-}) = D(c; r^{-})$ and $D(b; s^{-}) = D(c; s^{-})$. Hence $$D(a; r^{-}) = D(c; r^{-}) \subset D(c; s^{-}) = D(b; s^{-})$$

 ◻

Now to begin the proof, we define the preimage of $D(y; s^{-})$ under $f$ as, $$f^{-1}(D(y; s^{-})) = \lbrace a \in D(0; r^{-}) \mid f(a) \in D(y; s^{-}) \rbrace$$ To show that $f$ is continuous, it suffices to show that every such preimage is open. Fix an element $t$ of the preimage, so $|t|_p < r$ and $f(t) \in D(y; s^{-})$. By Lemma 4, since $f(t) \in D(y; s^{-})$, we have $D(y; s^{-}) = D(f(t); s^{-})$. By the estimate in the first proof, there is some $\delta_t \in \mathbb{R}^+$ such that $|t - t’|_p < \delta_t$ implies $|f(t) - f(t’)|_p < s$, that is, $f(t’) \in D(f(t); s^{-}) = D(y; s^{-})$. Hence the open disc $D(t; \delta_t^{-})$ is contained in the preimage. The preimage is therefore a union of open discs, $$f^{-1}(D(y; s^{-})) = \bigcup_{t \in f^{-1}(D(y; s^{-}))} D(t; \delta_t^{-})$$ and a union of open sets is open. Hence it follows that $f$ is continuous as claimed. ◻

Remark 1. The power series $f(x) \in \mathbb{Q_p}[[x]]$ that converge at every point of the closed unit disc $D(0; 1)$ can be characterized as follows:

  • The function $f: D(0; r^{-}) \to D(0; 1^{-})$ must be continuous.

  • Given $t \in D(0; r^{-})$, the sequence $(a_n)$ of coefficients of $f$ must satisfy that $|a_nt^n|_p$ converges to $0$.

  • For $f$ to have a multiplicative inverse $g \in \mathbb{Q_p}[[x]]$ with $f \cdot g = 1$ (multiplication being closed in $\mathbb{Q_p}[[x]]$), its constant term must be invertible; normalizing $f(0) = 1$, this means $f(x) \in 1 + x\mathbb{Q_p}[[x]]$.

Exponentiation in $p$-adics #

We define the exponentiation of $x$ in the $p$-adics to be the following element of $\mathbb{Q_p}[[x]]$: $$\exp(x) = \sum_{n\geq0}\frac{x^n}{n!}$$ We see that the $\exp$ function has the following properties:

Proposition 21. We have $\exp(x+y) = \exp(x)\cdot\exp(y)$ in $\mathbb{Q_p}[[x, y]]$.

Proof. We have $$\exp(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}+\cdots$$ and $$\exp(y) = 1 + y + \frac{y^2}{2!} + \frac{y^3}{3!}+\cdots$$ Thus, $$\exp(x)\cdot\exp(y) = \left(1+x+\frac{x^2}{2!}+\cdots\right)\left(1+y+\frac{y^2}{2!}+\cdots\right)$$ Expanding the above product and combining all terms with the same degree, we get: $$\exp(x)\cdot\exp(y) = 1+(x+y) + \left(\frac{x^2}{2!}+ xy + \frac{y^2}{2!}\right) + \cdots + \sum_{k=0}^n\left(\frac{x^ky^{n-k}}{k!(n-k)!}\right)+\cdots$$ Writing the above as a summation with respect to $n$, we get: $$\exp(x)\cdot\exp(y) = \sum_{n\geq0}\left(\sum_{k=0}^n\frac{x^ky^{n-k}}{k!(n-k)!}\right) = \sum_{n\geq0}\left(\frac{\sum_{k=0}^n\binom{n}{k}x^ky^{n-k}}{n!}\right) = \sum_{n\geq0}\frac{(x+y)^n}{n!}=\exp(x+y)$$ by the binomial theorem, and we are done. Hence, $\exp$ defines a homomorphism from the additive to the multiplicative structure: writing $\phi = \exp$, we have $\phi(x + y) = \phi(x)\cdot\phi(y)$. ◻
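The heart of the proof above is the coefficientwise identity $\sum_{k=0}^n \frac{x^ky^{n-k}}{k!(n-k)!} = \frac{(x+y)^n}{n!}$; a minimal sketch checking it exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

# coefficient of x^k y^(n-k) in exp(x)exp(y) versus in exp(x+y), for all n <= N
N = 15
for n in range(N + 1):
    for k in range(n + 1):
        in_product = Fraction(1, factorial(k) * factorial(n - k))
        in_exp_sum = Fraction(comb(n, k), factorial(n))
        assert in_product == in_exp_sum  # the binomial theorem, coefficientwise
```

Because the identity holds coefficient by coefficient, it is an identity of formal power series, independent of any convergence question.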

Proposition 22. The radius of convergence of $\exp(x)$ is $p^{-\frac{1}{p-1}}$

Proof. We know that the radius, $r$, of convergence of any power series in the $p$-adics is given by $r=(\lim\sup\left\lvert a_n \right\rvert_p^{1/n})^{-1}$. Therefore, the radius of convergence $r$ of $\exp(x)$ is given by $\left(\lim\sup\left\lvert \frac{1}{n!} \right\rvert_p^{1/n}\right)^{-1}$. We need to therefore prove that $$\left(\lim\sup\left\lvert \frac{1}{n!} \right\rvert_p^{1/n}\right)^{-1} = p^{-1/(p-1)}$$ In order to do so, we need the following lemma.

Lemma 5. $v_p(n!) = \frac{n - S_p(n)}{p-1}$ ($S_p(n)$ is the sum of all digits of $n$ over base $p$)

Proof. Notice that $v_p(n!) = v_p(n) + v_p(n-1) + \cdots = \sum_{l \le n} v_p(l)$. We take $v_p(l)$ ($l \le n$), and expand $l$ over base $p$. Take $l = l_mp^m + \cdots + l_rp^r$ ($m \le r$, $l_m \neq 0$), so that $v_p(l) = m$. Using a telescoping identity, observe that: $$\begin{aligned} - 1 &= (p - 1) + (p - 1)p + (p - 1)p^2 + \cdots + (p - 1)p^{m-1} - p^m \\ l - 1 &= (p - 1) + (p - 1)p + (p - 1)p^2 + \cdots + (p - 1)p^{m-1} + (l_m - 1) p^m + l_{m+1}p^{m+1} + \cdots + l_rp^r \end{aligned}$$ Hence the sum of the digits (over base $p$) of $l - 1$ is, $$S_p(l - 1) = m(p - 1) + S_p(l) - 1$$ The $-1$ on the right-hand side comes from the digit $l_m - 1$ in the previous equation. We know that $v_p(l) = m$; solving for $m$, we have $$m = \frac{1}{p - 1}\left[ S_p(l - 1) - S_p(l) + 1 \right]$$ Then we have, $$v_p(n!) = \sum_{l \le n} v_p(l) = \frac{1}{p-1}\sum_{l \le n} [S_p(l - 1) - S_p(l) + 1]$$ Since this is a telescoping series (note $S_p(0) = 0$), we have that: $$v_p(n!) = \frac{1}{p-1}(n - S_p(n)) = \frac{n - S_p(n)}{p - 1}$$ ◻
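Lemma 5 is easy to sanity-check against Legendre's formula by direct computation (the helper names are ours):

```python
from math import factorial

def vp_int(n, p):
    # p-adic valuation of a positive integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def digit_sum(n, p):
    # S_p(n): sum of the base-p digits of n
    s = 0
    while n:
        s += n % p
        n //= p
    return s

for p in (2, 3, 5, 7):
    for n in range(1, 300):
        # v_p(n!) = (n - S_p(n)) / (p - 1), cross-multiplied to stay in integers
        assert vp_int(factorial(n), p) * (p - 1) == n - digit_sum(n, p)
```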

Lemma 6. The sequence $\left(v_p(n!)/n\right)$ satisfies $\lim\sup\left(v_p(n!)/n\right) = 1/(p-1)$.

Proof. Consider some non-negative integer $k$, and consider the sequence of all reals $v_p(n!)/n$ where $n$ is such that $p^k \leq n < p^{k+1}$. The maximum value of this sequence occurs when $n = p^k$. Thus, the supremum of the sequence $v_p(n!)/n$ when $p^k \leq n < p^{k+1}$ is $v_p((p^k)!)/p^k$. By Legendre’s formula $$v_p(n!) = \left\lfloor\frac{n}{p}\right\rfloor + \left\lfloor\frac{n}{p^2}\right\rfloor+\cdots$$ When $n=p^k$, this simplifies to: $$v_p(n!) = p^{k-1}+\cdots + 1 = \frac{p^k-1}{p-1}$$ Thus, $$\frac{v_p(n!)}{n} = \frac{p^{k}-1}{p^k(p-1)} = \frac{1}{p-1}\cdot\frac{p^k-1}{p^k}$$ The above gives the maximum value of $v_p(n!)/n$ for $n$ between $p^k$ and $p^{k+1}$. Thus, one may conclude that $$\lim\sup(v_p(n!)/n) = \lim_{k\to\infty}\left(\frac{1}{p-1}\cdot\frac{p^k-1}{p^k}\right)$$ The $1/(p-1)$ term is constant. Moreover, the function $\frac{p^x-1}{(p-1)p^x}$ of a real variable $x$ converges to the same limit as the sequence $(p^k-1)/(p^k(p-1))$ over integers $k$, since $\mathbb{Z}\subset \mathbb{R}$. Therefore $$\lim_{k\to\infty}\frac{1}{p-1}\cdot\frac{p^k-1}{p^k} = \lim_{x\to\infty}\frac{1}{p-1}\cdot\frac{p^x-1}{p^x} = \frac{1}{p-1}$$ Hence $\lim\sup(v_p(n!)/n) = 1/(p-1)$. ◻

Now, our goal is to find $\lim\sup$ of the sequence $\left(\left\lvert 1/n! \right\rvert_p^{1/n}\right)$. This sequence can be rewritten as $$\left(p^{-v_p(1/n!)/n}\right) = \left(p^{v_p(n!)/n}\right)$$ Now, $$\lim\sup\left(p^{v_p(n!)/n}\right) = p^{\lim\sup(v_p(n!)/n)}$$ since $x \mapsto p^x$ is continuous and strictly increasing. By Lemma 6, we therefore have $$\lim\sup\left(\left\lvert 1/n! \right\rvert_p^{1/n}\right) = p^{1/(p-1)}$$ Thus, $r = p^{-1/(p-1)}$ and we are done. ◻

Proposition 23. For all $a, b \in D(0; \left(p^{-1/(p-1)}\right)^-)$, we have $a+b \in D(0; \left(p^{-1/(p-1)}\right)^-)$, and moreover $\exp(a+b) = \exp(a)\cdot\exp(b)$.

Proof. By definition, from $a,b\in D\left(0;\left(p^{-1/(p-1)}\right)^-\right)$ we have $\left\lvert a \right\rvert_p, \left\lvert b \right\rvert_p<p^{-1/(p-1)}$. Thus, $\max(\left\lvert a \right\rvert_p, \left\lvert b \right\rvert_p)<p^{-1/(p-1)}$. By the ultrametric inequality, $\left\lvert a+b \right\rvert_p\leq \max(\left\lvert a \right\rvert_p, \left\lvert b \right\rvert_p)<p^{-1/(p-1)}$. Thus, we have $a+b\in D\left(0;\left(p^{-1/(p-1)}\right)^-\right)$.
 
Now, we know $$\exp(a) = \lim_{n\to\infty}\left(\sum_{k=0}^n\frac{a^k}{k!}\right)$$ and $$\exp(b) = \lim_{n\to\infty}\left(\sum_{k=0}^n\frac{b^k}{k!}\right)$$ Thus, $$\exp(a)\cdot\exp(b) = \left(\lim_{n\to\infty}\left(\sum_{k=0}^n\frac{a^k}{k!}\right)\right)\cdot\left(\lim_{n\to\infty}\left(\sum_{k=0}^n\frac{b^k}{k!}\right)\right) = \lim_{n\to\infty}\left(\sum_{k=0}^n\frac{a^k}{k!}\cdot\sum_{k=0}^n\frac{b^k}{k!}\right)$$ Expanding the product of the two partial sums and collecting the terms of total degree at most $n$ (which, by the binomial theorem, assemble into $(a+b)^k/k!$ for $k\le n$) gives $$\sum_{k=0}^n\frac{a^k}{k!}\cdot\sum_{k=0}^n\frac{b^k}{k!} = \sum_{k=0}^n\frac{(a+b)^k}{k!}+f_n(a, b)$$ where $$f_n(x, y) = \sum_{\substack{0\le i, j\le n \\ i+j>n}}\frac{x^iy^j}{i!\,j!}$$ is a polynomial in $\mathbb{Q}_p[x,y]$ all of whose terms have total degree at least $n+1$. Therefore, $$\lim_{n\to\infty}\left(\sum_{k=0}^n\frac{a^k}{k!}\cdot\sum_{k=0}^n\frac{b^k}{k!}\right) = \lim_{n\to\infty}\left(\sum_{k=0}^n\frac{(a+b)^k}{k!}\right)+\lim_{n\to\infty}f_n(a, b)=\exp(a+b)+\lim_{n\to\infty}f_n(a, b)$$ by definition of $\exp(a+b)$. It remains to show that $f_n(a, b)$ approaches $0$. Let $\rho = \max(\left\lvert a \right\rvert_p, \left\lvert b \right\rvert_p) < p^{-1/(p-1)}$. By the digit-sum formula for $v_p(n!)$ proved earlier, $v_p(m!) = (m - S_p(m))/(p-1) \le m/(p-1)$, so $\left\lvert 1/(i!\,j!) \right\rvert_p = p^{v_p(i!)+v_p(j!)} \le p^{(i+j)/(p-1)}$. Hence each term of $f_n(a, b)$ satisfies $$\left\lvert \frac{a^ib^j}{i!\,j!} \right\rvert_p \le \rho^{i+j}\,p^{(i+j)/(p-1)} = \left(\rho\,p^{1/(p-1)}\right)^{i+j}$$
Set $c = \rho\,p^{1/(p-1)}$; since $\rho < p^{-1/(p-1)}$, we have $c < 1$. Every term of $f_n(a, b)$ has total degree $i + j \ge n+1$, so by the ultrametric inequality $\left\lvert f_n(a, b) \right\rvert_p \le c^{n+1}$, which approaches $0$ as $n$ approaches $\infty$. Thus, $\lim_{n\to\infty}\left\lvert f_n(a, b) \right\rvert_p = 0$, so $f_n(a, b)$ itself approaches $0$ in $\mathbb{Q}_p$. Hence, $\exp(a+b) = \exp(a)\exp(b)$. ◻
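Proposition 23 can also be checked numerically with exact rational arithmetic: truncate the exponential series and verify that $\exp(a)\exp(b) - \exp(a+b)$ has large $p$-adic valuation. This is an illustrative sketch, not part of the proof; the helper names `v_p_frac` and `exp_trunc` are our own.

```python
from fractions import Fraction
from math import factorial

def v_p_frac(q: Fraction, p: int):
    """p-adic valuation of a rational number (inf for 0)."""
    if q == 0:
        return float("inf")
    num, den, v = abs(q.numerator), q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def exp_trunc(a: Fraction, N: int) -> Fraction:
    """Partial sum of the exponential series up to degree N - 1."""
    return sum(Fraction(a**k, factorial(k)) for k in range(N))

p, N = 5, 40
a, b = Fraction(5), Fraction(10)  # v_5(a) = v_5(b) = 1 > 1/(p-1): inside the disc
diff = exp_trunc(a, N) * exp_trunc(b, N) - exp_trunc(a + b, N)
# The discrepancy comes only from truncation, so its valuation is large.
assert v_p_frac(diff, p) >= 20
print(v_p_frac(diff, p))
```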

Corollary 1. $\exp(na) = \exp(a)^n$ for all integers $n$ and $a \in D(0; \left(p^{-1/(p-1)}\right)^-)$.

Proposition 24. For all $x, y \in D(0; \left(p^{-1/(p-1)}\right)^-)$, we have $\left\lvert \exp(x)-\exp(y) \right\rvert_p = \left\lvert x-y \right\rvert_p$.

Proof. We first prove the statement when $y=0$. When $y=0$, we have $\exp(y) = 1$, so we need to prove that $\left\lvert \exp(x) - 1 \right\rvert_p = \left\lvert x \right\rvert_p$. We have $$\exp(x)-1 = \sum_{n\geq1}\frac{x^n}{n!}$$ As we saw earlier, we have $$\left\lvert \exp(x) - 1 \right\rvert_p \leq \max\left(\left\lvert \frac{x^n}{n!} \right\rvert_p\right)_{n\geq1}$$ We claim that the $n=1$ term, whose absolute value is $\left\lvert x \right\rvert_p$, is the unique maximum. Suppose for the sake of contradiction that there is some term $x^n/n!$ with $n \geq 2$ whose absolute value is greater than or equal to the absolute value of $x$. So, we have $\left\lvert x^n/n! \right\rvert_p \geq \left\lvert x \right\rvert_p$, which happens if and only if $\left\lvert x \right\rvert_p^{n-1}\left\lvert 1/n! \right\rvert_p\geq1$, that is, $\left\lvert x \right\rvert_p^{n-1} \geq \left\lvert n! \right\rvert_p$. By the definition of the $p$-adic absolute value, this happens if and only if $$ \begin{aligned} p^{-(n-1)v_p(x)} \geq p^{-v_p(n!)} &\iff -(n-1)v_p(x) \geq -v_p(n!) \\ &\iff (n-1)v_p(x)\leq v_p(n!) \\ &\iff v_p(x) \leq v_p(n!)/(n-1) \end{aligned} $$

Now, $\left\lvert x \right\rvert_p < p^{-1/(p-1)}$, so $v_p(x) > 1/(p-1)$. On the other hand, by the formula $v_p(n!) = (n - S_p(n))/(p-1)$ proved earlier, $v_p(n!) \leq (n-1)/(p-1)$, so $v_p(n!)/(n-1) \leq 1/(p-1) < v_p(x)$, which is a contradiction. Therefore, $\left\lvert x \right\rvert_p$ is the unique maximum value in the sequence $\left(\left\lvert x^n/n! \right\rvert_p\right)_{n\geq1}$. Since the maximum is attained by a unique term, we have $$\left\lvert \sum_{n\geq 1}\frac{x^n}{n!} \right\rvert_p = \max\left(\left\lvert \frac{x^n}{n!} \right\rvert_p\right)_{n\geq1} = \left\lvert x \right\rvert_p$$ This proves that $\left\lvert \exp(x) - 1 \right\rvert_p = \left\lvert x \right\rvert_p$.
 
Now, consider any $x, y \in D\left(0;\left(p^{-1/(p-1)}\right)^-\right)$. By Proposition 23, $\exp(x) = \exp(y)\exp(x-y)$, so $$\exp(x) - \exp(y) = \exp(y)\left(\exp(x-y) - 1\right)$$ Since $\left\lvert \exp(y) - 1 \right\rvert_p = \left\lvert y \right\rvert_p < 1$, the ultrametric inequality gives $\left\lvert \exp(y) \right\rvert_p = 1$. Moreover, $x - y \in D\left(0;\left(p^{-1/(p-1)}\right)^-\right)$, so the case proved above applies to $x-y$ and yields $$\left\lvert \exp(x) - \exp(y) \right\rvert_p = \left\lvert \exp(y) \right\rvert_p\left\lvert \exp(x-y) - 1 \right\rvert_p = \left\lvert x-y \right\rvert_p$$ ◻
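The isometry of Proposition 24 can likewise be observed with truncated series, since the truncation error is $p$-adically negligible at this depth. The helper name `v_p_frac` is our own; this is only a sanity check.

```python
from fractions import Fraction
from math import factorial

def v_p_frac(q, p):
    """p-adic valuation of a nonzero rational number."""
    num, den, v = abs(q.numerator), q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p, N = 5, 30
for x in (Fraction(5), Fraction(10), Fraction(25), Fraction(75, 2)):
    # exp(x) - 1, truncated at degree N - 1
    e = sum(Fraction(x**k, factorial(k)) for k in range(N)) - 1
    # |exp(x) - 1|_p = |x|_p, i.e. the valuations agree
    assert v_p_frac(e, p) == v_p_frac(x, p)
print("all checks passed")
```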

Artin-Hasse Exponential Function #

Theorem 2. $E(x) \in \mathbb{Z}_p[[x]]$.

To prove this theorem, we first need the following lemmas:

Lemma 7. Let $f(x) \in 1 + x\mathbb{Q}_p[[x]]$ be a power series with $p$-adic rational coefficients. Then $f(x) \in 1 + x\mathbb{Z}_p[[x]]$ if and only if $\frac{f(x^p)}{f(x)^p} \in 1 +px\mathbb{Z}_p[[x]]$.

Proof. First, suppose that $f(x) \in 1 + x\mathbb{Z}_p[[x]]$ and write $f(x) = \sum_{i \ge 0} a_ix^i$ with $a_0 = 1$. Since $f(0) = 1$, both $f(x)^p$ and $f(x^p)$ have constant term $1$, so both are invertible formal power series. By the multinomial theorem, every coefficient of $f(x)^p$ that does not come from a $p$th power of a single term carries a multinomial coefficient $p!/(r_1!r_2!r_3!\cdots)$ with $\sum_i r_i = p$ and each $r_i < p$, and every such multinomial coefficient is divisible by $p$. Combining this with Fermat's little theorem ($a^p \equiv a \pmod{p\mathbb{Z}_p}$ for $a \in \mathbb{Z}_p$), we get $$f(x)^p \equiv \sum_{i \ge 0} a_i^px^{ip} \equiv f(x^p) \pmod{p\mathbb{Z}_p[[x]]}$$ Hence $f(x^p) = f(x)^p + p\,h(x)$ for some $h(x) \in \mathbb{Z}_p[[x]]$, and comparing constant terms (both equal $1$) shows $h(x) \in x\mathbb{Z}_p[[x]]$. Therefore $$\frac{f(x^p)}{f(x)^p} = 1 + p\,\frac{h(x)}{f(x)^p} \in 1 + px\mathbb{Z}_p[[x]]$$ since $1/f(x)^p \in 1 + x\mathbb{Z}_p[[x]]$. Conversely, suppose that $f(x^p) = f(x)^p\cdot g(x)$ with $g(x) \in 1 + px\mathbb{Z}_p[[x]]$. Let $f(x) = \sum_{n \ge 0} a_nx^n$ and $g(x) = \sum_{n \ge 0} b_nx^n$. By assumption, $a_0 = 1$; suppose inductively that $a_n \in \mathbb{Z}_p$ for all $0 \le n \le N - 1$. We will show that the $N$th coefficient of $f(x)^p\cdot g(x)$ equals the $N$th coefficient of $\left(\sum_{n \le N} a_nx^n\right)^p + \sum_{n \le N} b_nx^n$; note that as $N \to \infty$, the latter truncations recover $f(x^p)$ coefficient by coefficient. The proof proceeds by induction.
To begin, let’s expand these power series to get a sense of the terms, $$f(x)^p = \left( \sum_{i = 0}^{N-1} a_ix^i + a_Nx^N + \sum_{k \ge N + 1}a_kx^k\right)^p = \sum_{t=0}^{p} \binom{p}{t}\left(\sum_{i = 0}^{N-1} a_ix^i + \sum_{k \ge N + 1}a_kx^k\right)^{p - t}(a_Nx^N)^t$$ $$\left(\sum_{n \le N} a_nx^n\right)^p =\sum_{i = 0}^{p} \binom{p}{i} \left(\sum_{n \le N - 1}a_nx^n \right)^{p-i}(a_Nx^N)^i$$

We have two cases: $p \nmid N$ and $p \mid N$. Suppose $p \nmid N$. Then there is no $m \in \mathbb{Z}$ with $N = pm$ (which would contribute an additional $N$th coefficient $a_{\frac{N}{p}}^p$). Thus the $N$th coefficient of $\left(\sum_{n \le N} a_nx^n\right)^p + \sum_{n \le N} b_nx^n$ is $\binom{p}{1}a_Na_0^{p-1} + b_N = pa_N + b_N$, while the $N$th coefficient of $f(x)^p\cdot g(x)$ is $g(0)\binom{p}{1}a_N + a_0b_N$, plus terms built only from $a_0, \dots, a_{N-1}$ and $b_1, \dots, b_{N-1}$, each of which lies in $p\mathbb{Z}_p$ by the inductive hypothesis, since $b_n \in p\mathbb{Z}_p$ for $n \ge 1$ and the non-extreme coefficients of $f(x)^p$ are divisible by $p$. Comparing constant terms in $f(x^p) = f(x)^p\cdot g(x)$ gives $a_0b_0 = 1$, so $b_0 = g(0) = 1$. It follows that the $N$th coefficient of $f(x)^p\cdot g(x)$ is $pa_N + b_N$ up to these terms, so both expressions have the same $N$th coefficient when $p \nmid N$. Since $g(x) \in 1 + px\mathbb{Z}_p[[x]]$, we have $b_N \in p\mathbb{Z}_p$. Now, the $N$th coefficient of $f(x^p)$ is $0$ because $p \nmid N$, so $$pa_N + b_N + (\text{terms in } p\mathbb{Z}_p) = 0$$ and since $b_N \in p\mathbb{Z}_p$, we conclude $pa_N \in p\mathbb{Z}_p$, that is, $a_N \in \mathbb{Z}_p$. Now for our second case, suppose that $p \mid N$, say $N = pm$ with $m \in \mathbb{Z}$. Then the $N$th coefficient of $\left(\sum_{n \le N} a_nx^n\right)^p + \sum_{n \le N} b_nx^n$ is $a_{\frac{N}{p}}^p + \binom{p}{1}a_Na_0^{p-1} + b_N = a_{\frac{N}{p}}^p + pa_N + b_N$. To locate the term $a_{\frac{N}{p}}^p$ in $\left(\sum_{n \le N} a_nx^n\right)^p$, consider the $i = 0$ term of the expansion above and observe the following:

$$\left(\sum_{n \le N - 1}a_nx^n \right)^{p} = \left(\sum_{n \le \frac{N}{p} - 1}a_nx^n + a_{\frac{N}{p}}x^{\frac{N}{p}} + \sum_{n = \frac{N}{p} + 1}^{N - 1}a_nx^n \right)^{p}$$

$$\left(\sum_{n \le \frac{N}{p} - 1}a_nx^n + a_{\frac{N}{p}}x^{\frac{N}{p}} + \sum_{n = \frac{N}{p} + 1}^{N - 1}a_nx^n \right)^{p} = \left(\left( \sum_{n \le \frac{N}{p} - 1}a_nx^n + \sum_{n = \frac{N}{p} + 1}^{N - 1}a_nx^n \right) + a_{\frac{N}{p}}x^{\frac{N}{p}} \right)^{p}$$

Then by the binomial theorem, $a_{\frac{N}{p}}^p$ appears as the additional contribution to the $N$th coefficient of $\left(\sum_{n \le N} a_nx^n\right)^p + \sum_{n \le N} b_nx^n$. Meanwhile, the $N$th coefficient of $f(x)^p\cdot g(x)$ is $a_{\frac{N}{p}}^p + g(0)\binom{p}{1}a_N + a_0b_N$, up to terms in $p\mathbb{Z}_p$ as in the first case. To locate $a_{\frac{N}{p}}^p$ here, consider the $t = 0$ term of the expansion of $f(x)^p$ and observe the following:

$$\begin{aligned} \left(\sum_{i = 0}^{N-1} a_ix^i + \sum_{k \ge N + 1}a_kx^k \right)^{p} = \left(\sum_{i = 0}^{(N/p) - 1} a_ix^i + a_{\frac{N}{p}}x^{\frac{N}{p}} + \sum_{\substack{k \ge \frac{N}{p} + 1 \\ k \neq N}}a_kx^k \right)^{p}\\ \left(\sum_{i = 0}^{(N/p) - 1} a_ix^i + a_{\frac{N}{p}}x^{\frac{N}{p}} + \sum_{\substack{k \ge \frac{N}{p} + 1 \\ k \neq N}}a_kx^k \right)^{p} = \left( \left( \sum_{i = 0}^{(N/p) - 1} a_ix^i + \sum_{\substack{k \ge \frac{N}{p} + 1 \\ k \neq N}}a_kx^k \right) + a_{\frac{N}{p}}x^{\frac{N}{p}} \right)^{p} \end{aligned}$$ Then by the binomial theorem, $a_{\frac{N}{p}}^p$ appears as the additional contribution to the $N$th coefficient of $f(x)^p\cdot g(x)$, so again both expressions have the same $N$th coefficient when $p \mid N$. It remains to conclude that $a_N \in \mathbb{Z}_p$ in this case as well. The $N$th coefficient of $f(x^p)$ is $a_{\frac{N}{p}}$, so comparing coefficients in $f(x^p) = f(x)^p\cdot g(x)$ gives $$a_{\frac{N}{p}}^p + pa_N + b_N + (\text{terms in } p\mathbb{Z}_p) = a_{\frac{N}{p}}$$ By the inductive hypothesis $a_{\frac{N}{p}} \in \mathbb{Z}_p$, so Fermat's little theorem gives $a_{\frac{N}{p}} - a_{\frac{N}{p}}^p \in p\mathbb{Z}_p$; since also $b_N \in p\mathbb{Z}_p$, we conclude $pa_N \in p\mathbb{Z}_p$, that is, $a_N \in \mathbb{Z}_p$. By induction, $a_N \in \mathbb{Z}_p$ for every $N$, and hence $f(x) \in 1 + x\mathbb{Z}_p[[x]]$. ◻
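Lemma 7's forward direction can be illustrated on a concrete series, say $f(x) = 1/(1-x) = 1 + x + x^2 + \cdots \in 1 + x\mathbb{Z}_p[[x]]$, for which $f(x^p)/f(x)^p = (1-x)^p/(1-x^p)$. Below is a sketch with truncated power series over $\mathbb{Q}$ (all helper names are our own):

```python
from fractions import Fraction

N, p = 12, 5

def mul(a, b):
    """Product of two power series truncated mod x^N."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv(a):
    """Inverse of a power series with a[0] != 0, truncated mod x^N."""
    b = [Fraction(0)] * N
    b[0] = 1 / a[0]
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1)) / a[0]
    return b

f = [Fraction(1)] * N                                        # 1/(1-x)
f_xp = [Fraction(1 if i % p == 0 else 0) for i in range(N)]  # f(x^p) = 1/(1-x^p)
fp = f
for _ in range(p - 1):
    fp = mul(fp, f)                                          # f(x)^p
q = mul(f_xp, inv(fp))                                       # f(x^p) / f(x)^p
assert q[0] == 1
# Every higher coefficient is p times a p-integral rational, as Lemma 7 predicts.
assert all(c.numerator % p == 0 and c.denominator % p != 0 for c in q[1:])
print([int(c) for c in q[:6]])  # -> [1, -5, 10, -10, 5, 0]
```

Here the quotient is just the polynomial $(1-x)^p$ spread over the geometric series $1/(1-x^p)$, and every non-constant coefficient is visibly divisible by $p$.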

Lemma 8. $\exp(-px) \in 1 + px\mathbb{Z}_p[[x]]$.

Proof. The coefficient of $x^n$ in $\exp(-px)$ is $(-p)^n/n!$, whose valuation is $$n - v_p(n!) = n - \frac{n - S_p(n)}{p-1} = \frac{n(p-2) + S_p(n)}{p-1} \ge 1$$ for all $n \ge 1$, by the formula for $v_p(n!)$ proved earlier. ◻
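A quick check of Lemma 8 with exact rationals (the helper name `v_p_frac` is ours): every coefficient $(-p)^n/n!$ of $\exp(-px)$ with $n \ge 1$ has valuation at least $1$.

```python
from fractions import Fraction
from math import factorial

def v_p_frac(q, p):
    """p-adic valuation of a nonzero rational number."""
    num, den, v = abs(q.numerator), q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

for p in (2, 3, 5, 7):
    for n in range(1, 80):
        c = Fraction((-p)**n, factorial(n))  # coefficient of x^n in exp(-px)
        assert v_p_frac(c, p) >= 1
print("all checks passed")
```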

Lemma 9. $\frac{E(x^p)}{E(x)^p} = \exp(-px)$.

Proof. Taking formal logarithms of $E(x) = \exp\left(\sum_{i \ge 0} x^{p^i}/p^i\right)$, we get $$\log E(x^p) - p\log E(x) = \sum_{i \ge 0}\frac{x^{p^{i+1}}}{p^i} - \sum_{i \ge 0}\frac{x^{p^i}}{p^{i-1}} = -px$$ since every term of the first sum cancels against the corresponding term with $i \ge 1$ of the second. ◻

Now we are ready to prove Theorem 2.

Proof. By Lemma 9, $E(x^p)/E(x)^p = \exp(-px)$, which lies in $1 + px\mathbb{Z}_p[[x]]$ by Lemma 8. Since $E(x) \in 1 + x\mathbb{Q}_p[[x]]$, Lemma 7 then gives $E(x) \in 1 + x\mathbb{Z}_p[[x]]$; in particular, all coefficients of $E(x)$ lie in $\mathbb{Z}_p$. ◻
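Theorem 2 can be tested by computing $E(x) = \exp\left(\sum_{i \ge 0} x^{p^i}/p^i\right)$ as a truncated series with exact rational arithmetic and checking that no denominator is divisible by $p$. The code below is an illustrative sketch with our own names:

```python
from fractions import Fraction
from math import factorial

p, N = 2, 16

def mul(a, b):
    """Product of two power series truncated mod x^N."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j < N:
                c[i + j] += ai * bj
    return c

# g(x) = x + x^p/p + x^{p^2}/p^2 + ... (truncated mod x^N)
g = [Fraction(0)] * N
q, i = 1, 0
while q < N:
    g[q] = Fraction(1, p**i)
    q *= p
    i += 1

# E(x) = exp(g(x)) = sum_{k>=0} g(x)^k / k!, truncated mod x^N
E = [Fraction(0)] * N
E[0] = Fraction(1)
power = [Fraction(0)] * N
power[0] = Fraction(1)
for k in range(1, N):          # g^k only affects degrees >= k
    power = mul(power, g)
    for d in range(N):
        E[d] += power[d] / factorial(k)

# Theorem 2: every coefficient of E(x) is p-integral.
assert all(c.denominator % p != 0 for c in E)
print(E[:4])
```

Although individual terms $g(x)^k/k!$ have denominators divisible by $p$, the assertions confirm that all such contributions cancel in the sum, exactly as the theorem predicts.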