Introduction

In recent years, complex networks have received attention from numerous researchers1. As a fundamental tool for describing dynamic processes on networks, random walks on graphs have been widely employed to evaluate the dynamical properties of various complex networks2, such as chemical kinetics in molecular systems3, page search in the World Wide Web4, and light harvesting in antenna systems5,6. In addition to applications in network science7,8,9, random walks have also been applied to other subjects (e.g., image segmentation10, the normalized Laplacian spectrum of a graph11,12 and signal propagation in proteins13).

One of the main foci in studies of random networks is the mean first passage time (MFPT)14,15,16,17,18, defined as the expected number of steps that a walker starting from node i requires to reach node j for the first time19. When the target node j is given, or assumed to be a trap node, the MFPT to the trap is also called the average trapping time (ATT). The ATT for a given trap has been used as a quantitative indicator of trapping efficiency in random-walk processes16,20,21. Researchers have studied the construction and dynamic processes of various undirected fractal networks, such as T-fractals22,23,24, Sierpinski fractals7,25,26, Cayley trees27,28,29,30 and other structures31,32,33.

Previous works have provided the framework for calculating the ATT on different undirected scale-free networks. However, many real-life networks are directed, and the directed edges linking pairs of nodes may considerably influence dynamic processes34,35. In comparison, the understanding of random walks on directed weighted networks is still inadequate. In particular, for random walks on fractal or non-fractal networks36,37, a significant issue is the effect of weighted and directed edges on the average trapping time (ATT) for a given target node. Although some efforts have been made with several weighted and directed models38,39,40,41, additional work is needed to deepen the understanding of the impact of weight and direction on random walks. Below, we first propose a weighted directed Koch network42,43,44,45, in which each pair of adjacent nodes is joined by two edges with opposite directions46 whose weights are controlled by a weight parameter θ. Then, building on time-delay system theory47,48,49, we further introduce a delay parameter p into the weighted directed network.

Following the introduction of the weighted directed Koch network, we derive the exact analytical expression of the ATT for a given target node. We find that as θ varies from 0 to infinity, the ATT grows sublinearly, linearly, or superlinearly with the network size, depending on the value of the weight parameter. We also calculate the ATT on the weighted directed Koch network with a delay parameter. The result shows that the delay parameter only changes the coefficient of the ATT, leaving the leading scaling unchanged. In summary, these results advance our understanding of random walks on directed weighted Koch networks.

Results

Construction and properties of directed weighted Koch network

The Koch network is derived from the Koch curve, a well-known fractal, in which each node of the network corresponds to one triangle of the curve. The generation procedure of binary Koch networks was detailed in44. Let K(g) denote the network after g iterations. For g = 0, K(0) is a triangle containing three nodes. For g ≥ 1, K(g) is obtained from K(g−1) as follows: each node of each existing triangle produces two new nodes, which are attached to it to form a new triangle44. Figure 1 shows the construction process of the first three generations. We note that L(g), the number of triangles in K(g), equals 4g. The variable Lv(g), the number of nodes created at the gth generation, equals 6 × 4g−1. The variable Le(g), the number of edges created at the gth generation, equals 9 × 4g−1. Moreover, the variable Ng, the number of nodes in K(g), equals 2 × 4g + 1 (ref. 44).
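The triangle-based growth rule above can be sketched in a few lines of Python (an illustrative implementation of ours, not code from the original paper); it reproduces the counts L(g) = 4g and Ng = 2 × 4g + 1:

```python
def koch(g):
    """Grow the binary Koch network as a list of triangles.

    At each generation, every node of every existing triangle spawns
    one new triangle: the node itself plus two newly created nodes.
    Returns the triangle list and the number of nodes.
    """
    triangles = [(0, 1, 2)]   # K(0): one triangle, nodes labelled 0, 1, 2
    n = 3
    for _ in range(g):
        new = []
        for tri in triangles:
            for v in tri:
                new.append((v, n, n + 1))   # v plus two new nodes
                n += 2
        triangles += new                    # L(g) = 4 * L(g-1)
    return triangles, n

tris, n = koch(3)   # 4^3 = 64 triangles, 2*4^3 + 1 = 129 nodes
```

The counts quadruple as expected: each of the L(g−1) triangles contributes three new triangles, so L(g) = 4L(g−1) and 6 × 4g−1 nodes are added per generation.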

Figure 1

The first three generations of the Koch network. (a–c) are the first three generations of the Koch network, respectively. The blue, yellow and green nodes represent generations 0, 1, and 2, respectively.

Based on the undirected and unbiased Koch network, we propose a weighted directed Koch network by introducing a weight parameter θ that quantifies the reciprocating relation between edges. In other words, each edge in the binary network is replaced by two oppositely directed edges with different weights.

Let \(\overrightarrow{K}(g)\) represent the weighted directed Koch network at generation g and Wij(g) denote the weight of the directed edge from node i to node j in \(\overrightarrow{K}(g)\). We set Wij(g) > 0 if node i and node j are adjacent, and Wij(g) = 0 otherwise. The weights of edges in the network are defined recursively as follows. When g = 0, \(\overrightarrow{K}\mathrm{(0)}\) is a triangle containing three nodes, in which W1,2(0) = W2,1(0) = W2,3(0) = W3,2(0) = W1,3(0) = W3,1(0) = 1. When g = 1, each node in \(\overrightarrow{K}\mathrm{(0)}\) is treated as a mother node and produces two new nodes, which are connected to each other and to the mother node to form a new triangle. In the new triangle, the weight of each edge from the mother node to a newly produced node is θ times the weight of the corresponding edge from the mother node to one of its old neighbors, while the weight of each edge from a new node to the mother node equals the weight of the edge from the corresponding old neighbor to the mother node. As a result, W1,8(1) = θW1,3(0), W8,1(1) = W3,1(0), W1,9(1) = θW1,2(0), W9,1(1) = W2,1(0), W8,9(1) = θW8,1(1), W9,8(1) = θW9,1(1), W2,6(1) = θW2,3(0), W6,2(1) = W3,2(0), W2,7(1) = θW2,1(0), W7,2(1) = W1,2(0), W6,7(1) = θW6,2(1), W7,6(1) = θW7,2(1), W3,4(1) = θW3,2(0), W4,3(1) = W2,3(0), W3,5(1) = θW3,1(0), W5,3(1) = W1,3(0), W4,5(1) = θW4,3(1), W5,4(1) = θW5,3(1). When g > 1, each node of each triangle in \(\overrightarrow{K}(g-\mathrm{1)}\) generates one new triangle, and the iteration rules governing the weights of edges in the newly produced triangles are the same as those above. Figure 2 shows the succession process of the reciprocity weights.
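As a sketch, the weight succession rule can be implemented by storing directed weights in a dictionary (illustrative code of ours; the pairing of a mother's two new edges with its two old triangle edges is inferred from the g = 1 example above). For the three initial nodes it reproduces the out-strength growth S+(g) = 2(1 + θ)g:

```python
from collections import defaultdict

def weighted_koch(g, theta):
    """Directed weights W[(i, j)] of the weighted directed Koch network."""
    W = defaultdict(float)
    for i, j in [(0, 1), (1, 2), (0, 2)]:     # K(0): all weights 1
        W[(i, j)] = W[(j, i)] = 1.0
    triangles = [(0, 1, 2)]
    n = 3
    for _ in range(g):
        new = []
        for (u, v, w) in triangles:
            # each node m of the triangle spawns two new nodes a, b;
            # outgoing edges take theta times the mother's two edges inside
            # this triangle (cf. W_{1,8} = theta * W_{1,3}), reverse edges
            # copy the old neighbors' weights (cf. W_{8,1} = W_{3,1})
            for m, x, y in ((u, v, w), (v, w, u), (w, u, v)):
                a, b = n, n + 1
                n += 2
                W[(m, a)] = theta * W[(m, x)]
                W[(a, m)] = W[(x, m)]
                W[(m, b)] = theta * W[(m, y)]
                W[(b, m)] = W[(y, m)]
                W[(a, b)] = theta * W[(a, m)]   # cf. W_{8,9} = theta * W_{8,1}
                W[(b, a)] = theta * W[(b, m)]
                new.append((m, a, b))
        triangles += new
    return W, n
```

Each out-edge of an initial node lies in exactly one triangle, so every generation multiplies its out-strength by (1 + θ); for example, node 1 (label 0 here) has out-strength 2(1 + θ)g after g iterations.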

Figure 2

The weighted directed Koch network construction. (a) shows the labels of nodes in the unweighted and undirected Koch model; (b) is an example of the construction of directed weighted edges between node 1 and its neighbors for the first two steps.

In the directed weighted network, we define the out-strength and in-strength of node i at generation g as \({S}_{i}^{+}(g)={\sum }_{j\mathrm{=1}}^{{N}_{g}}{W}_{ij}(g),{S}_{i}^{-}(g)={\sum }_{j\mathrm{=1}}^{{N}_{g}}{W}_{ji}(g)\), respectively. Based on the evolution rule of the network, it can be shown that \({S}_{i}^{+}(g\mathrm{)=2(}\theta +{\mathrm{1)}}^{g-{g}_{i}}\), where gi represents the generation at which node i is created.

Numerical formulations of ATT in directed weighted Koch network

In this paper, we address a biased random walk with a given target node. For convenience, we label the nodes as follows: the initial three nodes in \(\overrightarrow{K}\mathrm{(0)}\) are labeled 1, 2 and 3, and the new nodes created at the gth generation in \(\overrightarrow{K}(g)\) are labeled Ng−1 + 1, Ng−1 + 2, …, Ng, as shown in Fig. 2. We assume that the target node is placed at one of the nodes of K(0); without loss of generality, let node 1 be the target node. The stochastic process is characterized by the transition matrix Pg, whose element pij(g) denotes the transition probability of jumping from node i to node j in one time step and satisfies \({\sum }_{j\mathrm{=1}}^{{N}_{g}}\,{p}_{ij}(g\mathrm{)=1}\).

Let \({T}_{i}^{(g)}\) denote the mean first passage time for a walker moving from node i to the target node in \(\overrightarrow{K}(g)\). It obeys the equation

$$\begin{array}{rcl}{T}_{i}^{(g)} & = & \mathop{\sum }\limits_{j\mathrm{=1}}^{{N}_{g}}\,{p}_{ij}(g)({T}_{j}^{(g)}+\mathrm{1)}\\ & = & \mathop{\sum }\limits_{j\mathrm{=1}}^{{N}_{g}}{p}_{ij}(g){T}_{j}^{(g)}+\mathop{\sum }\limits_{j\mathrm{=1}}^{{N}_{g}}\,{p}_{ij}(g)\\ & = & \mathop{\sum }\limits_{j\mathrm{=2}}^{{N}_{g}}\,{p}_{ij}(g){T}_{j}^{(g)}+{p}_{i1}(g){T}_{1}^{(g)}+1\\ & = & \mathop{\sum }\limits_{j\mathrm{=2}}^{{N}_{g}}\,{p}_{ij}(g){T}_{j}^{(g)}+{p}_{i1}(g)\times 0+1\\ & = & \mathop{\sum }\limits_{j\mathrm{=2}}^{{N}_{g}}\,{p}_{ij}(g){T}_{j}^{(g)}+\mathrm{1,}\end{array}$$
(1)

which can be rewritten in matrix form as

$$\begin{array}{l}\overline{{{\bf{T}}}_{{\bf{g}}}}=\overline{{{\bf{P}}}_{{\bf{g}}}}\overline{{{\bf{T}}}_{{\bf{g}}}}+\overline{{{\bf{e}}}_{{\bf{g}}}},\end{array}$$
(2)

where \(\overline{{{\bf{T}}}_{{\bf{g}}}}\) is formed by deleting the first entry of Tg, and \(\overline{{{\bf{P}}}_{{\bf{g}}}}\) by deleting the first row and first column of Pg. The vector \({\bar{{\bf{e}}}}_{{\bf{g}}}\) is formed by removing the first element of eg, the Ng-dimensional vector of ones.

From Eq. (2), we obtain

$$\begin{array}{l}\overline{{{\bf{T}}}_{{\bf{g}}}}=(\overline{{{\bf{I}}}_{{\bf{g}}}}-\overline{{{\bf{P}}}_{{\bf{g}}}}{)}^{-1}\overline{{{\bf{e}}}_{{\bf{g}}}},\end{array}$$
(3)

where \(\overline{{{\bf{I}}}_{{\bf{g}}}}\) is the (Ng − 1) × (Ng − 1) identity matrix.

From Eq. (3), we further obtain

$$\begin{array}{l}{\langle T\rangle }_{g}=\frac{1}{{N}_{g}-1}\mathop{\sum }\limits_{i=2}^{{N}_{g}}\,{T}_{i}^{(g)}=\frac{1}{{N}_{g}-1}\mathop{\sum }\limits_{i=2}^{{N}_{g}}\,\mathop{\sum }\limits_{j=2}^{{N}_{g}}\,{\tau }_{ij}.\end{array}$$
(4)

where τij is the ijth element of the matrix \({(\overline{{{\bf{I}}}_{{\bf{g}}}}-\overline{{{\bf{P}}}_{{\bf{g}}}})}^{-1}\). By definition, 〈T〉g is the average trapping time (ATT) after g iterations. Note that Eq. (4) is valid only if node 1 is designated as the absorbing node, i.e., p1,1(g) = 1 and p1,j(g) = 0 for all j > 1.

Equation (4) shows that computing the ATT 〈T〉g only requires inverting the transition matrix. However, matrix inversion becomes computationally expensive when the network is large. Therefore, an alternative approach is needed to compute the ATT efficiently, and Eq. (4) can then be used to verify the solutions derived by that approach.
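As an illustration of the numerical route, the following sketch (our own, not from the paper) hardcodes the generation-1 weights listed above, forms the transition matrix with pij = Wij/Si+, and evaluates Eq. (4) by solving the linear system of Eq. (3) rather than inverting explicitly:

```python
import numpy as np

def att_g1(theta):
    """ATT on the generation-1 weighted directed Koch network, trap at node 1."""
    n = 9
    W = np.zeros((n, n))
    for i, j in [(1, 2), (2, 3), (1, 3)]:          # K(0) triangle, weights 1
        W[i-1, j-1] = W[j-1, i-1] = 1.0
    # children as listed in the construction: 1->(8,9), 2->(6,7), 3->(4,5)
    for m, (a, b) in [(1, (8, 9)), (2, (6, 7)), (3, (4, 5))]:
        W[m-1, a-1] = W[m-1, b-1] = theta          # e.g. W_{1,8} = theta*W_{1,3}
        W[a-1, m-1] = W[b-1, m-1] = 1.0            # e.g. W_{8,1} = W_{3,1}
        W[a-1, b-1] = theta * W[a-1, m-1]          # e.g. W_{8,9} = theta*W_{8,1}
        W[b-1, a-1] = theta * W[b-1, m-1]
    P = W / W.sum(axis=1, keepdims=True)           # p_ij = W_ij / S_i^+(g)
    # Eq. (3): solve (I - Pbar) T = e instead of computing an inverse
    T = np.linalg.solve(np.eye(n - 1) - P[1:, 1:], np.ones(n - 1))
    return T.mean()                                # Eq. (4)

print(att_g1(1.0))   # 7.5 for the binary (theta = 1) network
```

For θ = 1 this yields 〈T〉1 = 60/8 = 7.5, which a short hand calculation on the nine-node network confirms.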

Analytical solutions for ATT in weighted directed Koch network

Now, we analytically derive the ATT with a given target node on the directed weighted Koch network. Let Ωg denote the set of nodes in generation g and \({\bar{\Omega }}_{g}\) the set of new nodes created at generation g, so that \(|{\Omega }_{g}|=|{\Omega }_{g-1}|+|{\bar{\Omega }}_{g}|\). Clearly, \(|{\Omega }_{g}|=2\times {4}^{g}+1\) and \(|{\bar{\Omega }}_{g}\mathrm{|\; =\; 6}\times {4}^{g-1}\). Let ki(g) represent the degree of node i at the gth generation. According to the network's construction rule, ki(g) = 2ki(g − 1). Furthermore, among the 2ki(g − 1) neighbors of node i at generation g, ki(g − 1) were created before generation g (old neighbors) and ki(g − 1) were created at generation g (new neighbors). We use Z to denote the mean first passage time for a walker moving from node i to any of its ki(g − 1) old neighbors, and X to denote the mean first passage time for a walker going from any of the new neighbors of node i to one of its old neighbors. By the iteration rule of the directed weighted Koch network, we have

$$\begin{array}{l}\{\begin{array}{rcl}Z & = & \frac{1}{1+\theta }+\frac{\theta }{1+\theta }(1+X),\\ X & = & \frac{1}{1+\theta }(1+Z)+\frac{\theta }{1+\theta }(1+X),\end{array}\end{array}$$
(5)

from which we obtain Z = (1 + θ)2, which is exactly the factor by which the MFPT is rescaled from one generation to the next. In other words,

$$\begin{array}{l}{T}_{i}^{(g+1)}={(1+\theta )}^{2}{T}_{i}^{(g)}.\end{array}$$
(6)
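Equation (5) is a pair of linear equations in Z and X; a quick exact check (illustrative, using Python's rational arithmetic) confirms Z = (1 + θ)2 and, incidentally, X = Z + 1 + θ:

```python
from fractions import Fraction

def solve_ZX(theta):
    # Eq. (5) rearranged: the second line gives X = 1 + theta + Z;
    # substituting into the first line yields Z = (1 + theta)^2.
    t = Fraction(theta)
    Z = (1 + t) ** 2
    X = 1 + t + Z
    # verify both original equations of Eq. (5) exactly
    assert Z == 1 / (1 + t) + t / (1 + t) * (1 + X)
    assert X == 1 / (1 + t) * (1 + Z) + t / (1 + t) * (1 + X)
    return Z, X

Z, X = solve_ZX(3)   # (16, 20)
```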

We define the following intermediate quantities for 0 ≤ m ≤ g:

$$\begin{array}{l}{T}_{m,tot}^{(g)}=\sum _{i\in {\Omega }_{m}}\,{T}_{i}^{(g)},\end{array}$$
(7)

and

$$\begin{array}{l}{\bar{T}}_{m,tot}^{(g)}=\sum _{i\in {\bar{\Omega }}_{m}}\,{T}_{i}^{(g)}.\end{array}$$
(8)

Then, we obtain the relation

$$\begin{array}{l}{\langle T\rangle }_{g}=\frac{1}{{N}_{g}-1}{T}_{g,tot}^{(g)}.\end{array}$$
(9)

Thus, the problem of computing 〈T〉g reduces to computing \({T}_{g,tot}^{(g)}\). Taking the iteration rule of the average trapping time into consideration, we have

$$\begin{array}{l}{T}_{g,tot}^{(g)}={T}_{g-1,tot}^{(g)}+{\bar{T}}_{g,tot}^{(g)}={(1+\theta )}^{2}{T}_{g-1,\,tot}^{(g-1)}+{\bar{T}}_{g,tot}^{(g)}.\end{array}$$
(10)

Hence, in order to get \({T}_{g,tot}^{(g)}\), we must first obtain \({\bar{T}}_{g,tot}^{(g)}\).

When g = 1,

$${T}_{4}^{(1)}=\frac{1}{1+\theta }(1+{T}_{3}^{(1)})+\frac{\theta }{1+\theta }(1+{T}_{5}^{(1)}),$$
$${T}_{5}^{(1)}=\frac{1}{1+\theta }(1+{T}_{3}^{(1)})+\frac{\theta }{1+\theta }(1+{T}_{4}^{(1)}),$$
$${T}_{6}^{(1)}=\frac{1}{1+\theta }(1+{T}_{2}^{(1)})+\frac{\theta }{1+\theta }(1+{T}_{7}^{(1)}),$$
$${T}_{7}^{(1)}=\frac{1}{1+\theta }(1+{T}_{2}^{(1)})+\frac{\theta }{1+\theta }(1+{T}_{6}^{(1)}),$$
$${T}_{8}^{(1)}=\frac{1}{1+\theta }(1+{T}_{1}^{(1)})+\frac{\theta }{1+\theta }(1+{T}_{9}^{(1)}),$$
$${T}_{9}^{(1)}=\frac{1}{1+\theta }(1+{T}_{1}^{(1)})+\frac{\theta }{1+\theta }(1+{T}_{8}^{(1)}).$$

Thus, we have

$$\begin{array}{rcl}{\bar{T}}_{\mathrm{1,}tot}^{\mathrm{(1)}} & = & \sum _{i\in {\bar{\Omega }}_{1}}\,{T}_{{\rm{i}}}^{\mathrm{(1)}}\\ & = & {T}_{4}^{(1)}+{T}_{5}^{(1)}+{T}_{6}^{(1)}+{T}_{7}^{(1)}+{T}_{8}^{(1)}+{T}_{9}^{(1)}\\ & = & 6(1+\theta )+2({T}_{1}^{(1)}+{T}_{2}^{(1)}+{T}_{3}^{(1)})\\ & = & 6(1+\theta )+2{\bar{T}}_{0,tot}^{(1)}.\end{array}$$
(11)

Similarly, when g = 2, we obtain

$$\begin{array}{rcl}{\bar{T}}_{\mathrm{2,}tot}^{\mathrm{(2)}} & = & \sum _{i\in {\bar{\Omega }}_{2}}\,{T}_{i}^{(2)}=\mathop{\sum }\limits_{i\mathrm{=10}}^{33}\,{T}_{i}^{(2)}\\ & = & 24(1+\theta )+{2}^{2}({T}_{1}^{(2)}+{T}_{2}^{(2)}+{T}_{3}^{(2)})+2({T}_{4}^{(2)}+{T}_{5}^{(2)}+{T}_{6}^{(2)}+{T}_{7}^{(2)}+{T}_{8}^{(2)}+{T}_{9}^{(2)})\\ & = & 6(1+\theta )\times {4}^{1}+{2}^{2}{\bar{T}}_{0,tot}^{(2)}+2{\bar{T}}_{1,tot}^{(2)}.\end{array}$$
(12)

Generalizing Eqs (11) and (12), we obtain

$$\begin{array}{rcl}{\bar{T}}_{g,tot}^{(g)} & = & 6(1+\theta )\times {4}^{g-1}+2{\bar{T}}_{g-1,tot}^{(g)}+{2}^{2}{\bar{T}}_{g-2,tot}^{(g)}+\mathrm{..}.+{2}^{g-1}{\bar{T}}_{1,tot}^{(g)}+{2}^{g}{\bar{T}}_{0,tot}^{(g)}\\ & = & \mathrm{(1}+\theta )|{\bar{\Omega }}_{g}|+\sum _{{\rm{i}}\in {\Omega }_{g-1}}\,[{k}_{i}(g-\mathrm{1)}\times {T}_{i}^{(g)}],\end{array}$$
(13)

and

$$\begin{array}{rcl}{\bar{T}}_{g+\mathrm{1,}tot}^{(g+\mathrm{1)}} & = & \mathrm{6(1}+\theta )\times {4}^{g}+2{\bar{T}}_{g,tot}^{(g+\mathrm{1)}}+{2}^{2}{\bar{T}}_{g-\mathrm{1,}tot}^{(g+\mathrm{1)}}+\ldots +{2}^{g}{\bar{T}}_{\mathrm{1,}tot}^{(g+\mathrm{1)}}+{2}^{g+1}{\bar{T}}_{\mathrm{0,}tot}^{(g+\mathrm{1)}}\\ & = & \mathrm{(1}+\theta )|{\bar{\Omega }}_{g+1}|+\sum _{{\rm{i}}\in {\Omega }_{g}}\,[{k}_{i}(g)\times {T}_{i}^{(g+\mathrm{1)}}\mathrm{]}.\end{array}$$
(14)

Multiplying Eq. (13) by 2(1 + θ)2 and subtracting the result from Eq. (14), we find

$${\bar{T}}_{g+1,tot}^{(g+1)}-2{(1+\theta )}^{2}{\bar{T}}_{g,tot}^{(g)}=6(1+\theta )\times {4}^{g}-6(1+\theta )\times {4}^{g-1}\times 2{(1+\theta )}^{2}+2{\bar{T}}_{g,tot}^{(g+1)},$$
(15)

which can be rewritten as

$$\begin{array}{rcl}{\bar{T}}_{g+1,tot}^{(g+1)} & = & 4{(1+\theta )}^{2}{\bar{T}}_{g,tot}^{(g)}-12(1+\theta )({\theta }^{2}+2\theta -1)\times {4}^{g-1}\\ & = & 4{(1+\theta )}^{2}{\bar{T}}_{g,tot}^{(g)}-3(1+\theta )({\theta }^{2}+2\theta -1)\times {4}^{g}.\end{array}$$
(16)

Using \({\bar{T}}_{\mathrm{1,}tot}^{\mathrm{(1)}}=\mathrm{2(1}+\theta \mathrm{)(7}+4\theta )\), Eq. (16) can be resolved inductively to yield

$$\begin{array}{rcl}{\bar{T}}_{g,tot}^{(g)} & = & {4}^{g-1}{(1+\theta )}^{2g-1}[2(7+4\theta )-\frac{3({\theta }^{2}+2\theta -1)}{2\theta +{\theta }^{2}}]\\ & & \,+\,\frac{3({\theta }^{2}+2\theta -1)(\theta +1)\times {4}^{g-1}}{2\theta +{\theta }^{2}}.\end{array}$$
(17)

When θ = 1, substituting Eq. (17) into Eq. (10), we obtain

$$\begin{array}{l}{T}_{g,tot}^{(g)}=\frac{{4}^{g}}{3}(10\times {4}^{g}+3g+2).\end{array}$$
(18)

This result is consistent with the corresponding expression in the literature39. Equation (18) shows that, for θ = 1, the ATT grows linearly with the network size; in fact, in this case the network reduces to the binary Koch network.
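The closed forms can be cross-checked by iterating the recursions directly (an illustrative sketch with exact rational arithmetic; we seed it with T1,tot(1) = 4(1 + θ)2 + T̄1,tot(1), since the three MFPTs in K(0) sum to 4: T1 = 0 and T2 = T3 = 2):

```python
from fractions import Fraction

def T_tot(g, theta):
    """Iterate Eq. (16) together with Eq. (10), exactly."""
    t = Fraction(theta)
    Tbar = 2 * (1 + t) * (7 + 4 * t)        # \bar{T}_{1,tot}^{(1)}
    tot = 4 * (1 + t) ** 2 + Tbar           # T_{1,tot}^{(1)}
    for m in range(1, g):
        # Eq. (16): recursion for the new-node sums
        Tbar = 4 * (1 + t) ** 2 * Tbar - 3 * (1 + t) * (t**2 + 2*t - 1) * 4**m
        # Eq. (10): recursion for the totals
        tot = (1 + t) ** 2 * tot + Tbar
    return tot

# theta = 1 reproduces Eq. (18): T_tot = (4^g / 3)(10*4^g + 3g + 2)
assert all(T_tot(g, 1) == Fraction(4**g, 3) * (10 * 4**g + 3*g + 2)
           for g in range(1, 6))
```

For θ ≠ 1 the same iteration matches Eq. (19); for example, T_tot(2, 2) evaluates to 4122, which agrees with Eq. (19) at θ = 2, g = 2.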

When θ ≠ 1, we have

$$\begin{array}{rcl}{T}_{g,tot}^{(g)} & = & \,{(1+\theta )}^{2g-1}\times (\frac{{4}^{g}-1}{3})\times [2(7+4\theta )-\frac{3({\theta }^{2}+2\theta -1)}{2\theta +{\theta }^{2}}]\\ & & +\,\frac{3({\theta }^{2}+2\theta -1)\times (\theta +1)}{2\theta +{\theta }^{2}}[\frac{{4}^{g}-{(1+\theta )}^{2g}}{4-{(1+\theta )}^{2}}]+4{(1+\theta )}^{2g}.\end{array}$$
(19)

Thus, according to Eq. (9), we obtain

$$\begin{array}{rcl}{\langle T\rangle }_{g} & = & \frac{1}{{N}_{g}-1}{T}_{g,tot}^{(g)}\\ & = & \,{\mathrm{(1}+\theta )}^{2g-1}\times \frac{1-\frac{1}{{4}^{g}}}{6}\times [\mathrm{2(7}+4\theta )-\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}}{2\theta +{\theta }^{2}}]\\ & & +\,\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}\times (\theta +\mathrm{1)}}{\mathrm{2(2}\theta +{\theta }^{2})}[\frac{1-{(\frac{1+\theta }{2})}^{2g}}{4-{\mathrm{(1}+\theta )}^{2}}]+2{(\frac{1+\theta }{2})}^{2g},\end{array}$$
(20)

When g → ∞ and θ ≠ 1, we have

$$\begin{array}{rcl}{\langle T\rangle }_{g} & \approx & {(1+\theta )}^{2g}[\frac{7+4\theta }{3(\theta +1)}-\frac{{\theta }^{2}+2\theta -1}{2(2\theta +{\theta }^{2})(1+\theta )}]\\ & \approx & {N}_{g}^{{\log }_{2}(\theta +1)}.\end{array}$$
(21)

Equation (21) shows that 〈T〉g grows sub-linearly with the network size if θ < 1, while it grows super-linearly if θ > 1.

We have compared our analytical results with the numerical results obtained from Eq. (4). As summarized in Fig. 3, they agree well for different values of θ and network generation g.

Figure 3

Verification of the analytical solutions on the weighted Koch network. ATT 〈T〉g for three different values of θ and varying g. The filled symbols represent the numerical results and the empty symbols the analytical values given by Eqs (19) and (20).

Expressions for ATT with delay parameter on weighted directed Koch network

We now introduce a delay parameter p to explore the impact of time delay on the random walk.

Based on the construction of the network, we define the random-walk process with delay as follows: a walker at a node of \(\overrightarrow{K}(g-\mathrm{1)}\) moves to a neighbor in \(\overrightarrow{K}(g-\mathrm{1)}\) with probability p, and to a neighbor in \(\overrightarrow{K}(g)\) with probability 1 − p, where 0 ≤ p ≤ 1. The transition probability pij for the walker jumping from node i to node j is given by

$${{\rm{p}}}_{ij}=\{\begin{array}{ll}p\times \frac{{W}_{ij}(g)}{{S}_{i}^{+}(g-\mathrm{1)}}+\mathrm{(1}-p)\times \frac{{W}_{ij}(g)}{{S}_{i}^{+}(g)}, & j\in {\Omega }_{g-1},\\ \mathrm{(1}-p)\times \frac{{W}_{ij}(g)}{{S}_{i}^{+}(g)}, & j\in {\bar{\Omega }}_{g},\end{array}$$
(22)

where \({S}_{i}^{+}(g)\) is the out-strength of node i. If the walker is at a node newly created at generation g, it performs an ordinary random walk on \(\overrightarrow{K}(g)\).

When p = 0, the walker moves from its current location (node i) to any neighboring node j with transition probability \({p}_{ij}(g)={W}_{ij}(g)/{S}_{i}^{+}(g)\). In other words, the random walk with delay reduces to the ordinary random walk in \(\overrightarrow{K}(g)\).

When p = 1, the walker jumps from its current position (node i) to a neighboring node j in \(\overrightarrow{K}(g-\mathrm{1)}\) with transition probability \({p}_{ij}(g)={W}_{ij}(g)/{S}_{i}^{+}(g-\mathrm{1)}\) if node i was created before generation g. Otherwise, the walker moves from its current position to any neighbor in \(\overrightarrow{K}(g)\) with transition probability \({p}_{ij}(g)={W}_{ij}(g)/{S}_{i}^{+}(g)\).
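To make Eq. (22) concrete, the sketch below (our own illustration, with the child assignments 1→(8, 9), 2→(6, 7), 3→(4, 5) as in the construction rule) builds the delayed transition matrix for the generation-1 binary network (θ = 1, trap at node 1) and evaluates the ATT by the matrix method of Eq. (4):

```python
import numpy as np

theta, p, n = 1.0, 0.5, 9
W = np.zeros((n, n))
for i, j in [(1, 2), (2, 3), (1, 3)]:          # K(0) triangle, weights 1
    W[i-1, j-1] = W[j-1, i-1] = 1.0
for m, (a, b) in [(1, (8, 9)), (2, (6, 7)), (3, (4, 5))]:
    W[m-1, a-1] = W[m-1, b-1] = theta
    W[a-1, m-1] = W[b-1, m-1] = 1.0
    W[a-1, b-1] = theta * W[a-1, m-1]
    W[b-1, a-1] = theta * W[b-1, m-1]

s_new = W.sum(axis=1)        # out-strengths S_i^+(1) in K(1)
s_old = 2.0                  # out-strength of nodes 1..3 in K(0)
old = {0, 1, 2}              # nodes of K(0), zero-based labels

P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if W[i, j] == 0.0:
            continue
        if i in old and j in old:      # Eq. (22), j in Omega_{g-1}
            P[i, j] = p * W[i, j] / s_old + (1 - p) * W[i, j] / s_new[i]
        elif i in old:                 # Eq. (22), j newly created
            P[i, j] = (1 - p) * W[i, j] / s_new[i]
        else:                          # new nodes walk on K(g) only
            P[i, j] = W[i, j] / s_new[i]

assert np.allclose(P.sum(axis=1), 1.0)  # each row is a distribution
D = np.linalg.solve(np.eye(n - 1) - P[1:, 1:], np.ones(n - 1))
att = D.mean()
```

At θ = 1 and p = 0.5 this gives 〈D〉1 = 4.5, in agreement with Eq. (38) evaluated at g = 1; at p = 0 it recovers the undelayed value 7.5.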

By the definitions above, the quantities Z and X obey the relations

$$\begin{array}{l}\{\begin{array}{rcl}Z & = & p+(1-p)[\frac{1}{1+\theta }+\frac{\theta }{1+\theta }(1+X)],\\ X & = & \frac{1}{1+\theta }(1+Z)+\frac{\theta }{1+\theta }(1+X),\end{array}\end{array}$$
(23)

from which we have \(Z=\frac{\mathrm{(1}+\theta -\theta p\mathrm{)(1}+\theta )}{1+\theta p}\).
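This is again a small linear system; an exact check (illustrative) confirms the closed form for Z, and that X = 1 + θ + Z still holds since the second line of Eq. (23) is unchanged:

```python
from fractions import Fraction

def delayed_Z(theta, p):
    t, q = Fraction(theta), Fraction(p)
    Z = (1 + t - t * q) * (1 + t) / (1 + t * q)   # claimed closed form
    X = 1 + t + Z                                  # from the second line of Eq. (23)
    # verify both relations of Eq. (23) exactly
    assert Z == q + (1 - q) * (1 / (1 + t) + t / (1 + t) * (1 + X))
    assert X == 1 / (1 + t) * (1 + Z) + t / (1 + t) * (1 + X)
    return Z

assert delayed_Z(2, 0) == 9   # p = 0: ordinary walk, Z = (1 + theta)^2
assert delayed_Z(1, 1) == 1   # p = 1: a single step inside K(g-1)
```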

In this section, we define Di(g) as the MFPT from node i to the target node in \(\overrightarrow{K}(g)\) under the delayed walk, and 〈D〉g as the corresponding average trapping time in \(\overrightarrow{K}(g)\). According to Eq. (23), we have

$$\begin{array}{l}{D}_{i}(g+1)=\frac{(1+\theta -\theta p)(1+\theta )}{1+\theta p}{T}_{i}(g).\end{array}$$
(24)

To obtain 〈D〉g, we define the two quantities \({D}_{m,tot}^{(g)}={\sum }_{{\rm{i}}\in {\Omega }_{m}}{D}_{{\rm{i}}}^{(g)}\) and \({\bar{D}}_{m,tot}^{(g)}={\sum }_{{\rm{i}}\in {\bar{\Omega }}_{m}}{D}_{{\rm{i}}}^{(g)}\), where m ≤ g. With \({D}_{m,tot}^{(g)}\) and \({\bar{D}}_{m,tot}^{(g)}\) at hand, we have

$$\begin{array}{l}{\langle D\rangle }_{g}=\frac{1}{{N}_{g}-1}{D}_{g,tot}^{(g)},\end{array}$$
(25)

where

$$\begin{array}{rcl}{D}_{g,tot}^{(g)} & = & {D}_{g-1,tot}^{(g)}+{\bar{D}}_{g,tot}^{(g)}\\ & = & \frac{(1+\theta -\theta p)(1+\theta )}{1+\theta p}{T}_{g-1,tot}^{(g-1)}+{\bar{D}}_{g,tot}^{(g)},\end{array}$$
(26)

where \({\bar{D}}_{g,tot}^{(g)}\) can be computed as follows:

$$\begin{array}{rcl}{\bar{D}}_{g,tot}^{(g)} & = & (1+\theta )|{\bar{\Omega }}_{g}|+\sum _{i\in {\varOmega }_{g-1}}\,[{k}_{i}(g-1)\times {D}_{i}^{(g)}]\\ & = & (1+\theta )|{\bar{\Omega }}_{g}|+\sum _{i\in {\varOmega }_{g-1}}\,[{k}_{i}(g-1)\times \frac{(1+\theta -\theta p)(1+\theta )}{1+\theta p}{T}_{i}^{(g-1)}],\end{array}$$
(27)

and

$$\begin{array}{l}{\bar{D}}_{g+1,tot}^{(g+1)}=(1+\theta )|{\bar{\Omega }}_{g+1}|+\sum _{i\in {\Omega }_{g}}\,[{k}_{i}(g)\times \frac{(1+\theta -\theta p)(1+\theta )}{1+\theta p}{T}_{i}^{(g)}].\end{array}$$
(28)

Multiplying Eq. (27) by 2(θ + 1)2 and subtracting the result from Eq. (28), we find

$$\begin{array}{c}{\bar{D}}_{g+1,tot}^{(g+1)}-2{(\theta +1)}^{2}{\bar{D}}_{g,tot}^{(g)}\\ \,=\,\mathrm{(1}+\theta )[|{\bar{\Omega }}_{g+1}|-\mathrm{2(}\theta +{\mathrm{1)}}^{2}|{\bar{\Omega }}_{g}|]+2\frac{\mathrm{(1}+\theta -\theta p\mathrm{)(1}+\theta )}{\mathrm{(1}+\theta p)}\sum _{{\rm{i}}\in {\bar{\Omega }}_{g}}\,{T}_{{\rm{i}}}^{(g)}\\ \,=\,\mathrm{(1}+\theta )[|{\bar{\Omega }}_{g+1}|-\mathrm{2(}\theta +{\mathrm{1)}}^{2}|{\bar{\Omega }}_{g}|]+2\frac{\mathrm{(1}+\theta -\theta p\mathrm{)(1}+\theta )}{\mathrm{(1}+\theta p)}{\bar{T}}_{g,\text{tot}}^{(g)}.\end{array}$$
(29)

Plugging \(|{\bar{\Omega }}_{g}|=6\times {4}^{g-1}\) and Eq. (17) into Eq. (29), and taking \({\bar{D}}_{\mathrm{1,}tot}^{\mathrm{(1)}}=\frac{\mathrm{2(1}+\theta \mathrm{)(7}+4\theta -\theta p)}{1+\theta p}\) into consideration, we compute

$$\begin{array}{ll}{\bar{D}}_{g+\mathrm{1,}tot}^{(g+\mathrm{1)}} & =\,2(1+\theta {)}^{2}{\bar{D}}_{g,tot}^{(g)}-\mathrm{3(1}+\theta )({\theta }^{2}+2\theta -\mathrm{1)}\times {4}^{g}\\ & \,+\,2\times {4}^{g-1}\times \frac{{\mathrm{(1}+\theta )}^{2g}\mathrm{(1}+\theta -\theta p)}{\mathrm{(1}+\theta p)}\times [\mathrm{2(7}+4\theta )-\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}}{2\theta +{\theta }^{2}}]\\ & \,+\,\frac{\mathrm{6(}{\theta }^{2}+2\theta -\mathrm{1)(1}+\theta {)}^{2}\mathrm{(1}+\theta -\theta p)\times {4}^{g-1}}{\mathrm{(2}\theta +{\theta }^{2}\mathrm{)(1}+\theta p)}.\end{array}$$
(30)

Based on Eq. (30), we further find

$$\begin{array}{rcl}{\bar{D}}_{g,tot}^{(g)} & = & \frac{(7+4\theta -\theta p)}{1+\theta p}{2}^{g}{(1+\theta )}^{2g-1}\\ & & +\frac{(1+\theta -\theta p)}{1+\theta p}[2(7+4\theta )-\frac{3({\theta }^{2}+2\theta -1)}{2\theta +{\theta }^{2}}]\times {2}^{g-1}{(1+\theta )}^{2g-2}({2}^{g-1}-1)\\ & & +{2}^{g}\times [\frac{3{(1+\theta )}^{2}(1+\theta -\theta p)}{2(2\theta +{\theta }^{2})(1+\theta p)}-3(1+\theta )]\times [{(1+\theta )}^{2g-2}-{2}^{g-1}].\end{array}$$
(31)

When θ ≠ 1, we recall Eq. (26) to obtain

$$\begin{array}{ll}{D}_{g,tot}^{(g)} & =\,\frac{\mathrm{(1}+\theta -\theta p\mathrm{)(1}+\theta )}{1+\theta p}{T}_{g-\mathrm{1,}tot}^{(g-\mathrm{1)}}+{\bar{D}}_{g,tot}^{(g)}\\ & =\,\frac{\mathrm{(1}+\theta -\theta p)}{1+\theta p}\{{\mathrm{(1}+\theta )}^{2g-2}\times \frac{{4}^{g-1}-1}{3}\times [\mathrm{2(7}+4\theta )-\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}}{2\theta +{\theta }^{2}}]\\ & \,+\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}\times {(\theta +\mathrm{1)}}^{2}}{2\theta +{\theta }^{2}}[\frac{{4}^{g-1}-{\mathrm{(1}+\theta )}^{2g-2}}{4-{\mathrm{(1}+\theta )}^{2}}]+\mathrm{4(1}+\theta {)}^{2g-1}\}+{\bar{D}}_{g,tot}^{(g)}.\end{array}$$
(32)

By substituting Eq. (31) into Eq. (32), we obtain

$$\begin{array}{rcl}{D}_{g,tot}^{(g)} & = & \frac{(1+\theta -\theta p)(1+\theta )}{1+\theta p}{T}_{g-1,tot}^{(g-1)}+{\bar{D}}_{g,tot}^{(g)}\\ & = & \frac{(1+\theta -\theta p)}{1+\theta p}\{{(1+\theta )}^{2g-2}\times \frac{{4}^{g-1}-1}{3}\times [2(7+4\theta )-\frac{3({\theta }^{2}+2\theta -1)}{2\theta +{\theta }^{2}}]\\ & & +\,\frac{3({\theta }^{2}+2\theta -1)\times {(\theta +1)}^{2}}{2\theta +{\theta }^{2}}[\frac{{4}^{g-1}-{(1+\theta )}^{2g-2}}{4-{(1+\theta )}^{2}}]+4{(1+\theta )}^{2g-1}\}\\ & & +\,\frac{(7+4\theta -\theta p)}{1+\theta p}{2}^{g}{(1+\theta )}^{2g-1}\\ & & +\,\frac{(1+\theta -\theta p)}{1+\theta p}[2(7+4\theta )-\frac{3({\theta }^{2}+2\theta -1)}{2\theta +{\theta }^{2}}]\times {2}^{g-1}{(1+\theta )}^{2g-2}({2}^{g-1}-1)\\ & & +\,{2}^{g}\times [\frac{3{(1+\theta )}^{2}(1+\theta -\theta p)}{2(2\theta +{\theta }^{2})(1+\theta p)}-3(1+\theta )]\times [{(1+\theta )}^{2g-2}-{2}^{g-1}].\end{array}$$
(33)

Then from Eq. (25), we have

$$\begin{array}{rcl}{\langle D\rangle }_{g} & = & \frac{{D}_{g,tot}^{(g)}}{{N}_{g}-1}\\ & = & \,\frac{\mathrm{(1}+\theta -\theta p)}{\mathrm{2(1}+\theta p)}\{{\mathrm{(1}+\theta )}^{2g-2}\times \frac{1-\frac{1}{{4}^{g-1}}}{12}\times [\mathrm{2(7}+4\theta )-\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}}{2\theta +{\theta }^{2}}]\\ & & +\,\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}\times {(\theta +\mathrm{1)}}^{2}}{2\theta +{\theta }^{2}}[\frac{\frac{1}{4}-{(\frac{1+\theta }{2})}^{2g}}{4-{\mathrm{(1}+\theta )}^{2}}]+\frac{4}{\mathrm{(1}+\theta )}{(\frac{1+\theta }{2})}^{2g}\}\\ & & +\,\frac{\mathrm{(7}+4\theta -\theta p)}{\mathrm{(1}+\theta p\mathrm{)(1}+\theta )}{(\frac{1+\theta }{2})}^{2g}\\ & & +\,\frac{\mathrm{(1}+\theta -\theta p)}{\mathrm{2(1}+\theta p\mathrm{)(1}+\theta {)}^{2}}[\mathrm{2(7}+4\theta )-\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}}{2\theta +{\theta }^{2}}]\times \frac{{\mathrm{(1}+\theta )}^{2g}}{{2}^{g}}{\mathrm{(2}}^{g-1}-\mathrm{1)}\\ & & +\,[\frac{\mathrm{3(1}+\theta {)}^{2}\mathrm{(1}+\theta -\theta p)}{\mathrm{2(2}\theta +{\theta }^{2}\mathrm{)(1}+\theta p)}-\mathrm{3(1}+\theta )]\,[{[\frac{{\mathrm{(1}+\theta )}^{2}}{2}]}^{g}-\frac{{\mathrm{(1}+\theta )}^{2}}{2}].\end{array}$$
(34)

Then, for g → ∞, we find

$$\begin{array}{rcl}{\langle D\rangle }_{g} & \approx & \frac{\mathrm{(1}+\theta -\theta p)}{\mathrm{24(1}+\theta p)}[\mathrm{2(7}+4\theta )-\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}}{2\theta +{\theta }^{2}}]{\mathrm{(1}+\theta )}^{2g-2}\\ & \approx & \frac{\mathrm{(1}+\theta -\theta p)}{\mathrm{24(1}+\theta p)}[\mathrm{2(7}+4\theta )-\frac{\mathrm{3(}{\theta }^{2}+2\theta -\mathrm{1)}}{2\theta +{\theta }^{2}}]{N}_{g}^{{\log }_{2}(\theta +\mathrm{1)}}.\end{array}$$
(35)

When θ = 1, we have \(Z=\frac{\mathrm{2(2}-p)}{\mathrm{(1}+p)}\), and hence

$$\begin{array}{l}{D}_{i}(g+1)=\frac{2(2-p)}{1+p}{T}_{i}(g).\end{array}$$
(36)

Plugging θ = 1 into Eq. (31), we have

$$\begin{array}{rcl}{\bar{D}}_{g,tot}^{(g)} & = & \,\frac{4(11-p)}{1+p}\times {2}^{3g-3}+\frac{5(2-p)}{1+p}\times {2}^{3g-1}({2}^{g-1}-1)\\ & & +\,[\frac{2-p}{1+p}-3]\times {2}^{2g}({2}^{g-1}-1).\end{array}$$
(37)

Therefore, we obtain

$$\begin{array}{ll}{D}_{g,tot}^{(g)} & =\,\frac{\mathrm{(2}-p)}{\mathrm{6(1}+p)}\times {4}^{g}\mathrm{(10}\times {4}^{g-1}+3g-\mathrm{1)}+\frac{\mathrm{4(11}-p)}{1+p}\times {2}^{3g-3}\\ & \,+\,5\frac{2-p}{1+p}\times {2}^{3g-1}{\mathrm{(2}}^{g-1}-\mathrm{1)}+[\frac{2-p}{1+p}-3]\times {2}^{2g}{\mathrm{(2}}^{g-1}-\mathrm{1)}.\end{array}$$
(38)

Thus,

$$\begin{array}{rcl}{\langle D\rangle }_{g} & = & \frac{{D}_{g,tot}^{(g)}}{{N}_{g}-1}\\ & = & \frac{\mathrm{(2}-p)}{\mathrm{12(1}+p)}\mathrm{(10}\times {4}^{g-1}+3g-\mathrm{1)}+\frac{\mathrm{(11}-p)}{1+p}\times {2}^{g-2}\\ & & +\,5\frac{2-p}{1+p}\times {2}^{g-2}{\mathrm{(2}}^{g-1}-\mathrm{1)}+[\frac{2-p}{\mathrm{2(1}+p)}-\frac{3}{2}]\times {\mathrm{(2}}^{g-1}-\mathrm{1)}\\ & \approx & \frac{\mathrm{5(2}-p)}{\mathrm{12(1}+p)}{N}_{g}.\end{array}$$
(39)

We compared the analytical expressions in Eqs (34) and (38) with numerical results from Eq. (4). As shown in Fig. 4, the analytical results agree well with the numerical results for different values of θ, p and g.

Figure 4

Verification of the analytical solutions on the weighted Koch network in the delay state. ATT 〈D〉g for the weighted directed Koch network in the delay state with different p, θ and g. The filled symbols are the numerical results of Eq. (4) and the empty symbols represent the exact analytical values of Eqs (34) and (38).

It can be seen from Eqs (35) and (39) that 〈D〉g grows sub-linearly, linearly, or super-linearly with the network size when θ < 1, θ = 1, or θ > 1, respectively. However, Eqs (35) and (39) also show that the delay parameter p only changes the coefficient of the ATT. Although the delay parameter has a less pronounced influence on the transport efficiency, both it and the weight parameter must be tuned to obtain a desired average trapping time.

Discussion

In this paper, we first proposed a weighted directed Koch network, in which the edge weights are controlled by a parameter θ. Then, we derived the closed-form solution of the average trapping time (ATT) of a random walk with a given trap node. The solution shows that the weighted directed construction has a significant effect on trapping efficiency: the leading scaling of the ATT can be sub-linear, linear or super-linear in the network size, depending on the weight parameter. Finally, to model delays whereby newly added structures do not affect the system immediately, we introduced a delay parameter p to steer the random walk on the weighted directed network. The exact solution of the ATT in this scenario shows that the delay parameter only changes the prefactor of the ATT. Our results also show that a desired ATT can be obtained with appropriate values of the weight parameter θ and the delay parameter p. It is noteworthy that the procedure used here is limited to deriving the ATT of random walks with the trap located at node 1, 2 or 3. In future work, we will generalize our approach to the ATT with the trap located at an arbitrary node.