Towards quantifying the communication aspect of resilience in disaster-prone communities

In this study, we investigate the communication networks of urban, suburban, and rural communities from three US Midwest counties through a stochastic model that simulates the diffusion of information over time in disaster and normal situations. To understand information diffusion in communities, we investigate the interplay of information that individuals get from online social networks, local news, government sources, mainstream media, and print media. We utilize survey data collected from target communities and create graphs of each community to quantify node-to-node and source-to-node interactions, as well as trust patterns. Monte Carlo simulation results show the average time it takes for information to propagate to 90% of the population in each community. We conclude that rural, suburban, and urban communities have different inherent properties that promote varied flows of information. Moreover, information sources affect information spread differently, and the loss of any single source degrades the speed of information diffusion. Finally, we provide insights on the optimal investments to improve disaster communication based on community features and contexts.

We draw once from a Bernoulli distribution with parameter P_0. If the outcome is 1, we update the uninformed node's total trust in the new information. To update this total trust, we apply a discount factor to the trust value between nodes i and j. The discount factor accounts for the number of times node j passed the same information to node i before. Previous interactions with node j, as well as with other nodes, are discounted by a forgetting factor that accounts for the length of time that has passed since those interactions. We calculate the trust as follows:

T_c = Σ_{j=1}^{N} Σ_{k=1}^{M_ij} θ_ij × d^{M_ij} × f^{(t_c − t_k)}

where T_c is the total trust in the new information, N is the number of neighbors of i with a state of 1, M_ij is the number of meetings between i and j, θ_ij is the trust between nodes i and j, d is the discount factor, f is the forgetting factor, t_c is the current time step, and t_k is the time of meeting number k between i and j. Node i becomes informed when T_c exceeds a threshold Θ.
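The trust update above can be sketched in Python as follows. The dictionary-based storage for θ, M, and meeting times is our assumption for illustration, not the paper's implementation:

```python
import random

def total_trust(i, informed_neighbors, theta, meetings, meet_times, d, f, t_c):
    """Total trust T_c of node i in a new piece of information.

    theta[(i, j)]      : trust between nodes i and j
    meetings[(i, j)]   : M_ij, number of past meetings between i and j
    meet_times[(i, j)] : time steps t_k of those meetings
    d, f               : discount and forgetting factors in (0, 1]
    """
    T_c = 0.0
    for j in informed_neighbors:          # neighbors of i in state 1
        M = meetings[(i, j)]
        for t_k in meet_times[(i, j)]:
            # repeated deliveries are discounted by d^M;
            # older interactions are forgotten at rate f per time step
            T_c += theta[(i, j)] * (d ** M) * (f ** (t_c - t_k))
    return T_c

def becomes_informed(P0, T_c, threshold):
    """One Bernoulli(P0) draw gates the update; the node turns
    informed when its total trust exceeds the threshold."""
    return random.random() < P0 and T_c > threshold
```

With one informed neighbor met twice (θ = 0.8, d = 0.5, f = 0.9, meetings at t = 1 and t = 2), `total_trust` at t_c = 3 returns 0.8 × 0.25 × (0.81 + 0.9) = 0.342.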

Graph Generation Algorithms
For graph generation, we use two methods to generate synthetic graphs that closely resemble the original. In the first method, Algorithm 2, we use kernel density estimation to learn the distribution of the degree sequence of the initial survey data and sample from it to generate a degree sequence of size d = 1000. We then shift the mean and standard deviation of this degree sequence by small amounts, randomly sampled from intervals centered around zero, as shown in Algorithm 2. In the second method, shown in Algorithm 3, we change the heights of the bins of the histogram of the degree distribution. We again use kernel density estimation to learn the degree distribution of the initial survey data and, by sampling from this distribution, generate a degree sequence of size d = 1000. We then create a histogram of the distribution and add or subtract 0.1 from the weighted frequency of each bin to obtain a new frequency for each bin. We normalize the new weighted frequencies by the sum of the original bin frequencies and generate points from a uniform distribution between the original bin edges; the number of points generated from each bin equals the calculated normalized frequency. Finally, we perform cleanup to ensure that the generated data has the required length and round any negative data points up to zero.
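A minimal sketch of the first method (Algorithm 2). Sampling from a Gaussian KDE is equivalent to picking a survey data point at random and adding Gaussian noise at the bandwidth, which lets the sketch stay in plain NumPy; the bandwidth and the widths of the perturbation intervals are illustrative assumptions, not values from the paper:

```python
import numpy as np

def generate_degree_sequence(survey_degrees, size=1000, bandwidth=1.0, seed=0):
    """Sample a synthetic degree sequence from a KDE of the survey degrees,
    then perturb its mean and standard deviation by small random amounts
    drawn from intervals centered around zero."""
    rng = np.random.default_rng(seed)
    # Gaussian-KDE sampling: random survey point + N(0, bandwidth) noise
    base = rng.choice(np.asarray(survey_degrees, dtype=float), size=size)
    sample = base + rng.normal(0.0, bandwidth, size)
    # small random shifts of mean and std (interval widths are assumptions)
    mean_shift = rng.uniform(-0.5, 0.5)
    std_scale = 1.0 + rng.uniform(-0.1, 0.1)
    sample = (sample - sample.mean()) * std_scale + sample.mean() + mean_shift
    # degrees must be non-negative integers
    return np.clip(np.rint(sample), 0, None).astype(int)
```

The second method (Algorithm 3) differs only in where the perturbation happens: it adjusts histogram bin frequencies by ±0.1 before resampling, rather than shifting the moments of the sampled sequence.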

Algorithm 1 The Monte Carlo Simulation Algorithm
if key j is in the meetings dictionary between i and j then

Generating Trust and Interaction Values
We discuss the algorithm for generating trust and interaction values, shown in pseudocode as Algorithm 4. We need synthetic trust and frequency-of-interaction data to label the edges of the synthetic graphs, and we want this artificial data to stay close to the original features of the ten communities. We first compute the relative frequency of each level of the ordinal variables. Next, we perturb the relative frequencies by adding or subtracting 0.1 from each of them. We then renormalize the new relative frequencies and generate the new trust and interaction values according to these weights. The remaining code performs cleanup to ensure that the length of the generated data equals the expected length.
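The steps above can be sketched as follows; the function and parameter names are ours, and the ±0.1 perturbation is the value stated in the text:

```python
import random
from collections import Counter

def synth_ordinal(values, size, delta=0.1, seed=0):
    """Generate synthetic ordinal values (e.g. trust or interaction
    levels) by perturbing the observed relative frequencies by +/- delta,
    renormalizing, and sampling from the new weights."""
    rng = random.Random(seed)
    counts = Counter(values)
    levels = sorted(counts)
    n = len(values)
    freqs = [counts[level] / n for level in levels]
    # add or subtract delta from each relative frequency (floor at 0)
    new = [max(f + rng.choice((-delta, delta)), 0.0) for f in freqs]
    total = sum(new)
    new = [f / total for f in new]  # renormalize to sum to 1
    # sample exactly `size` values, so no length cleanup is needed here
    return rng.choices(levels, weights=new, k=size)
```

Sampling a fixed number of values directly sidesteps the length-cleanup step the paper mentions; a bin-by-bin generation scheme like Algorithm 3's would need it.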

Require: Graph G, exposedNodes, time
1: initialize populationPercent = 90% of nodes in Graph
2: initialize time = 0
3: while len(exposedNodes) < populationPercent do
   ⋮
17: Update M by 1 {M is the number of meetings of i and j}
18: Update array timeStep with current time
19: Get the θ between node j and i {refers to trust value}
20: … d^M × f^(time−t) …
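The outer loop of Algorithm 1 can be sketched in Python as below. The data structures and parameter names are our reconstruction for illustration (the listing above is only partially recovered), with the trust accumulation matching the T_c formula given earlier:

```python
import random

def monte_carlo_spread(graph, seeds, P0, d, f, threshold,
                       trust=None, max_steps=10_000, seed=0):
    """Advance time until 90% of nodes are exposed.

    graph : dict node -> list of neighbors
    trust : dict (i, j) -> theta_ij (defaults to 1.0 if omitted)
    """
    rng = random.Random(seed)
    exposed = set(seeds)
    meetings, meet_times = {}, {}
    target = 0.9 * len(graph)            # stop at 90% of the population
    t = 0
    while len(exposed) < target and t < max_steps:
        t += 1
        for i in list(graph):
            if i in exposed:
                continue
            T_c = 0.0
            for j in graph[i]:
                if j not in exposed:
                    continue
                if rng.random() >= P0:   # Bernoulli(P0) gate per meeting
                    continue
                key = (i, j)
                meetings[key] = meetings.get(key, 0) + 1   # update M by 1
                meet_times.setdefault(key, []).append(t)   # record t_k
                theta = trust.get(key, 1.0) if trust else 1.0
                # T_c accumulates theta * d^M * f^(t - t_k) over meetings
                T_c += sum(theta * d ** meetings[key] * f ** (t - tk)
                           for tk in meet_times[key])
            if T_c > threshold:
                exposed.add(i)
    return t, exposed
```

On a triangle graph seeded with one node and no discounting (P0 = 1, d = f = 1), everyone is exposed after a single time step.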

Table 1. Table of Gradient Results