
Introduction to complex networks

Introduction to complex networks. I discuss percolation and the appearance of giant components, and related issues such as epidemics and attack tolerance.


  1. Introduction to Complex Networks. V.A. Traag, KITLV, Leiden, the Netherlands; e-Humanities, KNAW (Royal Netherlands Academy of Arts and Sciences), Amsterdam, the Netherlands. March 30, 2014.
  2. Overview: (1) What are networks? (2) Classics: scale-free & small worlds. (3) Percolation: giant components, failure & attack, and epidemics, via probability generating functions.
  3-8. Examples: neural networks, power grids, gas networks, the Internet router network, the World Wide Web, road networks, airline networks, call networks, social networks, social media networks.
  9. Basics. Network: a graph $G = (V, E)$. Nodes $V = \{1, \dots, n\}$ (vertices): power stations, webpages, intersections, people. Edges $E \subseteq V \times V$ (links, ties): power cables, hyperlinks, roads, friendships; can be directed, and possibly weighted. Essentials: the degree $k_i$ is the number of links at node $i$; if $|E| = m$ is the number of edges, then $\sum_i k_i = 2m$; average degree $\langle k \rangle = 2m/n$; density $p = m/\binom{n}{2} = \langle k \rangle/(n-1) \approx \langle k \rangle/n$. Most networks are sparse: $\langle k \rangle$ and $p$ are low. (A short numeric check follows below.)
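A minimal numeric check of these identities (not from the slides; it assumes Python with networkx and uses the bundled karate-club graph as an arbitrary example):

```python
# Sketch: check sum_i k_i = 2m, <k> = 2m/n and p = m / C(n,2) on a small graph.
import networkx as nx

G = nx.karate_club_graph()               # any undirected example graph will do
n, m = G.number_of_nodes(), G.number_of_edges()

degrees = [d for _, d in G.degree()]
assert sum(degrees) == 2 * m             # every edge contributes to two degrees

avg_k = 2 * m / n                        # average degree <k>
density = m / (n * (n - 1) / 2)          # p = m / C(n, 2) ~ <k> / n
print(f"n={n}, m={m}, <k>={avg_k:.2f}, density={density:.3f}")
```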
  10. A picture is worth a thousand words? Visualisations are essentially wrong, but sometimes insightful. We need statistics to understand networks.
  11. Analysed properties. Analysis strategy: focus on some key (statistical) ingredients; only overall general properties, no particulars; compare to a random graph: what can we expect?; modelling ⇒ replicate key properties. Some key properties: degree distribution, degree correlations, path lengths, clustering, modularity, dynamics (inter-event times).
  12. Small world. Milgram's experiment (1960s): ask people to reach a specific person (John Doe, journalist, Kansas) by sending a letter to an acquaintance, who forwards it, and so on. Result: about 5 intermediaries to reach the destination; "six degrees of separation". Key question: is this different from a random graph?
  13. Erdős–Rényi (ER) graphs. Random graph: create an empty graph $G$ with $n$ nodes; every edge appears with probability $p$; on average $p\binom{n}{2} = m$ edges; average degree $\langle k \rangle = pn$. The random graph is essentially a (very simple) model; it was (and still is) used frequently, e.g. in biology and epidemiology as a well-mixed population. Many interesting questions remain. (See the sketch below.)
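A hedged sketch of the ER model (the generator call is networkx's $G(n,p)$ implementation; $n$, $p$ and the seed are my own illustrative choices):

```python
# Sketch: generate an Erdos-Renyi G(n, p) graph and check <k> ~ p * n.
import networkx as nx

n, p = 10_000, 0.0005                    # sparse: expected <k> = p * n = 5
G = nx.gnp_random_graph(n, p, seed=42)

avg_k = 2 * G.number_of_edges() / n
print(f"expected <k> = {p * n:.2f}, measured <k> = {avg_k:.2f}")
```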
  14-15. Small world? Path length: every node has $k_i \approx \langle k \rangle$, so in $\ell$ steps we reach about $\langle k \rangle^\ell$ nodes; when $\langle k \rangle^\ell = n$ we have reached the whole network; hence $\ell \approx \frac{\log n}{\log \langle k \rangle}$, which grows slowly! Random edges create short paths. Clustering: $C_i = e_i / \binom{k_i}{2}$; in an ER graph $C_i = \frac{p^3 n(n-1)}{p^2 n(n-1)} = p$; networks are sparse (low $p$), so $C_i$ is low. The real world shows both short paths and clustering. How to get that?
  16-18. Watts & Strogatz. Small world model: create a lattice (connect each node to its nearest neighbours); rewire each edge (or add an edge) with probability $p$. A few shortcuts are enough to create short paths. (See the sketch below.)
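A short sketch of the small-world effect, assuming networkx's Watts-Strogatz generator; the sizes and rewiring probabilities are illustrative choices, not values from the slides:

```python
# Sketch: a few rewired shortcuts shrink path lengths while clustering stays high.
import networkx as nx

n, k = 1_000, 10                          # ring lattice, each node tied to k nearest neighbours
for p in [0.0, 0.01, 0.1, 1.0]:
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    L = nx.average_shortest_path_length(G)
    C = nx.average_clustering(G)
    print(f"p={p:<5} avg path length={L:7.2f}  clustering={C:.3f}")
```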
  19-20. Degree distribution. [Plots: degree distribution on linear axes, peaked around $k_i \approx \langle k \rangle$, and on log-log axes, showing hubs.] In real networks the degree distribution is a power law, $p_k \sim k^{-\alpha}$, usually with $2 < \alpha < 3$; in ER graphs it is Poisson, $p_k \sim \frac{\langle k \rangle^k}{k!} e^{-\langle k \rangle}$.
  21. Barabási & Albert. How to get a power-law degree distribution? Preferential attachment (cumulative advantage): start with a graph of $q$ nodes, then (1) add a node, (2) add $q$ links to previous nodes with probability $p_i \sim k_i$, (3) repeat (1)-(2). Results: analysis by the master rate equation $p(k) = \frac{k-1}{2} p(k-1) - \frac{k}{2} p(k)$ leads to $p(k) = \frac{m(m+1)(m+2)}{k(k+1)(k+2)} \sim k^{-3}$. Preferential attachment ⇒ scale-free network. (See the sketch below.)
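An illustrative sketch (assuming networkx; the network size and $q = 3$ are arbitrary choices) showing that preferential attachment yields a heavy tail; for $p(k) \sim k^{-3}$ the complementary CDF $P(K \ge k)$ should fall off roughly as $k^{-2}$:

```python
# Sketch: grow a Barabasi-Albert network and inspect the tail of its degree distribution.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(100_000, 3, seed=7)   # q = 3 links per new node
deg = np.array([d for _, d in G.degree()])

for k in [3, 10, 30, 100, 300]:
    ccdf = np.mean(deg >= k)              # P(degree >= k), roughly ~ k^-2 here
    print(f"k={k:<4} P(K>=k)={ccdf:.2e}")
```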
  22. Scale free: so what? Why does it matter? Scale-free networks are robust against random node failure, but vulnerable to targeted attacks (take out the hubs), and have no threshold for epidemic spreading. Approach: percolation & generating functions.
  23-33. Generating functions. Definition: let $\Pr(S = k) = p_k$; then $g(x) = E(x^S) = \sum_k p_k x^k$ is the probability generating function (pgf). Properties: normalisation, $g(1) = E(1^S) = \sum_k p_k = 1$; the mean, $\langle k \rangle = \sum_k k p_k = \sum_k k p_k 1^{k-1} = g'(1)$; more generally, moments $\langle k^m \rangle = \left[\left(x \frac{\partial}{\partial x}\right)^m g\right]_{x=1}$; and for a sum $S = \sum_i S_i$ of $m$ independent copies, the pgf is $f(x) = E(x^S) = E(x^{\sum_i S_i}) = \prod_i E(x^{S_i}) = g(x)^m$. (A small numeric check follows below.)
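A small numeric check of these pgf properties, treating the pgf as a polynomial whose coefficients are the probabilities (a numpy sketch; the example distribution is made up):

```python
# Sketch: g(x) = sum_k p_k x^k as a polynomial; check g(1) = 1 and <k> = g'(1).
import numpy as np

pk = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # some distribution over k = 0..4
g = np.polynomial.Polynomial(pk)           # coefficients in increasing powers of x

print("g(1)      =", g(1))                 # normalisation: ~1.0
print("g'(1)     =", g.deriv()(1))         # mean <k>
print("sum k p_k =", sum(k * p for k, p in enumerate(pk)))
```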
  34-46. Degree generating function: ER example. Let $p_k$ be the probability that a node has degree $k$; for Erdős–Rényi, $p_k = \binom{n}{k} p^k (1-p)^{n-k}$. The pgf is then $g(x) = \sum_k \binom{n}{k} p^k (1-p)^{n-k} x^k = \sum_k \binom{n}{k} (xp)^k (1-p)^{n-k} = (px + (1-p))^n = (1 + p(x-1))^n = \left(1 + \tfrac{\langle k \rangle (x-1)}{n}\right)^n \to e^{\langle k \rangle (x-1)}$, using the binomial theorem, $\langle k \rangle = pn$, and the definition of the exponential as $n \to \infty$. Normalisation: $g(1) = e^{\langle k \rangle (1-1)} = 1$. Mean: $g'(x) = \langle k \rangle e^{\langle k \rangle (x-1)}$, so $g'(1) = \langle k \rangle$. Number of neighbours of $m$ nodes: pgf $g(x)^m = e^{m \langle k \rangle (x-1)}$, and $(g(x)^m)' = m \langle k \rangle e^{m \langle k \rangle (x-1)}$, so on average $m \langle k \rangle$. (See the numeric check below.)
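A quick numeric check (with arbitrary values, not from the slides) that the binomial pgf $(1 + p(x-1))^n$ approaches $e^{\langle k \rangle (x-1)}$ as $n$ grows at fixed $\langle k \rangle = pn$:

```python
# Sketch: compare the exact ER pgf with its large-n Poisson limit.
import numpy as np

avg_k = 5.0
x = np.linspace(0, 1, 5)
poisson_pgf = np.exp(avg_k * (x - 1))

for n in [10, 100, 10_000]:
    p = avg_k / n
    binomial_pgf = (1 + p * (x - 1)) ** n
    print(f"n={n:<6} max |difference| = {np.max(np.abs(binomial_pgf - poisson_pgf)):.4f}")
```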
  47-69. Excess degree distribution. Earlier: pick a random node; now: follow a random edge to a node. The probability that this node has degree $k$ is $\frac{k p_k}{\sum_k k p_k} = \frac{k p_k}{\langle k \rangle}$, so the average neighbour degree is $\frac{\sum_k k^2 p_k}{\langle k \rangle} = \frac{\langle k^2 \rangle}{\langle k \rangle}$, and $\frac{\langle k^2 \rangle}{\langle k \rangle} - \langle k \rangle = \frac{\langle k^2 \rangle - \langle k \rangle^2}{\langle k \rangle} > 0$: the friendship paradox. The probability that this node has $k$ other links (the excess degree) is $q_k = \frac{(k+1) p_{k+1}}{\langle k \rangle}$, with pgf $g_1(x) = \sum_k q_k x^k = \frac{1}{\langle k \rangle} \sum_k (k+1) p_{k+1} x^k = \frac{1}{\langle k \rangle} \sum_k k p_k x^{k-1} = \frac{g'(x)}{\langle k \rangle}$. For a node of degree $m$, the number of second neighbours has pgf $g_1(x)^m = \sum_k p_2(k|m) x^k$, so the distribution of second neighbours is $\sum_m p_m \sum_k p_2(k|m) x^k = \sum_m p_m g_1(x)^m = g(g_1(x))$, the distribution of third neighbours is $g(g_1(g_1(x)))$, and of $d$-th neighbours $g(g_1(\cdots g_1(x) \cdots))$. The average number of second neighbours is $\langle k^2 \rangle - \langle k \rangle$. (See the sketch below.)
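A hedged sketch of the friendship paradox on a synthetic scale-free graph (networkx assumed; the comparison of $\langle k \rangle$ with the mean neighbour degree $\langle k^2 \rangle / \langle k \rangle$ follows the formula above):

```python
# Sketch: following a random edge biases towards hubs, so <k^2>/<k> > <k>.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(10_000, 3, seed=3)
k = np.array([d for _, d in G.degree()], dtype=float)

mean_degree = k.mean()                               # <k>
mean_neighbour_degree = (k ** 2).mean() / k.mean()   # <k^2> / <k>
print(f"<k> = {mean_degree:.2f}, <k^2>/<k> = {mean_neighbour_degree:.2f}")
```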
  70-77. Giant component. Giant component (GC): there is always only one GC (and lots of small components). Let $u$ be the probability that a link does not connect a node to the GC. A node of degree $k$ is then not in the GC with probability $u^k$, so the probability that a node is not in the giant component is $\sum_k p_k u^k = g(u)$, and the size of the giant component is $S = 1 - g(u)$. But what is $u$? Self-consistency: a link does not connect to the GC with probability $u$; it connects to a node with $k$ other neighbours (the excess degree), so on average $\sum_k q_k u^k = g_1(u) = u$.
  78-79. Giant component: how to solve $g_1(u) = u$? Graphically, look for the intersection of $y = g_1(u)$ with $y = u$ on $[0, 1]$. [Plot: $g_1(u)$ against $u$, together with the line $u = g_1(u)$.] If the derivative $g_1'(1) > 1$, a solution $u < 1$ appears and so does a giant component.
  80-88. Giant component. The GC appears when $1 < g_1'(1) = \sum_k k q_k = \frac{1}{\langle k \rangle} \sum_k k (k+1) p_{k+1} = \frac{1}{\langle k \rangle} \sum_k k (k-1) p_k = \frac{1}{\langle k \rangle} \sum_k (k^2 p_k - k p_k) = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle}$. For ER graphs $\langle k^2 \rangle - \langle k \rangle = \langle k \rangle^2$, so the GC appears when $\langle k \rangle > 1$. [Plot: GC size $S$ against $\langle k \rangle$.] For scale-free graphs $\langle k^2 \rangle \to \infty$, so there is always a GC (if $2 < \alpha < 3$). (A fixed-point sketch follows below.)
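A minimal fixed-point sketch for the ER case, where $g(x) = g_1(x) = e^{\langle k \rangle (x-1)}$: iterate $u = g_1(u)$ and report $S = 1 - g(u)$. The iteration count and the $\langle k \rangle$ values are my own choices:

```python
# Sketch: giant-component size of an ER graph from the self-consistency u = g_1(u).
import numpy as np

def giant_component_size(avg_k, iterations=500):
    u = 0.5                                # probability an edge does not lead to the GC
    for _ in range(iterations):
        u = np.exp(avg_k * (u - 1))        # u = g_1(u)
    return 1 - np.exp(avg_k * (u - 1))     # S = 1 - g(u)

for avg_k in [0.5, 1.0, 1.5, 2.0, 4.0]:
    print(f"<k>={avg_k:<4} S={giant_component_size(avg_k):.3f}")
```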
  89-99. Node failure. How fast is the giant component destroyed if nodes are removed? Same approach: let $\varphi$ be the probability that a node is not removed from the network, and again let $u$ be the probability that a link does not connect to the GC. Self-consistency: either (I) the neighbour is removed, with probability $1 - \varphi$, or (II) the neighbour is not removed (probability $\varphi$) but is not in the GC through its $k$ other links ($u^k$). So the probability is $1 - \varphi + \varphi u^k$, and on average $u = \sum_k q_k (1 - \varphi + \varphi u^k) = 1 - \varphi + \varphi g_1(u)$. Solving for $u$ gives the solution.
  100-106. Node failure. Again, solving $u = 1 - \varphi + \varphi g_1(u)$ is not easy, but a GC exists if $\frac{\partial}{\partial u}\left[1 - \varphi + \varphi g_1(u)\right]\big|_{u=1} = \varphi\, g_1'(1) > 1$, i.e. for $\varphi > \frac{1}{g_1'(1)} = \varphi_c$. For ER graphs $\varphi_c = 1/\langle k \rangle$; for scale-free graphs $\varphi_c = 0$. [Plot: GC size $S$ against $\varphi$ for ER and scale-free networks.] (See the sketch below.)
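A sketch of random node failure on an ER network, iterating $u = 1 - \varphi + \varphi g_1(u)$. The reported size uses $S = \varphi (1 - g(u))$, the fraction of all original nodes that both survive and sit in the GC; that last expression is the standard percolation result and is an assumption here, since the slides only say "solve for u":

```python
# Sketch: surviving giant component under random node failure, ER case (phi_c = 1/<k>).
import numpy as np

def gc_after_failure(avg_k, phi, iterations=1000):
    u = 0.5
    for _ in range(iterations):
        u = 1 - phi + phi * np.exp(avg_k * (u - 1))   # u = 1 - phi + phi * g_1(u)
    return phi * (1 - np.exp(avg_k * (u - 1)))        # assumed: S = phi * (1 - g(u))

avg_k = 4.0                                           # so phi_c = 1/<k> = 0.25
for phi in [0.1, 0.25, 0.3, 0.5, 0.9]:
    print(f"phi={phi:<5} S={gc_after_failure(avg_k, phi):.3f}")
```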
  107-113. Node attack. What if we attack specific nodes? Same approach: let $\varphi_k$ be the probability that a node of degree $k$ is not removed, so on average $\varphi = \sum_k \varphi_k p_k$; again $u$ is the probability that a link does not connect to the GC. Self-consistency: either (I) the neighbour is removed ($1 - \varphi_k$), or (II) it is not removed ($\varphi_k$) but is not in the GC ($u^{k-1}$). So on average $u = \sum_k q_{k-1} (1 - \varphi_k + \varphi_k u^{k-1})$. Defining $f(u) = \sum_k \varphi_k q_{k-1} u^{k-1}$, this is $u = 1 - f(1) + f(u)$; solving for $u$ gives the solution.
  114. Failure and Attack. [Figure comparing the giant component under random failure and targeted attack.] (A simulation sketch follows below.)
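A simulation sketch (networkx assumed; sizes, removal fractions, and seeds are arbitrary) comparing random failure with a targeted attack that removes the highest-degree nodes first on a scale-free network:

```python
# Sketch: largest connected component after removing a fraction of nodes,
# either uniformly at random (failure) or hubs first (attack).
import random
import networkx as nx

def largest_cc_fraction(G, remove_fraction, targeted):
    H = G.copy()
    if targeted:
        order = sorted(H.nodes(), key=H.degree, reverse=True)       # hubs first
    else:
        order = random.sample(list(H.nodes()), H.number_of_nodes()) # random order
    H.remove_nodes_from(order[: int(remove_fraction * H.number_of_nodes())])
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

random.seed(0)
G = nx.barabasi_albert_graph(20_000, 2, seed=0)
for f in [0.0, 0.05, 0.2, 0.5]:
    print(f"removed {f:.0%}: failure S={largest_cc_fraction(G, f, False):.2f}, "
          f"attack S={largest_cc_fraction(G, f, True):.2f}")
```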
  115-120. Epidemics. Disease spreading: standard models use Susceptible, Infected, Recovered compartments; in SIR, transmission rate $\beta$, recovery rate $\nu$, infectious time $\tau = 1/\nu$; a neighbour is infected with probability $\varphi = 1 - e^{-\beta \tau}$. How far will it spread? The giant component. Percolation: either (I) the disease is not transmitted ($1 - \varphi$), or (II) it is transmitted ($\varphi$) but not to the GC ($u^k$). Already solved: the critical value is $\varphi_c = \frac{1}{g_1'(1)} = \frac{\langle k \rangle}{\langle k^2 \rangle - \langle k \rangle}$, giving the epidemiological threshold $\beta \tau = \log \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k^2 \rangle - 2 \langle k \rangle}$.
  121. Epidemic threshold. For ER, the threshold is $\beta \tau = \log \frac{\langle k \rangle}{\langle k \rangle - 1}$. For scale-free networks, $\langle k^2 \rangle$ diverges: there is always an epidemic outbreak. [Plot: epidemic size $S$ against $\varphi$ for ER and scale-free networks.] (A short computation follows below.)
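A short computation of the epidemic threshold from the degree moments of a network, using the formulas above, $\varphi_c = \langle k \rangle / (\langle k^2 \rangle - \langle k \rangle)$ and $\beta \tau = \log \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k^2 \rangle - 2 \langle k \rangle}$; the example graphs and their parameters are my own choices:

```python
# Sketch: estimate the SIR epidemic threshold of a network from <k> and <k^2>.
import numpy as np
import networkx as nx

def epidemic_threshold(G):
    k = np.array([d for _, d in G.degree()], dtype=float)
    k1, k2 = k.mean(), (k ** 2).mean()
    phi_c = k1 / (k2 - k1)
    beta_tau = np.log((k2 - k1) / (k2 - 2 * k1))   # valid only while <k^2> > 2<k>
    return phi_c, beta_tau

for name, G in [("ER", nx.gnp_random_graph(10_000, 5 / 10_000, seed=1)),
                ("scale-free", nx.barabasi_albert_graph(10_000, 3, seed=1))]:
    phi_c, beta_tau = epidemic_threshold(G)
    print(f"{name:<11} phi_c={phi_c:.3f}  beta*tau threshold={beta_tau:.3f}")
```

The heavier the degree tail (the larger $\langle k^2 \rangle$), the smaller both thresholds become, matching the slide's point that scale-free networks effectively always allow an outbreak.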
  122. Conclusions. Models: short paths & clustering, the small-world model; scale-free degrees, preferential attachment; many other mechanisms, e.g. triadic closure, homophily, etc.; the focus is on stylised facts. Analysis: scale-free networks are robust and spread fast, but are vulnerable to attack; generating functions greatly help the analysis; compare the observed network to a random graph or model: how does it deviate? Questions?
