
Project guide

The Art of Scientific Computing:

Complex Systems Project

Subject Handbook code:

COMP-90072

Faculty of Science

The University of Melbourne

Semester 1, 2019

Subject co-ordinator: A/Prof. Roger Rassool

Contents

1 Project 2

1.1 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Computational elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.3 Progression plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Introduction 3

2.1 Learning outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

3 Sandpiles and statistics 4

3.1 The Bak-Tang-Wiesenfeld model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

3.2 The abelian property of sandpiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

3.3 Practical statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3.4 Characteristic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

4 Get coding 10

4.1 Setting up the sandpile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

4.2 Playing with the model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

5 Extensions 12

5.1 Extension: earthquake models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

5.2 Extension: lattice gas automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

6 Assessment tasks 15

6.1 Tasks for your mid-semester report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

6.2 Tasks for your final report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

7 Further reading 16


1 Project

This project will acquaint students with some of the computational elements at play in numerical simulations. To flesh out some interesting statistics (here, interesting means non-Gaussian), this project begins with a simple sandpile model, in which very large avalanches occur more often than our usual statistical models would predict. The project will combine simulations like these, conducted on a physical grid, summarise the results as a series of statistical averages, and then analyse those averages in kind. Students will then pursue their own extension topic, applying lessons in programming, numerical statistics and critical behaviour to answer a question that simply cannot be handled with pen-and-paper mathematics alone.

1.1 Software

The purpose of this computational project is to understand some of the coding architecture

behind statistics-based simulations. To that end, we don’t want to obfuscate the functional

details of our sandpile programs but we don’t want to write random number generators from

scratch either. We’d like to avoid some high-level languages like Matlab or Mathematica, but

calling libraries within other languages is fine. We advise that you use one of:

Python

C or C#

Fortran

1.2 Computational elements

To complete the project, the following numerical techniques are required:

Random number generation and Monte Carlo methods.

Data collection and organisation.

Numerical integration of probability density functions.

1.3 Progression plan

This project consists of 4 steps that should be completed in order. The last of these steps

should occupy about half of your semester’s work.

1. Using a computer language of your choice, write a program that produces an N ×N array

which stores integer numbers, and a routine that can manipulate the sites in this array

given an address (i, j).

2. Investigate ways to generate (pseudo-)random numbers, and apply what you learn to a

sandpile model with grains added to randowm sites. Add an option to your code which

drops grains with a Gaussian probability distribution (co-centred on the sand pile).

3. Write a toppling routine that searches through your sandpile appropriately and initiates avalanches when necessary. Add to this routine (or to another place in your program) something that calculates the necessary statistics of an avalanche from start to finish (avalanche lifetime, avalanche size and so on).

4. Based on your independent research, find an extension project of your own (or ask your demonstrator for ideas!). It must address a problem to which no closed-form answer exists, and you should follow your scientific curiosity down any avenues that open up. Try to stick to problems that can be modelled with an array, use random number generation, can be analysed with (non-Gaussian) statistics, and exhibit some form of critical behaviour.
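As a rough illustration of steps 1 and 2, here is a minimal sketch in Python (one of the recommended languages). The grid size and routine names are our own illustrative choices, not part of the project specification.

```python
import random

N = 20  # illustrative grid size

# Step 1: an N x N integer array, with sites addressed as grid[i][j]
grid = [[0 for _ in range(N)] for _ in range(N)]

def add_grain(grid, i, j, amount=1):
    """Manipulate the site at address (i, j)."""
    grid[i][j] += amount

# Step 2: drop a grain at a uniformly random site...
def drop_uniform(grid):
    n = len(grid)
    add_grain(grid, random.randrange(n), random.randrange(n))

# ...or with a Gaussian probability distribution co-centred on the pile
def drop_gaussian(grid, sigma=3.0):
    n = len(grid)
    while True:
        i = round(random.gauss((n - 1) / 2, sigma))
        j = round(random.gauss((n - 1) / 2, sigma))
        if 0 <= i < n and 0 <= j < n:  # redraw if the grain misses the table
            add_grain(grid, i, j)
            return

for _ in range(1000):
    drop_uniform(grid)
print(sum(map(sum, grid)))  # 1000: one grain added per timestep
```

Keeping the site update behind a small routine like add_grain makes it easier to swap dropping rules in and out later.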


2 Introduction

Intuitively, we like to think that catastrophic events have equally catastrophic causes. In physics

terms, we might postulate some unusual circumstances that perturb the energy of the system

proportionally to the resulting catastrophe. An easy example to think about is the extinction

of the dinosaurs – it was a very large and high-energy meteorite that impacted the Earth and

set off a catastrophic chain of events. Or in more recent times, we might look to the Great

Depression, traced back to the Wall Street crash in October 1929. There are many examples

of big events with big causes, but we should not be led to believe that this is the only way dramatic

change can occur. Often we will be interested in large dissipative systems, usually with

many interconnected components. In systems like this, frequently we will find that system-level

change hinges on the drop of a pin or, as we will see in this prac, on a single grain of sand.

The laws of physics are well defined and absolute. A naive physicist might end the story

there. Just write down the Hamiltonian or Lagrangian and you’re set! Unfortunately, for

systems with huge numbers of degrees of freedom and many interacting components, the equations

of motion become computationally intractable. Solving for the exact dynamics might

take longer than the human race has left on earth. As a result, we need heuristic theories that

allow us to study the dynamics of complex systems. One such theory is that of self-organising

criticality.

Self-organising criticality was first proposed in 1987 by Bak, Tang and Wiesenfeld. The

theory states that a large and complex system with short range interactions naturally evolves

into a critical state. What does this mean? Well there are different definitions, but to put it

simply, a small perturbation to an element of a system in a critical state will result in a response

that can affect any number of elements in the system. To put it simply-er, the same mechanism

that leads to a minor response can also lead to a catastrophic response. For a system to

‘naturally evolve’ into this state, it must have some slow driving mechanism, and a means of

dissipation. For example, the Earth’s crust consists of tectonic plates, slowly moving against

each other. The microscopic components can be thought of as rigid particles, each subject to

straining force. In this case the shearing movement of tectonic plates drives the system. When

the strain on one particle exceeds a critical value, the particle ‘slips’ and transfers the strain to

its neighbouring particles. Depending on the state of the surrounding particles, this dissipation

of energy could result in further particles slipping. This process, as one might imagine, can

become extremely large (as in an earthquake) or may just involve a single event that slightly

lowers the energy of the system.

Since we cannot study exact system dynamics, we must rely on statistical approaches to get

an insight into what is happening. In driven, critical systems like the ones we will be studying,

catastrophic events are often not distributed, in a probabilistic sense, as we might expect. To

really get a handle on what is happening we will need to enter the world of non-equilibrium

statistical mechanics – goodbye nice, normal distributions, and hello power law behaviour and

scale-free networks. Non-equilibrium statistics underlie how we think about systems that display

self-organising criticality. In terms of the theory itself, one important thing to understand

is the sense in which it is holistic. When a system is critical, microscopic mechanisms (local increases

in stress, for example) are not responsible for the macroscopic observables of the system

(think earthquake size, duration, area etc). In particular the proportions of large and small

events do not depend on the exact microscopic details of what is happening (where exactly local

stress increases occur, for example). Consequentially, one cannot analyse the ‘inner workings’

of a system at the smallest scale, and from there try to explain the effects occurring on a larger

scale. In plain language, it is vain to hope that detailed understanding of a part of the system

will translate into system-level understanding.


Self-organising criticality as a framework has been used to study an extraordinarily diverse

range of systems. It has been used to characterise, analyse and predict earthquakes, financial

markets, evolution and extinction events, pulsar glitches, neuronal behaviour and much more.

2.1 Learning outcomes

1. An introduction to phenomenological/heuristic modelling in the study of complex systems.

2. Build, from the bottom-up, a physical simulation. Part of this involves being technically

adept (being able to write a reasonably-optimised computer program and collect the data

that arises), and part of this involves being physically astute – recognising which physical

variables are important and which ones are not. In turn, this speaks to the ability to

decide what makes this kind of model ‘good’ or ‘bad’, or at least effective or ineffective.

3. Introduce (broadly) the study of non-equilibrium statistical mechanics and specifically the

non-Gaussian statistics that underlie much of it.

4. Introduce the idea of self-organised criticality and 1/f noise, and through this, the appeal

of universality in dynamical systems.

5. Invite students to study some specific areas of science, like granular flow or the mechanisms

that underlie earthquakes.

6. Introduce some practical sampling techniques such as Monte Carlo methods and ergodic

measurements.

7. Introduce students to interdisciplinary thinking. By this, we mean being able to start off

with a mathematically-abstracted model of sandpiles and understand that with a change

of perspective it can be used to think about something like earthquakes too.

8. Introduce some basic data science skills, like finding relevant data in the public domain,

scraping this data from the web and manipulating it into a form that can be read by a

computer program, and then analysing the modelled data with reference to the empirical

data.

9. Finally, although this prac can be completed in any coding language, it lends itself to a

scripting language, so it is a good chance for students to use Matlab or learn Python.

3 Sandpiles and statistics

3.1 The Bak-Tang-Wiesenfeld model

When modelling a complex system, it is often impossible to create mathematical formalism

that is both sufficiently realistic and computationally or theoretically tractable. As a result,

researchers must come up with simple models that capture the important elements of the physical

system in question. If a model is suitable, it is sometimes possible to extrapolate findings

to inform about various observables pertaining to the original physical system. For systems

exhibiting self-organising criticality, there exists a deceptively simple paradigm: the sandpile.

There are a number of different styles of sandpile, but all share the fact that their dynamics

are governed by a set of very simple rules. The model we will be focusing on is called the

Bak-Tang-Wiesenfeld model. It is composed of discrete time indices and finite-state cellular

automata.

Consider a table surface, discretised into an N × N grid. At each site (i, j) on the table, a number z(i, j, t) is assigned, corresponding to the number of grains of sand present at any given time. Imagine initially the table is empty, such that z(i, j, 0) = 0. At each timestep, a grain of sand is added to a random site, such that the state Z(t) of the table at any given time can be described as Z(t) = Σ(i,j) z(i, j, t). The sandpile will continue to grow on the

table until it reaches a critical value, marking the instability of a particular site. In this case,

if any site reaches or exceeds z(i, j, t) = 4, the site will topple, losing four grains of sand, and

distributing them to its nearest neighbours. This process is called an avalanche, the properties

of which will be the study of this lab. Because we are setting four grains as the instability

threshold, we say that the critical parameter is 4. The toppling operation is described below:

z(i, j, t) = z(i, j, t) − 4 (3.1)

z(i ± 1, j, t) = z(i ± 1, j, t) + 1 (3.2)

z(i, j ± 1, t) = z(i, j ± 1, t) + 1. (3.3)

It is possible that a toppling operation will occur on the boundaries of the table, i.e., when (i, j) lies on an edge of the grid. In this case, the sand that would topple to a site off the table is simply deleted from the system.
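As a sketch of how the toppling rules and the boundary deletion might be coded in Python (the routine names and the stack-based search are our own choices; other search orders work too):

```python
def topple(grid, i, j):
    """Rules (3.1)-(3.3): site (i, j) loses 4 grains, each in-bounds
    nearest neighbour gains 1; grains pushed off the table are deleted."""
    n = len(grid)
    grid[i][j] -= 4
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n:
            grid[ni][nj] += 1

def relax(grid, k=4):
    """Topple until no site holds k or more grains; return the topple count."""
    n = len(grid)
    topplings = 0
    unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= k]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] >= k:  # may have dropped below k since being queued
            topple(grid, i, j)
            topplings += 1
            # the toppled site and its neighbours may now be unstable
            for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] >= k:
                    unstable.append((ni, nj))
    return topplings

g = [[0, 0, 0], [0, 4, 0], [0, 0, 0]]
print(relax(g), g)  # 1 [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```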

Question 1: Why is it important for the table to have finite boundaries? If we were dropping

the sand in a non-random way, how would this answer change?

The system is allowed to evolve until the slow driving mechanism of adding sand is balanced

by the dissipation mechanism of avalanching and sand falling off the table. That is, we let the

system run until we reach a statistically stationary state.

Question 2: Explain the difference between a statistically stationary state, and an equilibrium

state. Is the sandpile in equilibrium? In terms of system observables, how might we characterise

this state?

Avalanches are then recorded and statistics are built up about the distributions of their

various observables. To quantify an avalanche event, there are four main observables:

1. The avalanche size S. This quantity is measured by how many grains of sand are displaced

in a single avalanche event.

2. The avalanche lifetime T. This is the number of timesteps it takes for an avalanche to

relax the system to a critical state.

3. Avalanche area A. The number of unique sites toppled in a given avalanche. (A ≠ S).

4. Avalanche radius R. This can be measured in a number of ways, but it is essentially a

measure of the distance from the original toppling site that the avalanche reaches. For

the sake of consistency, we will define this as the maximum number of sites away from the

initial site the avalanche reaches.

An example avalanche is shown in the figure below. The avalanche in question can be

characterised by the observables S = 16, T = 4, A = 4 and R = 2.

Figure 1: An avalanche in progress.

Question 3: Is there a difference between sampling a system over time and sampling an ensemble

of systems? If so, under what circumstances? You might like to think about the ergodic

hypothesis here...


3.2 The abelian property of sandpiles

As you might expect, the patterns and states that result from the Bak-Tang-Wiesenfeld model don’t replicate the dynamics of a real sandpile. There is a great deal more physics going on in the real-world analogue, and consequently the predictions of the model are quite different.

This is an important point – the mathematical sandpile is inherently different to the physical

one. One of the main properties that makes the mathematical sandpile model so convenient to

work with when studying self-organising criticality is its Abelian property. Consider a sandpile

in the configurations in Figure (2). The sandpile undergoes an avalanche, and at the next time

step, there are two unstable sites. The question of which order we should topple the sites in

is handled by the Abelian property: it doesn’t matter. In this example, it is easy to see by

testing the two cases manually, that the resulting configuration is the same regardless of which

site is toppled first. However the situation can become non-trivial when toppling one site might

induce further instabilities in the system that wouldn’t occur for the other site were it toppled

first. This can be particularly challenging when trying to track the avalanche temporally as

well. Thus the aim of this section is to prove the Abelian property of the Bak-Tang-Wiesenfeld

sandpile model, and introduce some of the mathematical formalisms underlying it. From a

broader perspective, this should demonstrate an example of the way that mathematical formalism

can still be helpful in the study of complex systems, even if our usual differential equations

are too complicated.

Figure 2: An avalanche in progress which raises questions about abelian cascades.

To start, let’s take some of the ideas presented in the previous section and represent them

more formally. Our ‘table’ is now represented by the object V, which is a finite subset of Z^d,

where d is the dimension of our model. For any site x, we introduce a configuration function η : V → N, where η(x) gives the number of grains of sand at the position x on the table. The configuration η itself is therefore an element of N^V. Now, a configuration η can be either stable or unstable, depending on the size of any given η(x) in V at a given time.

As you might expect, a stable configuration corresponds to a table with no elements greater

than a critical parameter k, and an unstable configuration corresponds to that with at least one

value η(x) ≥ k. To formally write this, we need to introduce the concept of the toppling matrix.

The toppling matrix Vx,y is an operator that stabilises an unstable site x by distributing its elements to neighbouring sites. It takes two values, corresponding to two sites x, y ∈ V, and updates according to the height configuration of V.

The toppling matrix must satisfy the following conditions, for all x, y ∈ V: Vx,x > 0; Vx,y ≤ 0 whenever x ≠ y; and Σy Vx,y ≥ 0.


Question 4: For each matrix toppling condition, explain its significance and justify its necessity.

The actual definition of the toppling matrix is given by:

1. If x ∈ V then Vx,x = 2d.

2. If x and y are nearest neighbours then Vx,y = −1; otherwise, Vx,y = 0.

Note that by this definition, our critical parameter k is equal to 2d, where d is the dimension

of the sandpile.

Question 5: Explain and justify each equation in the definition of the toppling matrix.

Now that we have a rigorous and general definition for the toppling matrix, we can define

the toppling operator Tx, which maps a configuration η ∈ N^V to another configuration in N^V.

Essentially, it chooses a site x ∈ V and alters it and its surroundings based on the value η(x),

and the proximity of each site in V. Formally, if η(x) ≥ k, then Txη(y) = η(y) − Vx,y for each y ∈ V, and Txη = η otherwise.

Question 6: Show that Tx commutes for unstable configurations.

The above exercise is the first step in showing that this sandpile model is in fact abelian,

and if avalanches were restricted to a single branching process, we would be done! However an

avalanche begins on an unstable configuration and ends on a stable one, with the possibility of

many topplings in between. For convenience, we introduce the set ΩV to represent all stable height configurations. Therefore, the toppling transformation T is the set of operations that maps an unstable configuration to a stable one:

T : N^V → ΩV. (3.9)

Naturally, this can take multiple iterations of topplings. For example, the toppling transformation

in Figure (1) would be given by

ηt=4 = Tηt=0 = T(2,3)T(1,3)T(1,2)T(2,2)ηt=0. (3.10)

The general toppling transformation can be represented as Tη = Tx_N · · · Tx_2 Tx_1 η, where N is the number of instabilities throughout an avalanche. There are important points to

be made here. N must not be infinite, or the system can never again reach its self-organising

critical state. This indicates the importance of boundary conditions, namely that there must

exist dissipative sites, such as those on the edges that remove sand from the system.

Now that we have the underlying mathematical formalisms and basic theoretical work down pat, the proof that the sandpile is indeed abelian is left as an exercise and test of understanding!

Question 7: Prove that no matter which order we choose to perform the toppling events in,

we will always topple the same sites the same number of times during an avalanche (and thus

obtain the same final configuration).


The above question should be approached in the following way. Suppose that a certain

configuration η has more than one unstable site. In that situation, the order of the topplings is

not fixed. Clearly, if we only topple site x and site y, the order of these two topplings doesn’t

matter and both orders yield the same result. (In the physics literature, this is often presented

as a proof that T is well defined.) But clearly, more is needed to guarantee this. The problem

is that toppling x first, say, could possibly lead to a new unstable site z, which would never

have become unstable if y had been toppled first.
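The statement in Question 7 can be checked empirically on small grids before attempting the proof. A sketch (our own code, with critical parameter k = 4): relax one unstable configuration under two different toppling orders and compare the outcomes.

```python
import copy
import random

random.seed(3)

def relax(grid, pick, k=4):
    """Topple until stable; `pick` chooses which unstable site goes next."""
    n = len(grid)
    topplings = 0
    while True:
        unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= k]
        if not unstable:
            return topplings
        i, j = pick(unstable)
        grid[i][j] -= k  # sand pushed past the edges is deleted
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
        topplings += 1

# A random configuration with several unstable sites
base = [[random.randint(0, 6) for _ in range(5)] for _ in range(5)]
a, b = copy.deepcopy(base), copy.deepcopy(base)
na = relax(a, lambda sites: sites[0])              # always topple the first found
nb = relax(b, lambda sites: random.choice(sites))  # topple in a random order
print(a == b, na == nb)  # True True: same final state, same topple count
```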

3.3 Practical statistics

In this section we will give the physicist’s take on the important differences between various probability density functions, and then we will focus on power-law distributions to talk about scale-free behaviour, rare events and 1/f noise. The probability density function (pdf) of a random variable quantifies the likelihood that a particular sample of the random variable will return a value within a given range. If x is the random variable, then its pdf p(x)

variable will return a value within a given range. If x is the random variable, then its pdf p(x)

is defined so that p(x) dx equals the probability that the returned x value lies between x and

x+dx. Normalisation guarantees that the integral of the pdf over the whole domain of x is unity.
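Numerical integration of probability density functions is one of the computational elements listed in Section 1.2, and it lets you check this normalisation directly. A minimal sketch, using a Gaussian pdf and a simple trapezoidal rule (both our own choices):

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def integrate(f, lo, hi, steps=100_000):
    """Trapezoidal rule: adequate for smooth pdfs on a finite window."""
    h = (hi - lo) / steps
    total = 0.5 * (f(lo) + f(hi))
    for k in range(1, steps):
        total += f(lo + k * h)
    return total * h

# Normalisation: the integral over a wide window is (almost exactly) unity
print(integrate(gaussian_pdf, -10, 10))  # ~1.0
# p(x) dx summed from -sigma to +sigma: the familiar ~68% of a normal
print(integrate(gaussian_pdf, -1, 1))    # ~0.6827
```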

The following exercises will invite you to work with this definition, and give you an idea of

the use and abuse of probability in the world of finance, where practitioners often fail to account

properly for the chance of catastrophic events. The data in these exercises was obtained from

Wolfram Alpha (www.wolframalpha.com) and Index Fund Advisors (www.ifa.com).

Question 8: Let’s assume that you buy into an S&P 500 index fund for a certain amount of

money. In some circumstances it is appropriate to assume that your return on investment in

a month’s time will be normally distributed. Assume that the monthly returns for the last fifty

years are independently distributed and have followed a normal distribution with mean 0.87%

and standard deviation 4.33%. What is the pdf of expected percentage returns? What is the

variance? Assuming that I invested $10,000, what is the chance that I have more than $10,300

a month later? What’s the chance that I have between $9000 and $10,000 a month later?

Assume that in January 2009 the S&P lost roughly 11%. Under these assumptions, what’s the

chance that we have another month that is as bad or worse than this? In February 2009 the

S&P lost another 11%. Given the assumed independence, multiply through the probabilities to

estimate the likelihood that we had two straight months as bad as they were. Convert this to

a ‘We expect this bad luck once every n month pairs’ frequency statement. When we think of

other stock market crashes that have also occurred, clearly something is lost when we assume

that losses are uncorrelated in the mathematical models.
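The probabilities in Question 8 are cumulative areas under a normal pdf. A sketch of the mechanics in Python using the standard library’s statistics.NormalDist (variable names are our own; converting the answers into frequency statements is left to you):

```python
from statistics import NormalDist

# Monthly percentage returns, as assumed in Question 8
monthly = NormalDist(mu=0.87, sigma=4.33)

# P(return > 3%): $10,000 grows past $10,300 in one month
p_gain = 1 - monthly.cdf(3.0)
# P(-10% < return < 0%): finishing the month between $9,000 and $10,000
p_dip = monthly.cdf(0.0) - monthly.cdf(-10.0)
# P(a month at least as bad as January 2009's roughly -11%)
p_crash = monthly.cdf(-11.0)
# Two independent months that bad in a row
p_pair = p_crash ** 2
print(p_gain, p_dip, p_crash, p_pair)
```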

Question 9: Using the same numbers as above, write a short computer program to work out

what happens to the monthly returns after x months, answering the following questions. You

should be thinking about histograms and Monte Carlo methods. What is the pdf of expected

return on a $10,000 investment after 2 years? Just the plot is fine here. Compare this with the

pdf for 1 month returns. Assuming that I invested $10,000, what is the chance that I have more

than $10,300 two years later? What’s the chance that I have between $9000 and $10,000 two years later? From October 2007 through to February 2009 (so 17 months) the S&P

lost 52% in aggregate. Use your program to estimate the probability of this happening. Finally,

if you bought into the S&P fund in March 2009, the return on investment after 9 months would

have been a positive 53%. Assuming only Gaussian influences at work, what is the likelihood of

this? Convert your answers into rough (because the time periods are not exactly a year) ‘once

in a ... years’ statements. Note how the Monte Carlo approach is handy here – if we wanted to

answer these kinds of questions analytically, we would need to be thinking about Markov chains

and Brownian motion, and end up having to solve Chapman-Kolmogorov-esque equations. NB:

You don’t need to include your code from this section – just explain what you did.
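One possible shape for such a program is below (a sketch; the trial count, seed and variable names are our own choices). Each trial compounds 24 independent Gaussian monthly returns, and the list of outcomes is exactly what you would histogram to approximate the pdf.

```python
import random

random.seed(0)  # reproducible runs

MU, SIGMA = 0.87, 4.33   # monthly percentage return, per Question 8
MONTHS, TRIALS = 24, 100_000

def final_value(start=10_000.0):
    """Compound one Monte Carlo history of monthly returns."""
    v = start
    for _ in range(MONTHS):
        v *= 1 + random.gauss(MU, SIGMA) / 100
    return v

outcomes = [final_value() for _ in range(TRIALS)]
p_above = sum(v > 10_300 for v in outcomes) / TRIALS
p_below = sum(9_000 < v < 10_000 for v in outcomes) / TRIALS
print(p_above, p_below)  # histogram `outcomes` to see the 2-year pdf
```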


We find Gaussian or normal statistics intuitive because it’s pretty common for us to see

stuff in the world around us that is clustered symmetrically around a mean value, and that

is exponentially suppressed the further away we get from that mean value. Because of this

ubiquity, and because of their nice analytic properties, Gaussian distributions are a seductive

first-pass when we come to model many situations.

As you should have seen, with rare events, often this facile assumption doesn’t cut it. Where

things go pear-shaped is usually in the tails of the distribution – the extreme events are in

reality more common than the naive assumptions predict. One way to correct for this is to use

a distribution with fatter tails, maybe a Cauchy distribution. We could also assume that at some point in the tail the Gaussian distribution changes to a power law. Both of these corrections take into account the relatively increased likelihood of extreme events. To see this, let’s imagine two different pdfs with mean 0 and standard deviation a: first the Gaussian

p(x) = (2πa^2)^(−1/2) exp(−x^2/2a^2),

and then a piecewise pdf that follows the Gaussian for small |x| but crosses over to a power-law tail, p(x) ∝ |x|^(−μ), for large |x|. Here we have to take μ > 3 to get a sensible (finite) variance.

Question 10: By writing another Monte Carlo program or by brute forcing the maths, plot the

pdfs and then answer the following questions. Let y be the number of times we have to draw

from the distribution before we get a result greater than or equal to Y. For different values

of Y expressed in terms of a sensible a value, give the resulting pdf, pY (y). In terms of the

standard deviation a in the normal distribution, how far into the tail of the power law p(x) do

we need to sample before we know that the probability of more extreme x values is less than

5%? How does this compare to the 95% confidence level in the normal distribution? If you

assumed that power laws more accurately modelled rare events in financial markets, how would

this knowledge affect your assessment of risk?

As we can see, the power law has fatter tails, in the sense that you will sample very large

magnitude events much more frequently than you would if they were governed by a Gaussian

distribution. The main thing to understand about power laws is that they are the distribution

where you can say that smaller (larger) events are more likely than larger (smaller) events, in

a very straight-forward and specific sense, and that this relative likelihood is preserved as you

move through the spectrum, or as you ‘stretch’ or scale the statistics. Let’s take p(x) = ax^(−k), so that x is the number of sites toppled in our sandpile model. What happens if we take x to cx, so that with, say, c = 2 we count avalanches in units of two sites? Nothing really, the relative statistics are just shifted by a constant factor:

p(cx) = a(cx)^(−k) = c^(−k) p(x).

In fact, this is one way that we can get a feel for power law behaviour in the real world – if

we see something with scaling statistics (like earthquakes!) we know that there is a power law

hiding somewhere. Equally, if we’re looking at financial time series and we see similar behaviour

at different timescales, we should be wary of jumping straight into assumptions about normal

distributions. A lot of these ideas are formalised in mathematics, physics and computer science

in the systematic study of scale-free networks.
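This scale invariance is easy to see numerically. In the sketch below (the values of a, k and c are arbitrary illustrations), rescaling x by c multiplies a power-law pdf by the same constant c^(−k) at every x, while the ratio for a Gaussian changes with x, picking out a characteristic scale.

```python
import math

def power_law(x, a=1.0, k=1.1):
    return a * x ** -k  # p(x) = a * x**(-k), illustrative a and k

def gauss(x, s=1.0):
    return math.exp(-x * x / (2 * s * s))  # unnormalised Gaussian

c = 2.0  # rescale: x -> c*x
pl_ratios = [power_law(c * x) / power_law(x) for x in (1.0, 5.0, 40.0, 1000.0)]
g_ratios = [gauss(c * x) / gauss(x) for x in (0.5, 1.0, 2.0)]
print(pl_ratios)  # the same constant, c**-k, at every x
print(g_ratios)   # varies with x: no scale-free behaviour
```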


Finally, we come to the raison d’être of self-organising criticality – a unifying explanation of so-called ‘1/f noise’. 1/f noise is a bit of an umbrella term for phenomena which are distributed according to a power law with exponent −1. These phenomena typically depend heavily on the past history of the system, and are therefore predictable in some sense, but are nonetheless subject to chance and random fluctuations. From the 1960s onwards, 1/f noise has been studied

fairly intensively, first in the context of metals and material science, and then in different

biological and evolutionary contexts too. Because it was appearing in a lot of different places,

it was and is tempting to think that there was some underlying pattern of events that just had

different physical expressions – in a philosophical sense, that there was some unifying framework

that could be used to explain disparate events. Sometimes you will see this temptation

referred to in the study of dynamical systems as ‘universality’ – the idea being that there’s a

class of systems that have broadly similar properties, independent of the detailed dynamics.

Lest you think this is a wishy-washy proposition, it’s this kind of thinking that underlies and

is formalised in studies of renormalisation and phase change (where you can compare water

boiling to iron magnetising, for example).

The Bak-Tang-Wiesenfeld model was proposed to provide an easy-to-understand and universal template for phenomena that could give rise to 1/f noise. Subsequent work has shown that

you can map, in a precise and mathematically formal sense, systems which demonstrate 1/f

noise onto the sandpile model. Depending on your opinion of mathematical pre-eminence in

the world, we could say that self-organising critical systems demonstrate 1/f noise, or we could say that systems demonstrate 1/f noise because they are self-organising in some sense. As you

study more physics, you might start to notice that this kind of mapping-onto-what-we-know

approach is taken quite a bit – like for example with the Ising model in statistical physics.

Question 11: Research and briefly describe the appearance of 1/f noise in a particular context

– for example in nerve cells or oxide films.

3.4 Characteristic functions

• Describe the concept of this: f(k) = ∫_{−∞}^{+∞} x^k p(x) dx. Apply it to a Gaussian pdf and to a power-law pdf (which perhaps is Gaussian for small-ish x). Talk about how to generate any of these numerically from data.
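One numerical route, sketched below: by the law of large numbers, the integral f(k) can be estimated from data by averaging x^k over the samples (the sample size and the Gaussian test distribution are our own illustrative choices).

```python
import random

random.seed(1)

def moment(samples, k):
    """Monte Carlo estimate of f(k) = integral of x**k * p(x) dx."""
    return sum(x ** k for x in samples) / len(samples)

# Test on samples from a Gaussian with mean 0 and sigma = 2
data = [random.gauss(0.0, 2.0) for _ in range(200_000)]
print(moment(data, 1))  # ~0  (the mean)
print(moment(data, 2))  # ~4  (sigma**2)
print(moment(data, 4))  # ~48 (for a Gaussian, 3 * sigma**4)
```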

4 Get coding

The project will occur in three parts (plus an extension if you’re interested/time permitting).

The first is setting up a computational model of the original Bak-Tang-Wiesenfeld sandpile,

which was initially used to explain the self-organised-criticality-exhibiting phenomenon ‘1/f noise’. This initial part will essentially reproduce results from the paper P. Bak, C. Tang and K. Wiesenfeld, ‘Self-organized criticality’, Phys. Rev. A 38, 364 (1988), while familiarising you with the model.

The second part will involve playing with certain components of the model in order to change

the parameters it predicts, primarily the exponents of the power law distributions. Some of the

steps we undertake will increase the correspondence between our toy model and a ‘real’ sandpile – it will be important to analyse the key differences.

The final section is where you will cast your net more widely. You will obtain some real

data from trusted sources, and then make sure your program can import and read the data. The

data here will be records of real earthquakes and their magnitudes. You will analyse these, and

try to alter your sandpile model to reproduce the trends predicted by real earthquake events,

thinking about the changes that you make to your model in a physical context.


4.1 Setting up the sandpile

1. Write a program in a language of your choice that initialises an N × N grid such that

each value z(i, j) = 0.

2. Now add a loop structure over a variable that represents time, that adds a ‘grain of sand’

to a random lattice site at each point in time. Plot the total number of grains of sand on

the table vs. time, and create a surface plot of the table at some large time to ensure your

program is working correctly.

3. Introduce the toppling rules of Equations (3.1), (3.2) and (3.3) into your program. This

will require some thought, and constitutes the ‘heart’ of this program. Write either a

flowchart for your program, or pseudo-code. In addition to your actual code, this will be

used to assess the logic behind your program, so think about it carefully. Again, plot the

sandpile mass vs. time, and a surface plot of the table at a large time to ensure nothing

weird is happening. Think about the best way to do this - later on, we will want to be

tweaking these rules, so it would be helpful to write your program in such a way that small

tweaks do not necessitate complete rewrites!

4. Track the following quantities as a function of time, and plot their distributions: Avalanche

size, avalanche lifetime, avalanche area and avalanche radius. Think about how you plot

these distributions, and in particular the appropriateness and desirability of a log-log plot.

How would what we see in a log-log plot change if the underlying probability function

(Gaussian or power law etc) changed? Fit a power law and track the exponent of the

power law fit, and investigate if it varies for changing model parameters. (Size of table,

‘squareness’ of table, etc).

5. By fitting various functionals to the data, obtain approximate correlation functions between

each of the observables and avalanche size. Your results should be of the form ⟨y⟩ ∝ ⟨x⟩^γ

for some exponent γ.
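The first three steps above can be sketched as follows. This assumes the standard BTW rule for Equations (3.1), (3.2) and (3.3): a site with z ≥ 4 loses four grains, one to each neighbour, and grains toppled off the edge are lost. The function names and parameter values are our own choices, not a prescribed implementation:

```python
import numpy as np

def topple(z):
    """Relax the table until every site is below the critical height of 4.
    Returns the total number of toppling events (the avalanche size)."""
    size = 0
    while True:
        unstable = np.argwhere(z >= 4)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            z[i, j] -= 4
            size += 1
            # Give one grain to each in-bounds neighbour; edge grains are lost.
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < z.shape[0] and 0 <= nj < z.shape[1]:
                    z[ni, nj] += 1

def run(n, steps, rng):
    """Drop one grain per timestep on an n x n table, recording total mass."""
    z = np.zeros((n, n), dtype=int)
    mass = []
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        z[i, j] += 1
        topple(z)
        mass.append(z.sum())
    return z, mass

rng = np.random.default_rng(1)
table, mass = run(20, 5000, rng)
```

Keeping the toppling rule in its own function, as here, is one way to make sure later rule tweaks do not force a rewrite of the whole program.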

4.2 Playing with the model

Note that in this section you are not expected to stick to the absolute letter of each point.

Rather, you should be using these suggestions as a basis, and then using your initiative to

thoughtfully discuss and demonstrate an extended sandpile model, comparing this to the original.

This is a good chance to demonstrate thinking that goes beyond the nuts and bolts of the

program you have. If the approach that you take sticks to the spirit of the intended direction,

and if you have taken care to demonstrate a quantitative and qualitative understanding of what

you have done, then that is okay. If in doubt, check with your demonstrator!

1. Briefly summarise what granular mechanics as a field of study actually is. Pick one key

result or phenomenon and explain it briefly, giving an example (e.g. jamming). Research

some existing efforts to experimentally study real world sandpiles. Detail the thrust of

these experiments, what worked and what did not. The difficulty of these experiments

will hopefully impart some practical appreciation of why we use simple mathematical

and computational models to study these systems. Be grateful you are not doing an

experimental prac!

2. Try to make the sandpile program as realistic as possible. This can be done in a few

ways: instead of a toppling rule that topples a site if it reaches four grains, alter the rule

such that the site will only topple in a given direction if the height difference exceeds some

critical gradient. Also, the pile should not be able to topple to sites with greater quantities

of sand. If you’re brave, you could also add some stochastics to the mix (i.e., incorporate

probability into the toppling rules). Just make sure you log and explain all changes you


make to the original model. Plot the usual parameters, and observe what the steady state

of your table looks like. What happens to the distributions of observables in the pile now?

Can you explain why? Also discuss the changes you have made in the context of the earlier

mathematical analysis. Is the sandpile still abelian?

3. With the original model and the new one, try dropping sand only in the middle of the

table. What do you observe?

4. What happens if you drop sand in different areas simultaneously? Or according to different

rules? Maybe you drop the sand randomly, but only in one half of the table. Or

you sometimes drop 2 grains at once. Does it affect the statistics? Is there a physical

significance to what you are doing? If so, in what ways might you want to be careful in

interpreting this kind of toy model in a physical context?

5. How would you go about simulating a sloped table, bearing in mind the heuristic approach

we have taken so far in this prac? If you didn't already in the second step, implement these

changes and study the statistics. What changes and what does not? Are you surprised?

6. Suppose we wanted to think about wind - maybe we want to start looking at the way that

snow piles up on the side of a mountain, or at sand dunes in the Sahara. How would we, in

the universe of our toy sandpile, try to do this? For that matter, would we have to change

our heuristic model if we wanted to think about snow instead of sand? Why? Why not?

You do not have to implement or show this, but it is good to think about and discuss.

7. Sometimes what is interesting is the picture of what happens in the moment that the

system moves out of criticality. For your own curiosity and to get some nice pictures, try

initiating your sandpile with 4 grains at each point instead of 0. Any drops you make now

will trigger big initial effects. Look at the sandpile over successive timesteps (e.g. 1, 2, 3,

5, 10, 50, . . .) to get an idea of how it changes. Looking at it in this way can at times help

us to understand the limits of our toy model compared to the physical phenomena being

studied.

8. Think more generally about how the initial state of the sandpile affects the stationary state

– it should not (check this!) but it might affect the time it takes to reach the stationary

state. Does this let us start to think about ‘how’ critical the system is? That is, if repeated

drops did not trigger an avalanche, do you think the eventual avalanche will be bigger?

What might be a good way to quantify this? How would you start to think about waiting

times in this context? What might relaxation time be correlated with?

9. The previous investigations should have given you hands-on and contextual insight into

your model – you should have a pretty good idea of how your model can change and

why. Briefly summarise the nature and effect of four or five key model characteristics

(toppling rules, drop methods etc.) that influence your physical observables, in particular

the exponent of the power law. Why is this last feature important?
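As one illustration of the critical-gradient rule suggested in point 2, a site might shed a grain to a neighbour only when the local height difference exceeds some threshold. A minimal Python sketch, where the threshold value and the sweep order are arbitrary choices of ours rather than part of the prac:

```python
import numpy as np

CRITICAL_GRADIENT = 3  # arbitrary illustrative threshold

def topple_gradient(z):
    """Move one grain downhill wherever the height difference to a
    neighbour exceeds CRITICAL_GRADIENT; repeat until stable.
    Note sand only ever moves to strictly lower sites, and no grains
    leave the table, so total mass is conserved."""
    moved = True
    while moved:
        moved = False
        n, m = z.shape
        for i in range(n):
            for j in range(m):
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < m:
                        if z[i, j] - z[ni, nj] > CRITICAL_GRADIENT:
                            z[i, j] -= 1
                            z[ni, nj] += 1
                            moved = True

z = np.zeros((10, 10), dtype=int)
z[5, 5] = 20          # a tall central stack
topple_gradient(z)    # relaxes into a low mound around the centre
```

Each move strictly lowers the sum of squared heights, which is why the relaxation loop is guaranteed to terminate.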

5 Extensions

5.1 Extension: earthquake models

1. Let’s think about earthquakes. Briefly give an ‘idiot’s guide’ to earthquakes – what they

are, the different types, why they might happen. Focus on the physics of earthquakes, and

learn about how many different fields and types of physical expertise are needed to get a

good scientific understanding. Briefly talk about the idea of Coulomb stress and how it

relates to earthquakes.


2. Do some research (see the references section to point you in the right direction) and

find an earthquake model that utilises the sandpile model with different parameters and

conditions. Write up a summary of this new model, and make sure to tie in major elements

of the model with real physical analogues of an actual earthquake, and the ways that the

basic sandpile model is reinterpreted in a new context.

3. Go to a (trusted) website that logs earthquake events (again, see references). One way or

another, scrape some data (time and magnitude are the most important quantities, but if

you can find it, area and duration are useful too) and obtain the exponent for earthquake

magnitude vs. frequency. What do you notice about the log-log plot? Is the data as scale

free as you would have thought? Is there a suitable regime for earthquake magnitudes

that the theory of self-organised criticality is valid for? Why do you think this is? What

might this tell us about what is happening physically?

4. The earthquake model should differ from the sandpile model from before in two major

ways. First, the strain on each site in the grid should increase at the same rate per unit

time, and secondly, a collapsing site will not necessarily be conservative. To illustrate this,

we introduce a parameter 0 ≤ α ≤ 0.25. A toppling event now obeys the rules

z(i, j, t) = z(i, j, t) − K (5.1)

z(i ± 1, j, t) = z(i ± 1, j, t) + αK (5.2)

z(i, j ± 1, t) = z(i, j ± 1, t) + αK (5.3)

Clearly, α describes the conservative-ness of the system. Think about how a real earthquake

might be occurring. Does this make sense? Code up your new earthquake model, and vary

the parameter α. What system observable does this affect? Obtain some good statistics as

to how varying α affects the observable, and plot them. Describe what is happening in the

plot. Is there an observable phase change in the model when it is parametrised by α?
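A minimal Python sketch of the modified rules of Equations (5.1)–(5.3), assuming K = 4, a toppling threshold of 4, and a uniform loading rate; these numerical values, and the random initial strain, are illustrative choices only:

```python
import numpy as np

K = 4            # strain released per toppling event (illustrative)
ALPHA = 0.2      # 0 <= alpha <= 0.25; alpha = 0.25 recovers conservation
Z_CRIT = 4.0     # toppling threshold

def topple_quake(z):
    """Non-conservative toppling: each event removes K from a critical site
    but passes only alpha*K to each of its four neighbours, so strain is
    dissipated whenever alpha < 0.25. Returns the number of events."""
    events = 0
    while True:
        unstable = np.argwhere(z >= Z_CRIT)
        if len(unstable) == 0:
            return events
        for i, j in unstable:
            z[i, j] -= K
            events += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < z.shape[0] and 0 <= nj < z.shape[1]:
                    z[ni, nj] += ALPHA * K

# Uniform loading: every site's strain grows at the same rate per timestep,
# in contrast to the random grain drops of the sandpile model.
rng = np.random.default_rng(2)
z = rng.uniform(0, Z_CRIT, size=(20, 20))
sizes = []
for _ in range(1000):
    z += 0.01                  # uniform strain increase
    sizes.append(topple_quake(z))
```

Sweeping ALPHA and re-plotting the distribution of `sizes` is then a direct way to probe how conservativeness affects the event statistics.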

5.2 Extension: lattice gas automata

This extension is optional, in the sense that you should only look at it if you have the time. It

will be a good chance to demonstrate that you have understood the spirit of the prac, however,

which in turn affects how your report is judged - so please do not neglect it just because it

is optional! If you start but don’t finish, don’t worry - just write up and try to understand

what you have done. In this section we will learn how, if we wanted to, we could extend this

kind of heuristic, automata-based modelling into the territory of accurate computational fluid

dynamics. To do this, we need lattice gas automata - a few simple ideas that can be used to

intuitively simulate very complicated scenarios.

The name of the game in this kind of approach is to discretise time, space and the fluid

that you are modelling. Instead of a continuous fluid (say air) imagine that the air is made up

of many particles of wind. The particles are in motion through the discrete space, and when

two wind particles collide at a point in space, there are rules that tell us where they go next.

This will depend on the speed and direction of the incident particles. In these kind of models,

depending on how many different speeds we want, there are only a fixed number of velocities

allowed. The simplest approach is to take every particle to be at a single speed, with velocities

determined by the lattice directions. You can imagine that the aggregate of the particles and

rules behaves like a continuous fluid, particularly if the discretisation is fine. You might also

be able to see that complicated boundaries can be handled very easily – we’d just need a rule

that tells us what happens when a wind particle hits an edge from a particular direction. If we

wanted to think about snow getting blown around in wind, we need only consider a new type

of particle, and some new rules that tell us how the snow interacts with the snow and the wind.


The Navier-Stokes equations are what we use to mathematically model fluids, and the typical,

‘brute-force’ approach to this kind of modelling would be to integrate through these partial

differential equations with appropriate (complicated) boundary conditions, using some kind of

finite element method, and then add the snow particles in an ad hoc way. The lattice gas

approach has the advantage of being conceptually easy to use and adapt, and amenable to

parallelisation in a serious application. We can even recover the Navier-Stokes equations in

particular limits - have a read of the literature on lattice Boltzmann methods, if this interests

you.

1. Initial studies in this area tried the simulations with a square lattice – it was found that

the resulting simulations were pretty inaccurate. Modern, more exact simulations use a

hexagonal lattice. Can you think why?

2. Research what is meant by the HPP and FHP lattice gas models. What are the

most important things (think physical laws you already know) that this kind of model

should incorporate? What’s the Reynolds number of a fluid? How does it impact on the

level of sophistication you might need for your model? Assuming there’s only one speed

(but six velocities corresponding to different directions), and we don’t have to worry about

viscosity and particles at rest, briefly summarise the rules governing the interactions of a

single particle type at a hexagonal lattice point in a basic FHP model.

3. Because it's best to start with the simplest cases first, stick with the square lattice and

the HPP model. Draw the rules governing particle collisions, and if you can, write a model

that has a small number of particles blowing around in a box.

This is easier said than done, so have a think about the best way to do this. If you happen

to have a background in programming, an object oriented approach might be best. If you

don’t know what this means, it’s possible to be clever with the information you store at

each point in the array. Since we’re working in 2D one workaround may be to use complex

numbers to encode particle velocity. Another technique might be to store the horizontal

and vertical velocities in separate arrays, or store them as an array inside an array, and

use these to update the position in each timestep. Use absorbing boundary conditions,

and set up a constant source of wind particles to compensate (think of the source like the

sand grains you drop, except this time you have wind particles.) Give each wind particle

a random initial velocity. If your program is not well optimised (does it have as few loops,

and especially nested loops, as possible? Is it vectorised?), then calculating collisions will

take a while. That's fine, do what you can, and extrapolate (sensibly) what you can do to

talk about what you would do, if only you were a better coder (sigh). If you’re not making

any progress, try moving onto part (f). Being sure that you account for the wind blowing

out of the grid, and the wind you’re adding back in, what’s one physical observable you

can use to indicate that you have set the collision rules up properly?

4. Start to think about combining your wind model and sandpile model. You can do this

in many different ways, and there is no right answer. The end goal is that we want to

be able to begin looking at how the statistics change when there are some wind effects

incorporated. We will need some collision rules though, so think about how wind can

interact with sand. One way to proceed might be to imagine that the wind level is low,

so you don’t have many wind particles. Further, imagine that it is blowing above ground

level, so it might only affect stacks of sand that are say 3 grains high. A wind particle

hitting such a stack might blow the top grain onto the next stack, possibly triggering an

avalanche there. If you can implement such a model, study the statistics in the context of

the earlier, no-wind sandpile models.

5. Now we are in a position to start thinking about the effects of directed wind. If the wind is

preferentially blowing in a certain direction, how many wind particles do you need before


you see the sand start piling up at one end of the table? How do the statistics change?

Does this change if you add in the slope you simulated earlier? Or if you use different

drop rules? What about if you change the maximum height of each grain stack to say 10?

Have a play and discuss how realistic you think your model is, and what, if you were

specifically trying to learn about avalanches on a real-world mountain side, you might

conclude. It may be that you don’t think the model is valid in this sense at all; that’s fine,

just explain why, and explain how you would in theory extend it to be better.

6. Lastly, not to undermine what has just been done, but merely to show that we can think

about this kind of phenomena in lots of different ways, can you go back to your non-wind

model and try to introduce some noise? It might be that, suddenly and randomly, certain

stacks of sand topple, even if they are not critical - just as if they had received an extra jolt

of thermal energy. With different assumptions about the noise regimes, you should be able

to get different statistics out. If you can get similar statistics to the wind-powered model,

what does this tell us about both our models? Often in statistical mechanics a powerful

approach to take is to blackbox the noise in a model, so that you stop caring about its

specific source, and focus on the effects. If we can reproduce the effects of wind with

random, thermally-inspired noise, how might this inform say a purely stochastic model for

avalanche statistics?
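The HPP model of point 3 can be sketched by storing a 4-bit state per lattice node, one bit per direction. This is only one possible encoding; the bit assignments, the periodic (rather than absorbing) boundaries, and the grid size below are our own illustrative choices, not a prescribed implementation:

```python
import numpy as np

# HPP lattice gas sketch: each cell holds a 4-bit state, one bit per
# direction of travel (E, W, N, S).
E, W, N, S = 1, 2, 4, 8

def collide(cell):
    """Head-on pairs scatter into the perpendicular pair; every other
    configuration passes through unchanged (the standard HPP rule)."""
    if cell == E | W:
        return N | S
    if cell == N | S:
        return E | W
    return cell

def step(grid):
    """One HPP timestep: collide at each node, then stream each particle
    to the neighbouring node in its direction of travel (periodic
    boundaries, for simplicity)."""
    collided = np.vectorize(collide)(grid)
    new = np.zeros_like(grid)
    new |= np.roll(collided & E, 1, axis=1)   # E-movers shift right
    new |= np.roll(collided & W, -1, axis=1)  # W-movers shift left
    new |= np.roll(collided & N, -1, axis=0)  # N-movers shift up (row - 1)
    new |= np.roll(collided & S, 1, axis=0)   # S-movers shift down
    return new

rng = np.random.default_rng(3)
grid = rng.integers(0, 16, size=(32, 32))
n0 = sum(bin(int(c)).count("1") for c in grid.ravel())
for _ in range(10):
    grid = step(grid)
n10 = sum(bin(int(c)).count("1") for c in grid.ravel())
```

Because the collision rule swaps pairs for pairs and streaming merely relocates particles, the total particle count `n10` equals `n0`; checking such a conserved quantity is one simple physical test that the collision rules have been set up properly.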

Figure 3: A lattice gas model.

Figure 4: A zombie apocalypse simulation collected from YouTube.

6 Assessment tasks

6.1 Tasks for your mid-semester report

1. 2-4 tasks.

6.2 Tasks for your final report

1. More, including extension options.


7 Further reading

1. P. Bak, How Nature Works: The Science of Self-Organized Criticality, August 1996.

This book provides you with everything qualitative you could possibly want to know about

this topic. The first three chapters provide an excellent introduction to this lab.

2. P. Bak, C. Tang, K. Wiesenfeld, Self-organized criticality: An explanation of the 1/f noise,

Phys. Rev. Lett. 59, 381, July 1987.

The results in this paper are the focus of the first part of this lab. Familiarise yourself

with it.

3. P. Bak, C. Tang, K. Wiesenfeld, Self-organized criticality, Phys. Rev. A 38, July 1988.

This paper is an extended version of the previous. It approaches self-organised criticality

in a broader scope.

4. A. Masselot, B. Chopard, Cellular Automata Modelling of Snow Transport By Wind,

Parallel Computing Group, University of Geneva, 1995.

This paper will be useful for the extension.

5. A. Masselot, B. Chopard, A lattice Boltzmann model for particle transport and deposition,

Europhys. Lett., 42 (3), 1998.

This paper is a good demonstration of the complex systems heuristic approach in a real-world

application.

6. ETH Zurich – Chair of Entrepreneurial Risks, Complex Systems and Self-organisation,

http://bit.ly/2mHfIk4.

This is a good site from one of the world-leading research groups. The head of the group,

Professor Didier Sornette, began life as a geoscientist and helped to pioneer the study of

extreme events in the financial sector.

7. United States Geological Survey (USGS), Earthquake catalogue:

https://earthquake.usgs.gov/earthquakes/search/

Here you can download earthquake event data from past years in .csv format.

