Sunday, 15 October 2017

book review: Worldsoul

Liz Williams.
Worldsoul.
Prime Books. 2012

For Mercy Fane, the day starts as any other for a Worldsoul Librarian: choosing a weapon to fight the dangers escaping from the books. What emerges will involve not only her, but also a hidden descendant of old Norse legends, a demon from Hell itself, and an Alchemist from the Eastern quarter, in a desperate fight against those who would destroy Worldsoul.

There is a rich vein of fantasy that has heroic Librarians fighting against the dangers that can leak out of Old Books. I understand the desire for heroic librarians; I’m not so sure that having the books be the danger is the best idea in this age of anti-intellectual post-truth.

However, here we have another hero-Librarian fighting off the demons. Worldsoul is a beautifully drawn, rich and detailed world, with a complex plot drawing on a range of old-Earth mythologies. In a lesser author’s hands, the range of characters, locales, and mythologies might have resulted in fragmentation; here Williams draws them all together in a fine tapestry, with a powerful cumulative building of the plot details.

The plot itself comes to a conclusion, but the ending provides the potential for a sequel. There are hints on the web that such a sequel is “projected”, but it does not seem to exist yet. I would welcome another tale in this universe.




For all my book reviews, see my main website.

Saturday, 14 October 2017

multi-objective optimisation II

In a previous post, I described what a Pareto front is, and Deb et al (2000)’s algorithm for finding it.  We have the first element we need for a multi-objective optimisation algorithm.  There is one more subtlety first, though.

We want to get a good estimate of the whole Pareto front, not just a few points on it.  So we want the points we find to be well spread out across the front.  Deb et al define the crowding distance: we prefer less crowded to more crowded points, as the densely crowded ones are not giving us much extra information about the front.  Their algorithm is:


Here, I is the set of points, and I[i].m is the value of the mth objective function of the ith individual in I.  We can code this up in python as follows (with I being a list of points in a particular front):
def crowding_distance_assignment_single_front(pfront):
  # pfront = list of individuals, dictionary { 'state':s, 'fitness':(f1, ... fm) }
  inf = 1000000                  # "infinity"
  dist = [0] * len(pfront)       # list of distances, I_distance
  m = len(pfront[0]['fitness'])  # no of objectives (= size of 'fitness' tuple)
  for mi in range(m):
    # sifront = [(i, pfront[i]), ...] sorted on mi-th fitness values of pfront
    # sifront[j][0] = original index i of item in pfront list
    # sifront[j][1]['fitness'][mi] = mi-th fitness value of jth sorted individual
    sifront = sorted(enumerate(pfront), key=lambda x:x[1]['fitness'][mi]) 
 
    dist[sifront[0][0]] += inf    # first boundary item in list
    dist[sifront[-1][0]] += inf   # last boundary item in list
    for i in range(1,len(sifront)-1):       
      dist[sifront[i][0]] += sifront[i+1][1]['fitness'][mi] \
                             - sifront[i-1][1]['fitness'][mi]

  # sort pfront into reverse distance order, using distance list
  dist, pfront = (list(t) for t in 
                  zip(*sorted(zip(dist,pfront), key=lambda x:x[0], reverse=True)))

  return(pfront)
The algorithm works by sorting the list of individuals in order of their fitness value for objective mi.  The distance value of each point is the sum of the distances to its two neighbouring points: crowded points will have low distances, so low distances are bad.  The first and last points in the list are given a very high (good) value.  This is repeated for each objective dimension, adding up the distance measures.

The python then does one more thing than the Deb et al algorithm.  It uses the calculated distances to sort the original list of individuals into reverse distance order (isolated high-distance points at the front of the list, crowded low-distance points towards the end), and returns that sorted list.
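As a quick sanity check (a hand-made example of my own, not from the Deb et al paper), we can feed a small front through this function and look at the resulting order:
tiny_front = [ {'state': (i,), 'fitness': f} for i, f in
               enumerate([(0.0, 4.0), (1.0, 3.0), (2.0, 2.0), (2.1, 1.9), (4.0, 0.0)]) ]
ordered = crowding_distance_assignment_single_front(tiny_front)
print([ p['fitness'] for p in ordered ])
# the two boundary points (0.0, 4.0) and (4.0, 0.0) come back first ("infinite" distance);
# the point (2.0, 2.0), whose neighbours (1.0, 3.0) and (2.1, 1.9) are closest together,
# is the most crowded, and comes last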

Then we can sort the list of fronts in the obvious way:
def crowding_distance_assignment(pfrontlist):
  for i, pfront in enumerate(pfrontlist):
    pfrontlist[i] = crowding_distance_assignment_single_front(pfront)
  return(pfrontlist)
Now we have enough to make a multi-objective search algorithm:
Np = 100        # parent population size
Nq = 100        # no of children
Ngen = 250      # no of generations
Nstate = 3      # size of individual state

parents = init_popl(Np,Nstate)
children = make_children(parents, Nq)

for _ in range(Ngen):
    popl = parents + children
    # construct the list of fronts from the population (previous post) 
    pfrontlist = fast_nondominated_sort(popl)              
    # sort each front into least crowded (highest distance) order
    pfrontlist = crowding_distance_assignment(pfrontlist) 

    # new parents become the best of the previous popl of parents + children
    parents = [item for pfront in pfrontlist for item in pfront][:Np]  
    # make new children from those parents 
    #  note that parents are sorted in fitness order    
    children = make_children(parents, Nq)                                    
We need to define init_popl() to build some initial (usually random) population.
import random
def init_popl(n,nstate):
    # return list of n individuals, each with nstate state vars, each in -1..1    
    pop = []
    for _ in range(n):                      
      state = tuple([random.uniform(-1,1.) for i in range(nstate)])
      pop += [{'state': state, 'fitness': fitness(state)}]        
    return pop
Here I have simply hard-coded an initial range of +/-1, because I am demonstrating this with the equally hard-coded Kursawe function objectives:
import math
def fitness(state):
    # Kursawe function
    x1,x2,x3 = state
    f1 = -10*math.exp(-0.2*math.sqrt(x1**2+x2**2)) \
        - 10*math.exp(-0.2*math.sqrt(x2**2+x3**2))
    f2 = abs(x1)**0.8+5.*math.sin(x1**3) \
       + abs(x2)**0.8+5.*math.sin(x2**3) \
       + abs(x3)**0.8+5.*math.sin(x3**3)
    return(f1,f2)
Now all that’s left is to define make_children(), which breeds the next generation. Deb et al simply say “Binary tournament selection, recombination, and mutation operators are used to create a child population”.  So, for the sake of demonstration, let’s do the simplest possible thing here: binary tournament with a fixed mutation of the three-component state, and no recombination:
def make_children(parents, nq):
    # parents = list of items in fitness order, fittest first 
    #   (except on v first call)
    # nq = no of children to make
    
    # tournament selection + mutation
    np = len(parents)
    children = []
    for _ in range(nq):
        # binary tournament: choose two parents randomly
        p12 = random.sample(range(np),2)
        # tournament winner = lower number, because sorted
        p = parents[min(p12)]  
        s = mutate(p['state'])
        children += [ {'state': s, 'fitness': fitness(s)} ]
    return children

def mutate(state):
    x,y,z = state
    return (x+random.uniform(-0.1,0.1), 
            y+random.uniform(-0.1,0.1), 
            z+random.uniform(-0.1,0.1)) 
Clearly, init_popl(), fitness(), make_children(), and mutate() should be modified into general purpose code, but that’s not the point of this post.
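Just to indicate the direction, here is a minimal sketch (my own, not code from this post) of how the population initialiser could be generalised, by passing in the fitness function and the random state generator instead of hard-coding them:
def init_popl_general(n, nstate, fitness_fn, random_state_fn):
    # general-purpose initialiser: the fitness function and the random
    # state generator are supplied by the caller, not hard-coded
    pop = []
    for _ in range(n):
        state = random_state_fn(nstate)
        pop += [{'state': state, 'fitness': fitness_fn(state)}]
    return pop

# recovering the behaviour of the hard-coded version above:
# parents = init_popl_general(Np, Nstate, fitness,
#               lambda ns: tuple(random.uniform(-1, 1) for _ in range(ns)))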

Here are some results, for the Kursawe function, and for a population size of 100 (click to get a larger view).
After 2 generations: (top left) fitness/objective values (f1,f2), with the current fronts of the entire population as calculated using fast_nondominated_sort(), and different front plotted in different colours; (top right) the corresponding (x1,x2) state space (x3 omitted for clarity) of the entire population, with the points plotted in the same colours as in the fitness plot; (bottom left) the fittest front only, plotted with a rainbow colouring along its length; (bottom right) the state space corresponding to the fittest front, with the points plotted in the same colours as along the front.
After 25 generations; plots as above.  The fronts are converging, and the Pareto front is becoming clear.  The states corresponding to different positions on the Pareto front (that is, to different tradeoffs between the objectives) are also becoming clear. 
After 250 generations.  The Pareto front is fully defined and evenly filled; the corresponding state values are well clustered.
After 250 generations with no call of the crowding_distance_assignment() function.  The Pareto front is less well defined, and less evenly populated.
So that’s it!  Deb et al’s multi-objective optimisation algorithm, in python.

Reference

Deb, Kalyanmoy; Agrawal, Samir; Pratap, Amrit; Meyarivan, T. (2000) A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Parallel Problem Solving from Nature, PPSN VI. LNCS 1917:849–858. Springer. doi:10.1007/3-540-45356-3_83

Friday, 13 October 2017

book review: Uprooted

Naomi Novik.
Uprooted.
Pan. 2015

The wizard known as Dragon takes a village maiden every ten years, to serve him in his castle, while he keeps all the villages in the valley safe from the darkness of The Wood. This year, to everyone’s surprise, he chooses the awkward Agnieszka. What she doesn’t know is that she has magic. And her magic may be the key to unlocking the secret of The Wood. If anyone will take her seriously.

This is a great fantasy novel. Agnieszka has to learn she has magic, then learn how to use it, then learn what it needs to be used on. This sounds standard enough stuff, especially with the evil in the dark forest, and the Dragon wizard, and the king’s son, and the missing queen, and, and… Yet the plot keeps branching off in surprising directions, growing in unexpected ways, and these standard fantasy tropes are made to bear unexpected fruit.

And the plot really races along. At one point I was musing that there had already been enough material and plot twists for one or even two volumes of the typical “fat book” fantasy trilogy, but I was barely halfway through! This does not compromise the richness of the plot: Uprooted is an exciting and satisfying read.




For all my book reviews, see my main website.

Wednesday, 11 October 2017

multi-objective optimisation I

There are lots of optimisation algorithms around: I’ve used simulated annealing and evolutionary algorithms, to name but two.  However, I’ve only ever used them to optimise a single value.  But real problems often have multiple objectives to satisfy simultaneously: time, price, and quality is a traditional triple.  However, multiple objectives are often in conflict, and so it is not possible to optimise them all simultaneously: there are tradeoffs.  “Pick any two of three” is the stated tradeoff for time, price, and quality; more nuanced tradeoffs are available.

One common technique is simply to add the various objective values together, with some weight for each, to reduce the problem to a single optimisation.  But which weights?  Different weights imply different tradeoffs.
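For example (a toy sketch of my own, just to make the point concrete), a weighted sum collapses the objective tuple into a single number, and the choice of weights silently fixes one particular tradeoff:
def weighted_sum(fitness, weights):
  # fitness = tuple of objective values; weights = one weight per objective
  # the result is a single number, so a single-objective optimiser can use it,
  # but a different choice of weights picks out a different "best" solution
  return sum(w*f for w, f in zip(weights, fitness))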

So the next step is a full multi-objective optimisation system.  To see what is going on, the crucial concept of a dominated solution needs to be understood.

Let’s assume that a candidate solution s has objective values (f1, f2, ..., fn).  And let’s assume we want to minimise each of those objectives (the maximisation case is similar, mutatis mutandis).  Then solution s1 dominates solution s2 if every one of its objective values is less than the corresponding value for s2.  If, on the other hand, s2 is better in all its objectives than s1, then s2 dominates s1.  And in the case when neither dominates the other, these solutions are non-dominated.  The figure below illustrates these concepts.

Candidate solutions plotted in a 2D objective space, where we wish to minimise both objectives.  Consider the central red solution.  It dominates all candidate solutions in the upper right quadrant, because both its objective values are better than those in the dominated quadrant.  It is dominated by all candidate solutions in the lower left quadrant, because both its objective values are worse than those in that quadrant.  The other two quadrants contain non-dominated solutions: one objective is better, but the other is worse.  The large green dots along the lower edge represent solutions on the Pareto front: all the solutions that are not dominated by any other candidate.  The concepts generalise to higher dimensions when there are more than two objectives.
We can write a simple python function to calculate whether one solution dominates another, for an arbitrary number of objectives:
def dominates_min(a,b):   # for minimisation problems
  # a and b are nD tuples of the respective n objective values
  # a dominates b if a.x < b.x and a.y < b.y and ...
  return all([(ai < bi) for ai, bi in zip(a,b)])
How this works:  Let the tuples be (ax, ay, az) and (bx, by, bz).  Then zipping them together gives the list of tuples [(ax,bx),(ay,by),(az,bz)].  The list comprehension gives a list of booleans, True or False depending on whether the respective ai < bi or not.  Then the all function tests whether all the values are True.
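A few example calls:
print(dominates_min((1, 2, 3), (2, 3, 4)))   # True: smaller in every objective
print(dominates_min((1, 2, 3), (2, 1, 4)))   # False: worse in the second objective
print(dominates_min((1, 2, 3), (1, 2, 3)))   # False: equal values do not dominate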
In a complete collection of candidate solutions, the ones not dominated by any other solution form the Pareto front.  These are the best possible solutions: no other candidate performs better in all objectives.  The front describes the possible tradeoffs: values along the front represent different tradeoffs.
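Given dominates_min(), the obvious way to extract the Pareto front of a collection of objective-value tuples is to test each one against all the others (a minimal sketch of my own, not the algorithm used below):
def naive_pareto_front(fitnesses):
  # fitnesses = list of tuples of objective values, all to be minimised
  # a tuple is on the front if no other tuple dominates it
  return [ f for f in fitnesses
           if not any(dominates_min(g, f) for g in fitnesses) ]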

When we don’t have all candidate solutions to hand, maybe just a sample of solutions, we need an optimisation algorithm to find those candidates close to the actual Pareto front, and to use them to find even better ones.

But first, we need to calculate the points on the Pareto front of a given population, that is, all the candidate solutions that are not dominated by any other solutions (the green points in the figure above).  The obvious algorithm (for each candidate solution, test it against all other solutions for all objectives) is rather inefficient, being O(MN^2), where M is the number of objectives, and N is the number of solutions to test.  Deb et al (2000) give a more efficient algorithm:


We can code this up in python as follows (with comments showing how it relates to the algorithm above):
def fast_nondominated_sort_single_front(pop):
  # pop = list of individuals
  #   individual of form { 'state': (s1,s2,s3,...), 'fitness':(f1,f2,f3,...) }
  # pfront = list of individuals on the calculated "pareto front"

  pfront = [ pop[0] ]      # P' = {1}
  for pi,p in enumerate(pop):  # for each p in P and p notin P'
    if pi == 0 : continue      # (so not the first element)

    pfront = pfront + [ p ]    # P' = P' cup {p}
    # make a copy to iterate over, because pfront is being modified; 
    #   don't need the last item, which is p  (makes q neq p below True)
    pfrontcopy = pfront[:-1]   

    pf = p['fitness']
    for qi,q in enumerate(pfrontcopy):    # for each q in P' and q neq p 
      qf = q['fitness']
      if dominates_min(pf,qf):     # if p dom q
        pfront.remove(q)           #   then P' = P' \ {q}
      elif dominates_min(qf,pf):   # else if q dom p
        del pfront[-1]             #   then P' = P' \ {p}
        break    # don't have to check any more qs as p already gone

  return(pfront)
Once we have the front, we could remove it from the population, and calculate the next front, and so on.  This is useful in an optimisation algorithm, to find less fit (not on the current best front) but still possibly good solutions, for use in building even better ones.

The multiple front calculation is as follows:
def fast_nondominated_sort(pop):
  # find a list of ever decreasing fronts
  # pop = list of individuals
  # pfrontlist = list of fronts = list of lists of individuals

  popcopy = pop[:]
  pfrontlist = []
  while popcopy:
    pfront = fast_nondominated_sort_single_front(popcopy)
    pfrontlist += [pfront]
    # remove the found pfront from the pop
    popcopy = [ ind for ind in popcopy if ind not in pfront ]  

  # check that no-one has been lost
  assert len(pop) == len([item for pfront in pfrontlist for item in pfront]) 

  return(pfrontlist)
Given some population of candidate solutions, we can plot out the relevant fronts so calculated (plotting code always seems much more complicated than the actual algorithms!):
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
def plot_fronts(pfrontlist):
  # pfrontlist = list of fronts; each front is a list of individuals

  fig = plt.figure() 
  ax1 = fig.add_subplot(111) 

  # one colour per front 
  colors = matplotlib.cm.rainbow(np.linspace(0, 1, len(pfrontlist))) 
  # iterate backwards, to draw better fronts on top of less good ones 
  for i in range(len(pfrontlist)-1, -1, -1) :  
    pfront = pfrontlist[i]

    # sort members of front in order of fitness; draw a line along the front
    data = sorted([ pf['fitness'] for pf in pfront ]) 
    xs = [ i[0] for i in data ]
    ys = [ i[1] for i in data ]
    ax1.plot(xs, ys, ':o', color=colors[i], mew=0)
  plt.show()
This gives a plot like:
A sequence of fronts, found by finding the Pareto front of a population of candidate solutions, removing it from the population, finding the front of the remaining candidates, and so on.


In a later post, I will go through the rest of the Deb et al (2000) algorithm, and show how they incorporate this front-finding into a full-blown multi-objective optimiser.

Reference

Deb, Kalyanmoy; Agrawal, Samir; Pratap, Amrit; Meyarivan, T. (2000) A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Parallel Problem Solving from Nature, PPSN VI. LNCS 1917:849–858. Springer. doi:10.1007/3-540-45356-3_83

Tuesday, 10 October 2017

production-quality spam

What new spam is this?

Here’s an email I’ve just received (needless to say, I have not designed a tool called BalbiEtAl2017, which sounds like a BibTeX tag).

I haven’t clicked on the link, because, email link, duh!







For all my social networking posts, see my Google+ page

Monday, 9 October 2017

Brexit: the Athenian expedition against Syracuse

Repeating history, one disaster at a time.

Read the reasoning...




[via Danny Yee's blog]

For all my social networking posts, see my Google+ page

Sunday, 8 October 2017

phenomenon-based learning

Finland ahead of the education game yet again.
Finland Will Become the First Country in the World to Get Rid of All School Subjects
[assume there’s an interesting snippet quoted from the article here – there isn’t, because the article is “protected”, by someone who doesn’t understand how the Web works] 



[via Danny Yee’s blog]

For all my social networking posts, see my Google+ page

Saturday, 7 October 2017

book review: The Long Way to a Small Angry Planet

Becky Chambers.
The Long Way to a Small Angry Planet.
Hodder. 2014

Rosemary Harper signs on as a certified clerk to the small spaceship Wayfarer, whose multi-species crew builds small hyperspace tunnels through navigable space. She wants a fresh start to get away from her secret past, and this looks ideal. But then the Wayfarer is offered a big job: to build a long hyperspace tunnel from a newly joined member of the Galactic Commons. The job is valuable, but it will involve a long trip to the starting point. The contract will allow them to move their tunnelling operation into the big league, so they snap it up. But the long trip will expose everyone’s secrets…

Both that summary, and even more so the book cover blurb, make this sound as if it is a book about Rosemary and her past, and that thrilling adventures will ensue. In fact, it is a book with an ensemble cast, and an episodic travelogue style, as the ship moves through a variety of encounters that help uncover different aspects and secrets of both the crew members, and their various species. Stakes are high, but on the individual rather than the galactic level.

And it’s just delightful. The various alien species are alien enough to be interestingly different, without being incomprehensible. The different species are not monolithic: there are different characters within each. Humans are more fleshed out than the other species: there are three specific subgroups, of old-Earthers, Mars colonists, and the ship-born. But this is actually a plot point: the rest of the GC think humans are under-civilised because of this. The small events we see do have consequence, if sometimes only for one or two characters, and help gradually paint a rich picture of this future galaxy. There is tension and conflict and loss and revelation, but relatively little violence. The characters are simply adult and competent, rather than overly-dramatic, cartoonish, or hyper-competent. It feels real.

I once summarised a book to a friend as “nothing happens, but it’s very exciting!”. This one is maybe: “nothing happens, but it’s wonderful!” I went straight out and bought the next book.



For all my book reviews, see my main website.

Friday, 6 October 2017

sequestering carbon, several books at a time LXXV

The latest batch (this is a more than two-month-long batch, hence its somewhat unusual size)


Tuesday, 3 October 2017

Grayling again

AC Grayling's righteous wrath:
I Accuse... AC Grayling’s denunciation of Britain’s craven politicians
This is why Briefing Paper 07212 of June 3, 2015, told MPs that the EU referendum was advisory only, non-binding on Parliament and government. That is why the then Minister for Europe, David Lidington, repeated this, indeed stressed it, viva voce in the House of Commons in the debate on the referendum Bill later that same month, in alleging why there was no need for a super-majority requirement in the referendum. (Was this a deceit knowingly practised on the House of Commons by the minister and the government?) 
It was accordingly for Parliament to debate the outcome of the referendum and decide what to do in the best interests of the country. It has never done so. Parliament has never debated the outcome of the referendum. 
I accuse Parliament of a gross dereliction in this regard. I accuse it of a total failure in the exercise of its sovereignty. I accuse it of having abdicated its sovereignty, handing it to one-third of a restricted electorate who voted for a proposition that came with no plan, no roadmap, no costings, and no analysis of consequences, but supported instead by manifest distortions, falsehoods and false promises.




For all my social networking posts, see my Google+ page

Monday, 2 October 2017

spooky moon

A lovely spooky moon behind the scudding clouds

20:36 BST

Sunday, 24 September 2017

sunset

A glorious sunset this evening:

18:58 BST


Wednesday, 20 September 2017

view from a hotel window

I'm visiting Abertay University for a couple of days, to finish off a grant proposal with some colleagues here.  This evening, taking a break from editing the proposal, here is the view from my hotel window, looking east over the dock by the river Tay:




Tuesday, 19 September 2017

Julia sets

Here are some beautiful visual explanations:
How to Fold a Julia Fractal : a tale of numbers that like to turn





For all my social networking posts, see my Google+ page

Monday, 18 September 2017

Saturday, 16 September 2017

A conceptual and computational framework ...

Our latest paper, in PLOS Computational Biology, is snappily titled:  A conceptual and computational framework for modelling and understanding the non-equilibrium gene regulatory networks of mouse embryonic stem cells.

Our author summary is:
Pluripotent stem cells possess the capacity both to renew themselves indefinitely and to differentiate to any cell type in the body. Thus the ability to direct stem cell differentiation would have immense potential in regenerative medicine. There is a massive amount of biological data relevant to stem cells; here we exploit data relating to stem cell differentiation to help understand cell behaviour and complexity. These cells contain a dynamic, non-equilibrium network of genes regulated in part by transcription factors expressed by the network itself. Here we take an existing theoretical framework, Transcription Factor Branching Processes, which explains how these genetic networks can have critical behaviour, and can tip between low and full expression. We use this theory as the basis for the design and implementation of a computational simulation platform, which we then use to run a variety of simulation experiments, to gain a better understanding how these various transcription factors can combine, interact, and influence each other. The simulation parameters are derived from experimental data relating to the core factors in pluripotent stem cell differentiation. The simulation results determine the critical values of branching process parameters, and how these are modulated by the various interacting transcription factors.



Thursday, 14 September 2017

new chapter!

Got a copy of a nice book through the post today


That's because in it is a chapter by my colleagues and me:

Dominic Horsman, Viv Kendon, Susan Stepney, J. P. W. Young.  Abstraction and Representation in Living Organisms: When Does a Biological System Compute?  in Gordana Dodig-Crnkovic, Raffaela Giovagnoli, eds, Representation and Reality in Humans, Other Living Organisms and Intelligent Machines, pp.91-116, Springer, 2017

Abstract:
Even the simplest known living organisms are complex chemical processing systems. But how sophisticated is the behaviour that arises from this? We present a framework in which even bacteria can be identified as capable of representing information in arbitrary signal molecules, to facilitate altering their behaviour to optimise their food supplies, for example. Known as Abstraction/Representation theory (AR theory), this framework makes precise the relationship between physical systems and abstract concepts. Originally developed to answer the question of when a physical system is computing, AR theory naturally extends to the realm of biological systems to bring clarity to questions of computation at the cellular level.


Friday, 8 September 2017

ECAL 2017, Friday

The final day of the European Conference of Artificial Life, in Lyon; actually only half a day, to allow people to travel home.

The keynote presentation today was Sabine Hauert talking on Swarm engineering across scales: From robots to nanomedicine.  She started off with a description of her work on swarms of flocking robots, and how to control their behaviour without GPS absolute position, only short-range wifi and a compass heading, and where the fixed-wing flyers have restricted manoeuvrability.  But these are small swarms: 10s of devices.  At the other end of the scale is the use of small particles in nanomedicine, where the numbers are more like 10^13, 12 orders of magnitude larger, and the controllability is even less, having to rely mostly on diffusion and similar processes.  Most of the “programming” goes into the design of the “body” of these “brainless” particles, adjusting parameters such as size, shape, charge, material, and surface decoration with molecules that can interact with cell receptors.  Once the nanoparticles have “leaked” out of the bloodstream into the tissue, they might be activated by light or heat.  The aim is to come up with generalisable design guidelines.  One approach to searching the vast design space is to use a game to crowd-source workable designs for different scenarios.

After coffee was the final technical session of the conference.  I went to the Artificial chemistries and models of cellular dynamics 2 session.  In Transparency Of Execution Using Epigenetic Networks we heard about a form of neural network that can modify its topology during execution.  A Dynamic Model of the Phosphate Response System with Synthetic Promoters in Escherichia coli was a quite detailed simulation of a particular pathway.  I missed the last presentation, as I needed to get back and get ready for the closing session, which included some announcements about ISAL and the Artificial Life journal (I plugged the review article section), and then huge thanks to everyone involved in organising and running this splendid conference.

Then it was a picnic lunch, and saying our goodbyes (until next year?), and travelling home, our brains full of lots of wonderful new stuff.

Travel involved a tram to the train station, the express tram to the airport, a lot of waiting around (I had gone a bit early, as I had heard there were delays at passport control at Lyon; I needn’t have bothered, as they didn’t open passport control until just before the flight was scheduled), an easyJet flight to Gatwick, a train to St Pancras, a walk across the road to Kings Cross, another train, then finally a taxi, arriving home at 11pm.  One book read on the way out; another read on the way home.


Thursday, 7 September 2017

ECAL 2017, Thursday

Day four of the European Conference of Artificial Life, in Lyon, with a full day of presentations.

Again, the day started with an excellent keynote: Bill Sellers talking on Synthetic palaeontology: Reconstructing Ancient Life using Virtual Robots.  How did dinosaurs walk?  Could they run?  How can we tell when all we have is fossilised bones?  So, the idea is to build a dynamic model in the computer, which can be validated against modern animals like crocodiles and ostriches (although these do have fundamental differences from the original dinosaurs).
Only a feed forward control mechanism is needed in these models. From the dynamic model, it is possible to evolve gaits, and discover how the animals might have moved.  There are constraints that need to be considered, such as bone strength.  This means big dinosaurs probably didn’t run: the bones in their legs would have snapped on impact.  The same technology can be used to investigate the gait of early humans and various hominins, looking at the evolution from a crouched to an upright posture.

After coffee I went to the Morphogenesis session. The talks had a lot of overlap with the Morphogenetic Engineering Workshop on Monday, but added some more detail.

Lunch was organised with special “junior/senior tables” where we oldies got to impart words of wisdom to the up-and-coming generations; or just have a nice chat, actually.

After lunch I went to the session on Applications of ALife, which I was chairing.  Chairing is always a little stressful: you have to listen to the talk in order to have ready a question to ask if no-one else does (but not ask it if there’s a good flow of questions), and also have to timekeep.  The Effects of Environmental Structure on the Evolution of Modularity in a Pattern Classifier was about evolved hierarchical modularity.  Next was EvoMove: Evolutionary-based living musical companion, which I knew something about, since it was an output from the EvoEvo EU project I was involved with (which explains my several previous trips to Lyon).  The third talk was unfortunately cancelled, so we had a slightly awkward half-hour break (we don’t move talks around, in case people want to move between sessions).  Finally was Road Detection using Convolutional Neural Networks, spotting the rather ill-defined edges of some narrow roads through countryside.

Then it was off for coffee, followed by the second keynote of the day: Philippe Faure on Drug addiction and alteration of decision making process.  This was a discussion of a series of complex and subtle experiments on mice and rats, including some that ran for months: how mice balance exploration of their environment with exploitation of resources; how even genetically identical mice have different "personalities"; how their physical and social environment causes individuation; how addiction to eg nicotine also modifies behaviours other than the desire for nicotine, such as favouring immediate reward over uncertainty; and more.  One part I liked was when someone from the audience asked a question about the difference between reward and punishment: Faure answered about how the mice avoid punishment, but charmingly added “we don’t do those experiments”.

After this was an announcement about next year’s ALife conference in Tokyo.  Then it was the late breaking poster session, accompanied by wine and cheese, which was sufficiently delicious and plentiful that no further meal was needed.  I attended the European Research Council (ERC) grant information session, which introduced me to “Synergy Grants”, which sound ideal for interdisciplinary work (although they do have a success rate of 2%).







Wednesday, 6 September 2017

ECAL 2017, Wednesday

Day three of the European Conference of Artificial Life, in Lyon, with half a day of presentations, and half a day of excursions.

To start the day we had a keynote presentation by Csaba Pál on the Evolution of complex adaptations.  The emphasis here is that the current state of an evolved system, such as the complex bacterial flagellum, is not necessarily related to how it evolved.  The question is how can a system that needs several complex adaptations, each of which may be individually deleterious, actually evolve?  The answer is ... complicated.  There are many mechanisms, including non-adaptive origins such as neutral mutations, macro-mutations such as gene and chromosome duplications, mutations affecting multiple traits, pre-adaptation / exaptation, noise, dynamic environments, and more.  The four main methodological pillars used to research these issues are population genetics, systems biology, experimental evolution, and comparative genomics.  This great talk was another example of how nothing in biology makes sense except in the light of the phrase “but it’s more complicated than that”.

After coffee I went to the Artificial Chemistries track.  Structural Coupling of a Potts Model Cell examined the coupling between an organism and its environment, and how that affects the  morphological transition network.   Functional grouping analysis of varying reactor types in the Spiky-RBN AChem (my student’s paper) discussed an AChem where the binding properties are emergent properties of the composed molecules.  Time as It Could Be Measured in Artificial Living Systems discussed the simplest possible clocks that might be exploited by simple systems.  Finally, Delving Deeper into Homeostatic Dynamics of Reaction Diffusion Systems with a General Fluid Dynamics and Artificial Chemistry Model looked at a modification to a thermal Gray-Scott reaction-diffusion system that has a more physically plausible source and sink of material.

After lunch we had a choice of excursions: a vineyard, or old Lyon.  I chose the latter.  A bus took us up to the top of the old city, to the Basilica, an amazing building, cool grey stone on the outside, lush decoration on the inside.

Basilica front aspect
Basilica decorated ceiling
Basilica: one of many murals

Then it was back into the bus, and down the hill to the old town: narrow cobbled streets, tall old buildings, gorgeous smells of fresh food; hidden courtyards and towers, and secret passageways ("traboules") between the streets.  Then back on the bus to return to the hotel.

the old reflected in the new, seen through the bus window

The evening saw us all congregate for the conference dinner on the Hermès restaurant boat.  The boat seemed to spend a lot of time turning around.  GPS helped explain the reason: cast off; turn round to go south down the Rhône past the confluence; turn round to go north up the Saône; turn round to go south down the Saône past the confluence; turn round to go north up the Rhône back to the mooring.  Then back to the hotel using the excellent tram system.



Tuesday, 5 September 2017

ECAL 2017, Tuesday

Day two of the European Conference of Artificial Life, in Lyon, was the start of the conference proper.  The proceedings for the conference are open access.

We started with a fascinating keynote from philosopher Viola Schiaffonati, on Experimenting with computing and in computing: Stretching the traditional notion of experimentation.  As an ex-physicist, currently computer scientist, with a strong interest in computer simulation of biological and other complex systems, I found this a very useful exploration of the uses of simulation.  Physics-like subjects tend to have strong clear theories, and here “simulation” is more of “prediction without needing to run the experiment”, as in a virtual wind-tunnel, say.  Biology-like subjects, on the other hand, have less well-defined theories: they are more models comprising a “nest” of concepts, results and techniques from a range of sources, and simulations are more “does this nest hang together in correspondence with reality?”, or even “what tweaks do I need to make this nest hang together?”  The talk examined these concepts, put them together in an interesting way, and also had lots of useful references (not least to the “nest metaphor” paper which I need to read in detail).

After coffee I went to the Complex Dynamical Systems track.  Locating critical regions by the Relevance Index introduced a new measure useful for spotting systems at, or moving towards the “edge of chaos”.  Criticality as it could be demonstrated how agents who learn a structure that allows them to exhibit critical behaviour can exploit that learning to explore solutions to complex tasks in different environments.  Reservoir computing with a chaotic circuit was a neat demonstration of how even a (relatively) simple device can perform the complex computations of a reservoir computer.  Finally Signatures of criticality in a maximum entropy model of the C. elegans brain during free behaviour looked at potential critical behaviour in the “brain” of a (relatively!) simple biological organism.

smooth evolvable path surrounded by
rugged unevolvable peaks
After lunch I went to the Evolutionary Dynamics track.  Lineage selection leads to evolvability at large population sizes was a very nice talk demonstrating that the distant ancestor of today’s creatures was not necessarily the fittest at the time: in a large population some groups can scamper up sharp fitness peaks and get stuck, whereas others trudge slowly up a smooth shallow incline of fitness, and survive to be even fitter in the long term.  A 4-base model for the Aevol in-silico experimental evolution platform described the move from the classical binary representation to a four-base representation in the Aevol system, allowing degeneracy in the decoding.  MABE (Modular Agent Based Evolver): a framework for digital evolution research described a new modular architecture that should allow in silico evolution experiments to be set up much more straightforwardly.  Finally Gene duplications drive the evolution of complex traits and regulation described a series of experiments to investigate various facets of gene duplication in Avida: is it the genetic material in order, or just the material, or just the increase in genome size, that’s important?

Then after another coffee break (hydration is very important at these events!) there was the second keynote of the day: a fine double act from Andreas Wessel-Therhorn and Laurent Pujo-Menjouet speaking on The Illusion of Life, or the history and principles of animation.  They demonstrated the “12 principles of animation” with simple examples and then how they are realised in actual films.  They showed the clip of the death of Bambi’s mother that traumatised generations of children (myself included), noting that the actual death is never shown, despite the fact that many people remember it vividly. (I am not one of those who falsely recall it being depicted: I was traumatised by the fact of the death, not its depiction!) They also demonstrated how some of these animation principles have been used in the design of the Jibo robot’s movements, to give it “the illusion of life”.

The final event of the day was the poster session, including our Tuning Jordan algebra artificial chemistries with probability spawning functions.  I looked at all the posters quickly, but I then had to go off to the ISAL board meeting, where we reviewed the society’s activities, learned lessons from this year’s organisers, and brought next year’s organisers into the fold.





Monday, 4 September 2017

ECAL 2017, Monday

Day one of the European Conference of Artificial Life, in Lyon, was dedicated to Workshops and the Summer School.

All the workshops running in parallel meant a tricky choice.  In the morning I went to the (half day) Morphogenetic Engineering Workshop. First we had three plant-inspired talks: on simulating complex ecosystems to investigate the evolution of diversity; on guiding the growth of a system by being inspired by plant growth mechanisms; on real-time interactive systems for biological investigations, based on game engines.  After the break there were three more talks, on using the NEAT encoding scheme to evolve cellular automata rulesets; an investigation into criticality in gene regulatory networks modelled using Random Boolean Networks; a multi-level model of autopoiesis to investigate self-organisation.  So the conference was off to a great start!

After lunch I gave a talk on Open-Endedness in Simulations at the ISAL Summer School.  My very brief abstract: Open-ended behaviour in simulated systems is one goal of artificial life, yet the term “open-ended” is rarely defined. Here I discuss a recent definition in terms of models and meta-models, its consequences for discovering multi-scale open-endedness in computer simulations, and some suggested ways forward.  The talk was based on findings/rants from four recent-ish papers: Reflecting on Open-Ended Evolution (ECAL 2011), Bio-Reflective Architectures for Evolutionary Innovation (ALife 2016), Defining and Simulating Open-Ended Novelty: Requirements, Guidelines, and Challenges (2016), and Semantic closure demonstrated by the evolution of a universal constructor architecture in an artificial chemistry (2017).  Later, a colleague said “I heard you talk on this in Cancun, and thought you were mad.  This time, I think I can see what you are getting at.  Maybe next time I will believe you!”  I suspect this might be partly due to me having had 30 minutes for a highly compressed summary last year, and 90 minutes for a more relaxed approach this time.

I then had the opportunity to drop into the final session of the Living Architectures Workshop. The presentation about the HyperCell project was given via skype, and covered a lot of ground, from a design for the flexible, magnetically connecting “cells” that looked wonderful, to large scale applications for “growing” buildings.

The formal part of the day was completed with a fascinating keynote by André Brack, Honorary Research Director, CNRS, Center for molecular biophysics, Orleans, France.  The topic was on the origin of life, from Miller & Urey’s now over 60-year-old experiment, to today’s explorations of the solar system, and the possibility of life on exoplanets.  Lots of fascinating chemistry, and delightful anecdotes from a life in science (including, how to get your name on a Science paper by saying “add copper chloride”).

Then it was off to dinner with the other Associate Editors of the Artificial Life journal, for strategy and planning discussions.  I’ve been banging on for years about how important good review articles are to any discipline, so I am now responsible for the reviews part of the journal!




Sunday, 3 September 2017

view from two hotel windows

The view from my hotel window at Gatwick airport didn't look quite so interestingly techno-noirish in the cold light of dawn.


One of the advantages of staying at this particular airport hotel: I left hotel reception at 6:45, walked to the automated bag drop area, used the machine to check my suitcase, walked up to departures, was processed through security, and was in the departure lounge by 7:00.  I could have had another 15 minutes sleep!  (But there would have probably been longer queues by then.)

Then we spent 45 minutes sitting in the plane at the gate, while they fixed a dodgy-looking seal.  Sitting on the tarmac at Gatwick on my way to an Artificial Life conference is getting to be a tradition.

We landed at Lyon, and I got the express tram to the city centre, then another tram to my hotel.  As I walked up to it I realised I had been there before.  Same strange decor; same helpful friendly staff; same lack of a hotel restaurant because it's Sunday.  The view from this window is a little more typical than this morning's:


The conference starts tomorrow: I'm looking forward to it!

Saturday, 2 September 2017

view from a hotel window

At Gatwick Airport, ready to catch an early flight to Lyon tomorrow, where I will be attending the European Conference on Artificial Life.  This is the techno-thriller-like view from my hotel room window:


Sunday, 27 August 2017

an infrequent power loss

It’s not just night and clouds that reduce solar power:






For all my social networking posts, see my Google+ page

Friday, 25 August 2017

it's a trap!

It all makes horrible sense now!
To truly understand the Brexit debacle, look to Star Wars 
Hugely encouraging word from Brussels, where a fan theory has apparently developed around Britain’s Brexit plan. According to a recent Politico report, some on the EU side believe there is no way the UK could truly be as sensationally unprepared and aimless as it has appeared in the early rounds of negotiations, and that it consequently must all be a clever trap.



For all my social networking posts, see my Google+ page

Wednesday, 23 August 2017

paths to unconventional computing

Andrew Adamatzky, Selim Akl, Mark Burgin, Cristian S. Calude, José Félix Costa, Mohammad M. Dehshibi, Yukio-Peggio Gunji, Zoran Konkoli, Bruce MacLennan, Bruno Marchal, Maurice Margenstern, Genaro J. Martínez, Richard Mayne, Kenichi Morita, Andrew Schumann, Yaroslav D. Sergeyev, Georgios Ch. Sirakoulis, Susan Stepney, Karl Svozil, Hector Zenil.
East-West Paths to Unconventional Computing
Progress in Biophysics and Molecular Biology, 2017
doi:10.1016/j.pbiomolbio.2017.08.004

This is possibly the strangest paper I have been involved with; it certainly has the most authors!

The abstract says:
Unconventional computing is about breaking boundaries in thinking, acting and computing. Typical topics of this non-typical field include, but are not limited to physics of computation, non-classical logics, new complexity measures, novel hardware, mechanical, chemical and quantum computing. Unconventional computing encourages a new style of thinking while practical applications are obtained from uncovering and exploiting principles and mechanisms of information processing in and functional properties of, physical, chemical and living systems; in particular, efficient algorithms are developed, (almost) optimal architectures are designed and working prototypes of future computing devices are manufactured. This article includes idiosyncratic accounts of ‘unconventional computing’ scientists reflecting on their personal experiences, what attracted them to the field, their inspirations and discoveries.
Surprisingly, perhaps, one of the keywords is “spirituality”.  Now, I agree that unconventional computing “encourages a new style of thinking”, but this would be thinking of computation as a physical rather than a mathematical process (in my opinion), and nothing about “spirituality” (whatever that is).

But it was fun for me to write my bit, and to read my co-authors’ journeys.  You can find a pre-production version, all 82pp of it, here.



Monday, 21 August 2017

book review: Artificial Chemistries

Wolfgang Banzhaf, Lidia Yamamoto.
Artificial Chemistries.
MIT Press. 2015


[disclaimer: I received a copy from the publisher, in order to write this review for Artificial Life, doi: 10.1162/ARTL_r_00239]

An enormous quantity may be termed “astronomical”, referencing the huge span of time since the Big Bang (~10^17 seconds), the huge size of the universe (~10^27 metres), or the huge amount of material in the observable universe (~10^80 atoms). Yet these quantities pale into insignificance compared to those generated by combinatorics, where numbers are combined using multiplication and exponentiation, leading to an “explosion” in their size. The number of possible proteins of the typical length of eukaryotic proteins is 20^400 ≈ 10^520 (although not all of these would have a sensible shape or function); the number of possible memory configurations of a mere 1kB of RAM is 2^(8×2^10) ≈ 10^10^3; the number of books in Borges’ Library of Babel is more than 10^10^6 (yet hardly any are interesting books), they can be shelved in ~(10^10^6)^(10^10^6) ways, and even the library’s catalogue is huge; and so on.

Daniel Dennett, in his book Darwin’s Dangerous Idea, uses a clever trick to remind us of the sheer scales involved. He builds up an intuition, or possibly more of a feeling, of such sizes, then dubs these “Vast”, with a capital V. Ever after, the term Vast evokes that sheer scale.

Within the Vastness of all possibilities, only a subset is somehow “interesting”: most is mere noise. This subset may be Vast in its own right, yet Vanishingly Small relative to the Vastness of all possibilities. How to find such Vanishingly Small needles in the Vastness of a combinatoric haystack?
 
One technique might be dubbed “search and construct”. Search for a useful set of atoms, primitives, components, that form the basis of the Vast combinatorial space. Then use rules and processes to define or generate only those constructs with interesting structure and behaviour within that space. For the Library of Babel, the atoms are characters, the Vastness is all possible books of these characters. But what rules delimit the subspace of interesting books, books that are grammatical, readable, and worthwhile? There is chunking to form higher-level components: words. There are syntactic restrictions on the form of sentences, and further semantic restrictions to be meaningful. But to go further, to construct the subset that is literature, say, requires as yet uncodified human creativity. For computer programming, constructing a member of the interesting subset is a slightly easier task. The primitives are the relevant high-level language constructs and their syntactic constraints, the rules include well-formedness constraints and patterns, yet there is still much creativity needed to construct useful programs.

Many researchers turn to the natural world for inspiration. Evolution is one process that explores these interesting possibilities. It can be considered part of a process that searches for genomes, then constructs phenotypes. Interestingness here is viability. A range of artificial evolutionary algorithms take inspiration from these natural processes. In nature, the starting point for evolution is already something quite complex: an organism, even a single-celled organism, is non-trivial, not a random collection of molecules. Can we find a mechanism for generating this initial complexity?

Underlying life is chemistry. Chemistry is combinatorics par excellence. From a small set of atoms, chemical bonding laws produce a Vast set of molecules with structure and behaviour. It has chunking: atoms can form small molecular building blocks, such as DNA bases and amino acids, that are themselves the components in higher level constructions. Good blocks can be searched and selected for by evolution. As we have seen from the protein example above, the larger molecules produced are still a Vanishingly Small subset of the potential Vastness. Not all combinations of atoms can form stable molecules, and not all molecules that can form have a function or structure that can contribute to further construction.

Artificial Chemistry (AChem) takes such ideas from natural chemistry, in order to generate and explore a variety of forms of combinatoric Vastness in silico. If we think of AChems as a generic form of “search and construct” processes, and as rule-based novelty generators, we can see that they can be applied not simply to “chemical” problems, but to a whole range of domains where such processes are needed and used, including computing, dynamical systems, language and music, and modelling in silico and in vitro complex systems.

An AChem provides three components for virtual world explorations. First, there is the material, the virtual atoms and molecules, that provides the Vast combinatorial space of potential structures. Then there are the reaction rules, the analogues of the laws of nature in our virtual world, which define how the material combines and dissociates, and possibly even how the space it occupies is restructured (such as with P-systems). These rules implicitly define a subspace of possible structures in that Vastness. Finally, there is the algorithm, which lays out our explicit experimental setup to explore that implicit subspace, anywhere from exhaustive search to pouring some virtual stuff in a virtual bucket and watching what happens.

Nature provides just the one particular kind of material—real world atoms and molecules—and one set of rules—chemical bonding and reactions that say which molecules are possible, and which are not. The only freedom the scientist has is in the algorithm: the experimental setup that controls which molecules encounter which others, under what environmental conditions. Despite its real-world constraints, chemistry provides all the richness and complexity sufficient for life itself.

The playpen of AChem is even richer, since we also have the freedom to choose different basic material, and different rules.  Yet it has the corresponding downside in that we now have to implement the rules of our virtual world.

This new book forms a comprehensive introduction to many different facets of the discipline of AChem. The plurality in its title, Artificial Chemistries, indicates the diversity of approaches covered. It covers the why, how, and what of the choices of material, rule, and algorithm, and their consequences. For the beginning student, it provides a wide-ranging review of the subject, and its 1000-item bibliography is a marvellous resource in its own right, providing entry into the relevant scientific literature. For the practising AChemist, it provides an invaluable reference material on all topics in the discipline.

Despite its comprehensive nature, this book is no mere “annotated bibliography”: its structure provides a narrative unity for the discipline. Part I comprises four foundational chapters, laying out the philosophy and scope of the subject, illustrated with some simple example AChems. It includes a primer on basic concepts from chemistry, such as chemical reactions, the law of mass action, equilibrium, chemical bonds and catalysis. It also covers differential equation modelling and computational techniques.

Part II comprises four chapters covering the natural world inspiration. It starts with the chemistry of life, that of biochemistry and large organic molecules including proteins, RNA and DNA. The level of detail is useful for showing the underlying complexity and richness of the chemical processes that are frequently abstracted as mere string concatenation. It would probably do students good to review this material again once they have designed their initial AChem, to help them appreciate the simplifications they have made. The next chapter discusses simple cells, including their structure with lipid walls, and their dynamics in terms of metabolism. It includes discussion of autopoiesis, Robert Rosen’s ideas on organisation in living systems, origin of life theories, and more. All this is necessarily brief, as each topic deservedly has book-length treatment elsewhere, and so things can get quite dense in places: the Rosen section in particular will probably be incomprehensible to anyone who has not already encountered the material. But the bibliography will guide the curious reader to further explanations. Next come chapters on evolution and open-ended systems. Open-endedness is the holy grail of AChems: not only can they explore a Vast configuration space, they may be able to grow this very space by opening up new possibilities and dimensions through their own contingent development. These chapters contain a mix of fairly standard material given added value by being filtered through an AChem perspective—for example, evolutionary dynamics is discussed in terms of chemical reactions—and some quite deep and provocative concepts.

Part III comprises three chapters of massive literature review, documenting and categorising AChems into rewriting systems, automata, and bio-inspired. In rewriting systems the reaction rules state how a particular string or other representation is systematically changed into a new form; these include lambda calculi, P-systems, L-systems, and the like. Automata AChems comprise molecules whose atoms are assembly language-level computational instructions: molecular behaviour is given by the execution of these fragments. These include specific systems such as Tierra and Avida, as well as more generic systems such as cellular automata, von Neumann constructors, and all the way up to Turing Machines. The bio-inspired AChems hold more closely to biological mechanisms, such as enzyme reactions, RNA binding, shape-based lock-and-key binding, genetic networks, and swarms. These chapters demonstrate both a strength and a weakness of AChems: the ability to build yet another arbitrarily complex system. Some of these AChems have been examined in detail over a long period of time by research groups; others exist in only a paper or two from a single doctoral student project. These chapters can be used as a reference to find specific AChems, or as source material for developing new AChems, hopefully as a synthesis and unification of existing ones. Their comprehensive nature can be a problem on occasion: a whole algorithm may be covered in a single spare sentence. Yet the Vast bibliography leads on to more detail.
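As a flavour of the first category, here is a minimal string-rewriting sketch (my own illustration, with an invented rule set, not one of the specific systems reviewed): molecules are strings, and each reaction rule rewrites a matching substring:
import random

rules = [("ab", "ba"),           # exchange: 'ab' anywhere in a molecule becomes 'ba'
         ("aa", "a"),            # contraction: 'aa' collapses to 'a'
         ("b", "bb")]            # growth: a 'b' can duplicate

def react(molecule):
  # apply one randomly chosen applicable rule, once; otherwise an elastic collision
  applicable = [(p, r) for (p, r) in rules if p in molecule]
  if not applicable:
    return molecule
  pattern, replacement = random.choice(applicable)
  return molecule.replace(pattern, replacement, 1)

soup = ["abab", "aabb", "ba"]
for _ in range(20):
  i = random.randrange(len(soup))
  soup[i] = react(soup[i])
print(soup)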

Part IV comprises four chapters focussing on the global dynamics of general AChems. Whilst parts II and III will be best for students, this part will be of most value to more experienced researchers. First is a chapter on Organisation Theory, written with Pietro Speroni di Fenizio. This looks at conditions for and properties of closed, self-maintaining sets of molecules: sets where reactions between members produce only other members, and where each member is produced by members of the set, so the reaction network sustains itself. The following chapter discusses the dynamics of such organisations: effects of reaction rates and probabilities on their construction and maintenance. Next comes a chapter dealing with what for me is the raison d’être of AChems: emergence. It provides a discussion of self-organisation, non-equilibrium thermodynamics, chaos, and downward causation, all as they are relevant to AChems. Several deep and important concepts are each outlined in half a page, and the chapter covers a stunning range of topics. The final chapter in this part continues the theme of emergence by discussing constructive dynamical systems: how AChems can produce novelty.
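The closure property at the heart of that chapter is easy to state computationally; here is a hedged python sketch (with an invented toy rule table, not one from the book):
# toy reaction table: a pair of reactant molecules -> list of product molecules
rules = {('a', 'b'): ['c'],
         ('c', 'c'): ['a', 'b'],
         ('a', 'c'): ['d']}

def is_closed(molecules, rules):
  # a set is closed if no reaction between its members produces an outsider
  s = set(molecules)
  return all(set(products) <= s
             for (m1, m2), products in rules.items()
             if m1 in s and m2 in s)

print(is_closed({'a', 'b', 'c'}, rules))       # False: a + c produces d, outside the set
print(is_closed({'a', 'b', 'c', 'd'}, rules))  # True: every product stays inside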

Part V comprises five chapters on applications of AChems to a wide range of domains. Here we get discussion of everything from robotics to unconventional computation, from nuclear physics to economics, from modelling biological systems to synthetic biology.

The book also includes an appendix giving details of the PyCell AChem package, which provides an immediate entry to computational AChems.

This book is really three or more significant books rolled into one, as needed to cover the breadth of the subject. There are interdisciplinary issues here: a practitioner needs to know a lot about a wide range of subjects. As such, it is a remarkable work of scholarship, bringing together a whole host of diverse information, and synthesising it into a coherent and valuable account of the discipline of Artificial Chemistry. I learned a lot from reading it; not just the material that was new to me, but also new ways of looking at known material, and the valuable syntheses of a wide range of concepts. The authors should be commended for their impressive contribution to the field. Any AChemist, ALifer or, more generally, any nature-inspired computer scientist or engineer, will find Artificial Chemistries a valuable addition to their research bookshelf.




For all my book reviews, see my main website.

Saturday, 19 August 2017

The Geometry of Speed Limiting Resources

Benjamin Russell, Susan Stepney.
The Geometry of Speed Limiting Resources in Physical Models of Computation.
International Journal of Foundations of Computer Science 28(4):321-333, 2017.
doi:10.1142/S0129054117500204

This is the latest paper in our series on using geometrical approaches to determining speed limits for quantum operations: it takes time to change a quantum state, which will limit the speed of quantum computers.  The series started with a generalisation of the Zeppelin navigation problem, and continued with a further generalisation allowing us to use the word “brachistochrone” in the title.  The current paper is a further generalisation still, to a wider class of systems.

Abstract:
We study the maximum speed of quantum computation and how it is affected by limitations on physical resources. We show how the resulting concepts generalize to a broader class of physical models of computation within dynamical systems and introduce a specific algebraic structure representing these speed limits. We derive a family of quantum speed limit results in resource-constrained quantum systems with pure states and a finite dimensional state space, by using a geometric method based on right invariant action functionals on SU(N). We show that when the action functional is bi-invariant, the minimum time for implementing any quantum gate using a potentially time-dependent Hamiltonian is equal to the minimum time when using a constant Hamiltonian, thus constant Hamiltonians are time optimal for these constraints. We give an explicit formula for the time in these cases, in terms of the resource constraint. We show how our method produces a rich family of speed limit results, of which the generalized Margolus–Levitin theorem and the Mandelstam–Tamm inequality are special cases. We discuss the broader context of geometric approaches to speed limits in physical computation, including the way geometric approaches to quantum speed limits are a model for physical speed limits to computation arising from a limited resource.
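For orientation, the two special cases named in the abstract are the standard quantum speed limits on the time to evolve to an orthogonal state (these are the textbook forms, not the generalised resource-constrained expressions derived in the paper):

t_\perp \;\ge\; \frac{\pi\hbar}{2\,\Delta E} \quad\text{(Mandelstam–Tamm)},
\qquad
t_\perp \;\ge\; \frac{\pi\hbar}{2\,\langle E\rangle} \quad\text{(Margolus–Levitin)},

where ΔE is the energy uncertainty of the state and ⟨E⟩ its mean energy above the ground state; the larger (tighter) of the two bounds applies.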


Thursday, 17 August 2017

book review: Narrative Theory and the Cognitive Sciences

David Herman, ed.
Narrative Theory and the Cognitive Sciences.
CSLI. 2003

The problem is this. We understand the world through narrative (allegedly). Complex systems are unnarratable (so it seems). Therefore, we literally cannot understand complex systems. Since most of today’s big problems are complex systems (climate, poverty, disease, what have you), this is a bit of a snag. A small group of us are working on a research programme called Narrating Complexity; this book provides some background reading for me from the narrative cognition side of the coin.

I haven’t read all of the book, only those chapters that seem immediately relevant to the Narrating Complexity programme (plus a fourth one, as the train was slower than I was expecting). But the chapters I have read are very good, and my ranking is based on these; the book is by no means “unfinishable” according to my classification: it is simply “unfinished”.

Abbott discusses why complex emergent systems may be unnarratable. Turner’s idea of blends compressing down to the human scale might give a hint towards a solution to this: a system, rather than being a “mere heap”, can be conceptualised as an entity, albeit maybe not one with an agency that we would recognise. Herman shows why we need to be able to do this. And Jahn provides the beginnings of a proto-computational model of how parts of story formation may occur.

Mark Turner. Double-scope Stories.
Humans can recall or imagine stories that run counter to the events we are experiencing, and not get confused by this. We can take a story, and blend it with a real event, to provide new meaning. And we can take two distinct stories, possibly with clashing structures, and blend those, to get a new “double-scope” blended story with its own emergent structures, allowing us to generate novel ideas and concepts. We can continue doing this, until stories have many levels of blended complexity. One of the things such blending can do is compress expansive conceptual structures down to human scale, where we can grasp them. Turner dissects several examples, including: a scene from the 17th century French play Phèdre; a concept such as punishment that is derived from a blend; the Old English poem The Dream of the Rood that has many levels of blending; and even a children’s story that has several sequential blends, and may be thought of as teaching the concept of blending.

H. Porter Abbott. Unnarratable Knowledge: The Difficulty of Understanding Evolution by Natural Selection.
We understand the world through explanatory narratives of entities with agency. Parts of the world that do not have suitable structure are unnarratable, and hence are not easily understood. Evolution is one such process. Neither natural selection nor species have agency, nor are they narrative entities. Such processes have two levels: the lower level of swarms of interacting individually-narratable individuals, and the higher level of emergent collective systems that are more than the sum of their individual parts, without narratable agency. Non-scientific forms of explanation, which specifically invent entities with the agency to guide or control the higher levels, have much greater narrative power, and so can find readier acceptance among the public. It may be that complex systems are fundamentally unnarratable: “There isn’t a story. It’s more like tending a garden, only you’re growing it with 10,000 other gardeners.”

David Herman.  Stories as a Tool for Thinking.
Narrative can have many cognitive functions. It is a system for structuring patterns of events progressing through time: for structuring processes. It can be used to “chunk” experiences into “frames” of stereotypical experiences, which can then be used to compare the typical against the actual. This helps us to understand the world more, and to memorise less. It allows us to generate and evaluate what-if scenarios. It allows us to draw coherent system boundaries: to extract and bound a relevant collection of participants, events, and structures from the overall stream of events we experience. By requiring a “beginning”, a “middle”, and an “ending”, it allows us to draw temporal system boundaries, and provides a resource for closure. It allows us to conceptualise causal relationships. And it provides a means for people to think together.

Manfred Jahn. Awake! Open your eyes! The Cognitive Logic of External and Internal Stories.
This chapter investigates the difference between external stories (a narrated tale) v internal stories (such as a dream or imagining). However, this apparently simple partition becomes fragmented on deeper investigation, as stories go through several stages of external and internal forms as they develop: they are narrated, heard, internalised, modified, re-narrated, and so on. This leads to a new cyclical model of stories, with a computational flavour.




For all my book reviews, see my main website.