Saturday 28 October 2017

TV review: Dark Matter, season 1

Six people wake up from stasis on an interstellar spaceship: all have lost their memories. Which one of them wiped all their minds? And why? They must work as a team to survive, but how can they do so when any one of them may be a traitor?

As the season progresses we learn more about the crew. Initially, it seems they are a gang of criminal mercenaries, ready to hire out to various Corporations to destroy inconvenient colonies, steal dangerous research, and more. But as we, along with them, learn more about their backgrounds, it becomes clear that this simplistic picture is far from the truth. Each of them has deeper secrets, and no-one is who they seem.

It’s a pity that the three “women” each have to be atypical in some particular way: the kid, the android, the [spoiler], whereas the men are all simply men with their own agendas. But, hey ho, at least there are women here.

I don’t know if it’s just me, but I’m sure after all this time I would have given the android a name. Mind you, the crew are still calling each other by their waking-order numbers, which makes it complicated to remember who’s who. (Which one is “Four” again?) In fact, their strangely spacious spaceship, the Raza, is the only main character with a name!

Each episode has some good action, moves the arc along, and reveals more background. Sit back and enjoy some fun hijinks with mercenaries who are actually the only good guys around! And be prepared for a massive end-of-season cliffhanger.




For all my SF TV reviews, see my main website.

Friday 27 October 2017

sparrowhawk back

Sparrowhawk seen in the garden this morning:

I’ll just rest a bit here in the warm autumn sunshine …

… and have a good stretch.

Let’s turn to warm my front … ooh, it’s a bit windy!

I’d better tidy myself up …

Is that someone behind me? … okay, no

I’m ready for my closeup now

Shot through glass again, so not the best focus/colour – but still a magnificent sight.

We had glimpsed it a few weeks back.  Arriving home in the car, we spotted it as it flew up from the ground in one direction, and a pigeon in a surprisingly large cloud of feathers flew off in the opposite direction.  Sorry we interrupted your dinner!  Today the view was a little more relaxed.

Wednesday 25 October 2017

book review: What Works

Iris Bohnet.
What Works: gender equality by design.
Harvard University Press. 2016

We hear a lot about people trying to fix gender equality issues with little success, or even denying there is a problem. This book provides evidence that there is indeed a problem, explains why many existing approaches to tackling it don’t work, and lays out proven ways to ameliorate it.

I’m taking it as given that there is a problem, but if you disagree, Bohnet provides plenty of evidence of its existence. The problem is not restricted to the misogynists actively discriminating; it is also due to the unconscious biasses we all have, sabotaging our best efforts. But unconscious bias training has very little effect. And stereotypes are hard to overcome: if women aren’t appointed to certain positions because of the stereotype that they aren’t appropriate in those positions, then there will never be any evidence to overcome the stereotype. So what should we do?

The answer Bohnet advocates is behavioural design: changing not our innermost biasses, but nudging what we do in the right direction. After all, a bias that is never acted on doesn’t really matter. So Bohnet lays out a series of design changes – to our hiring and promotion processes, to our team building, to our norms, and more – to make it easier to act in a more inclusive way. These can be implemented piecemeal using an unfreeze (the old behaviours) – change (to the new behaviours) – refreeze (stick with the new behaviours) process. (This reminded me somewhat of Arnold’s microresolution approach; the changes here are “micro” relative to the business scale.)

Examples of behavioural design include: recruit staff in batches, rather than one at a time, to reduce the temptation to go for the standard option, and to allow for more diverse choices; interview one-on-one rather than in panels, and aggregate the individual interviewers’ independent scores, to avoid groupthink. Quotas can help level the playing field; to get round the perception that the “beneficiaries” of the quota are under-qualified, first choose a long-list of candidates based on quality, and only then use the quota to increase diversity; everyone eventually chosen has passed the same quality threshold.

Some of the evidence shows possibly counter-intuitive effects. For example, having a “token” minority can backfire: the way our biassed brains work means that singletons will typically be judged by their group stereotype, not by their individual qualities. Including more than one person from the particular group allows each to be seen and judged more as an individual, rather than just as a representative of their class. This is the “critical mass” effect: a minority shouldn’t be present as less than one third, or three people, in total. This is an interesting approach. It implies that if you have a class of say 40 students, 30 men and 10 women, to be partitioned into teams of four, then it is much better to have five teams with two men and two women, and five teams all men, than to have 10 teams of three men and one woman.

There are many more relatively simple ideas for change here, from wording in job adverts to de-risking applications, from negotiation processes to stereotype threats, from the importance of role models to implementing transparent processes. And Bohnet is a strong advocate for the use of data to determine the presence and shape of the problem, and for controlled experiments to determine the effectiveness of the interventions.

I have just summarised parts of the advice: Bohnet provides the rationale and the evidence. If you are serious about improving gender equality, and equality for other under-represented groups, then this is the book for you.




For all my book reviews, see my main website.

Tuesday 24 October 2017

autumn

A few random photos from the garden.

a falling apple spears itself on a fennel stalk
the sedums give marvellous autumn colour
one of the many frogs in the garden at the moment

Monday 23 October 2017

book review: The Language Hoax

John McWhorter.
The Language Hoax: why the world looks the same in any language.
Oxford University Press. 2014

McWhorter is arguing against the notion that language affects thought in some profound way. He does not deny that subtle experiments consistently show that speakers of different languages have minutely different reaction times, or other differences, in certain respects. But that is the point: they are small, almost undetectable, differences, not huge contrasts. A difference that makes hardly any difference is hardly a difference.

His main argument is that people think these things are more impressive than they are because of the way they are reported in the press; that they think the differences are stranger and more unusual than they are because they don’t know enough different languages; and that they think the differences are more important than they are because of some kind of paternalistic anti-racism (it is typically non-English languages that are shown to make more distinctions than does English, despite there being examples in the opposite direction).

In particular, the lack of knowledge of further languages exhibiting the particular features can make for wild over-generalisations. English doesn’t make this specific distinction, but language X does: therefore speakers of X must be more sensitive to these distinctions; somehow speakers of X need to make these distinctions because of their rich culture. On the face of it, this might sound plausible; but once you learn that languages W, Y, Z of nearby peoples also don’t make these distinctions, whereas languages A, B, C of very different peoples do, the argument is less persuasive. For example, McWhorter discusses the Amazonian Tuyuca language, which has evidential markers, little suffixes that indicate how you came to know the things you say (I hear, I am told, I don’t know for sure, ...):
[p39] One could suppose it must have something to do with living in a rain forest where one must always be on the alert to dangerous animals, or to the presence of other animals on which one depends for sustenance. Wouldn't being a Tuyuca seem to require constant attention to whether one hears something, whether one can depend on someone's statement that there is a new source of a certain food somewhere far off, and so on?
   This sounds eminently plausible when we think only of the Tuyuca. However, as odd as evidential markers seem to an English speaker, they are actually quite common worldwide. Crucially, to perceive any kind of link between culture and evidential markers from a worldwide perspective is […] extremely difficult.
   Basically, to link evidential markers to what a people are like is to say that some groups are more skeptical than others. […] However, who among us is prepared to say that the Ancient Greeks, who produced some of the world's first philosophical treatises scrupulously examining all propositions no matter how basic, and lived in a society always under siege from other empires as well as from rival Greeks themselves, were a relatively accepting, unskeptical people with only a modest interest in sources of information?
It is not that the subtleties of language structure greatly influence thought, and that those differences are caused by local cultures and environments. It is rather that languages just naturally and continually embellish and simplify and grow and shrink and change, and have manifold complexities in different parts of their structure.

McWhorter brings to bear his immense knowledge of language in this lively and provocative small book. Having read it, I will no longer swallow reports that language significantly affects thought, and I now know a lot more about the rich complexity of languages.




For all my book reviews, see my main website.

Sunday 22 October 2017

Christmas teaser

I see that the cover of last week’s Radio Times shows Clara Oswald impersonating Queen Victoria.

Is this a teaser for the BBC Christmas special?  Maybe hinting at a cybermen-kidnapping-royalty plot, or even a prequel to Rose’s role in the Royal werewolf plot?

Or am I just overthinking this?


Friday 20 October 2017

Haskell fibs

I've not seen this before.  It is just so cool:
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)



For all my social networking posts, see my Google+ page

Thursday 19 October 2017

sledding to Grieg

Some people have waaaay too much time on their hands ... which is fortunate for the rest of us!
Little sled person treks Line Rider track synchronized to “In the Hall of the Mountain King”




For all my social networking posts, see my Google+ page

Sunday 15 October 2017

book review: Worldsoul

Liz Williams.
Worldsoul.
Prime Books. 2012

For Mercy Fane, the day starts as any other for a Worldsoul Librarian: choosing a weapon to fight the dangers escaping from the books. What emerges will involve not only her, but also a hidden descendant of old Norse legends, a demon from Hell itself, and an Alchemist from the Eastern quarter, in a desperate fight against those who would destroy Worldsoul.

There is a rich vein of fantasy that has heroic Librarians fighting against the dangers that can leak out of Old Books. I understand the desire for heroic librarians; I’m not so sure that having the books be the danger is the best idea in this age of anti-intellectual post-truth.

However, here we have another hero-Librarian fighting off the demons. Worldsoul is a beautifully drawn, rich and detailed world, with a complex plot drawing on a range of old-Earth mythologies. In a lesser author’s hands, the range of characters, locales, and mythologies might have resulted in fragmentation; here Williams draws them all together in a fine tapestry, with a powerful cumulative building of the plot details.

The plot itself comes to a conclusion, but the ending provides the potential for a sequel. There are hints on the web that such a sequel is “projected”, but it does not seem to exist yet. I would welcome another tale in this universe.




For all my book reviews, see my main website.

Saturday 14 October 2017

multi-objective optimisation II

In a previous post, I described what a Pareto front is, and Deb et al (2000)’s algorithm for finding it.  We have the first element we need for a multi-objective optimisation algorithm.  There is one more subtlety first, though.

We want to get a good estimate of the whole Pareto front, not just a few points on it.  So we want the points we find to be well spread out across the front.  Deb et al define the crowding distance: we prefer less crowded to more crowded points, as the densely crowded ones are not giving us much extra information about the front.  Their algorithm is:

crowding-distance-assignment(I):
    l = |I|
    for each i: I[i].distance = 0
    for each objective m:
        I = sort(I, m)
        I[1].distance = I[l].distance = ∞
        for i = 2 to (l−1):
            I[i].distance += I[i+1].m − I[i−1].m
Here, I is the set of points, and I[i].m is the value of the mth objective function of the ith individual in I.  We can code this up in python as follows (with I being a list of points in a particular front):
def crowding_distance_assignment_single_front(pfront):
  # pfront = list of individuals, dictionary { 'state':s, 'fitness':(f1, ... fm) }
  inf = 1000000                  # "infinity"
  dist = [0] * len(pfront)       # list of distances, I_distance
  m = len(pfront[0]['fitness'])  # no of objectives (= size of 'fitness' tuple)
  for mi in range(m):
    # sifront = [(i, pfront[i]), ...] sorted on mi-th fitness values of pfront
    # sifront[j][0] = original index i of item in pfront list
    # sifront[j][1]['fitness'][mi] = mi-th fitness value of jth sorted individual
    sifront = sorted(enumerate(pfront), key=lambda x:x[1]['fitness'][mi]) 
 
    dist[sifront[0][0]] += inf    # first boundary item in list
    dist[sifront[-1][0]] += inf   # last boundary item in list
    for i in range(1,len(sifront)-1):       
      dist[sifront[i][0]] += sifront[i+1][1]['fitness'][mi] \
                             - sifront[i-1][1]['fitness'][mi]

  # sort pfront into reverse distance order, using distance list
  dist, pfront = (list(t) for t in 
                  zip(*sorted(zip(dist,pfront), key=lambda x:x[0], reverse=True)))

  return(pfront)
The algorithm works by sorting the list of individuals in order of their fitness value for objective mi.  The distance value of each point is the sum of the distances to its two neighbouring points: crowded points will have low distances, so low distances are bad.  The first and last points in the list are given a very high (good) value.  This is repeated for each objective dimension, adding up the distance measures.

The python code then does one more thing than the Deb et al algorithm.  It uses the distances calculated to sort the original list of individuals into reverse distance order (high distance isolated points at the front of the list, low distance crowded points towards the end), and returns that sorted list.
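As a quick check, here is a toy front of three points (my example, not from the post above): the two boundary points get the “infinite” distance, so they sort to the head of the returned list, with the crowded middle point last.
toy = [ {'state': None, 'fitness': (0.0, 3.0)},
        {'state': None, 'fitness': (1.0, 1.0)},
        {'state': None, 'fitness': (3.0, 0.0)} ]
print([ p['fitness'] for p in crowding_distance_assignment_single_front(toy) ])
# [(0.0, 3.0), (3.0, 0.0), (1.0, 1.0)]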

Then we can sort the list of fronts in the obvious way:
def crowding_distance_assignment(pfrontlist):
  for i, pfront in enumerate(pfrontlist):
    pfrontlist[i] = crowding_distance_assignment_single_front(pfront)
  return(pfrontlist)
Now we have enough to make a multi-objective search algorithm:
Np = 100        # parent population size
Nq = 100        # no of children
Ngen = 250      # no of generations
Nstate = 3      # size of individual state

parents = init_popl(Np,Nstate)
children = make_children(parents, Nq)

for _ in range(Ngen):
    popl = parents + children
    # construct the list of fronts from the population (previous post) 
    pfrontlist = fast_nondominated_sort(popl)              
    # sort each front into least crowded (highest distance) order
    pfrontlist = crowding_distance_assignment(pfrontlist) 

    # new parents become the best of the previous popl of parents + children
    parents = [item for pfront in pfrontlist for item in pfront][:Np]  
    # make new children from those parents 
    #  note that parents are sorted in fitness order    
    children = make_children(parents, Nq)                                    
We need to define init_popl() to build some initial (usually random) population.
import random
def init_popl(n,nstate):
    # return list of n individuals, each with nstate state vars, each in -1..1    
    pop = []
    for _ in range(n):                      
      state = tuple([random.uniform(-1,1.) for i in range(nstate)])
      pop += [{'state': state, 'fitness': fitness(state)}]        
    return pop
Here I have simply hardcoded in an initial range of +/-1, because I am demonstrating this with the equally hard-coded Kursawe function objectives:
import math
def fitness(state):
    # Kursawe function
    x1,x2,x3 = state
    f1 = -10*math.exp(-0.2*math.sqrt(x1**2+x2**2)) \
        - 10*math.exp(-0.2*math.sqrt(x2**2+x3**2))
    f2 = abs(x1)**0.8+5.*math.sin(x1**3) \
       + abs(x2)**0.8+5.*math.sin(x2**3) \
       + abs(x3)**0.8+5.*math.sin(x3**3)
    return(f1,f2)
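As a quick sanity check (my example): at the origin both exponentials are 1 and all the power and sine terms vanish, so the objectives should come out as (-20.0, 0.0):
print(fitness((0., 0., 0.)))   # (-20.0, 0.0)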
Now all that’s left is to define make_children(), which breeds the next generation. Deb et al simply say “Binary tournament selection, recombination, and mutation operators are used to create a child population”.  So, for the sake of demonstration, let’s do the simplest possible thing here: binary tournament with a fixed mutation of the three-component state, and no recombination:
def make_children(parents, nq):
    # parents = list of items in fitness order, fittest first 
    #   (except on v first call)
    # nq = no of children to make
    
    # tournament selection + mutation
    np = len(parents)
    children = []
    for _ in range(nq):
        # binary tournament: chose two parents randomly
        p12 = random.sample(range(np),2)
        # tournament winner = lower number, because sorted
        p = parents[min(p12)]  
        s = mutate(p['state'])
        children += [ {'state': s, 'fitness': fitness(s)} ]
    return children

def mutate(state):
    x,y,z = state
    return (x+random.uniform(-0.1,0.1), 
            y+random.uniform(-0.1,0.1), 
            z+random.uniform(-0.1,0.1)) 
Clearly, init_popl(), fitness(), make_children(), and mutate() should be modified into general purpose code, but that’s not the point of this post.
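One last snippet (my addition, not part of the post’s code): after the loop finishes, the best current approximation to the Pareto front is simply the first front of the final population:
final_popl = parents + children
pareto_approx = fast_nondominated_sort(final_popl)[0]
print(len(pareto_approx), "points in the estimated Pareto front")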

Here are some results, for the Kursawe function, and for a population size of 100.
After 2 generations: (top left) fitness/objective values (f1,f2), with the current fronts of the entire population as calculated using fast_nondominated_sort(), and different front plotted in different colours; (top right) the corresponding (x1,x2) state space (x3 omitted for clarity) of the entire population, with the points plotted in the same colours as in the fitness plot; (bottom left) the fittest front only, plotted with a rainbow colouring along its length; (bottom right) the state space corresponding to the fittest front, with the points plotted in the same colours as along the front.
After 25 generations; plots as above.  The fronts are converging, and the Pareto front is becoming clear.  The states corresponding to different positions on the Pareto front (that is, to different tradeoffs between the objectives) are also becoming clear. 
After 250 generations.  The Pareto front is fully defined and evenly filled; the corresponding state values are well clustered.
After 250 generations with no call of the crowding_distance_assignment() function.  The Pareto front is less well defined, and less evenly populated.
So that’s it!  Deb et al’s multi-objective optimisation algorithm, in python.

Reference

Deb, Kalyanmoy; Agrawal, Samir; Pratap, Amrit; Meyarivan, T. (2000) A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Parallel Problem Solving from Nature, PPSN VI. LNCS 1917:849–858. Springer. doi:10.1007/3-540-45356-3_83

Friday 13 October 2017

book review: Uprooted

Naomi Novik.
Uprooted.
Pan. 2015

The wizard known as Dragon takes a village maiden every ten years, to serve him in his castle, while he keeps all the villages in the valley safe from the darkness of The Wood. This year, to everyone’s surprise, he chooses the awkward Agnieszka. What she didn’t know is that she has magic. And her magic may be the key to unlocking the secret of The Wood. If anyone will take her seriously.

This is a great fantasy novel. Agnieszka has to learn she has magic, then learn how to use it, then learn what it needs to be used on. This sounds standard enough stuff, especially with the evil in the dark forest, and the Dragon wizard, and the king’s son, and the missing queen, and, and… Yet the plot keeps branching off in surprising directions, growing in unexpected ways, and these standard fantasy tropes are made to bear unexpected fruit.

And the plot really races along. At one point I was musing that there had already been enough material and plot twists for one or even two volumes of the typical “fat book” fantasy trilogy, but I was barely halfway through! This does not compromise the richness of the plot: Uprooted is an exciting and satisfying read.




For all my book reviews, see my main website.

Wednesday 11 October 2017

multi-objective optimisation I

There are lots of optimisation algorithms around: I’ve used simulated annealing and evolutionary algorithms, to name but two.  However, I’ve only ever used them to optimise a single value.  But real problems often have multiple objectives to satisfy simultaneously: time, price, and quality is a traditional triple.  However, multiple objectives are often in conflict, and so it is not possible to optimise them all simultaneously: there are tradeoffs.  “Pick any two of three” is the stated tradeoff for time, price, and quality; more nuanced tradeoffs are available.

One common technique is simply to add the various objective values together, with some weight for each, to reduce the problem to a single optimisation.  But which weights?  Different weights imply different tradeoffs.
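For example, a minimal weighted-sum reduction might look like this (a sketch of mine, not from the referenced algorithm; fitness is an n-tuple of objective values, as used in the code below):
def weighted_sum(fitness, weights):
  # collapse n objectives into a single scalar; the choice of weights fixes the tradeoff
  return sum(w * f for w, f in zip(weights, fitness))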

So the next step is a full multi-objective optimisation system.  To see what is going on, the crucial concept of dominated solution needs to be understood.

Let’s assume that a candidate solution s has objective values (f1, f2, ..., fn).  And let’s assume we want to minimise each of those objectives (the maximisation case is similar, mutatis mutandis).  Then solution s1 dominates solution s2 if every one of its objective values is less than the corresponding value for s2.  If, on the other hand, s2 is better in all its objectives than s1, then s2 dominates s1.  And in the case when neither dominates the other, these solutions are non-dominated.  The figure below illustrates these concepts.

Candidate solutions plotted in some 2D objective space, where we wish to minimise both objectives.  Consider the central red solution.  It dominates all candidate solutions in the upper right quadrant, because both its objective values are better than those in the dominated quadrant.  It is dominated by all candidate solutions in the lower left quadrant, because both its objective values are worse than those in that quadrant.  The other two quadrants are non-dominated solutions: one objective is better, but the other is worse.  The large green dots along the lower edge represent solutions on the Pareto front: all the solutions that are not dominated by any other candidate.  The concepts generalise to higher dimensions when there are more than two objectives.
We can write a simple python function to calculate whether one solution dominates another, for an arbitrary number of objectives:
def dominates_min(a,b):   # for minimisation problems
  # a and b are nD tuples of the respective n objective values
  # a dominates b if a.x < b.x and a.y < b.y and ...
  return all([ai < bi for ai, bi in zip(a, b)])
How this works:  Let the tuples be (ax, ay, az) and (bx, by, bz).  Then zipping them together gives the list of tuples [(ax,bx),(ay,by),(az,bz)].  The list comprehension gives a list of booleans, True or False depending on whether the respective ai<bi or not.  Then the all function tests whether all the values are True.
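A few examples of its behaviour (my examples):
print(dominates_min((1, 2), (3, 4)))   # True: smaller in every objective
print(dominates_min((1, 4), (3, 2)))   # False: each is better on one objective
print(dominates_min((1, 2), (1, 4)))   # False: a tie is not strictly better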
In a complete collection of candidate solutions, the ones not dominated by any other solution form the Pareto front.  These are the best possible solutions: no other candidate performs better in all objectives.  The front describes the possible tradeoffs: values along the front represent different tradeoffs.
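Given dominates_min(), the obvious way to extract that front is to test every candidate against every other; a minimal sketch of mine (assuming the individual-as-dictionary representation used in the code below):
def naive_pareto_front(pop):
  # keep the individuals not dominated by any other individual
  return [ p for p in pop
           if not any(dominates_min(q['fitness'], p['fitness'])
                      for q in pop if q is not p) ]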

When we don’t have all candidate solutions to hand, maybe just a sample of solutions, we need an optimisation algorithm to find those candidates close to the actual Pareto front, and to use them to find even better ones.

But first, we need to calculate the points on the Pareto front of a given population, that is, all the candidate solutions that are not dominated by any other solutions (the green points in the figure above).  The obvious algorithm (for each candidate solution, test it against all other solutions for all objectives) is rather inefficient, being O(MN²), where M is the number of objectives, and N is the number of solutions to test.  Deb et al (2000) give a more efficient algorithm:

P' = {1}
for each p ∈ P, p ∉ P':
    P' = P' ∪ {p}
    for each q ∈ P', q ≠ p:
        if p dominates q, then P' = P' \ {q}
        else if q dominates p, then P' = P' \ {p}
We can code this up in python as follows (with comments showing how it relates to the algorithm above):
def fast_nondominated_sort_single_front(pop):
  # pop = list of individuals
  #   individual of form { 'state': (s1,s2,s3,...), 'fitness':(f1,f2,f3,...) }
  # pfront = list of individuals on the calculated "pareto front"

  pfront = [ pop[0] ]      # P' = {1}
  for pi,p in enumerate(pop):  # for each p in P and p notin P'
    if pi == 0 : continue      # (so not the first element)

    pfront = pfront + [ p ]    # P' = P' cup {p}
    # make a copy to iterate over, because pfront is being modified; 
    #   don't need the last item, which is p  (makes q neq p below True)
    pfrontcopy = pfront[:-1]   

    pf = p['fitness']
    for qi,q in enumerate(pfrontcopy):    # for each q in P' and q neq p 
      qf = q['fitness']
      if dominates_min(pf,qf):     # if p dom q
        pfront.remove(q)           #   then P' = P' \ {q}
      elif dominates_min(qf,pf):   # else if q dom p
        del pfront[-1]             #   then P' = P' \ {p}
        break    # don't have to check any more qs as p already gone

  return(pfront)
Once we have the front, we could remove it from the population, and calculate the next front, and so on.  This is useful in an optimisation algorithm, to find less fit (not on the current best front) but still possibly good solutions, for use in building even better ones.

The multiple front calculation is as follows:
def fast_nondominated_sort(pop):
  # find a list of ever decreasing fronts
  # pop = list of individuals
  # pfrontlist = list of fronts = list of lists of individuals

  popcopy = pop[:]
  pfrontlist = []
  while popcopy:
    pfront = fast_nondominated_sort_single_front(popcopy)
    pfrontlist += [pfront]
    # remove the found pfront from the pop
    popcopy = [ ind for ind in popcopy if ind not in pfront ]  

  # check that no-one has been lost
  assert len(pop) == len([item for pfront in pfrontlist for item in pfront]) 

  return(pfrontlist)
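A small hand-made example (mine) shows the successive fronts being peeled off:
pop = [ {'state': None, 'fitness': f}
        for f in [(1,5), (2,4), (3,3), (4,2), (5,1), (4,4), (5,5)] ]
fronts = fast_nondominated_sort(pop)
print([ [ind['fitness'] for ind in front] for front in fronts ])
# [[(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)], [(4, 4)], [(5, 5)]]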
Given some population of candidate solutions, we can plot out the relevant fronts so calculated (plotting code always seems much more complicated than the actual algorithms!):
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
def plot_fronts(pfrontlist):
  # pfrontlist = list of fronts; each front is a list of individuals

  fig = plt.figure() 
  ax1 = fig.add_subplot(111) 

  # one colour per front 
  colors = matplotlib.cm.rainbow(np.linspace(0, 1, len(pfrontlist))) 
  # iterate backwards, to draw better fronts on top of less good ones 
  for i in range(len(pfrontlist)-1, -1, -1) :  
    pfront = pfrontlist[i]

    # sort members of front in order of fitness; draw a line along the front
    data = sorted([ pf['fitness'] for pf in pfront ]) 
    xs = [ d[0] for d in data ]
    ys = [ d[1] for d in data ]
    ax1.plot(xs, ys, ':o', color=colors[i], mew=0)
  plt.show()
This gives a plot like:
A sequence of fronts, found by finding the Pareto front of a population of candidate solutions, removing it from the population, finding the front of the remaining candidates, and so on.


In a later post, I will go through the rest of the Deb et al (2000) algorithm, and show how they incorporate this front-finding into a full-blown multi-objective optimiser.

Reference

Deb, Kalyanmoy; Agrawal, Samir; Pratap, Amrit; Meyarivan, T. (2000) A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Parallel Problem Solving from Nature, PPSN VI. LNCS 1917:849–858. Springer. doi:10.1007/3-540-45356-3_83

Tuesday 10 October 2017

production-quality spam

What new spam is this?

Here’s an email I’ve just received (needless to say, I have not designed a tool called BalbiEtAl2017, which sounds like a BibTeX tag).

I haven’t clicked on the link, because, email link, duh!







For all my social networking posts, see my Google+ page

Monday 9 October 2017

Brexit: the Athenian expedition against Syracuse

Repeating history, one disaster at a time.

Read the reasoning...




[via Danny Yee's blog]

For all my social networking posts, see my Google+ page

Sunday 8 October 2017

phenomenon-based learning

Finland ahead of the education game yet again.
Finland Will Become the First Country in the World to Get Rid of All School Subjects
[assume there’s an interesting snippet quoted from the article here – there isn’t, because the article is “protected”, by someone who doesn’t understand how the Web works] 



[via Danny Yee’s blog]

For all my social networking posts, see my Google+ page

Saturday 7 October 2017

book review: The Long Way to a Small Angry Planet

Becky Chambers.
The Long Way to a Small Angry Planet.
Hodder. 2014

Rosemary Harper signs on as a certified clerk to the small spaceship Wayfarer, whose multi-species crew builds small hyperspace tunnels through navigable space. She wants a fresh start to get away from her secret past, and this looks ideal. But then the Wayfarer is offered a big job: to build a long hyperspace tunnel from a newly joined member of the Galactic Commons. The job is valuable, but it will involve a long trip to the starting point. The contract will allow them to move their tunnelling operation into the big league, so they snap it up. But the long trip will expose everyone’s secrets…

Both that summary, and even more so the book cover blurb, make this sound as if it is a book about Rosemary and her past, and that thrilling adventures will ensue. In fact, it is a book with an ensemble cast, and an episodic travelogue style, as the ship moves through a variety of encounters that help uncover different aspects and secrets of both the crew members, and their various species. Stakes are high, but on the individual rather than the galactic level.

And it’s just delightful. The various alien species are alien enough to be interestingly different, without being incomprehensible. The different species are not monolithic: there are different characters within each. Humans are more fleshed out than the other species: there are three specific subgroups, of old-Earthers, Mars colonists, and the ship-born. But this is actually a plot point: the rest of the GC think humans are under-civilised because of this. The small events we see do have consequence, if sometimes only for one or two characters, and help gradually paint a rich picture of this future galaxy. There is tension and conflict and loss and revelation, but relatively little violence. The characters are simply adult and competent, rather than overly-dramatic, cartoonish, or hyper-competent. It feels real.

I once summarised a book to a friend as “nothing happens, but it’s very exciting!”. This one is maybe: “nothing happens, but it’s wonderful!” I went straight out and bought the next book.



For all my book reviews, see my main website.

Friday 6 October 2017

sequestering carbon, several books at a time LXXV

The latest batch (this is a more than two-month-long batch, hence its somewhat unusual size)


Tuesday 3 October 2017

Grayling again

AC Grayling's righteous wrath:
I Accuse... AC Grayling’s denunciation of Britain’s craven politicians

This is why Briefing Paper 07212 of June 3, 2015, told MPs that the EU referendum was advisory only, non-binding on Parliament and government. That is why the then Minister for Europe, David Lidington, repeated this, indeed stressed it, viva voce in the House of Commons in the debate on the referendum Bill later that same month, in alleging why there was no need for a super-majority requirement in the referendum. (Was this a deceit knowingly practised on the House of Commons by the minister and the government?)

It was accordingly for Parliament to debate the outcome of the referendum and decide what to do in the best interests of the country. It has never done so. Parliament has never debated the outcome of the referendum.

I accuse Parliament of a gross dereliction in this regard. I accuse it of a total failure in the exercise of its sovereignty. I accuse it of having abdicated its sovereignty, handing it to one-third of a restricted electorate who voted for a proposition that came with no plan, no roadmap, no costings, and no analysis of consequences, but supported instead by manifest distortions, falsehoods and false promises.




For all my social networking posts, see my Google+ page

Monday 2 October 2017

spooky moon

A lovely spooky moon behind the scudding clouds

20:36 BST