Tuesday, 15 April 2014

spring sunshine

A post about sunshine on the garden, not on the solar panels, for a change.

The yellow rose has poked a branch through the back of the shrubs, and is looking brilliant.

It's running a bit rampant on the front:

Although the Cox eating apple tree is not yet quite in flower, the cooking apple tree blossom is looking fine:

The trees we planted along the back a few years ago are doing well.

Their pink blossom and dark red leaves certainly provide a spectacular contrast:

Oh, and all that sunshine is generating lots of electricity, too!

sequestering carbon, several books at a time XXII

Today saw the delivery of:

Sunday, 13 April 2014


We have had our solar power system for three months now, and in that quarter we have generated a smidge over 2000 kWh, or 2 MWh.  That sounds like a lot!

According to Wikipedia, Drax power station generates 24 TWh per year, which would have been 6 TWh during the same quarter.  So we are about 300 nanoDrax-equivalent (or Drax is about 3 Mega-house-equivalent), which rather puts things in perspective.
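For anyone who wants to check the arithmetic, it is only a couple of lines of Python (the variable names are mine, for illustration):

```python
# our quarter's generation and Drax's, both in kWh
our_quarter_kwh = 2000
drax_year_kwh = 24e9                 # 24 TWh, per Wikipedia
drax_quarter_kwh = drax_year_kwh / 4 # 6 TWh

ratio = our_quarter_kwh / drax_quarter_kwh
print(round(ratio * 1e9), "nanoDrax")          # ~333 nanoDrax
print(round(1 / ratio / 1e6), "Mega-houses")   # Drax is ~3 Mega-house-equivalent
```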

However, over that same quarter (again according to Wikipedia), Drax would have produced 5.7 Mtonnes of CO2 and 0.375 Mtonnes of ash.  Our system, on the other hand, produced no CO2 and no ash: not even a femtoDrax.

Saturday, 5 April 2014

a new view of solar stats

March is over, so we have another month of solar power generation statistics.  Looking at the sunniest day (as determined by total power generation) of each month, we see:

The horizontal time axis runs from 3:00am to 9:00pm GMT. The vertical axis runs from zero to 8kW. The orange regions indicate the minimum, lower quartile, median, upper quartile, and maximum generation at that time, over the respective month.

The difference in sunlight from month to month is clearly visible.  The total power generated on the sunniest day each month (essentially sunny all day) was 26.5 kWh on 13th January, 41.4 kWh on 16th February, and 52.0 kWh on 24th March.  In the March plot we start to see saturation: we have an 8 kW system, and so the top of the curve flattens around noon as the system generates at full capacity.
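The quartile bands in such plots come down to a single call to NumPy's percentile function; here is a minimal sketch of the idea, using made-up data rather than the real logged readings:

```python
import numpy as np

# Made-up data: one row per day of the month, one column per time slot
# between 03:00 and 21:00 (the real scripts read the logged readings).
rng = np.random.default_rng(0)
readings = rng.random((31, 216)) * 8   # generation in kW

# The five curves bounding the orange regions: minimum, lower quartile,
# median, upper quartile, and maximum generation at each time of day.
bands = np.percentile(readings, [0, 25, 50, 75, 100], axis=0)
```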

Of course, one of the reasons we want a solar power system is to use the electricity we generate.  These plots show only generation, not usage, because that's the data the solar power system provides.

So we got another meter: a Wattson monitor, which shows not only power generated, but power used, too.  We download its data weekly, as it requires an actual cable to connect the Wattson to a computer (the generation system, by contrast, shares its data via Bluetooth).

Here the horizontal time axis runs from midnight to midnight GMT/BST. The vertical axis runs from -8kW to 8kW. The region above the axis represents our usage: green is usage met by our own generation, red is power imported from the grid. The region below the line is surplus generation exported to the grid.  (So the green area corresponds to total generation, or the black line in the plots above.)  During the day, usage tends to track generation.  That's because we have a system that pours excess generation into the immersion heater, up to a maximum of 3kW.
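The three plotted quantities follow directly from each pair of generation and usage readings; a minimal sketch (the function and names are mine, not anything from the Wattson software):

```python
def split_power(generated_kw, used_kw):
    """Split a pair of meter readings into the three plotted quantities."""
    self_use = min(generated_kw, used_kw)   # green: generation used on site
    imported = used_kw - self_use           # red: drawn from the grid
    exported = generated_kw - self_use      # below the axis: sent to the grid
    return self_use, imported, exported

# A sunny moment: generating 3 kW, using 1 kW, so exporting 2 kW
print(split_power(3.0, 1.0))   # (1.0, 0.0, 2.0)
```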

The early evening spike at the weekends is dinner being cooked in the electric oven; during the week we usually use the gas hob.  Looking closely you can often see a small red spike around 7am (when generation is just starting); this is the kettle for my morning coffee.  On the 17th, 18th, 19th, and 26th there is a lot of red: we had the “boost” on the immersion heater switched on, because the gas boiler had a fault, so we had no gas heating.

So just eyeballing these charts, it looks like we might be saving about a third of our electricity bill, in addition to what we get paid for the generated power.  This proportion will probably increase in the summer as the number of hours of daylight increases.

Then, of course, there is all the entertainment value of writing Python scripts to generate the charts, examining said charts, and speculating on the shape of future charts.  Great value for money all round!

Saturday, 22 March 2014

book review: Think Complexity

Allen B. Downey.
Think Complexity.
O'Reilly. 2012

The problem with being an autodidact is the unknown unknowns: if you are teaching yourself something, how can you fill the gaps in your knowledge that you don’t even know are there? I am teaching myself Python. Not from scratch, because I can already program in other languages. But that’s part of the problem: because I know how to program, I am learning Python from the on-line documentation (which could be better) and Stack Overflow (which is invaluable). This means I can find the constructs I look for; but what about the ones I don’t know exist?

So I’ve been thinking about getting a book, to help fill the gaps. I came across Think Complexity, a slim book (130pp) that claims to be targeted at an intermediate level, with the bonus of using examples from Complexity Science, a subject I also study.

It starts off well, with a mention of Python generators (which I had come across as a concept) and their “yield” statement (which was new to me). Yet the discussion is very brief: less than a page. I wanted to know more, as it sounded interesting, and it made me wonder whether the approach would allow coroutines (am I showing my age?). So I googled, and found David Beazley’s excellent tutorials, one on Python generators, using them in a functional manner to implement processing pipelines, and one, indeed, on coroutines. There is a lot more there than is even hinted at in Think Complexity.
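For readers who, like me, had not met “yield”: a tiny pipeline in the style of those tutorials (a sketch of the idea, not Beazley’s code):

```python
def numbers(limit):
    """A generator: produces values lazily, one at a time, via yield."""
    for n in range(limit):
        yield n

def squares(source):
    """A pipeline stage: consumes one generator, yields another."""
    for n in source:
        yield n * n

# Stages compose like Unix pipes; nothing runs until values are pulled.
pipeline = squares(numbers(5))
print(list(pipeline))   # [0, 1, 4, 9, 16]
```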

Next comes a chapter on algorithmic complexity and “big O” notation that introduces Python list comprehensions. Now, it’s virtually impossible to have visited Stack Overflow more than a few times without having come across list comprehensions: they are marvellous beasts. However, their introduction here crystallised my apprehension with the book: they are explained with just a few examples. Examples can be great for showing what is possible, and the examples here are good in that they start trivial and get more complicated. But you also need a description of the underlying syntax, so that you know that you have inferred the structure correctly from the examples, and to cover usages not illustrated by the examples.

The chapter on Cellular Automata uses NumPy arrays, but doesn’t talk about them much. NumPy is excellent for doing anything with arrays, and if you have come to Python via Matlab, as I have, you will feel right at home with them. One interesting point made here is an efficient way to implement Conway’s Game of Life using convolution from SciPy (although Bill Gosper’s HashLife, underlying Golly, is faster, and more interesting algorithmically).
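The convolution trick is neat enough to be worth sketching (my own rendering of the idea, not Downey’s exact code): convolving the grid with a 3×3 kernel of ones, with a zero centre, counts each cell’s live neighbours in one vectorised step.

```python
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(grid):
    """One Game of Life step: count neighbours by 2-D convolution."""
    neighbours = convolve2d(grid, KERNEL, mode='same', boundary='wrap')
    # A cell lives next step if it has exactly 3 neighbours,
    # or is currently alive with exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)
```

A horizontal blinker (three live cells in a row) turns into a vertical one, as expected.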

Then there are brief chapters on fractals, self-organised criticality, and agent-based models. But not a lot more Python. The book finishes up with several case studies prepared by students following up some of the concepts in the book; these are probably the most interesting parts. However, they are interesting mainly from a complexity viewpoint, not really from a Python viewpoint.

In summary, although this is advertised as “intermediate level” Python, it doesn’t go very far beyond what you can pick up readily from Stack Overflow. However, the idea of teaching a programming course using fun examples from Complexity Science is a good one: so many texts use relatively boring examples with little motivation. It is clear here from the chapters contributed by the students that they really engaged with the material.

For all my book reviews, see my main website.

Sunday, 16 March 2014


It almost looks like the clouds are streaming into the sun!

18:37 GMT, looking west