The latest batch:
These include the proceedings of the UCNC conference I recently attended. Several others are a birthday present. (I also got a mug featuring the BBC Micro Owl logo, and a Hubble Deep Field throw.)
Sunday, 24 July 2016
Friday, 22 July 2016
does anyone even remember what "VHS" stands for?
I saw them come. I saw them go.
Japan 'to stop making VCR machines'
For all my social networking posts, see my Google+ page
VCRs were used to play and record onto VHS cassettes
Wednesday, 20 July 2016
"I did not over think it. I thought it through."
Labels: politics
The Narcissism Of Motherhood:
Being a mother is not a job. If it were a job there’d be a selection process, pay, holidays, a superior to report to, performance assessments, Friday drinks, meetings and you could resign from your job and get another one because you didn’t like the people you were working with.
[via Danny Yee's blog]
Monday, 18 July 2016
Sunday, 17 July 2016
a second Brexit riposte roundup
Here are the Google+ posts on the Brexit fallout that I made over the couple of weeks I was away at conferences.
Brexit was a con : Referendum information: 1293 words on Brexit, v 670 pages for Scottish Independence. "An informed electorate" my hat.
Professor A C Grayling’s letter to all 650 MPs urging Parliament not to support a motion to trigger Article 50 of the Lisbon Treaty, 1 July 2016. Well said.
"Cat, what's your opinion on the UK leaving the EU?" Sometimes you just have to laugh.
Boris Johnson made foreign secretary by Theresa May. WTF?! I've been having this really surreal dream for about 3 weeks now. It's worse than Alice in Wonderland, the bizarre things that keep happening. I'd like to wake up, now, please.
Everything you need to know about Theresa May’s Brexit nightmare in five minutes. It's even worse than I thought:
You mean we can't negotiate any trade deals, inside or outside the EU, while the two-year Article 50 process is ongoing?
Exactly. Actually, it's against the law for EU member states (we'd still be an EU member state until the end of the two-year process) to conduct bilateral trade negotiations with other member states or countries.
Friday, 15 July 2016
UCNC day 5
Labels: communication, computer, conference, Manchester, mathematics
The final (half) day of UCNC in Manchester.
The last invited speaker of the conference was Steve Furber, talking about the SpiNNaker project (SpiNNaker stands for "Spiking Neural Network Architecture"). After some interesting historical context, he told us of the SpiNNaker machine: one million processors in an asynchronous spiking architecture. The preliminary machine, with 500,000 cores, was launched on 30 March 2016, and more cores have been added since. It can be programmed in the Python PyNN language. For example, 165 lines of Python are needed for a Sudoku solver, where the neuronal groups inhibit other groups with the same integer value in the same row, column, or 3x3 cell. Once a solution has been found, the inhibitory links decrease, and the spiking rate goes up, solving a "diabolical" puzzle in about 10 seconds. This isn't just a toy: it is representative of complex constraint problems. So far people have only been running small programs, as they think about how to scale up their ideas. Although each core is a standard processor, exploiting the asynchronous spiking communication requires a different way of thinking.
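As a rough illustration of the constraint wiring involved (my own sketch in plain Python, illustrative only and not the PyNN code from the talk), a neural group for each (cell, digit) pair inhibits every group representing the same digit elsewhere in its row, column, or 3x3 box:

```python
# Sketch of the Sudoku solver's inhibitory wiring (hypothetical names; not
# the actual PyNN code). A group for each (cell, digit) pair inhibits every
# group with the same digit in the same row, column, or 3x3 box.

def conflicts(cell):
    """Cells whose same-digit groups a given cell's groups inhibit."""
    r, c = cell
    row = {(r, j) for j in range(9)}
    col = {(i, c) for i in range(9)}
    box = {(3 * (r // 3) + i, 3 * (c // 3) + j)
           for i in range(3) for j in range(3)}
    return (row | col | box) - {cell}

def inhibitory_links():
    """All (source group, target group) inhibitory connections."""
    cells = [(i, j) for i in range(9) for j in range(9)]
    return [((cell, d), (other, d))
            for cell in cells
            for other in conflicts(cell)
            for d in range(1, 10)]
```

Each cell conflicts with 20 others (8 in its row, 8 in its column, and 4 more in its box), giving 81 × 20 × 9 = 14,580 inhibitory links in total.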
Then on to the final technical session. First was a talk on "Model-Based Computation": an attempt to extend the definition of analogue computation (which implements a model analogous to the problem) in a way that can cover more kinds of unconventional computation. Then came a couple of mathematical talks about chemical reaction system formalisms. The first, "Towards Quantitative Verification of Reaction Systems", encoded the system in a formal solver to prove properties. The next, "Reachability Problems for Continuous Chemical Reaction Networks", looked at proving safety properties in systems with continuous values of reactant concentrations. The final talk was on "Global Network Cooperation Catalysed by a Small Prosocial Migrant Clique", looking at evolutionary game theory in networks with no global knowledge, and how a small clique of cooperators migrating into a network of defectors could change it into a network of cooperators.
So, another conference ends. Next year, in Arkansas.
After two solid weeks of travel and listening, my brain is full of exciting science, and I need a lot of sleep! I'm looking forward to getting home for a bit of a rest.
Thursday, 14 July 2016
UCNC day 4
Labels: communication, conference, language, Manchester, philosophy, research
UCNC day 4, with an embarrassment of riches in the form of invited talks.
We kicked off with an invited talk from Friedrich Simmel on “Chemical Communication Between Cell-Sized Reaction Compartments”. This was a fascinating account of a series of experiments sending signals between cells, droplets, and “genelets” (droplets containing cellular “naked” genetic machinery), based on the ideas of quorum sensing: when a high enough chemical signal concentration is produced, because there are enough producers around, it invokes a response. We saw droplets signalling with the chemicals, inducing bacteria to react, and that signal propagating through multiple droplets. Apparently there is a “bacterial Turing test”: can you make a droplet that a bacterium will interact with (through chemical signals) just as if it were another bacterium? These systems pass it. Through a clever use of microfluidics, we saw videos of sheets of bacteria interacting, via fluorescent protein production. The fluorescence increases both due to production being switched on by the signalling, and due to the bacteria reproducing, two processes with similar timescales. The possibilities of this approach include forming spatial and temporal patterns through reaction-diffusion systems of interacting genetically programmed droplets. If all this wasn’t enough, Simmel finished his talk with a description of using electron lithography to etch chips and deposit gene-length strands of DNA in a controlled manner, which could then be manipulated to stick together (condense) into linear bundles. It’s early days yet; next on the agenda is using gene expression to control the condensation. Heady stuff!
Next was the workshop on Physics and Computation. Gilles Dowek started with an invited talk on “Quantitative Informational Aspects in Discrete Physics”. Gandy showed that if a system (1) is homogeneous in space and time; (2) has a bounded speed of information transport; and (3) has a bounded density of information, then it can be simulated by a cellular automaton. Since physics appears to satisfy these properties, it should be so simulable. Then came a short but necessary digression on Planck’s constant. The physical constant c has the dimensions of a speed, and it is the speed of light. Planck’s constant has the dimension of an action; what action is it? After a bit of discussion, it turns out that it is (a small multiple of) the area of a bit of information (in a particular choice of units where everything has a dimension that is some power of length) as given by the Bekenstein bound. Then Dowek went through how to build a CA in Newtonian physics, special relativity, and general relativity, that models free fall (subject to some assumptions). It can’t be done in Newtonian physics, because there is no bound on speed. In SR it can be done, with particles that contain 320 bits of information (using the Planck area); in GR they only need 168 bits. This is an existence proof, but the CAs defined are not very satisfactory, for several reasons. The task is to do better! Listening to this, I recalled Randall Beer’s Game of Life talk from ALife last week: looking at a CA in terms of processes rather than cells gives a much more natural formulation. I wonder if that would work here?
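Gandy’s bounded-speed condition is easy to visualise with a toy of my own (nothing to do with Dowek’s construction): in any CA, a single flipped cell can influence at most one further cell per step, tracing out a “light cone”:

```python
# A toy 1D CA (my illustration, not Dowek's construction) showing Gandy's
# bounded speed of information: a disturbance spreads at most one cell per
# step, the CA's "speed of light".

def step(cells):
    """One synchronous step: a cell becomes 1 if it or a neighbour is 1."""
    n = len(cells)
    return [1 if any(cells[(i + d) % n] for d in (-1, 0, 1)) else 0
            for i in range(n)]

def disturbed_radius(width, steps):
    """How far the effect of one flipped central cell has spread."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = step(cells)
    return max(abs(i - width // 2) for i, c in enumerate(cells) if c)
```

After t steps the radius is exactly t: information cannot outrun one cell per step, which is what makes the CA simulation of bounded-speed physics plausible.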
Then we had a talk about “The Information Content of Systems in General Physics Theories”. The idea here is to look at a broad range of probabilistic theories, of which quantum mechanics is one instance. Investigating the computational complexity of the “advice” given by a physical system can shed light on what makes QM special, different from just a general theory.
After lunch Ana Belén Sainz gave an invited talk on “Postquantum Steering”. This was in the same vein as the previous talk: look at a general theory, then compare with QM. Here the idea was applied to one particular kind of system: how much can Bob “steer” distant Alice’s state, by making measurements on his own state?
Next came some more talks. The first, on “Sequent Calculus Representations for Quantum Circuits”, was an approach to making reasoning about quantum circuits look like proof-theoretic reasoning in other branches of computer science, by finding an appropriate set of axioms. Next was a talk on “Physical Computation, P/poly and P/log*”, looking at the computational complexity of physical computing as an unconventional co-processor, in terms of its advice complexity. After coffee we had a talk on “Local Searches Using Quantum Annealers: How to Beat Noise and Take Advantage of the Classical Algorithms we Already Have, or, Modernising Quantum Annealing using Local Search”. This contrasted classical simulated annealing, including its two improvements of parallel tempering and population annealing, with the quantum version: quantum annealing. Each has its strengths and weaknesses; here was a suggestion of how to use quantum annealing as a “subroutine”, getting the best of both approaches. The final workshop talk was on “Quantum Probability as an Application of Data Compression Principles”, a philosophical look at probabilities in general, and branching world probabilities in particular.
The day was then completed with a further invited talk, Bob Coecke talking on “In Pictures: From Quantum Foundations to Natural Language Processing”. He zipped through a beautiful, formal, diagrammatic notation for quantum systems, and how the power of this notation makes many complicated quantum puzzles and proofs essentially vanish. A book covering this material, Picturing Quantum Processes, will be published soon by Cambridge University Press. It's 922pp: pictures take a lot of space! After all this quantum mechanics, he went off in an unexpected direction, by showing how the very same notation could be used to calculate the meaning of sentences from their underlying grammar and the meaning of the individual words. Some modern meaning systems use high dimensional vectors to encapsulate word meanings. Adding the grammar via the diagrams improves the calculated meaning enormously. Thinking about the mathematical structures needed leads to the suggestion of using density matrices rather than vectors, to cope with ambiguous meanings. I love this kind of work: a deep piece of work in one domain that is not only applicable in a seemingly unrelated domain, but that suggests advances there, too.
Wednesday, 13 July 2016
UCNC day 3
Labels: conference, growth, Manchester, research
UCNC day 3. Well, only half a day, as the afternoon was reserved for exploring Manchester.
We started with a super invited tutorial on “Self-Assembling Adaptive Structures with DNA”, by Rebecca Schulman. Rather than trying to assemble arbitrary structures, let’s just look at what can be done with 1D systems: filaments of DNA nanotubes that can controllably be built into strings, trees, and network structures. She pointed out that it doesn’t make sense to build every structure from weaving pure DNA: a human-size object would need about 3 light years of it. But smaller things can sensibly be built this way. This approach doesn’t include only static structures: movement can be achieved by growing at the front and dissolving at the back. This is the way the cytoskeleton in cells works to move them around. DNA nanotube growth can be controlled by a variety of chemical processes, but it’s hard to design different systems: there’s no good enough model or simulation of how it all works. Currently things are a mixture of approximate yet expensive simulations, and lab experiments. But this is clearly a very powerful and rich area.
This was followed by the technical session: three talks related to DNA computing. The first was implementing a circuit model in a 2D gellular automaton. Next was another gellular automaton system: a maze solver. We finished the morning with a description of implementing a stack (the push-pop data structure) in DNA: the design is fascinating, and it has been implemented, at least for stack sizes of three. Again, this work is just a hint of things to come.
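The behaviour being implemented chemically is the familiar bounded last-in-first-out stack; for concreteness, here is a plain-Python model of mine (not the paper’s DNA encoding), with the depth of three reached in the experiments:

```python
# The abstract behaviour the DNA construction implements: a bounded
# last-in-first-out stack. Depth three matches the reported experiments;
# this plain-Python model is my sketch, not the paper's design.

class BoundedStack:
    def __init__(self, depth=3):
        self.depth = depth
        self.items = []

    def push(self, symbol):
        """Add a symbol; fails (returns False) when the stack is full."""
        if len(self.items) >= self.depth:
            return False
        self.items.append(symbol)
        return True

    def pop(self):
        """Remove and return the most recent symbol, or None when empty."""
        return self.items.pop() if self.items else None
```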
Then off to explore the wilds of Manchester…
Tuesday, 12 July 2016
UCNC day 2
Labels: conference, fractals, Manchester, research
UCNC day 2, a full day of talks.
First up was Masami Hagiya with an invited tutorial on “Gellular Automata”. These are a form of cellular automata implemented using gels and chemical reactions. The walls between cells can be “decomposed” or “composed” using chemical reactions – or instead can “swell” or “unswell” forming a valve. This allows chemicals to move between cells. There are theoretical results demonstrating these systems can in principle implement certain kinds of CAs. The tutorial moved on to talking about implementations. Most of the manipulations involve a form of DNA chemical computing: using complementary strands to form networks of polymers, or to control diffusion by attaching anchors. These processes can be controlled by the DNA technique of “strand displacement” that breaks the bonds between the complementary strands. There are some initial prototype implementations. These are still rather complicated, needing multiple chemical species to implement relatively simple state transitions. However, it is early days yet, and more efficient approaches may well be discovered.
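The valve idea is simple to capture in a toy numerical model (mine, far cruder than the real chemistry): chemicals diffuse between adjacent cells only when the wall between them is open:

```python
# A toy model (my sketch, much simpler than the systems described) of the
# core gellular idea: walls act as valves, gating diffusion between cells.

def diffuse(amounts, walls_open, rate=0.5):
    """One diffusion step over a 1D row of cells.

    amounts[i] is the chemical in cell i; walls_open[i] says whether the
    wall between cell i and cell i+1 currently lets chemical through.
    """
    flow = [0.0] * len(amounts)
    for i, is_open in enumerate(walls_open):
        if is_open:
            # move chemical down the concentration gradient
            delta = rate * (amounts[i] - amounts[i + 1]) / 2
            flow[i] -= delta
            flow[i + 1] += delta
    return [a + f for a, f in zip(amounts, flow)]
```

Chemical is conserved, and a closed wall completely isolates its two cells; opening and closing walls via reactions is then what gives the CA-like state transitions.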
Next was the workshop on Membrane systems (mostly P-Systems). Rudolf Freund started off with a tutorial, helping to introduce the concepts to people not that familiar with the area. Then on to the technical talks, covering a wide set of membrane computing topics.
Finally was the afternoon technical session. We started with a talk on Affine Automata: these use an underlying logic that is partway between classical probabilistic automata and quantum automata. Next was a talk about languages (sets of strings) arising from finite walks on Sierpinski gaskets. And finally we had a talk on Matrix Ins-del (insertion-deletion) systems (although I think a better name would be List Ins-del systems). These three combined nicely as a range of different ways of looking at language (in the CS sense) recognisers.
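As a small aside of my own (not the paper’s construction), the gasket itself has a neat arithmetic description: position j of row i of Pascal’s triangle lies on the gasket exactly when C(i, j) is odd, which by Lucas’ theorem means j’s binary digits are a subset of i’s:

```python
# Generating the Sierpinski gasket from Pascal's triangle parity (my aside,
# not the paper's construction): C(i, j) is odd iff j's binary digits are a
# subset of i's (Lucas' theorem), i.e. j & ~i == 0.

def gasket_row(i):
    """Row i of the gasket: 1 where C(i, j) is odd."""
    return [1 if (j & ~i) == 0 else 0 for j in range(i + 1)]

def gasket(rows):
    """The first `rows` rows of the gasket pattern."""
    return [gasket_row(i) for i in range(rows)]
```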
Then off to The Great Wall Chinese restaurant, for a very nice duck in ginger.
Monday, 11 July 2016
UCNC day 1
Labels: computer, conference, Manchester, research, robots, simulation
Another week, another conference.
I have moved from Cancun, Mexico, at the ALife conference, to Manchester, UK, for the conference on Unconventional Computation and Natural Computation (UCNC). It is very weird for me to be at a conference in the UK with “wrong way” jet lag!
The first day was a half day, starting with lunch – very civilised. The first talk was a tutorial from Jon Timmis on Swarm Robotics. This subject has multiple simple autonomous robots working together with no global control, to produce an emergent behaviour and capability that none has individually. The tutorial covered the history of the subject, showing how some of the original constraints have become irrelevant: today’s “simple” robots are actually quite sophisticated compared to those at the discipline’s inception; and the original “nature inspiration” is no longer so prominent: use it if it helps, ignore it if it doesn’t. There are a couple of issues that make the subject difficult. The first is: how to design the local, individual robot rules that produce the desired emergent behaviour (and don’t also produce undesired behaviours)? This often reduces to an iterative design – suggest, test, refine – which can be automated in a search algorithm, such as an evolutionary search. This leads to the second issue: this search is most efficiently done in simulation, but there is a “reality gap” in simulation: the simulated physics is often too simplistic, leading to “overfitting” to the simulation and the solution then not working on the embodied physical robots. There are lots of fascinating results addressing these issues: the next challenge is moving this research out of the lab into the real world.
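That suggest-test-refine loop can be sketched as a minimal evolutionary search (entirely generic Python of my own; the OneMax fitness here is a stand-in for evaluating a rule encoding in a real swarm simulator):

```python
# A minimal evolutionary search over bit-string rule encodings (my generic
# sketch; the fitness function stands in for a swarm simulation run).
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """(mu + lambda)-style search: keep the better half, mutate it."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                     # keep the better half
        children = [[b ^ (rng.random() < 0.1) for b in p]  # flip bits w.p. 0.1
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

# Toy "simulation": reward rule encodings with many 1 bits (OneMax).
best_rules = evolve(sum)
```

Because the parents are kept unmutated, the best individual never gets worse; the hard part in swarm robotics is that each fitness call is an expensive (and imperfect) simulation, which is exactly where the reality gap bites.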
Then on to the technical session, with four talks. First up was my student, talking about using reservoir computing as an unconventional virtual machine for computing with carbon nanotubes: evolving the carbon nanotube system into a “good” reservoir, then training that reservoir to perform various tasks, rather than evolving the tasks directly. Next was another carbon nanotube talk, here having them in liquid crystal rather than frozen in polymer, allowing them to move to form clusters, to help their computational performance. The third talk changed tack, on to memristor logic. It appears that memristors naturally support a ternary logic rather than the classical binary logic, and naturally implement different kinds of gates. Finally, we had a talk that started with Zuse’s mechanical computer, and ended up with a “three cog, one gate” universal computer.
A great start to the conference.
I have moved from Cancun, Mexico, at the ALife conference, to Manchester, UK, for the conference on Unconventional Computation and Natural Computation (UCNC). It is very weird for me to be at a conference in the UK with “wrong way” jet lag!
The first day was a half day, starting with lunch – very civilised. The first talk was a tutorial from Jon Timmis on Swarm Robotics. This subject has multiple simple automomous robots working together with no global control, to produce an emergent behaviour and capability that none has individually. The tutorial covered the history of the subject, showing how some of the original constraints have become irrelevant: today’s “simple” robots are actually quite sophisticated compared to those at the discipline’s inception; and the original “nature inspiration” is no longer so prominent: use it if it helps, ignore it if it doesn’t. There are a couple of issues that make the subject difficult. The first is, how to design the local, individual robot rules that produce the desired emergent behaviour (and doesn’t produce undesired behaviours also)? This often reduces to an iterative design: suggest, test, refine, which can be automated in a search algorithm, such as an evolutionary search. This leads to the second issue: this search is most efficiently done in simulation, but there is a “reality gap” in simulation: the simulated physics is often too simplistic, leading to “overfitting” to the simulation and the solution then not working on the embodied physical robots. There are lots of fascinating results addressing these issues: the next challenge is moving this research out of the lab into the real world.
Then on to the technical session, with four talks. First up was my student, talking about using reservoir computing as an unconventional virtual machine for computing with carbon nanotubes: evolving the carbon nanotube system into a “good” reservoir, then training that reservoir to perform various tasks, rather than evolving the tasks directly. Next was another carbon nanotube talk, here having them in liquid crystal rather than frozen in polymer, allowing them to move and form clusters, improving their computational performance. The third talk changed tack, on to memristor logic. It appears that memristors naturally support a ternary logic rather than the classical binary logic, and naturally implement different kinds of gates. Finally, we had a talk that started with Zuse’s mechanical computer, and ended up with a “three cog, one gate” universal computer.
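The reservoir idea from the first talk can be sketched in software as an echo state network: a fixed random recurrent “material” whose only trained part is a linear readout (in the talk, the physical nanotube system plays the reservoir role). The delay-recall task and all parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: not trained, only observed
N, T = 100, 500
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = rng.uniform(-1, 1, size=T)   # input signal
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])  # drive the reservoir
    states[t] = x

# Train only a linear readout (ridge regression) on a memory task:
# reproduce the input from two steps ago
delay = 2
y = np.roll(u, delay)
X, Y = states[delay:], y[delay:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
pred = X @ W_out
```

The point of the architecture is that all the task-specific training happens in the cheap linear readout; the reservoir itself is just a rich fixed dynamical system, which is why an evolved physical material can take its place.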
A great start to the conference.
Sunday, 10 July 2016
film review: Inside Out
11-year-old Riley and her family move from their home in Minnesota to San Francisco.
Riley leaves behind all her friends, and her beloved ice hockey.
It is an emotional time for her.
Like everyone else, Riley is guided/controlled by her five emotion homunculi: Joy, Sadness, Disgust, Anger, and Fear. The young Riley is mostly governed by Joy, and is a cheerful child. But Joy has a hard time keeping her human happy after the move. Then disaster strikes: an accident sucks Joy and Sadness out of headquarters, and they become lost in long term memory. Now Fear, Anger and Disgust are left in charge, and Riley turns into a surly brat. Then it gets worse: in their attempt to help Riley, the remaining emotions get locked out of their control panel, and now Riley can’t feel anything at all.
After a slow start with a lot of info-dumping about how the interior world works (as a sort of cross between a child’s fairy tale and a cognitive neuroscience text), things ramp up once Joy and Sadness are lost, and trying to get home. The adventure is wildly imaginative, with memory stores, dream sequences, imaginary friends, forgetting, the subconscious, abstraction, and trying to catch the train of thought. In the end, Joy learns an important lesson about Sadness, becoming a more mature emotion, and Riley grows up, as her homunculi’s control console is upgraded to parallel working.
At several points in the internal journey, Joy feels some Sadness of her own, and also Fear (or at least Alarm). Embodied emotions having emotions! Does this mean that Joy has five even smaller emotion homunculi in her own head? My brain kept trying to infinitely regress at several points.
This passes the Bechdel test: Joy and Sadness are female, and spend nearly all the time talking about Riley. In the external world, Riley’s teacher is female (although I don’t believe she is named?), and speaks to Riley, asking her to introduce herself; also Riley video chats with her friend Meg back in Minnesota about their hockey team.
Yet there is a curious asymmetry here. During the film we see inside three other human heads: Riley’s mother and father, with more mature homunculi, and briefly a boy of Riley’s age, whose juvenile homunculi are freaked by being close to a girl. The end credits give us views into a few more people, a dog, and, hysterically, a cat. All these other homunculi are the same sort as their “owners”: five mature females guiding the mother, five arguably less mature and somewhat stereotypical males for the father, five young males for the boy, five very cat-like cats in the cat, and so on. Yet in Riley, although Joy, Sadness and Disgust are female, both Fear and Anger are portrayed as male. If the film had a boy main character, would any of his emotions have been portrayed as female, I wonder?
Overall, after that slow start, this is a marvellously imaginative romp through a brain, and it is interesting that the obligatory happy ending is one of growing up and the realisation that Sadness is important.
For all my film reviews, see my main website.
Friday, 8 July 2016
ALife day 5
Labels:
Cancun,
chemistry,
complexity,
conference,
Mexico,
research,
simulation
ALife day 5; last but not least.
The day started as usual with a fascinating keynote: today it was Linda Smith on “We need a developmental theory of environments”. Linda’s work is on development in human babies. She has gathered a rich corpus of information on babies’ perceived environments over their first two years of life. This has been gathered from head-mounted cameras (which today are so small they are just a chip in a headband), and demonstrates convincingly that the baby’s perceived environment changes dramatically over time, and that those changes are deeply embedded in its development. Early on, there are lots of close up faces, of a few adults. Later on, the baby’s view moves to hands: watching others, and its own, manipulating objects. Different experiments demonstrate the essential nature of the body / brain / environment feedback loop. What is in this loop changes as the baby grows, and we need to understand when and how. And, of course (unless you are purely into how to experiment on babies for fun and profit), what does this tell us about developmental artificial life? The (perceived) environment is crucial to development.
I then went to the morning technical session on Artificial Chemistries, a potential substrate for ALife. We started with a talk on a novel replicator system based on a chemistry of functional combinators, with conservation of mass. The crucial design tradeoff is not to make the underlying artificial physics so strong that replication is trivial (a “copy organism” operation in the physics), nor to make it so sparse that replication is computationally infeasible. One way to strike the happy medium is to ensure the “functional units” are composed of a few “primitive units”, giving the system a small but crucial distance from the “atoms”. Next we heard about an extension of Hutton’s original replicator AChem, adding kinetics under the Gillespie algorithm, to find a “sweet spot” where a rich set of reactions occurs in a computationally feasible time. Then we heard about “messy chemistries”, those that produce a wide range of uncontrolled products, and the conditions for one of the products to come to dominate, suggesting a “selection-first” AChem route to ALife. Then we had a description of a reaction-diffusion system incorporating energetics, and how a combination of exothermic and endothermic reaction systems can stabilise temperature across a region. Finally, we heard about taking mathematics seriously in order to use algebraic concepts, particularly non-associative algebras, to design a novel sub-symbolic AChem.
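The Gillespie algorithm mentioned in the second talk draws the waiting time to the next reaction from an exponential distribution whose rate is the total reaction propensity. A minimal sketch with a single toy reaction A + B → C (the rate constant and counts are invented):

```python
import random

random.seed(1)

# Gillespie-style stochastic simulation of the toy reaction A + B -> C
counts = {"A": 100, "B": 100, "C": 0}
k = 0.01   # stochastic rate constant (arbitrary toy value)
t = 0.0
while counts["A"] > 0 and counts["B"] > 0:
    propensity = k * counts["A"] * counts["B"]
    t += random.expovariate(propensity)   # exponential waiting time
    # With several reaction channels, one would be chosen here with
    # probability proportional to its propensity; with one, it always fires.
    counts["A"] -= 1
    counts["B"] -= 1
    counts["C"] += 1
```

The computational-feasibility issue from the talk shows up directly: when propensities are tiny, the sampled waiting times become huge, and interesting reactions may effectively never fire in a bounded run.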
Then on to the closing keynote of the conference: Katie Bentley on “Do Endothelial Cells Dream of Eclectic Shape?” She explained the title: her work is about computational modelling of real biological systems, based on computational complex systems approaches. She had been warned biologists wouldn’t read something with the word “computational” in the title, so needed to use just biological words. But she wanted to signal to the CS-types that this might be of interest to them too, so used the punning title. She asked us if we got the pun: all but one hand went up. She then asked that person if they had seen the film; yes. She told us that if this was a straight biological conference, no one would have got the pun, and hardly anyone would have seen the film. Divided communities indeed. She went on to describe her computational model of vascular growth, in normal tissue and in tumours. Agent Based modelling, combined with real data and close interactions with biologists (who know which published results to trust, and which not), have resulted in several predictions that have been tested and confirmed in the wet lab. Mostly information flows from biology to ALife; this work demonstrates a great feedback from ALife into biology.
Then it was all over bar the closing ceremony: information about the International Society for ALife, the next two ALife conferences (ECAL 2017 in Lyon, France; ALife 2018 in Japan), and a variety of awards for best papers, lifetime achievements, and contributions to the community.
A truly excellent conference, in content and in organisation. I had a wonderful time, and my head is buzzing with ideas and connections. My neural pathways have been exercised and reconfigured. I need to go home and process all this information further.
Next year in Lyon.
Thursday, 7 July 2016
ALife day 4
Labels:
Cancun,
complexity,
conference,
Mexico,
research,
simulation,
systems
ALife day 4. I’m at that point in conference-going where I keep having to check what day it is, as I’ve lost track. Apparently it’s Thursday. I never could get the hang of Thursdays.
This particular Thursday started brilliantly. Keynote speaker Randall Beer talked about “Autopoiesis and Enaction in the Game of Life”. Autopoiesis, or system self-production and self-maintenance, can be a slippery concept to explain. Beer takes the Game of Life, and uses it as a “toy” system, to get a handle on the concepts. The key is to look at the GoL from a process perspective (a toy “chemistry”), rather than from the more usual automaton perspective (the lower-level GoL toy “physics”). (This new modelling perspective is emergent under our definition, as it is actually a new meta-model.) In this different view, a glider can be seen as a very primitive autopoietic system, a network of processes constituting a well-defined identity that is self-maintained. In a marvellous tour de force, Beer showed how all the components fit together, and how GoL can be used to explain and illuminate concepts from autopoiesis in a beautifully clear and elegant manner.
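Beer’s process-level reading of GoL is easy to play with: a glider’s constituent cells are completely replaced within a few steps, yet the pattern (the “identity”) persists, merely translated. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One Game of Life update on a set of live (x, y) cells."""
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Born with exactly 3 live neighbours; survive with 2 or 3
    return {c for c, n in neigh.items() if n == 3 or (n == 2 and c in live)}

# A glider: after 4 steps the same 5-cell shape reappears shifted by (1, 1),
# even though the individual live cells have all changed in the meantime
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
```

The automaton perspective sees only cells flipping; the process perspective sees a self-maintaining entity moving through the grid, which is the hook for the autopoiesis analogy.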
The morning technical session was on Development (although not all the papers were). We started off with a discussion of the relationship between developmental encodings and hierarchical modular structure. Next: simulation is crucial for many ALife experiments, and MecaCell is one being built for developmental experiments. Then our paper: bioreflective architectures generalise and combine the concepts of computational reflection and von Neumann’s Universal Constructor. Finally: it can be difficult to evolve heterogeneous specialist cooperative behaviours unless the specialists can recognise their partners.
The afternoon keynote was Francisco Santos talking about “Climate Change Governance, Cooperation and Self-organization”. This was an application of game theory in large finite populations to cooperation and coordination problems. He showed some counter-intuitive results: it is easier to get global cooperation via small groups initially, and it is better for small groups to invoke actions than to leave enforcement to a global organisation. In the end, the results bear out the old adage “think globally, act locally”, and show that there is yet hope for cooperation.
I then went to the technical session on “Living technology and Human-Computer interaction”. First there were a couple of talks on the EvoBot, a modular liquid handling robot, covering both the hardware and the software. This system, being built as part of the EvoBliss EU project, is an open source design that costs two orders of magnitude less than current laboratory systems. It allows programmable chemical experiments, with precise, repetitive, complex operations. Real-time droplet identification allows complex operations to be specified. For example, it can be programmed to apply a droplet, wait for the droplet to start moving with a certain speed, then suck the droplet back up again; or to apply a chemical once an array of droplets has clustered. Next we learned about developing a computational agent to “play” a cooperative herding game with novice humans: some of the humans learned how to play from the agent, some never did, and most thought they were playing with another person. Finally we heard about a NetLogo-based approach to teaching school children about complex systems principles.
My brain is full, and there is still a day to go!
Wednesday, 6 July 2016
ALife day 3
Labels:
Cancun,
cognition,
complexity,
conference,
evolution,
Mexico,
philosophy
Day 3 of ALife, and the great science continues!
First up was the keynote by Jorge M. Pacheco, on “Linking Individual to Collective Behavior in Complex Adaptive Networks”. A nice discussion of investigating iterated prisoner-dilemma cooperation-defection situations where the agents are distributed over social networks. Agents can change their behaviours (from Cooperator to Defector or vice versa, by copying the strategy of their most successful neighbours), or change their network (both Cs and Ds want to drop connections to Ds, but Ds want to make connections to Cs). Interestingly, it seems that changing the network (changing who your friends are) is more effective than changing your strategy (changing what you do). So it’s best to isolate defectors. (Hmm.)
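A toy version of the imitation dynamics: agents on a fixed network play a one-shot Prisoner’s Dilemma with each neighbour, then copy the strategy of whoever scored best in their neighbourhood (including themselves). The payoff values are the standard T=5, R=3, P=1, S=0; the ring network and everything else is invented for illustration, and the rewiring half of the talk is not modelled here.

```python
import random

random.seed(3)

def payoff(me, other):
    # Prisoner's Dilemma payoffs for the focal player
    return {("C", "C"): 3, ("C", "D"): 0,
            ("D", "C"): 5, ("D", "D"): 1}[(me, other)]

n = 20
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring network
strategy = {i: random.choice("CD") for i in range(n)}

for _ in range(50):
    score = {i: sum(payoff(strategy[i], strategy[j]) for j in neighbours[i])
             for i in range(n)}
    # Synchronous imitation: adopt the best-scoring local strategy
    strategy = {i: strategy[max(neighbours[i] + [i], key=score.get)]
                for i in range(n)}
```

The talk’s point was that adding a second adaptation channel, rewiring `neighbours` to drop links to defectors, changes the outcome more than tinkering with the strategy-update rule does.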
Next was the morphology session. Although several of the speakers admitted their work wasn’t truly about morphology, all the talks were interesting. We heard about difficulties of co-evolving morphology and body controllers. Morphology seems to converge quickly, because if it changes, the co-evolving brain can’t adapt fast enough. The speaker had some suggestions on how to improve the situation. Next we heard about evolving soft body robots, exploiting “passive dynamics” and using this capability as a sort of embodied “computational reservoir”. Then there was an examination of how the shape of space (a “donut” shaped torus v a “bicycle tyre” shaped torus) affects the iterated prisoner’s dilemma: donuts are better. Then finally there was a nice talk about co-evolving predator field of view and prey “swarminess”, with interesting Red Queen style oscillations: prey evolve to swarm to confuse the predator, which evolves a narrower field of view to avoid confusion, so the prey then evolve to scatter to hide from the focussed predator, which evolves a wider field of view, and so on.
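The Red Queen oscillation from the last talk has the flavour of a pursuit dynamic: each trait’s best response moves the other trait’s target, so neither settles. A deliberately crude sketch, with both traits and both update rules invented for illustration:

```python
# f: predator focus (narrow field of view), s: prey swarminess, both in [0, 1]
f, s = 0.2, 0.8
lr = 0.1
history = []
for _ in range(200):
    f = min(1.0, max(0.0, f + lr * (s - 0.5)))  # focus pays off against swarms
    s = min(1.0, max(0.0, s + lr * (0.5 - f)))  # scattering pays off against focus
    history.append((f, s))
```

The coupled updates chase each other around the midpoint rather than converging, mirroring the evolve-and-counter-evolve cycling described in the talk.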
The second keynote of the day was Mark Bickhard, talking on “Cognition and the Brain”. In a brilliant talk, he covered what it means to be an anticipatory system (having a set of possible future actions to choose from), and how that can allow representation to be true or false (to see if a representation is true, wait and see what the future brings). The talk wove together this philosophy with details about microstructures in the brain, particularly the possible role of glial cells as large-scale, slow processes that modify the attractor landscape of the brain to influence smaller scale, faster neural processes. The whole approach sits within a process philosophy, which permits the emergence that duality is designed to make impossible. I now want to go away and simulate the nested oscillator modulatable resonant architecture he speculated might underlie these processes.
The final session of the day was on computational biology, with a range of talks covering the self-organisation of badger latrines, chopping the tails off tadpoles, making C. elegans models that swim correctly, a multiscale simulation of E. coli (from molecular to Petri dish scales), and experiments on the evolution of genetic networks. That’s quite a varied bunch!
Tuesday, 5 July 2016
ALife day 2
Labels:
Cancun,
conference,
evolution,
Mexico,
philosophy,
research,
systems
Day 2 of ALife, and more great science!
First was Ezequiel Di Paolo’s keynote on “Gilbert Simondon and the enactive conception of life and mind”. There are a lot of French philosophers whose work is relevant to complexity and systems and ALife who I have heard of but not read (Gilles Deleuze, Maurice Merleau-Ponty, Edgar Morin), but Gilbert Simondon was one I hadn’t heard of (and so presumably haven’t read, either). As I understand the presentation, Simondon was critiquing the duality of matter and form: form is intrinsically embodied in the rich physical material. Examples range from bricks and turbines to life itself. Such systems increasingly exploit the rich properties of their material embodiment. There is a process of individuation, or taking of form, but it is the process, not the end-point, that is key. There is a pre-individual state full of potentialities, which are exploited during the individuation process, and the fully individuated system has no more potentialities, and so is “dead”. Matter is more than mere stuff: it includes potentialities, transformations, operations, and changes. There was a lot more in the talk, all fascinating. I may have to start reading this particular French philosopher!
I didn’t get to go to any of the morning technical sessions, as I was in a committee meeting. So the next thing, after perusing the posters in the reception area, was the next keynote, Alexandra Penn’s, on “Artificial Life and Society: Philosophies and Tools for Experiencing, Interacting with and Managing Real World Complex Adaptive Systems”. Alex described how her group was using a participative approach to systems modelling, including diverse stakeholders. The models built are deliberately rough and ready, partly because there is no hard data, but partly to make it easier for the stakeholders to challenge them. Modelling allows for the discovery of relevant factors, and for diverse stakeholders to appreciate each other’s concerns. Analysis of the resulting models then allows the exploration of scenarios and identification of system levers. Since the system will respond to manipulations, there needs to be a continual modelling and monitoring process. The metaphor is system steering, rather than system control.
Then it was off to the snappily titled “Synthesising Concepts from Biology and Computer Science” (SCBCS) workshop. This was a bunch of short presentations about potentially suitable areas for writing review articles: diversity, fitness, open-ended evolution, self-modification, plasticity, modularity, recombination, and co-evolution. Each of these areas is important in computer science (“natural” computing) and in biology. What could each discipline learn from the other? I find review articles that engage in deep synthesis some of the most valuable publications: they bring together small patches of research, and actually build the subject area. A good review is not a mere “annotated bibliography”: it is a constructive part of science itself. I am not qualified to contribute to many of these proposed articles, but I certainly want to read all of them! If only half the proposals were to be taken forward, it would be an extremely valuable contribution to the relevant disciplines.
So, another day, another ton of thoughts to process.
First was Ezequiel Di Paolo’s keynote on “Gilbert Simondon and the enactive conception of life and mind”. There are a lot of French philosophers whose work is relevant to complexity and systems and ALife whom I have heard of but not read (Gilles Deleuze, Maurice Merleau-Ponty, Edgar Morin), but Gilbert Simondon was one I hadn’t even heard of (and so presumably haven’t read, either). As I understand the presentation, Simondon was critiquing the duality of matter and form: form is intrinsically embodied in the rich physical material. Examples range from bricks and turbines to life itself. Such systems increasingly exploit the rich properties of their material embodiment. There is a process of individuation, or taking of form, but it is the process, not the end-point, that is key. There is a pre-individual state full of potentialities, which are exploited during the individuation process; the fully individuated system has no more potentialities, and so is “dead”. Matter is more than mere stuff: it includes potentialities, transformations, operations, and changes. There was a lot more in the talk, all fascinating. I may have to start reading this particular French philosopher!
I didn’t get to go to any of the morning technical sessions, as I was in a committee meeting. So the next thing, after perusing the posters in the reception area, was the next keynote, Alexandra Penn’s, on “Artificial Life and Society: Philosophies and Tools for Experiencing, Interacting with and Managing Real World Complex Adaptive Systems”. Alex described how her group was using a participative approach to systems modelling, including diverse stakeholders. The models built are deliberately rough and ready, partly because there is no hard data, but partly to make it easier for the stakeholders to challenge them. Modelling allows for the discovery of relevant factors, and for diverse stakeholders to appreciate each other’s concerns. Analysis of the resulting models then allows the exploration of scenarios and identification of system levers. Since the system will respond to manipulations, there needs to be a continual modelling and monitoring process. The metaphor is system steering, rather than system control.
Then it was off to the snappily titled “Synthesising Concepts from Biology and Computer Science” (SCBCS) workshop. This was a bunch of short presentations about potentially suitable areas for writing review articles: diversity, fitness, open-ended evolution, self-modification, plasticity, modularity, recombination, and co-evolution. Each of these areas is important in computer science (“natural” computing) and in biology. What could each discipline learn from the other? I find review articles that engage in deep synthesis some of the most valuable publications: they bring together small patches of research, and actually build the subject area. A good review is not a mere “annotated bibliography”: it is a constructive part of science itself. I am not qualified to contribute to many of these proposed articles, but I certainly want to read all of them! Even if only half of the proposals were taken forward, it would be an extremely valuable contribution to the relevant disciplines.
So, another day, another ton of thoughts to process.
Monday, 4 July 2016
Open Ended Evolution workshop
Labels:
conference,
research
Today at ALife was OEE2 – the second Open Ended Evolution workshop. There were 12 presentations followed by a discussion.
I got to go first (which is always the best, as I can then relax and concentrate on the rest of the talks). I was presenting our recent work on defining open-endedness by giving a definition of novelty in terms of the model and meta-model of the system being observed. It turns out that there may be a connection between this definition and some definitions of creativity. I’ll have to chase up some more references.
There were some interesting themes running through the day. I’ll pick out just a few that resonated particularly well with me. One theme was infinite (or maybe better, unbounded) scalability: make sure there are no limits designed into the system, because they will, sooner or later, stop open-endedness. There were also several talks on using discrete dynamical systems as the basis for either theoretical or experimental investigations. One of these, given by Alyssa Adams, used coupled elementary cellular automata to investigate how using an “environmental” CA (E) to change the “organism” CA (O) rules could result in O displaying certain kinds of novel behaviour. Particularly interesting was that just allowing E to override O’s rules, or overriding them randomly, wasn’t that interesting, but allowing a combination of E’s and O’s current states to override the rules gave the interesting novel behaviours. A very elegant idea. I think it would be interesting to allow the higher level rule that changes the CA rule also to be changeable (maybe it already is; I need to read the paper). And connecting to the unbounded scalability theme: maybe there should also be a rule allowing the size of the CA to change, so that the size of its state space can grow unboundedly (although that would then make the character of the current analysis somewhat different).
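The coupled-CA idea can be caricatured in a few lines of code. To be clear, this is only a toy sketch of the general mechanism (an elementary CA whose 8-bit rule is rewritten each step as a function of both CAs' current states), not the actual model from Alyssa's paper; the particular coupling function here is my own invention, purely for illustration.

```python
import random

def eca_step(state, rule):
    """One synchronous update of an elementary CA, periodic boundaries.
    Each cell's new value is the bit of `rule` indexed by the 3-cell
    neighbourhood (left*4 + centre*2 + right)."""
    n = len(state)
    return [(rule >> (state[(i - 1) % n] * 4 +
                      state[i] * 2 +
                      state[(i + 1) % n])) & 1
            for i in range(n)]

def coupled_run(o_rule, e_rule, n=16, steps=50, seed=0):
    """Run an 'organism' CA O alongside an 'environment' CA E, rewriting
    O's rule each step from the joint state of E and O (a toy stand-in
    for the state-dependent coupling described in the talk).
    Returns the set of rules O passed through."""
    rng = random.Random(seed)
    o = [rng.randint(0, 1) for _ in range(n)]
    e = [rng.randint(0, 1) for _ in range(n)]
    rules_seen = {o_rule}
    for _ in range(steps):
        o = eca_step(o, o_rule)
        e = eca_step(e, e_rule)
        # Toy coupling: fold the XOR of E's and O's states into a new 8-bit rule.
        bits = sum(b << (i % 8) for i, b in enumerate(x ^ y for x, y in zip(e, o)))
        o_rule = bits & 0xFF
        rules_seen.add(o_rule)
    return rules_seen
```

The point of the sketch is just that the rule-changing rule is itself a fixed design choice; making that level changeable too is exactly the open question above.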
Nathaniel Virgo gave a talk about a modification to Fontana and Buss’ lambda-calculus artificial chemistry. The original is irreversible, and collapses to a boring system unless effort is made to exclude certain kinds of reactions. Nathaniel added an extra kind of reaction, making the overall system reversible (in that it doesn’t lose information about the kinds of particles originally present). This seems to have solved the boringness problem in a straightforward manner. Not only does the new system build interesting reaction networks, it builds interestingly different ones each run, rather than having some average behaviour. The intuition is that the reversibility allows the system to “backtrack” if it gets stuck somewhere, and explore a different path. Presumably the fact that the state space is so huge means that the new path can be significantly different in character from the old path. To start with, any approach that does not destroy information might be fine, but eventually, in order to build in an analogue of thermodynamics and Gibbs free energy, reversibility will be required.
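The key property, that no information about the original particle types is lost, is easy to demonstrate in a toy setting. The sketch below is emphatically not the lambda-calculus chemistry from the talk; it is a minimal pair chemistry of my own devising, in which every "combine" reaction has a matching "split", so the multiset of primitive particle types is an invariant of the dynamics.

```python
import random
from collections import Counter

def step(pool, rng):
    """One reaction in a toy reversible chemistry: either split a
    composite back into its two parts, or combine two particles into a
    composite (a pair). Every forward reaction has a matching reverse."""
    composites = [i for i, p in enumerate(pool) if isinstance(p, tuple)]
    if composites and rng.random() < 0.5:
        a, b = pool.pop(rng.choice(composites))   # reverse: split
        pool += [a, b]
    elif len(pool) >= 2:
        i, j = sorted(rng.sample(range(len(pool)), 2), reverse=True)
        a, b = pool.pop(i), pool.pop(j)           # forward: combine
        pool.append((a, b))
    return pool

def leaves(pool):
    """Multiset of primitive particle types. Because no reaction
    destroys information, this is conserved over any number of steps."""
    counts, stack = Counter(), list(pool)
    while stack:
        p = stack.pop()
        if isinstance(p, tuple):
            stack.extend(p)
        else:
            counts[p] += 1
    return counts
```

Here `leaves()` is exactly the conserved quantity: however tangled the composites become, splitting everything back down recovers the original particles, which is the sense of "reversible" used above.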
The presentations were rounded off with a thought-provoking one by Ken Stanley. He was arguing that OEE systems need to be “interesting”, and that “interestingness” is subjective. As an example, he asked us to consider a thought experiment. Suppose there were some agreed-upon complexity metric C that we could use to determine how complex a particular string is. Then do a random, or exhaustive, search over strings of length M to find the best one. Then repeat for strings of length M+1, M+2, …. The result will be a sequence of the best strings in ascending order of length: a form of open-endedness under some definitions. But would that sequence be interesting? He claimed not, and that there also needs to be some narrative about the process of discovering those strings to make the result interesting (to us). Some in the audience disagreed: they would be interested in those strings! Let’s think of something like a complex work of art. We might find it intrinsically interesting, knowing nothing about how it was produced. We might find it more interesting if we also knew the history: but that’s a “bigger” system, art work plus historical context, so it has the opportunity to be more interesting. Additionally, there’s the interestingness of the metric itself: how was that decided upon? Is it even computable? Then there’s the sheer scale of the problem: exhaustive search quickly runs afoul of combinatorial explosion; the process would never actually work. The narrative of how the strings were found in an evolutionary or other manner is interesting partly because it tells us how the combinatorial explosion of exhaustive search was avoided in this case.
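The thought experiment is easy to sketch in code, and the sketch itself makes the combinatorial-explosion point. Here compressed length stands in for the hypothetical agreed-upon metric C, which it certainly is not; choosing a real C is exactly the contentious part.

```python
import zlib
from itertools import product

def toy_complexity(s: str) -> int:
    """A stand-in for the agreed-upon metric C: length of the string
    after zlib compression. Purely illustrative."""
    return len(zlib.compress(s.encode()))

def best_string(length: int, alphabet: str = "ab") -> str:
    """Exhaustive search over all strings of a given length for the one
    maximising the toy metric. There are len(alphabet)**length
    candidates, so this explodes almost immediately: even a binary
    alphabet gives 2**50 candidates at length 50."""
    return max((''.join(p) for p in product(alphabet, repeat=length)),
               key=toy_complexity)
```

Running `best_string` for lengths 1, 2, 3, … produces exactly the sequence in the thought experiment, and running it much past length 30 produces exactly the problem: the search never finishes, which is why the narrative of how an evolutionary process sidesteps the exhaustive search is part of what makes the result interesting.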
The day finished off with some summary and discussion, and we all went away with our heads buzzing with new ideas, and old ideas looked at in a new way.
Sunday, 3 July 2016
out in the heat
I applied a sunscreen / insect repellent combo, and went for a stroll. I managed half an hour outside in the heat (despite staying in the shade where possible) before I needed to return to the inside coolth.
I saw some interesting birds, that looked like slim blackbirds with longer legs, and a long triangular tail:
my Google-fu tells me this is probably a Great-tailed grackle
Also, what looked like some kind of parasite growing on a palm tree:
parasite? with fruits? growing half way up a palm tree
That was enough heat, and I returned to the hotel, and went up onto the roof, where there is a better view than from my window:
view from the roof; turquoise and blue sea
After a chat in the hotel with some other conference goers, it was time to venture out again, for lunch. Off to Mocambo, a Mexican seafood restaurant, with an "indoors" that was actually open, but under a thatched roof, with views over the sea, and an accompanying welcome sea breeze. Delicious food, a great view, plus pelicans flying by!
view from a hotel window
Labels:
Cancun,
conference,
Gatwick,
Mexico
I arrived in Cancun, Mexico, yesterday evening, ready for the ALife conference starting on Monday. There were several ALifers on the flight...
The flight landed at 6pm local time, only half an hour late. (I say "only" because we took off an hour late from Gatwick, despite everyone being boarded on time.) Then there was a long queue at immigration. Then outside (bam! the heat!) to find the shuttle bus. "It will be the guy wearing an orange tabard, holding a sign." Am I grateful for that, as there were a zillion guys with signs, but only one with an orange tabard. Then off to the hotel, in a gloriously air-conditioned shuttle bus.
I was a bit zonked by the 10 hour flight and 6 hour time difference, but 8 hours sleep plus breakfast plus coffee, and I'm feeling human again. Human enough to take the traditional photos from the window.
view from the window (at an angle)
view from the end of the corridor
Now off to slather myself in sunscreen/insect repellent, then explore the beach/air-conditioned restaurants.
Saturday, 2 July 2016
my latest new toy
I have the teal keyboard. It is teal, not blue. Phone camera colours lie!
I started off with a little netbook with Evernote. That worked well, and I later moved on to a more portable-friendly tablet, still with Evernote. That also worked very well for meetings, but I discovered that when I was away for a stretch, and needed to do more than just take notes, something more powerful would be helpful.
I was pondering what to get next when I had a meeting with a colleague who was using his Surface. The meeting quickly disintegrated into a discussion of its pros (a laptop (so all the utils I need, including Evernote for Windows, which is better than Evernote for Android), with a full keyboard (including backlit keys, useful for note taking in darkened auditoria), a touchscreen (with beautiful resolution), and a smart stylus that you can use to take handwritten notes (useful for maths and figures) and that magnetically clips to the device (yes, I'm that shallow), and...) and the cons (a bit heavier than a tablet, and... not much else I've noticed yet). So I got one.
I'm just off to a conference, so it will get a good workout as a day-note taker, and a general purpose workhorse. Time will tell.
Friday, 1 July 2016
stairs v bananas
Labels:
robots
Stairs no longer make you safe from Daleks, or Boston Dynamics robots. However, there’s always banana peel…
I for one welcome our new Kermit-headed overlords.
For all my social networking posts, see my Google+ page