Novel properties generated by interacting computational systems: A minimal model

In this draft paper, Fabio Boschetti and I [John Symons] address two questions: First, what is the smallest number of components a computational system needs in order to display genuine novelty? And second, can the novel features of such systems also exhibit novel causal powers? We’d be very grateful for any comments or criticism. The paper is available via Novel properties generated by interacting computational systems: A minimal model « Objects and Arrows.

After reading John Symons and Fabio Boschetti’s draft paper, I found myself thinking about a few things:

  • Isn’t this model, with its two machines and one IIM, similar in some way to what optimization research has done with genetic algorithms (GAs)? Or rather, aren’t genetic algorithms an example of this? If we take a careful look, a chromosome in a GA is something that maps its genome to itself, behaving like a machine, and it is subject to (usually) two external interaction operators: crossover with another chromosome to produce a new population, and mutation to randomly change some of its genes. These operators would then play the role of the interactive identity machine (IIM); see the first sketch after this list. It’s curious that GAs end up being so similar, yet have been studied under an optimization framework.
  • Another aspect that caught my attention is the problem of using an alphabet or memory positions, as in machine D. This is natural in CS, as it reflects the obvious pass-by-value vs. pass-by-reference distinction. In CS practice each has its merits, but it’s interesting to see this reflected in the paper.
  • The previous bullet reminds me of the need to build agents with operations as generic as they can be… Imagine swap(int, int). It’s probably better to have swap(Obj, Obj), as in this case you might end up with a generic operator that will allow your agents to face unknown situations, even if you only work with ints (see the second sketch below).
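First, a minimal GA sketch of the analogy, in Python. The bit-string genome and the toy OneMax fitness (count the 1-bits) are my own choices, and reading crossover and mutation as the paper’s IIM is my interpretation, not the authors’ claim:

```python
# Minimal GA sketch: the chromosome just maps its genome to behaviour
# (here, a fitness value); crossover and mutation are the *external*
# operators, which is where I see the role of the paper's IIM.
import random

def fitness(genome):
    # Toy OneMax fitness: count of 1-bits. Purely illustrative.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, length=32, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best)} / 32")
```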
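And second, a tiny sketch of the generic-operator point. Python makes this almost trivial, but the idea carries over to typed languages: write swap once over arbitrary objects rather than per concrete type (the names are illustrative):

```python
# The generic-operator point: write swap once over arbitrary objects
# instead of a per-type swap(int, int).
from typing import Tuple, TypeVar

T = TypeVar("T")

def swap(a: T, b: T) -> Tuple[T, T]:
    # The operator never inspects its payload, so it keeps working
    # when the agent meets values it has never seen before.
    return b, a

print(swap(1, 2))              # ints, as in swap(int, int)
print(swap("agent", [3, 4]))   # arbitrary objects, same operator
```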

R versus Matlab in…

R versus Matlab in <put your domain here> has been a long discussion. I end up using both, but I find that I use Matlab for simpler things and R for things where I want the highest quality in figure output, or where the extra mile of not having a finely polished IDE is compensated in the end by the results.

I agree that R’s lack of a polished IDE is annoying, but developing one takes extra time… and, as with open source in general, I think you must iterate, and often. R has become very good, stable and powerful. Many are flocking to R even without ever starting with Matlab. That’s good.

via R versus Matlab in Mathematical Psychology.

Visualizing the 2011 Middle-East Protests

This is a visualization of Wikipedia activity from December to yesterday, split into distinct time periods by color running from lighter to darker through the time period. via Visualizing the 2011 Middle-East Protests | Digital Humanities Specialist.

Are we starting to become visualization freaks? Infographics are everywhere as a way of dumbing down information to a level of public understanding. Does science need to use the same strategy political elites use when reducing society’s problems to their simplest terms so the audience can “catch up”? I’m having mixed feelings about visualization in science. It has its role, but at the same time it’s sometimes being used to prove to “civilians” that things really are a certain way, and that worries me.

Ants build cheapest networks

Supercolony trails follow mathematical Steiner tree. An interdisciplinary study of ant colonies that live in several, connected nests has revealed a natural tendency toward networks that require the minimum amount of trail. Researchers studied ‘supercolonies’ of Argentine ants with 500, 1000 or 2000 workers to identify methods for self-organising sensors, robots, computers, and autonomous cars. They put three or four nests of ants in empty, one-metre-wide circular arenas to observe how they went about connecting the nests. As with railway networks, directly connecting each nest to every other nest would allow individual ants to travel most efficiently, but required a large amount of trail to be established. Instead, the ants used central hubs in their networks – an arguably complex design for creatures that University of Sydney biologist Tanya Latty described as having “tiny brains and simple behaviours”. via Ants build cheapest networks – Networking – Technology – News – iTnews.com.au.

Ants do it better…
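A quick back-of-the-envelope check, in Python, of why hubs win. The nest coordinates are made up (three nests on an equilateral triangle), and in this symmetric case the centroid happens to be the Steiner (Fermat) point:

```python
# Compare the total trail of a complete graph (a direct trail between
# every pair of nests) against a star through a central hub.
from math import dist

nests = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]  # side length 1

# Complete graph: every nest connected directly to every other nest.
complete = sum(dist(a, b) for i, a in enumerate(nests) for b in nests[i + 1:])

# Star through the hub: one trail from each nest to the central point.
hub = tuple(sum(c) / len(nests) for c in zip(*nests))
star = sum(dist(n, hub) for n in nests)

print(f"complete graph: {complete:.3f}")  # 3.000
print(f"hub network   : {star:.3f}")      # ~1.732
```

Roughly 42% less trail, at the price of less direct routes: exactly the trade-off the article describes.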

Three-level description of the domino cellular automaton

Inspired by the approach of kinetic theory of gases, a three-level description (microscopic, mesoscopic and macroscopic) of cellular automaton is presented. To provide an analytical treatment a simple domino cellular automaton with avalanches was constructed. Formulas concerning exact relations for density, clusters, avalanches and other parameters in an equilibrium state were derived. It appears that some relations are approximately valid for deviations from the equilibrium, so an adequate Ito equation could be constructed. The equation provides the time evolution description of some variable on the macroscopic level. The results also suggest a motive for applying the procedure of constructing the Ito equation (from time series data) to natural time series. by Zbigniew Czechowski, Mariusz Białecki: [1012.5902] Three-level description of the domino cellular automaton.
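To fix the idea, here is a minimal sketch of the microscopic level as I read the abstract. The rules (a ball dropped on a random site; an empty site fills, an occupied site releases its whole cluster as an avalanche) are my reconstruction and may differ from the paper’s exact automaton:

```python
# Minimal sketch of a domino-like 1D automaton with avalanches,
# reconstructed from the abstract alone; the exact rules in Czechowski
# & Białecki's paper may differ.
import random

N = 100
lattice = [0] * N  # 0 = empty, 1 = occupied

def drop():
    """Drop one ball; return the avalanche size (0 if none)."""
    i = random.randrange(N)
    if lattice[i] == 0:
        lattice[i] = 1
        return 0
    # Find the maximal occupied cluster containing i and empty it.
    left, right = i, i
    while left > 0 and lattice[left - 1] == 1:
        left -= 1
    while right < N - 1 and lattice[right + 1] == 1:
        right += 1
    for j in range(left, right + 1):
        lattice[j] = 0
    return right - left + 1

sizes = [s for s in (drop() for _ in range(10_000)) if s > 0]
print(f"density {sum(lattice) / N:.2f}, "
      f"mean avalanche size {sum(sizes) / len(sizes):.2f}")
```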

The good thing about having a swipe book for your PhD is that some ideas come back when you need them most… Hm… I feel something forming inside my head now… brb.

The evolution of natural selection

The team therefore succeeded in producing the first model of a physically embodied evolutionary algorithm capable of open-ended evolution. Although the origin of nucleotides still holds many mysteries, the Selfref project massively exceeded the team’s expectations, with significant insights gained and follow-up research generated.

Very interesting project…

On Science Publishing – Are we ready for post-publication peer review?

The scientific publishing industry as we know it today represents a structure of the past. It is profoundly tied to the medium of print, which is itself an artifact of a technical revolution hundreds of years old. Moreover, its routines and structures are rooted in paper as a communications and archiving technology, and its business models are based on the costs of physical distribution and review by a select few.via On Science Publishing § SEEDMAGAZINE.COM.

The traditional model of peer review before publishing was a necessity: an economic necessity, as journals were just the front end of an industry that turned trees into paper. In the digital age peer review is still necessary, but it also has to adapt. The cost of production is not tied to physical trees anymore.

Furthermore, the number of scientists trying to publish grows every year. We moved from times when one would publish 4 papers in a lifetime to publishing 4+ papers per year. This makes traditional scientific publishing totally irrelevant if it stays in the same old mold.

My belief is that it is possible to move peer review to AFTER publication, the same way we now use h-indexes to determine who is more respected or more of a “genius” (see the sketch below). Let open publishing be the norm, and then let these papers iterate under scientific scrutiny. Journals will then play the role of curators of certain collections, put together from an existing pool of scientific papers. Those journals that want to publish better papers will have to hire reviewers (something that doesn’t happen today) and will earn scientific respect.
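Since I’m leaning on the h-index, here is its definition in code; the citation counts are invented for illustration:

```python
# h-index: the largest h such that the author has h papers with at
# least h citations each.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```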

The system will be based on reputation, not on the feudalistic rulings of pre-publication peer review. I think the internet will help change many mentalities, but more importantly it will be necessary to change the economics of the process. Going from a business model based on shuffling paper around to selling intelligent, insightful and relevant science is publishers’ major challenge. Let’s hope they don’t screw it up… again.

New experiments in embodied evolutionary swarm robotics

Thus, not only do the real robots’ controllers evolve, but their internal models of themselves and their world co-evolve. This, we believe, is the real advantage of embodied evolution. via Alan Winfield’s Web Log: New experiments in embodied evolutionary swarm robotics.

Embodied evolutionary swarm robotics might be on to something… and for some reason this way of doing things is already percolating into science fiction… :) BSG and Caprica fans probably remember Zoe talking to the lab guy about making the robot co-evolve by interacting with the real world… :)
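A hand-wavy sketch of the co-evolution idea from the quote, in Python. Everything here (the representations, the fitness that rewards agreement between behaviour and self-model) is invented for illustration and is not Winfield’s actual setup:

```python
# Toy co-evolution loop: each robot carries a controller AND an
# internal self-model, and both are inherited and mutated together.
import random

def mutate(params, sigma=0.1):
    return [p + random.gauss(0, sigma) for p in params]

def fitness(robot):
    # Made-up stand-in for embodied evaluation: reward robots whose
    # behaviour (controller) matches their self-model's prediction.
    return -sum((c - m) ** 2
                for c, m in zip(robot["controller"], robot["self_model"]))

population = [{"controller": [random.random() for _ in range(4)],
               "self_model": [random.random() for _ in range(4)]}
              for _ in range(10)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    # Children inherit and mutate BOTH genomes: controller and model
    # co-evolve, as in the quote.
    population = survivors + [{"controller": mutate(r["controller"]),
                               "self_model": mutate(r["self_model"])}
                              for r in survivors]

best = max(population, key=fitness)
print(f"best mismatch: {-fitness(best):.4f}")  # shrinks over generations
```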