A simple introductory explanation of what complexity theory is: this wonderful world where the Newtonian view of the world fails and where large numbers of heterogeneous entities create complex systems that need new mathematical tools to process and understand. It is only 5 minutes long but gets to the main points of the field.
Systems that are Sensitive to Initial Conditions
Systems that are sensitive to initial conditions are those whose trajectories diverge in ways that are not predictable. One such system is the chaotic Lorenz attractor, but even simpler systems can show dependence on initial conditions.
In this example the diverging behaviour is produced by the totally deterministic algorithm "two times x, modulo 1" (2x % 1), which you can even try out on your calculator:
1. Take a random number between 0 and 1 (e.g. 0.823).
2. Multiply that number by 2 (e.g. 2 × 0.823 = 1.646).
3. Calculate this value modulo 1 (e.g. 1.646 % 1 = 0.646); modulo 1 corresponds here to setting the integer part of the number to 0.
4. Take the result obtained in 3 and repeat from 2.
Now start with a second number very similar to the first (e.g. 0.825) and repeat the process (in the animation the initial difference is just 0.01% between the two values).
You'll see that for the first few iterations the two trajectories stay close, but then they suddenly jump all over. After some iterations you can't predict the behavior of the second trajectory even if you know the first one. This clearly shows that the system is sensitive to initial conditions. A very simple and strange system indeed.
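The steps above can be sketched in a few lines of Python (a minimal illustration, not from the original post; note that in binary floating point the iteration eventually collapses to 0, so only the first dozen or so steps are meaningful):

```python
# The iteration described above, x -> (2 * x) % 1 (the "doubling map"),
# run from two nearby starting points. The gap between the trajectories
# roughly doubles at every step until it wraps around the unit interval.

def doubling_map(x0, steps):
    """Iterate x -> (2 * x) % 1 and return the whole trajectory."""
    traj = [x0]
    for _ in range(steps):
        traj.append((2 * traj[-1]) % 1)
    return traj

a = doubling_map(0.823, 10)  # first starting point
b = doubling_map(0.825, 10)  # nearby starting point

for i, (x, y) in enumerate(zip(a, b)):
    print(f"step {i:2d}: {x:.6f}  {y:.6f}  gap {abs(x - y):.6f}")
```

By step 7 or 8 the gap has grown to the order of the whole unit interval, even though the starting points differed by only 0.002.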
Why do dogs shake water off their fur?
Planet of the Apes: Furry mammals evolved a tuned spin dry: “When a wet dog shakes himself dry, he does something amazing. He hits just the right rhythm to maximize the drying effect with minimal effort.”
Naturally, it is all a question of balance between effort spent and water removed. Or, like almost everything in life, a matter of optimization and evolution.
Some memory and brain stories this week.
Last week brought some interesting stories about the brain's secrets: how it works and how it stores information:
What are memories made of?
our memories are not inert packets of data and they don’t remain constant. Even though every memory feels like an honest representation, that sense of authenticity is the biggest lie of all. — in Lapidarium notes
The Geometric Structure of the Brain Fiber Pathways
The cerebral fiber pathways formed a rectilinear three-dimensional grid continuous with the three principal axes of development. Cortico-cortical pathways formed parallel sheets of interwoven paths in the longitudinal and medio-lateral axes, in which major pathways were local condensations. — by Wedeen et al.
Segregation and Wiring in the Brain
A mosaic of hundreds of interconnected and microscopically identifiable areas in the human cerebral cortex controls cognition, perception, and behavior. Each area covers up to 40 cm2 of the cortical surface and consists of up to 750 million nerve cells. — by Zilles and Amunts
Social Network Analysis in R, and some housekeeping
The field of Social Network Analysis is increasingly in the spotlight, scientific and otherwise. In 2010 I taught a Winter School course on Software for Social Network Analysis, in which I gave an overview of using R for network analysis. R is not only useful for social network analysis: it also serves for producing documents with graphics automatically and reproducibly, for all sorts of statistical analysis, for fast manipulation of big data, etc. In fact, R is a true workhorse that lends itself to many stages of data manipulation and analysis.
In the Social Network Analysis (SNA) field, R offers some packages worth examining. One of them is the igraph package, which has many of the functionalities needed for the study of networks: generating graphs according to given models, analysing properties, detecting communities… The igraph site itself has an online book about igraph that can help those starting out with the package. Those studying SNA for the first time can also look at Hanneman's tutorials, although in some cases they use not R but other software such as Ucinet or Pajek.
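To make the kind of measures such packages compute more concrete, here is a minimal pure-Python sketch (illustrative only; the toy edge list and helper functions are mine, not igraph's API) of two basic network quantities: node degree and the local clustering coefficient.

```python
# Two basic network measures, computed by hand on a toy undirected
# graph: degree (number of neighbours) and local clustering
# (fraction of a node's neighbour pairs that are themselves linked).
from itertools import combinations

# Toy undirected graph as an edge list (illustrative data only).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]

# Build an adjacency structure: node -> set of neighbours.
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree(node):
    """Number of neighbours of a node."""
    return len(adj[node])

def clustering(node):
    """Fraction of the node's neighbour pairs that are connected."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for x, y in combinations(nbrs, 2) if y in adj[x])
    return links / (k * (k - 1) / 2)

for node in sorted(adj):
    print(node, degree(node), round(clustering(node), 3))
```

Packages like igraph provide these measures (and far more, such as community detection) efficiently for graphs with millions of nodes; the point of the sketch is only to show what the numbers mean.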
For those just starting out with R, there are other tutorials and presentations that will help you get into the language. If you need an introduction in Portuguese, see these PDFs produced at IST, here and here.
Is Big Data Killing Theory?
In other words, we no longer need to speculate and hypothesise; we simply need to let machines lead us to the patterns, trends, and relationships in social, economic, political, and environmental relationships.
The Guardian runs an opinion story in its Datablog about how big data could end theory. I think that the reduction of big data to a simple engineering problem of how to accommodate more data, process it in real time, and monitor the services on which the analysis runs is not really near, nor will it ever come to be.
We will always need theoretical scientists, but more importantly, we need philosophers. The idea that these big data analyses could be automated and the results applied without further explanation is terrifying and Orwellian. Science, in its publish-or-perish race to the top, needs to bring some thinking back to itself, and big data will need some big thinking about what it is producing to really understand and explain what society is. If it ended up being just computer output, we'd all be in great danger, as governments would make bad decisions based on ignorance and evil corporations would game the big data analysis for profit at society's expense.
Big data is a great field to work in right now, and it will revolutionize our understanding of the society we live in, but it won't go far without someone able to interpret and analyse its outputs, even if they are the most accurate ever produced. Society is not what we made; society is what we make.
On the way to the European Complexity Conference #ECCS11
THE EUROPEAN COMPLEXITY CONFERENCE starts Monday in Vienna.
This conference, which last year took place in Lisbon and whose organization I was part of, is the largest European (perhaps worldwide) conference in the field of the complexity sciences. It covers a very broad disciplinary spectrum and brings together communities engaged in the study of complex systems, ranging from the social sciences to physics, by way of computer science, mathematics, and network analysis.
The program includes many speakers and several topics running in parallel, so it will be impossible to see everything, but I am particularly interested in the social network analysis and computer science themes. Let's see if I can catch some good sessions.
Besides what I plan to see, I am also organizing a Satellite Meeting for young researchers who are finishing their PhDs. It will be on the 14th, Wednesday, and I will certainly be somewhat tied up there, but I will try to peek at other sessions nearby.
Of course, in the middle of all this I have to find a little time to walk around and get to know the city, because being stuck in a conference the whole time is too much.
Any suggestions for what to visit in Vienna?
How to make money from an Open Science Project?
Is this possible? Being an engineer, I used to work with proprietary software to do science research, and at times I did some programming too. Every researcher ends up programming a bit, even if only in that ugly FORTRAN 77 from classes… but we all program at some level.
So, at a certain point, how can you trust the results of closed-source, proprietary software? And what if you have to review someone's work and you don't have the software that person used, because you can't afford to buy a license just for that?
This leads to open source. Open source would solve this. But it would also make open-source companies very poor. How could someone make money with open-source science software?
My opinion is that open-source companies have to be reliable, go to their customers, and offer them a full set of services and support. The person who has to review that paper could do the job alone, but without knowing the software he would have a hill to climb to learn how to work with it. Science software is so specific that this would require a long time spent in the code. Here is where support comes in, selling services.
Another thing people could do to "sell" open science is to release the software in two versions: one for single-workstation work, and another with support for multiprocessing. It's a fact that today's science requires heavily parallel processing, so this could be a solution.
Someone suggested advertising as a solution, but I don't like the idea of science software with some kind of popup or banner. It's too lame for science software, and it would demean its credibility.