Force MacPorts to upgrade all outdated ports

If you are a MacPorts user, you probably upgrade the ports you have on your machine with the command

port upgrade outdated

But sometimes a package gets broken and cannot be installed, leaving other MacPorts packages that could be upgraded stuck at old versions. The only solution is to upgrade each one of them manually with port upgrade. The way I force the upgrade of all MacPorts packages is to list all outdated packages and then try to upgrade them one by one with:

# try to upgrade each outdated port individually
for l in $(port outdated | awk '{print $1}')
do
    port upgrade "$l"
done

This doesn't stop packages from failing, but it at least lets you upgrade all the MacPorts packages that can be upgraded. The process takes time and is a bit crude because it doesn't take things like dependencies into consideration, but it works, and in the end I've found that it sometimes even solves some of the broken-package problems. Give it a try.
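
If you want to know afterwards which ports still need manual attention, a slightly chattier variant of the same loop (just a sketch of the idea, using the same port outdated pipeline as above) keeps the names of the ports that fail and prints them at the end:

# upgrade every outdated port and remember the ones that fail
failed=""
for p in $(port outdated | awk '{print $1}')
do
    port upgrade "$p" || failed="$failed $p"
done
echo "ports that could not be upgraded:$failed"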

Extra: after successful upgrades, and only if you don't need the old versions that are still installed but not activated, you can uninstall the inactive ports with

port uninstall inactive

(All commands should be issued with superuser permissions, either prefixed with sudo or from a root shell started with sudo -i.)

Is Big Data Cause or Consequence?

In our world, everything is connected. Data is readily available. Humans spend their time producing digital breadcrumbs. And all this data can be collected and analysed in ways never imagined before.


When discussing big data we sometimes worry about the difficulty of managing the data being produced. The difficulty arises because the data is heterogeneous, because it appears in high volumes and with great variety. This led to the appearance of the 3V 'paradigm' (Variety, Velocity, and Volume of data).

But this corresponds to two views of our society: one technical, the other social.

The growth in data, the growth in the capacity to analyse it, and the growth of Echelon-style systems that gather communications in real time are inevitable. These are infrastructural problems: the view of the world as a technological system, a system that wants to extract more and more information and process it in a sequence of marginal gains to find and tell us (usually the verb is sell) the secret patterns we didn't know about ourselves.

The socio-cultural factors show a picture of a society changing. People produce more data because people want convenience: the convenience of pressing a button on the fridge to place an automatic order for milk at the local store of choice; a world where cars will be autonomous and safer because they talk to each other; where people want to share their photos with family. And in all these processes some (or much) metadata will be shared. The Internet of Things lets anyone take a simple electric toy, add a microcontroller, repurpose it, and connect it to the IoT. We are all happily contributing to the digitisation of everything.

The data produced has always been a problem for those who want to understand it. For them, knowledge is paramount. They can be medieval monks copying ancient manuscripts, technologically challenged when wanting to copy an entire library. Or they can be IBM deploying Apache Spark as the next big hammer after Hadoop. There is always a limit to the amount of data society can process, and there is always some limit to the amount of information one can extract from that data. There is always a limit, and although the term Big Data is new, the problem isn't.

The digitisation of the world we know is a social phenomenon growing with each new device sold. The way digitisation simplifies life produces streams of data and metadata that some consider excessive. Security can be a problem, but changes will happen, not because of technology imposed by big companies, but because society needs the changes to accommodate our new needs. We want to be connected and functional in the modern world.

This discussion leads me to think that we are dealing with two views of Big Data. One treats Big Data as the technological challenge of gathering, processing, and acting upon the world, where data is increasingly being produced and promoted as fuelling the advances of the future, and where we will need to be able to understand those streams of data. The other looks at it as a sociological phenomenon, where the growing digitisation of the planet is producing changes at an ever faster pace; there, Big Data is just a side effect of this social change. So, is Big Data cause or consequence?

This seems to be a chicken-and-egg problem. Is Big Data the analysis of more and more data, and with that analysis the cause of new knowledge that transforms society? Or are the digitisation of the world and the social changes we observe drastically changing the output of our digital signatures, with Big Data as a mere by-product?

The truth is that the two phenomena might be interconnected in a positive feedback mechanism. The more we see behavioural changes in the population, the more data will be produced and the more the technicalities of Big Data will be publicised. The more the guardians of Big Data know about the world, the more they will act on it, asking for more data. If you can imagine what a society becomes when it reads an encyclopaedia, imagine what it can produce when Watson receives enough information to decide the next fashion trend.

The present is the inventor of the future. And the question comes down to individual participatory choices (at least in democratic worlds). You cannot escape the future, but you can choose how much you want to engage with it. Progress (as a transformative force) is upon us every day. But progress is a societal concept, not a technical one. And how much you engage is always going to be relative to a moving baseline, not to a fixed reference point. The digitisation of the world is making Earth a better place (even if it will soon be too hot to live on).

Today's Big Data challenges are the same as those of the past. How society changes and how technology evolves are like the two faces of a Möbius strip: seemingly separate yet really a single surface, always affecting each other in eternal motion, always on an eternal continuum like present and future.

Reading the visual, AI, and social network worlds, in R

— Do we trust algorithms? Sure we do, even at a time when testing computer programs for correctness is impossible. But should we? Especially when policy algorithms are making decisions based on black-box models. Yes, data is making big look small every day, and black boxes (or even transparent ones) are becoming so large (almost said big there) that no one seems to understand how they work. We just rely blindly on what they output, and policy makers remove any emotional connection (and responsibility) from their decisions. Slate writes about some of the emerging dangers in law enforcement, welfare, and child protection caused by the use of these black-box algorithms.

— And if we don't trust algorithms, what about AI? Artificial INTELLIGENCE? Isn't that more and more like ALIEN INTELLIGENCE? OK, enough fun; head down to The Economist for another shot of your daily dose of FEAR. The article mostly covers what is being done in terms of Deep Learning (great stuff, by the way, and probably the best new thing after my mother-in-law's chocolate cakes).

— Up next in the show, a few more connections between AI, artificial birds, and aeroplanes. Even if the take-home topic is Turing AI and the differences between fiction and reality. Isn't that always the real problem?

— Next, revisiting one of the earliest social network studies, from the 1930s, reveals interesting new insights into the network visualisation practices behind one of the most studied works in the field. Yay, this one is not AI related!

— Finally, still speaking of visualisation: in R you can do it too… just enrol.

And now for some tech reading…

— A Linux rootkit hidden in the GPU? What? Crap. This is going to be tough. Jellyfish is just a proof of concept, but… if a group of developers can do this, imagine what a determined surveillance agency will do.

— A $9 computer that wants to replace the Raspberry Pi as the hacker platform of choice? At 6 cm × 4 cm it is smaller than the 8.5 cm × 5.6 cm of the Pi. But is this important? For deployment, the cost of the chip might make it a success, but for development maybe the Pi is going to be more practical. In any case, more options are always a good thing. Yay! Its success is going to depend on the distribution channels. If availability of this computer is low…

Why landing the SpaceX Falcon 9 rocket is a DUMB decision

Throw a pencil from a twenty-storey building and every kid knows that the probability of it landing on its head is close to zero. So why is SpaceX trying to do it?

As an engineer, I see this as one of those cases of failure by design. The rocket almost made it, and that is amazing taking into account all the things wrong with the design they want to put in place.

  • You don't land a stick on its head. It lands on its side, so the design should have taken that into account. Even if the CRS-6 booster could approach the landing vertically, they should have devised a way to lay it down. That would even allow for greater STABILITY AFTER LANDING (they almost did it by accident).
  • Using the single main engine as a fine attitude controller is dumb. You don't use a hammer to perform eye surgery, but this landing project seems to be trying just that. This solution forces them to have one main engine with many variables controlling power and direction. There are just too many uncontrolled aspects (degrees of freedom).
  • Another aspect of this solution that I don't understand is FUEL. This solution needs a LOT of it. It is extra weight at launch, because you need the fuel to bring the rocket to a standstill. All that weight could be used for cargo capacity to put more stuff in orbit (a back-of-the-envelope sketch follows this list). In the '60s they solved this problem with something like PARACHUTES. Amazing, isn't it?
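
To put that fuel argument in concrete terms, here is a back-of-the-envelope sketch using the Tsiolkovsky rocket equation; this is my own illustration of the trade-off, not a figure from SpaceX.

% Tsiolkovsky rocket equation for a single stage:
%   m_0 = mass at ignition, m_f = mass at burnout, v_e = effective exhaust velocity
\Delta v = v_e \ln\left(\frac{m_0}{m_f}\right)
% Propellant reserved for the boost-back and landing burns is still on board when
% the ascent burn ends, so it inflates m_f; to deliver the same \Delta v the stage
% must either lift off with more propellant or give up payload mass.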

I know that rocket science isn't easy. Trying to achieve this is a scientific challenge, but as a scientist you sometimes come up with dumb hypotheses and TEST them, and TEST them, and TEST them until you either prove your hypothesis or abandon it and make a new one. In this case, landing the rocket vertically has a big SHOWSTOPPER: it costs a LOT of MONEY for something a 5th grader knows is impossible to do.

Switching to Colemak – My ColemaP

During 2014 I decided to switch to the Colemak keyboard layout. I just want to give a brief overview of the process and of where I am right now. Here are a few pointers if you want to try it yourself.

ColemaP keyboard layout, a Colemak derivative

Switching keyboard layouts is not easy. Many people give up and stick to QWERTY forever. It is dumb, but at least not as dumb as the old Portuguese HCESAR keyboard layout.

COLEMAK OFFERS THE BEST OF BOTH WORLDS: fewer keys change place (for example, shortcut keys like copy and paste stay in the same location) and more words are typed on the home row, with less finger travel.

PRACTICE. Typing fast is mainly a matter of practice and finger memory. I used Amphetype and Typeracer to improve over time. In Amphetype I loaded a book downloaded from Project Gutenberg and read it by typing it through.

I MADE SOME CHANGES to the original Colemak layout that have nothing to do with letters. I changed the keys 6–0 to produce the symbols of shifted 1–5 (!@#$%), which are very useful for programming, Twitter, emails, LaTeX, etc., and I didn't want to be using shift to type them. I put 0 to the left of 1 and made 6–9 the shifted versions of 1–4. I now need to press shift to enter those numbers, but I can use the number pad on the right of the keyboard. I AM STILL UNDECIDED ABOUT keeping this half system or using the French AZERTY scheme of inverting all the numbers and shifted symbols (ColemaP figure above). I used Ukelele to remap Colemak to my version, which I call ColemaP.

DEFINE the caps lock key as a backspace, even if you are still using QWERTY. Caps lock is useless, and having the backspace on your left pinky is great. It takes some time to get used to, but not having to raise and extend your right hand to reach the backspace is a time saver.
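
On recent versions of macOS one way to do this from the terminal is with hidutil (a sketch, assuming macOS 10.12 or later, where the tool ships with the system; 0x39 is the HID usage ID for caps lock and 0x2A the one for delete/backspace):

# remap caps lock to backspace; the mapping lasts until the next reboot
hidutil property --set '{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000039,"HIDKeyboardModifierMappingDst":0x70000002A}]}'

Since the mapping does not survive a reboot, you either rerun it at login or do the same remap in whatever keyboard tool you already use (Karabiner, for example).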

REVERTING BACK to QWERTY is very easy, as you don't forget it that quickly. You lose some speed on QWERTY, but nothing too problematic. And when you go back you really feel in your fingers how slow that layout is.

10Web – Podcast Redux

They say that podcast production is dead, that there are no independent producers. That radio stations flood the iTunes playlists so that the independent podcast gets no visibility at all.

10Web Podcast

Maybe it is true, or maybe it is all just a conspiracy to please radio. All of it around a concept as simple as a coffee-shop chat (and YES, I am already paving the way for a plug for Triplo Expresso).

The authors of 10Web invited me to take part in a conversation about podcasts, those radio programmes of the digital era that can be taken everywhere and listened to everywhere, and that unfortunately still don't reach everyone so as to be truly global. The theme, of course, was THE podcast, and after spending some time trying to find a common date, the podcast was finally recorded on a chilly January night.

It was a treat to catch up and to hear again some familiar voices, like Pedro Telles, who took part in Triplo Expresso back in the days when Maria João Valente, Phil, and I tore through our Apple event nights with gin and Moscatel.

Many thanks to Ricardo and Vitor for the invitation, and to André and Pedro for the conversation.

MP3: https://archive.org/download/10web12/10web_12_metapodcast.mp3