Warning

Sometimes the only way to get a development tool installed is to compile it from source. In fact, many Open Source projects will assume this is how you intend to install. Some nice people out there might actually compile it for you and make the binaries available, but unless you have 100% trust in whoever is supplying these, compiling from source might be the best route. Perl, my Swiss Army Knife of choice, has many contributors offering modules that go through an elaborate compile/configure/test routine. Such installation processes can take a few minutes to complete, which is often a good excuse to go get a coffee.

The build scripts accompanying such from-source resources often include liberal dollops of commentary that appears briefly on screen before scrolling out of view. Since these are from tried-and-tested suppliers, we expect such messages to be generally reassuring. After all, if there were problems then surely this material would not have been recommended for me to use.

During development it is normal for the developer to be verbose in this commentary, error messages in particular, since such a message may be the only one you see before the whole build grinds to a halt, and you’ll need a lot of precise information to help solve the problem that caused the error.

Warnings, on the other hand, don’t slam on the brakes. They just let you plough on through, hopefully leaving the developer with a hint regarding some possible problem that just might need to be examined before letting this material loose on an unsuspecting world. Of course, tests that are performed later in the build might confirm that the earlier warning was not really a problem after all. Good. One warning that can be ignored.

Usually what happens is the developer lets the warning stand, knowing that it’s not really a problem. A warning about loss of accuracy when assigning a floating point number to a data type with fewer decimal places is not a problem when the developer knows that the data in question is only accurate to one decimal place anyway. A warning about an API call being an unofficial internal method is no bother to a developer who is in fact an insider responsible for the API. Yes, the parameter’s documentation is missing, but it’s already explained in detail in the preamble. Sure, that method is deprecated, but there are solid reasons for using it in this case. Possible fall-through to default case? But of course, I intended it that way!

And on it goes. Warning after warning. Each left in place with not even a cursory attempt to mollify the compiler. In most cases there are simple actions that can be taken, such as a small code adjustment or (at worst) inserting a directive to tell the compiler not to complain about it. (Such adjustments should always be accompanied by developer comments to justify them.)
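To make that concrete, here is a minimal, hypothetical C example of the fall-through case mentioned above. The attribute is GCC/Clang syntax; the point is simply that one small annotation and one justifying comment keep this warning out of every downstream build log.

```c
#include <stdio.h>

/* Hypothetical example: the build emits "this statement may fall through",
 * but here the fall-through is deliberate.                                 */
static void report(int severity)
{
    switch (severity) {
    case 2:
        fprintf(stderr, "escalating\n");
        /* deliberate fall-through: a severity-2 event is also logged */
        __attribute__((fallthrough));   /* GCC/Clang: suppresses the warning */
    case 1:
        fprintf(stderr, "logged\n");
        break;
    default:
        break;
    }
}

int main(void)
{
    report(2);
    return 0;
}
```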

During development, a newly generated warning could get lost in a sea of existing warnings and thus lead to a later bug that could easily have been avoided.

Finally, the material lands in my lap. Or more correctly, it is cloned to my system from the master repository. To install it, I must run the build. The developers have already done their tests and the results show that all those warnings are not a problem. When building on my system, if there are problems specific to my context that warrant a warning, then I want to see it. Being told that a certain font is missing from my computer and may lead to garbled messages is the kind of warning that should attract my attention, though if the message is in Japanese then the font is the least of my problems. Sadly, instead of only seeing warnings that might be of use to me, as the user, I invariably get hundreds, perhaps thousands, of warnings that only make sense to the developer.

Why am I mentioning this now? Today I cloned and built Netbeans 10 from source. It’s a massive project with hundreds of contributing developers. The build process took nearly an hour. I compiled it from within Netbeans 8.2. The build log contains over 22,000 warnings (based on a quick grep and count). I tried skimming the log to see if any of these warnings might need my attention. Checking over 22,000? No chance. I gave up.

Warning: number of warnings has exceeded user tolerance.

Turmoil

/ˈtəːmɔɪl/ A state of confusion, disturbance or uncertainty. A very apt word, not for the word games I was playing over the holidays, but for everything else that’s going on around us. Facing in one direction I find my British cousins, friends, co-workers, or at least neighbours, convulsed in the throes of Brexit. The consequences of this poorly formulated plan to extract the United Kingdom of Great Britain and Northern Ireland from the European Union project are vague at best and potentially disastrous at worst, and it seems that no matter what happens, roughly half of the UK citizens who express opinions on the matter will be disappointed. It’s an unhappy state of affairs, and not likely to get better any time soon.

If I do a 180 and look towards that other horizon I discover another world of chaos in the United States of America, that amazing place mostly located in the northern part of the American continent and comprising such a wonderful mix of people of all kinds, colours, creeds and outlook, all in perfectly balanced disagreement. As I write this, a sizeable chunk of their government has pulled down the shutters while various political factions argue over who is to blame and where their priorities lie.

The world abounds with instability. East, West, North and South. A lot of it is political. A lot is economic. It dredges up historical conflicts and gives them new, unwelcome, impetus. Like motes in a simmering pot, we are having our own personal Brownian motion experience, not knowing which way we’ll be going from one moment to the next.

While all this is going on, there’s another pot boiling and it’s also world-wide. In fact, it is the planet itself. All of the evidence is clear, at least to those who accept the veracity of such evidence, that our pale blue dot is having a bit of skin trouble. Skin? Well, the troposphere (where the weather happens) is about 12km high on a planet that is a bit over 12,000km in diameter, so what we consider to be the air around us is a mere 1/1000th of the planet’s diameter, which I liken to a quarter-millimetre skin of paint on a basketball. Just picture that and contemplate how fragile our climate must be.

It’s no wonder that businesses, large and small, appear to be on a roller-coaster. Quick swings up as sudden opportunities arise from other people’s misfortune, and equally steep falls as those same misfortunes rebound in this hyper-connected world. Tariffs and barriers appear with alarming frequency, yet the negative consequences only seem to encourage more of the same. Critical thinking is in short supply.

Those with oodles of money in their pockets make bold claims about planning to leave the planet and settle on Mars, and while most of us laugh at such proclamations there’s an increasing number of people who genuinely wonder if this isn’t such a bad idea.

But…

There’s always a “but”. Usually to burst someone’s balloon, though in this case to point out that all is not necessarily as gloomy as the picture above would suggest.

While the reports surrounding Brexit might give the impression that the British have gone stark raving mad, there’s plenty of contrary evidence that many of HRH’s subjects have taken the opportunity of the past two years to be more informed and do some thinking. The implications of acting upon the advisory referendum of 2016 are now more clear, and an enlightened majority appear to be pressing hard to act upon this new-found knowledge to avoid the shambles we’re increasingly referring to as the “no deal exit”. It remains to be seen if this emerging momentum can sway the machinations of those who govern, and do so in time, but at least there is hope.

Meanwhile, shifts in the balance of power (or at least influence) in the American continent, the USA in particular but not exclusively, could see completely new dynamics at play over the next 2+ years which hopefully will lead to better relations with their neighbours and easing of trade tensions with just about everywhere.

The churning in the world of commerce is ever-present, and when one venture flounders another launches. Even the recent alarm over the faltering fortunes of Apple could be no more than growing pains as it morphs into a predominantly services-oriented company. It’s also inevitable that the nature of work itself needs a big re-think, especially when so many “traditional” jobs are being replaced by automation and artificial stupidity. (Calling it “intelligence” is still a bit of a stretch for me.) Already we can see many universities proclaiming that they are preparing young people for careers that don’t even exist yet. No negative thoughts there for sure.

As for the impending demise of the planet, even that isn’t all doom and gloom. While it may seem to some that the faltering Paris Agreement and related initiatives are too late, the focus on big-picture carbon emissions (mainly oil and coal) may have missed some tricks: food, education and fridges. The Drawdown Project examines all the ongoing initiatives that are both inherently desirable and positively impactful on the climate, and has compelling data showing that if we combine and maximise the best of these initiatives we can indeed deal with the climate challenges. In the food area the reports highlight food waste, wider adoption of plant-rich diets and the mixing of trees with traditional pasture lands. Electricity generation includes having solar panels on roofs and on land that’s not good for food production, and there’s strong support for onshore wind turbines. Properly dealing with refrigerants when disposing of fridges and air conditioners will, according to the Kigali Accord, be the most effective industrial action to deal with climate change. But highest of all is the societal action of providing better education and family planning to millions of girls and young women who currently can only dream of such access, because the consequence of these actions would be to reduce population growth (and hence the carbon footprint), reduce maternal mortality, improve life-long health and increase productivity.

So, as we stumble into 2019 we find on the one hand that we have the potential to make a monumental mess of everything, and on the other hand we have the power to solve all our problems.

Or we could hitch a ride to Mars.

Thou art more… lovely?

[ see update below ]

Some days ago the world was aghast as it witnessed online the images of “Girl with Balloon” (framed by the artist, Banksy) self-destructing as the auctioneer’s gavel slammed down. The work of art, now reborn as a performance/conceptual work entitled “Love is in the Bin”, had been encased in a frame that the artist had fashioned years in advance for exactly this purpose, according to a video released shortly after the spectacle.

The videos and images make for amazing viewing, but to my eye they ask a few questions, as yet unanswered. My questions are:

  • Why does the shredding device in the video have far more blades than strips in the shredded picture?
  • Why are the shredded remains below the frame slightly to the left of the picture remaining in the frame?
  • Why is the leftmost shred wider than those to its right?

The misalignment of the hanging shreds might be explained if the shredded paper was closer to the wall than the paper remaining in the frame, and the photographer was taking the picture while standing to the left. However, this is unlikely. The photo appears to have been taken facing the picture head-on as there is no obvious trapezoidal distortion.

Note also that the overall width of the hanging portion matches the width of the frame’s window. If the picture had been slightly wider than the window, as would be reasonable, then surely the hanging portion would also have been wider than the window.

The extra-thick leftmost shred might be explained by the corresponding (initial) blade being aligned that way with respect to the edge of the paper. But if that is the case, then all those extra unused blades, which appear to be equally spaced (based on the shreds and the video), would extend too far to the other side of the frame. There are 27 cuts in the paper hanging below the frame, about four-fifths of the number of blades available for cutting.

The more I look, the more puzzling it becomes. But maybe this is what Banksy wanted.

(PS I know the video is showing the blades right-to-left as it’s from the back, but I’m not trying to match blades to cuts, just count them.)

Update (18 Oct):

That didn’t take long, did it? Banksy has uploaded a video that shows much more detail about the shredding mechanism, and demonstrates how it should have worked on the day. It would now appear that the partially shredded work of art is in a state that the artist had not intended.

Clearly the performance was botched. But why did it fail? There’s a clue in Banksy’s latest video. In this crop from one of the video frames, I’ve highlighted one of the strips wrapping around the roller. It’s quite possible that in the Love is in the Bin performance, one strip (presumably at the edge) wrapped around the roller until it became thick enough to stop the mechanism.

Binary left shift

The logical (and arithmetic) binary left shift effectively doubles a number by moving the digits one position to the left and inserting a zero in the least significant position. In practice the number of bits in a processor’s arithmetic register is fixed, so individual bits fall off the left side with each shift, and if you place a 1 at the least significant position then it will grow with each shift until it too falls off the left side. You can imagine this happening all at once, or just one bit at a time, though in the latter case you need to start the movement from the left and work towards the right or you’ll blitz the entire register. The rule is that old bits on the left need to move away first to make space for the small young ones on the right.
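For the literal-minded, here is a tiny C sketch of that picture, assuming an 8-bit register: a single 1 placed at the least significant position doubles with each shift until it falls off the left-hand side.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t reg = 0x01;    /* a single 1 at the least significant position */

    while (reg != 0) {
        printf("0x%02X  (%3u)\n", (unsigned)reg, (unsigned)reg);  /* value doubles each time */
        reg = (uint8_t)(reg << 1);   /* left shift: a 0 enters on the right,
                                        the old top bit falls off the left  */
    }
    printf("0x00  -- the bit has fallen off the 8-bit register\n");
    return 0;
}
```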

While this sounds like an introduction to binary mathematics, it’s actually a metaphor for a philosophical thought. My offspring lost both their grandmothers this year. Life’s register is short. Barely three generations. Four if you are lucky. There’s now nobody to our left. And the clock is ticking.

On the plus side, there are some interesting characters to the right…

 

Sneak peek

In the past few days the tech community has gone into a panic over the discovery that computers have been vulnerable to a specific kind of attack for over 20 years. Despite being present for a very long time, it would seem that nobody has exploited the vulnerability. The details are complicated, but let’s consider part of the discovery in simpler terms:

The problem is in the processor (CPU), the thing that does calculations using information in the computer’s main memory (RAM). Decades ago, CPU designers from companies like Intel, AMD and others, decided that they could speed up a computer if they could get it to do some calculations ahead of time, even if the results of those calculations were eventually ignored.

Imagine you are travelling along a road looking for a particular house, making note of the houses you have passed, when you come upon a fork in the road. You know that the house you are looking for is down one of these two branches, but which do you pick? Suppose you go left and reach the end of the road without getting to your destination; then you know you made the wrong choice and have to backtrack and go right instead. The same could be true if you went right first. But if you could walk down both roads at the same time, you would find your destination in the fastest time and could pretend that you hadn’t walked down that other road at all.

The CPU does something similar when it gets to a decision point. Go left, or go right? Actually, it proceeds down both possibilities, and when it figures out which one was the correct path it just ignores anything it was doing in the other path.

Where’s this going? Well, suppose the CPU’s paths were Good and Evil. In the good path it doesn’t do anything it shouldn’t be doing, but in the evil path it attempts to perform a calculation using some data in a place in RAM where the program is not allowed to look. We also arrange it so that even though there are two paths, only the good path will eventually be chosen. You could consider the activity of the CPU in the evil path to be like a ghost that should, in theory, have no impact on the real world.

Except it does have an impact. The CPU during its journey down the evil path was attempting to read memory from somewhere that it should not access, and during that activity it temporarily made a note of the supposedly inaccessible data that it found. The clever evil code then used that knowledge to read a value from one of two possible places in an accessible (permitted) location. We will call these places Hot and Cold. So, as the CPU was going down the evil path it used knowledge about some off-limits memory to decide whether to then look at Hot or Cold. And then, because the good path finally figured out that it was the one that should be chosen, the work that took place in the evil path is discarded.
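For readers who want something a little more concrete than Good and Evil, the shape of that evil path looks roughly like the C sketch below. It is illustrative only: the names (array1, probe, victim) are my own inventions, not the researchers’ code. A bounds check guards the branch, yet the CPU may speculatively execute the body anyway, and a secret bit read from off-limits memory decides whether the Hot or the Cold cache line gets touched.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only.  array1 and probe are ordinary, accessible
 * memory; the attacker supplies an out-of-range x so that array1[x]
 * actually lands in off-limits memory.                                  */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t probe[2 * 4096];          /* probe[0] is "Hot", probe[4096] is "Cold" */

void victim(size_t x)
{
    if (x < array1_size) {                      /* the decision point */
        /* For an out-of-range x this branch is never "really" taken, but
         * the CPU may run it speculatively anyway (the evil path).  One
         * secret bit from the off-limits read decides whether the Hot or
         * the Cold cache line gets touched.                              */
        uint8_t secret_bit = array1[x] & 1;
        volatile uint8_t tmp = probe[secret_bit * 4096];
        (void)tmp;
    }
    /* When the CPU works out that the branch should never have been taken,
     * everything the evil path did here is thrown away.                   */
}
```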

Almost.

The fact that either Hot or Cold was accessed by the now-dead evil path means that the CPU has now temporarily loaded either Hot or Cold into its cache (a small place where it keeps copies of information it thinks it might need in the immediate future). That means that if the good path proceeds to check how long it takes to read Hot and Cold, whichever one it can read fastest must be the one that had been selected by the evil path. In this way, the good path can get some details from the ghost of the evil path.
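Continuing that sketch, the timing check the good path performs might look something like this. Again it is only illustrative; it assumes the GCC/Clang __rdtscp intrinsic on x86, and probe is the same hypothetical Hot/Cold array as above.

```c
#include <stdint.h>
#include <x86intrin.h>            /* __rdtscp intrinsic (GCC/Clang on x86) */

extern uint8_t probe[2 * 4096];   /* the same hypothetical Hot/Cold array  */

/* Time a single read; a cached line comes back in far fewer cycles. */
static uint64_t time_read(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    return __rdtscp(&aux) - start;
}

/* The good path recovers the evil path's choice from the cache. */
int hot_was_touched(void)
{
    uint64_t t_hot  = time_read(&probe[0]);      /* "Hot"  */
    uint64_t t_cold = time_read(&probe[4096]);   /* "Cold" */
    return t_hot < t_cold;   /* whichever reads faster was left in the cache */
}
```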

So now, even though the evil path is always discarded, we can learn something about what it saw in the off-limits memory. There’s a good reason why some memory is off-limits to ordinary programs: that’s where important and sensitive information is kept, such as the keys and passwords to all your most valuable digital assets.

The researchers at Google were able to craft some code with the Good/Evil paths that could be used to slurp inaccessible memory at the rate of 2000 bytes per second. It wouldn’t take long for such a program to discover everything it needed to compromise your computer. No memory is off-limits to such a program. Woe is us!

I have massively simplified the details of this problem. The research work is far more involved than the narrative above. Nevertheless, at the core (no pun intended) it’s quite a simple hack.

Which makes me sceptical about the claim that it has not been exploited in the two decades that CPUs have been doing “speculative look-ahead processing”.

We await work-arounds at the software level that will mitigate these problems, but probably at a cost of slowing down our computers. Unfortunately, unlike software, you can’t update how your CPU is hardwired. You need a new CPU. Wait until the next generation of chips is on the market before buying a new computer.

Meanwhile, be prepared to watch your computer slow down after the next security patch is installed.