Some days ago the world was aghast as it witnessed online the images of “Girl with Balloon” (framed by the artist, Banksy) self-destructing as the auctioneer’s gavel slammed down. The work of art, now reborn as a performance/conceptual work entitled “Love is in the Bin”, had been encased in a frame that the artist had fashioned years in advance for exactly this purpose, according to a video released shortly after the spectacle.
The videos and images make for amazing viewing, but to my eye they raise a few questions, as yet unanswered. My questions are:
Why does the shredding device in the video have far more blades than strips in the shredded picture?
Why are the shredded remains below the frame slightly to the left of the picture remaining in the frame?
Why is the leftmost shred wider than those to its right?
The misalignment of the hanging shreds might be explained if the shredded paper was closer to the wall than the paper remaining in the frame, and the photographer was taking the picture while standing to the left. However, this is unlikely. The photo appears to have been taken facing the picture head-on as there is no obvious trapezoidal distortion.
Note also that the overall width of the hanging portion matches the width of the frame’s window. If the picture had been slightly wider than the window, as would be reasonable, then surely the hanging portion would have been wider than the window.
The extra-thick leftmost shred might be explained by the corresponding (initial) blade being aligned that way with respect to the edge of the paper. But if that were the case, then all those extra unused blades, which appear to be equally spaced (based on the shreds and the video), would extend too far past the other side of the frame. There are 27 cuts in the paper hanging below the frame, about four-fifths of the number of blades available for cutting.
The more I look, the more puzzling it becomes. But maybe this is what Banksy wanted.
(PS I know the video is showing the blades right-to-left as it’s from the back, but I’m not trying to match blades to cuts, just count them.)
Update (18 Oct):
That didn’t take long, did it? Banksy has uploaded a video that shows much more detail about the shredding mechanism, and demonstrates how it should have worked on the day. It would now appear that the partially shredded work of art is in a state that the artist had not intended.
Clearly the performance was botched. But why did it fail? There’s a clue in Banksy’s latest video. In this still from one of the video frames I’ve highlighted one of the strips wrapping around the roller. It’s quite possible that in the Love is in the Bin performance, one strip (presumably at the edge) wrapped around the roller until it became thick enough to stop the mechanism.
The logical (and arithmetic) binary left shift effectively doubles a number by moving the digits one position to the left and inserting a zero in the least significant position. In practice the number of bits in a processor’s arithmetic register is fixed, so individual bits fall off the left side with each shift, and if you place a 1 in the least significant position it will grow with each shift until it too falls off the left side. You can imagine this happening all at once, or just one bit at a time, though in the latter case you need to move the bits starting from the left and working towards the right, or you’ll clobber the entire register. The rule is that the old bits on the left need to move away first to make space for the small young ones on the right.
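The fixed-width shift described above can be sketched in a few lines of Python. This is purely illustrative: an 8-bit register is assumed, and the names `shift_left`, `WIDTH` and `MASK` are my own.

```python
# A toy 8-bit "register": a left shift doubles the value, but any bit
# that moves past the most significant position falls off the left side.
WIDTH = 8
MASK = (1 << WIDTH) - 1  # 0b11111111 for an 8-bit register

def shift_left(register: int) -> int:
    """Shift once to the left, discarding any bit that overflows."""
    return (register << 1) & MASK

# Place a 1 in the least significant position and watch it climb with
# each shift until it too falls off the left side.
reg = 0b00000001
history = []
for _ in range(WIDTH + 1):
    history.append(f"{reg:08b}")
    reg = shift_left(reg)

print("\n".join(history))
# The 1 marches left: 00000001, 00000010, ... 10000000, then 00000000.
```

The masking step is what models the fixed register width: without `& MASK`, Python’s arbitrary-precision integers would let the bit grow forever.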
While this sounds like an introduction to binary mathematics, it’s actually a metaphor for a philosophical thought. My offspring lost both their grandmothers this year. Life’s register is short. Barely three generations. Four if you are lucky. There’s now nobody to our left. And the clock is ticking.
On the plus side, there are some interesting characters to the right…
In the past few days the tech community has gone into a panic over a discovery that computers have been vulnerable to a specific kind of attack for over 20 years. Despite the vulnerability being present for a very long time, it would seem that nobody has exploited it. The details are complicated, but let’s consider a part of the researchers’ discovery in simpler terms:
The problem is in the processor (CPU), the thing that does calculations using information in the computer’s main memory (RAM). Decades ago, CPU designers from companies like Intel, AMD and others, decided that they could speed up a computer if they could get it to do some calculations ahead of time, even if the results of those calculations were eventually ignored.
Imagine you are travelling along a road looking for a particular house, making note of the houses you have passed, when you come upon a fork in the road. You know that the house you are looking for is down one of these two roads, but which do you pick? Suppose you go left and reach the end of the road without getting to your destination; then you know you made the wrong choice, have to backtrack and go to the right instead. The same could be true if you went right first. But if you could walk down both roads at the same time, you would find your destination in the fastest time and could pretend that you hadn’t walked down the other road at all.
The CPU does something similar when it gets to a decision point. Go left, or go right? Actually, it proceeds down both possibilities, and when it figures out which one was the correct path it just ignores anything it was doing in the other path.
Where’s this going? Well, suppose the CPU’s paths were Good and Evil. In the good path it doesn’t do anything it shouldn’t be doing, but in the evil path it attempts to perform a calculation using some data in a place in the RAM that the program is not allowed to see. We also arrange it so that even though there are two paths, only the good path will eventually be chosen. You could consider the activity of the CPU in the evil path to be like a ghost that should, in theory, have no impact on the real world.
Except it does have an impact. The CPU during its journey down the evil path was attempting to read memory from somewhere that it should not access, and during that activity it temporarily made a note of the supposedly inaccessible data that it found. The clever evil code then used that knowledge to read a value from one of two possible places in an accessible (permitted) location. We will call these places Hot and Cold. So, as the CPU was going down the evil path it used knowledge about some off-limits memory to decide whether to then look at Hot or Cold. And then, because the good path finally figured out that it was the one that should be chosen, the work that took place in the evil path is discarded.
The fact that either Hot or Cold was accessed by the now-dead evil path means that the CPU has now temporarily loaded either Hot or Cold into its cache (a small place where it keeps copies of information it thinks it might need in the immediate future). That means that if the good path proceeds to check how long it takes to read Hot and Cold, whichever one it can read fastest must be the one that had been selected by the evil path. In this way, the good path can get some details from the ghost of the evil path.
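The Hot/Cold trick above can be modelled in a few lines of Python. To be clear, this is only a simulation of the idea: Python cannot perform real speculative execution or cache timing, and the names `cache`, `evil_path` and `good_path` are invented for illustration, not part of any real exploit.

```python
import random

# A pretend CPU cache: just the set of addresses currently held in it.
cache = set()

def evil_path(secret_bit: int) -> None:
    """The speculative path that is later discarded.

    It never reports the off-limits secret directly, but it leaves a
    footprint by touching either Hot or Cold, pulling one of them into
    the cache.
    """
    cache.add("Hot" if secret_bit else "Cold")

def good_path() -> int:
    """The surviving path.

    It never saw the secret, but it can check which of Hot/Cold would
    now load "fast" (i.e. is already in the cache) and infer the bit.
    """
    return 1 if "Hot" in cache else 0

secret = random.randint(0, 1)
evil_path(secret)        # speculative work, nominally thrown away...
recovered = good_path()  # ...but its cache footprint remains
assert recovered == secret
```

In a real attack the "is it in the cache?" test is done by timing memory reads, and the whole dance must be repeated for every bit or byte of secret memory, which is why the leak rate is measured in bytes per second rather than gigabytes.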
So now, even though the evil path is always discarded, we can learn something about what it saw in the off-limits memory. There’s a good reason why some memory is off-limits to ordinary programs: that’s where important and sensitive information is kept, such as the keys and passwords to all your most valuable digital assets.
The researchers at Google were able to craft some code with the Good/Evil paths that could be used to slurp inaccessible memory at the rate of 2000 bytes per second. It wouldn’t take long for such a program to discover everything it needed to compromise your computer. No memory is off-limits to such a program. Woe is us!
I have massively simplified the details of this problem. The research work is far more involved than the narrative above. Nevertheless, at the core (no pun intended) it’s quite a simple hack.
Which makes me sceptical about the claim that it has not been exploited in the two decades that CPUs have been doing “speculative look-ahead processing”.
We await workarounds at the software level that will mitigate these problems, but probably at the cost of slowing down our computers. Unfortunately, unlike software, you can’t update how your CPU is hardwired. You need a new CPU. Wait until the next generation of chips is on the market before buying a new computer.
Meanwhile, be prepared to watch your computer slow down after the next security patch is installed.
What follows is by way of explanation for possible observers of an annual phenomenon.
JB sat in his chair one New Year’s Eve as the minutes ticked closer to another beat of the 1980s, surrounded by family and their “young adult” friends, already well lubricated after a few hours of merriment. If you wanted to laugh until it hurt, this was the house to be in. As distant church bells started to greet the new year, we (for this is my family and I was there) heard a commotion outside in the street, vulgar language from some people who had obviously over-lubricated.
Leaning forward and raising himself from the chair like Old Man Time himself, JB made for the front door, proclaiming to all with a scowl: “I’ll teach them to shout obscenities up and down the street”. Anticipating a confrontation or some entertainment, or both, the family and guests followed JB out the door. The rowdy revellers had already drifted away on the road facing our house (it’s a T-junction outside) and as JB stepped through the low front gate by the lawn he suddenly turned sharply right to face up the street, raised his hands to his mouth like a megaphone and roared: “Obscenities!”
Then, doing a quick 180-degree about-face, he repeated this cry down the street, turned back to the gate while muttering “that’s how it’s done” and walked back into the house, grinning, the New Year bells echoing in the night air, past his audience now laid low by roars of laughter, tears running down their faces.
The following year, recalling his previous year’s performance, the gathered guests repeated the Play in One Act to great amusement and acclaim. It has been repeated on many a New Year’s Eve since, maybe without as much panache, but it was still a favourite joke we’d share around this time of year.
Time moved on, friends followed their paths in life that took them to far away places, and the end-of-year parties at that house settled down to being just some hot sausages, beer and watching the countdown on the television, and some quick phone calls to wish everyone well and perhaps say “obscenities” to get a chuckle.
JB passed away in 2010, and the following New Year’s Eve, on the stroke of midnight, outside a few houses around the city and its environs, one could have observed someone shouting “obscenities” up and down the street, in a bizarre but moving tribute.
This year, approaching 2018, I received a call from 1500 kilometres away shortly after midnight (in their time zone): “Obscenities!” Meanwhile, JB’s grandchildren were away at their own parties and even though they were not around when the original performance took place, they gave a quick rendition. As did I, outside my home with the rain coming down, possibly to the amusement of some bewildered neighbours.
My “day job” as a tech consultant brings me into contact with a lot of software companies, but occasionally I encounter an enterprise that is in the process of becoming a software company. This happens when the company determines that their traditional line of business not only needs to incorporate a software/online strategy, but actually needs to make this the new focus. Motivations range from survival in the face of tech-savvy competitors to expansion into a previously untapped market.
While I enjoy helping these companies explore technology options, it is equally rewarding to help them establish a development process, that is, a formal approach to the creation of their new products or services. The processes that we in the software world take for granted are often alien to these companies. To drop them into the middle of a full-blown Agile framework, for example, would have them run for the hills! Systems engineers will recognise this as the homeostatic nature of established processes, and it can be challenging to overcome. My preferred strategy is to ease slowly into a process that they can all adopt, gradually introducing select pieces of contemporary methodologies that have immediate and obvious benefits.
Documentation is key. See what you need to do, are doing and have done so that you can compare against your goals. Every member of the team should see what the other members are doing. Awareness of one’s role in meeting team objectives is important. Precision and measurement are highly valued. Effort and complexity often have greater significance than time spent. So what tools do I introduce to encourage all this?
Primarily I look to Agile principles and tools: brief daily meetings (stand-ups) with their quick summaries and plans for the day ahead; the notion of the task backlog; and the idea that the team should be allowed to get on with their work while taking regular cues from customer/management feedback regarding the prioritisation of activity. I borrow from the Kanban approach because of the immediacy of the board and the way it communicates to the team and the company at large. I focus on short iterations to refine the development, allowing the team to learn along the way. Tasks in the backlog are kept simple, with a title, a one-sentence description, completion criteria, an expression of complexity and an owner (or a list of candidate owners if not yet active). I espouse the reduction of waste, team empowerment and a willingness to drop an unproductive line of action, as promoted by the Lean approach. Scrum’s use of retrospectives allows the team to learn from recent work and use that to directly affect what happens next, so I hold a look-back session every two weeks. And there’s more where that came from.
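A backlog task of the kind described above can be captured in a very small data structure. This is a minimal sketch, assuming nothing beyond the fields listed in the text; the class and field names are my own, not part of any formal methodology.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    """One backlog item: title, one-sentence description, completion
    criteria, an expression of complexity, and an owner (or candidates)."""
    title: str
    description: str                # one sentence
    completion_criteria: str        # how the team knows it is done
    complexity: int                 # effort/complexity, not time spent
    owner: Optional[str] = None     # unset while the task is not yet active
    candidate_owners: list[str] = field(default_factory=list)

# A hypothetical backlog entry, purely for illustration.
backlog = [
    Task(
        title="Customer sign-up form",
        description="Visitors can create an account from the home page.",
        completion_criteria="Submitting the form creates a new account.",
        complexity=3,
        candidate_owners=["Alice", "Bob"],
    ),
]
```

Keeping the record this small is deliberate: it is easy to put on a Kanban board, and the complexity field supports estimating effort rather than hours.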
This is rather piecemeal, certainly, but for companies that have no prior exposure to any of these approaches, this small handful of somewhat radical ideas can be inspiring. Each approach or tool that I select has immediate benefits that the entire team appreciates. Resistance is reduced when the value is easily understood. Typically we move through the company mission, to capturing the company goals that support that mission, then identifying the features of each goal and finally breaking down each feature into a set of ordered tasks for the backlog. Sometimes the goal has only one feature, achievable in just a few tasks in a certain order, but that’s often enough to establish a template for a development process. Sometimes there are multiple goals, and many features leading to hundreds of tasks (and many more being created as the team learns more about what needs to be done).
After a few iterations, the company has a process, and the team members have a taste for what the tools and techniques can do for them. They can either continue with their own process or, as they gain appreciation for the established processes from which theirs is derived, they could choose to adopt the whole of a particular process along with all its bells and whistles. They’ll probably need specialist input from that point, which gives me a chance to refocus on their tech.
Few companies that I encounter actually follow a development process 100%. Most have selected various bits that they believe are a good fit for their company and their people. By and large that appears to work. Regardless of which process (derivative) they use, one thing they all agree on is that it helps them see what they are doing. So when I’m tasked with helping a company become a software company, the first thing I try to do is help them see what they are doing, and then the journey begins.