Incremental


It is generally observed that advances (in tech) happen incrementally. By and large the increments are small and go unnoticed. For the past half century or so I’ve been like a particle in a Brownian motion experiment, constantly nudged in new directions by the latest incremental impact. Out of interest, I’ve tried to recall some of the nudges I’ve felt over the years, and here’s what I’ve come up with:

Analogue to Digital

The 1970s would be when I started my tech journey, and at that time most of the stuff I was tinkering with was made of thermionic valves, resistors as thick as a pencil and chunky capacitors that I was often (rightly) wary of touching. I delved eagerly into the workings of a simple superheterodyne vacuum tube receiver and had learned enough at a young age to be able to repair a small CRT oscilloscope, which served me well for another decade. The first technical increment that made a big impression on me was my first encounter with solid-state transistors. These came to me by way of my father, who had acquired a piece of Russian equipment from a pilot. The exact provenance I cannot remember, but I certainly recall these small metal-encased three-legged wonders that were far more interesting than the hot glowing triodes I had been used to. These new wonders used voltages that wouldn’t kill me, had switching speeds that were incredible (and visible, now that I had a working oscilloscope) and didn’t break if I dropped them. I lost a good few valves that way. For me, those little Cyrillic-encrusted cylinders were my proper introduction to real electronics.

Discrete to Integrated

Transistor-Transistor Logic (TTL) had been invented just before I was born, and TI’s 74xx series of relatively inexpensive TTL integrated gate packages appeared on the market around the time I started school, but it wasn’t until my teens that I encountered my first 7400 (the 74LS00 to be precise), the quad NAND gate that was the workhorse of many early digital circuits. I had already gone through a phase of discrete components, including valves and transistors, but I was still dabbling in analogue electronics; then the 7400 introduced me to digital. That changed everything.

Circuitry to Code

A neighbour ran an electronics shop, and from him I got most of the components that fed my hobby; in my early teens he introduced me to the next step above the discrete logic circuits I had been working on. I had briefly been given a 4004-based processor – really just a calculator, but by my reckoning it was pure magic. From this I learned the essence of low-level programming, but it wouldn’t be until around 1979 that I finally got access to a real computer, a Z80-based Cromemco multi-user system. I already had enough electronics knowledge to be dangerous, and now finally I was coding. I had been nudged from seeing logic as circuitry to seeing it as mathematics (i.e. algorithms).

Mainframe to Personal Computing

By the time I started in college (thinking that I wanted to be a physicist) I was already coding in several high-level languages, and as I was spending much of my spare time running a club in which I could share that knowledge, I was invited to continue that teaching activity as part of the university’s student computer society. Through the society I got access to a mainframe, and much more. In fact, although I had access to an Apple before I started in college, it was via the computer society that I first encountered a “personal computer”. This green-screened PC was revolutionary, and the impact would be felt worldwide.

Supercomputing to Portable

The next nudge for me happened shortly before graduation, when the focus had turned to parallel computing and I was tasked with creating an operating system for a bus-based multiprocessor. That piqued my interest in other such systems, leading me into hypercubes (for which I designed and built some prototype nodes), the Transputer and eventually a funded research project in which I had sole use of a massively parallel DAP supercomputer, which I coded in a variant of Fortran. OK, maybe not that super, as it only had 1024 simple processors but it was quite a radical machine nevertheless. Parallel processing, algorithm design and digital electronics were building blocks of my PhD, and would remain my primary research interest until the close of my academic career when I moved from thinking about large-scale computing to thinking of the smallest: mobile.

At this point I must also mention one other nudge that happened before I moved into industry, one that would later have a massive impact on me: the World Wide Web. Networking was a topic in which I had become proficient, to the point of lecturing on the subject and providing national system administration, and I was engaged in the research networks of the early 1990s when the Web was launched and many of us tried out NCSA Mosaic as an alternative to tools like Gopher for finding the research material we wanted. I saw the Web as a useful tool, but had no idea how much of an impact it was going to have on the world, and on me personally.

Desktop to Mobile

The move from academia to industry was a bit of a jolt. Whereas in academia a new idea would stir thoughts of how to share, popularise, teach, publish…, in industry any new idea was captured, contained, documented for the patent/IP people, probed by Sales/Marketing, and promptly shelved in secret because “the market isn’t ready” or something like that. I had gone from thinking in terms of supercomputers and global networks to thinking about small portable computing devices. I had a mobile phone during my time as a national sys-admin, but the only thing “digital” about it was the occasional SMS (often about some networking emergency in the dead of night). People in my new social circle were beginning to see possibilities for digital communications, and I would eventually lend my weight to their effort to create a new business in this emerging space. While with them, the next nudge became obvious: the Web was going mobile.

On behalf of my company and colleagues I was part of the World Wide Web Consortium (W3C), and had several senior roles within that organisation over a period of about a decade. In that time the teams around me created powerful adaptive solutions that involved clever on-premises deployments to deliver content to a wide variety of clients, including Web and Web-adjacent (e.g. WAP) devices. These deployments gradually increased in complexity and demanded more technical know-how of our customers than they were willing (or able) to manage, until eventually it became clear that we should host the service on behalf of the customers. We moved from local on-premises deployments into the Cloud.

Local to Cloud

Once again I was back in computational parallelism, this time involving clusters of virtual machines located in various cloud service providers. Clouds offered all kinds of benefits, notably ease of scaling, fault tolerance, security and deployment efficiency. As my time with the various teams and customers was coming to an end, the backend server stack was still the most complex part of the overall multi-channel solutions. By the time I had launched my tech consultancy, the Web browser ecosystem had become significantly more competent, more efficient and more effective. Thanks to various standardisation efforts, intelligent client-side solutions started to take over burdens that had previously been the preserve of the servers. We were being nudged away from the server side towards the client side.

Backend to Frontend

Client-side frameworks for Web-based solutions really kicked off shortly after HTML5 hit the headlines. Up to that point I was still immersed in things like Servlets, Spring, Wicket, Struts and JSF. The browser still wasn’t seen as a place for anything more complex than form data validation, but the introduction of jQuery and Ajax had shown that cross-browser scripting of the displayed content was viable. So from 2010 onwards the growing stability of browser standards encouraged the emergence of frontend technologies like AngularJS, Bootstrap, Vue, TypeScript, React and more. Even the backend got some frontend attention with the creation of Node.js. I moved from the big company scene to independent consultancy around the time that React Native, Angular and Progressive Web Applications were taking off.

Corporate to Independent

This nudge was not so much one of technology itself, but of my relationship with it. Over several decades I have been immersed in technology as a hobbyist, as a student, as a researcher, as an educator, as an inventor, as a CTO, as a standards leader and finally as a consultant. I am fortunate to have amassed experience in a wide range of technologies over the years, and now I continue to put that knowledge to good use for my clients. Initially my clients were as varied as the journey that got me to this point, but gradually I have specialised in the technical needs of companion animal organisations and the veterinary space in general. This is a surprisingly complex space with very specific needs. My first foray into it was back when hosted services were gathering pace, shortly before the Cloud came on our radars, providing some assistance to keep a basic online pet reunification service up and running. What started as something I’d do now and then in my spare time has become my main focus, and I’m happy to be able to bring my experience to the table.

What will the next nudge be? Perhaps it is already happening and it’s too early for me to notice. I’m sure it’ll be fun, whatever it is.

Miserable

Coding, Technology

It’s not often that some piece of software makes me miserable, but over the past few weeks I’ve been subjected to an example of exceptionally bad software and I’m near breaking point. The culprit in the spotlight is EzCad3 (pronounced “Easy CAD”), a graphics tool intended to control laser etching/cutting hardware. In fact, the version I have is a slightly customised (i.e. feature-deprived) derivative of the official EzCad3, but I won’t mention its name as the suppliers of this kit are not to blame for the shoddy software. That honour belongs to the Beijing JCZ Technology Company, Ltd, who were founded 20 years ago and should know better by now.

EzCad3 is a 64-bit Windows application and it has reasonable implementations of all the 2D/3D features that you’d expect from something that controls some expensive industrial equipment. All the usual object manipulations, specific object types (barcodes, vectors, bitmaps etc.), extensive 2D hatching (think “planes”, including the “ink” of text) and more. It also has an impressive list of laser equipment it can support, including the fiber laser that I’ve been tasked to automate.

The problem is with the interfaces, both the UI and the SDK. The latter I abandoned because it is far too low-level. I am not trying to reinvent a CAD tool; I just want to pump some content into a template, resize any bits that go out-of-bounds, and start the laser marking process. I don’t want to be down at the level of directly manipulating the head axes, motors, energy source etc. Instead I want the automation up at the CAD level, where I can load objects, set their properties, arrange and orient them, then hit “go”.

Not possible, it would seem. Well, not normally possible, but if you are willing to sell your soul there are ways.

I’ve had to resort to emulating the gestures of a human user: mouse movements, clicks, keyboard interactions etc. This would be less of a nightmare if EzCad3 were consistent in its UI and at least provided a keyboard version of every action that currently requires a mouse. Sadly, it does not. In fact, very few of the mouse operations have keyboard equivalents. Some of the operations have menu equivalents, which can be navigated to via a sequence of keyboard right+down operations, but many are missing. Even direct access to the underlying Win32 controls doesn’t always work. For example, there’s no way to select a particular object in a CAD file via the keyboard, and sending “select” commands to the ListView control merely causes the items to be highlighted, but crucially not selected. Without being selected, I have no access to the fields that can be used to set properties like the X,Y coordinates. My solution was to simulate a mouse click within the control at the position (possibly off-screen) where the object would be listed.
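For anyone attempting something similar, here is a minimal sketch of that trick in Python using ctypes (illustrative only). It assumes you have already found the window handle of the ListView, for example with a spy tool or win32gui.FindWindowEx, and that every row has the same known height; the row height and x-offset below are assumptions, not values taken from EzCad3.

import ctypes

user32 = ctypes.windll.user32

WM_LBUTTONDOWN = 0x0201   # standard Win32 mouse-button messages
WM_LBUTTONUP   = 0x0202
MK_LBUTTON     = 0x0001

ITEM_HEIGHT = 17          # assumed row height in pixels (measure your own)
X_OFFSET    = 10          # click somewhere inside the first column

def make_lparam(x, y):
    # Pack client-area coordinates into the lParam of a mouse message.
    return (y << 16) | (x & 0xFFFF)

def click_listview_item(hwnd_listview, index):
    # Send a synthetic click at the row where item number 'index' should sit.
    y = index * ITEM_HEIGHT + ITEM_HEIGHT // 2
    lparam = make_lparam(X_OFFSET, y)
    user32.SendMessageW(hwnd_listview, WM_LBUTTONDOWN, MK_LBUTTON, lparam)
    user32.SendMessageW(hwnd_listview, WM_LBUTTONUP, 0, lparam)

In practice the right coordinates depend on the control’s font, borders and scroll position, so treat this as a starting point rather than a recipe.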

I have spent weeks creating Win32 automation work-arounds for many of the deficiencies in EzCad3. Today, for example, I found a work-around for the fact that it won’t refresh text from its source file. (Think “mail merge” but with CAD data.) I discovered that if I have a group of text objects that are bound to a source (e.g. a text file) and I apply a hatching to the group, EzCad3 will re-load the content from the text file(s). This is good because then I have the text objects set to the size that the loaded text dictates, and I can inspect the object properties to see if any are wider than the engraving zone, and resize if necessary.
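The bounds check itself is simple once the properties can be read; the hard part is the UI automation that reads and writes them. A rough sketch of the idea, with a made-up zone width and object list:

ZONE_WIDTH_MM = 110.0     # hypothetical width of the engraving zone

# name -> width in mm, as read back from each object's property fields
objects = {"line1": 95.2, "line2": 131.7, "line3": 64.0}

for name, width in objects.items():
    if width > ZONE_WIDTH_MM:
        scale = ZONE_WIDTH_MM / width
        print(f"{name}: {width} mm exceeds the zone, scale by {scale:.3f}")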

An hour or more can go by while I bash my head against what appears to be an impossible problem, and then by accident I find a way past, only to be hit by the next speed bump.

The journey will end, hopefully soon, but it’s so, so miserable.

(Yes, I might document my findings, but not until I’ve had time to recover.)

Sleep

Coding, Operating Systems, Technology, Uncategorized

There are many ways to make a program/script pause for a few seconds. Here are some of my favourites.

Windows

There are two built-in commands you can use to pause in command scripts (.cmd, .bat and PowerShell):

pause

The pause command pauses until you press a key. It has no option to set the period of time. While pausing, it displays the message “Press any key to continue…” (or “Press Enter to continue…:” in PowerShell).

timeout /nobreak /t 20

This will sleep for 20 seconds, displaying a countdown message saying “Waiting for N seconds”, where N counts down each second. With the /nobreak option you have to use Ctrl-C to cancel the timer. If you omit /nobreak then pressing any key will stop the timer.

The GNU utilities for Win32 include sleep.exe, which can be used like this:

sleep.exe 20

This will pause for 20 seconds with no on-screen message. Ctrl-C will interrupt. You can use timings like 20s, 20m and 20h to indicate seconds, minutes and hours.

Also in PowerShell you can use the following:

Start-Sleep -Seconds 20

This is much like sleep.exe in that it displays nothing on screen. PowerShell also uses sleep as an alias for Start-Sleep.

Unix/Linux

The sleep tool (/bin/sleep) is available to every command shell in Unix/Linux. The syntax for a 20 second sleep is just this:

sleep 20

This assumes the period is 20s (seconds). It also understands minutes, hours and days, using the suffixes m, h and d, though a sleep for several days would be quite unusual! GNU sleep also lets you specify more complex periods by combining arguments, such as “sleep 6h 20m 15s”, which sleeps for six hours, 20 minutes and 15 seconds.

Pausing until a keypress occurs is a little more complex. This bash one-liner usually works:

read -n1 -r -p "Press any key to continue..." keypress

The key pressed by the user will be available in variable $keypress. If you want something that times out after 20 seconds, use this:

read -t20 -n1 -r -p "Press any key to continue..." keypress

This hack using /usr/bin/timeout is horrible, but it works:

timeout 20s tail -f /dev/null

Scripting

Obviously there are as many ways to sleep within a program as there are programming languages. More, if you include the many feature libraries that accompany these languages. Some languages have built-in sleep functions, and some of these can be accessed directly from the command line or a command-level script. This means that if you know that a certain scripting language is present, regardless of operating system, you have access to a sleep function. Scripting languages generally do not have on-screen message side-effects when sleeping, so if you want a message then output one before you do the sleep.

My favourite scripting language is Perl, and here is how to sleep for 20 seconds from the command line:

perl -e "sleep 20"

If you want to use Perl to pause until the user presses Enter, this should work:

perl -e "<>"

Python is a little more involved. The following sleeps for 20 seconds and can only be interrupted by Ctrl-C:

python3 -c "import time; time.sleep(20)"
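Since Python (like the others) prints nothing while sleeping, you can supply your own message by printing it first; for example:

python3 -c "print('Pausing for 20 seconds...'); import time; time.sleep(20)"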

You can also try this in Ruby:

ruby -e 'sleep(20)'

Note that most scripting languages can also access the underlying operating system and/or shell so they could invoke the system’s sleep tool, but that means the script is not OS-independent so I won’t discuss those options any further here.

Free beer is not OK

Coding, Operating Systems, Security, Technology

The phrase “free, as in beer” is often used in connection with Open Source software, to indicate that the software is being given to users without any expectation of payment. This distinguishes it from “free, as in speech”, which might erroneously be taken to mean that the software itself is free to do whatever it likes.

Yet, were it not for Andres Freund’s recent discovery, a certain piece of software called XZ Utils might actually have become free to do whatever it liked (or more correctly, whatever its evil master desired). NIST’s National Vulnerability Database rates the resulting vulnerability (CVE-2024-3094) at the maximum criticality of 10/10. Freund announced his discovery about a month after the tainted xz had been released, though thankfully before it had worked its way into production systems.

The xz utilities provide data compression features that are widely used by many other software packages, and on some Linux distributions the xz library even ends up linked into sshd, the software responsible for giving administrators secure access to a server. By compromising sshd, an attacker armed with a suitable digital key (matching the one injected into the poisoned xz utilities) could easily access the server and do absolutely anything. Steal data. Initiate fraudulent transactions. Forge identities. Plant additional malware. Encrypt or destroy everything on the server, and anything securely connected to the server. The ramifications are terrifying.

This was no ordinary attack. The attacker(s) created a number of personas as far back as 2022, notably one named Jia Tan, to gradually pressure the XZ Utils principal maintainer, Lasse Collin, into trusting the malicious contributors. Once trust had been established, a complex set of well-hidden modifications was introduced, and Tan released version 5.6.0 to the unsuspecting world. An attack so sophisticated suggests nation-state involvement, and fingers are pointing in many directions.

There is currently no universally accepted mechanism to determine the bona fides of open source contributors. Pressuring a lone project maintainer to let you into the inner circle, especially one who is exhausted, under-resourced or otherwise vulnerable, is therefore a viable attack vector. Given the number of “one person” open source projects out there, many of which have roles in critical infrastructure, it would surprise nobody if it were revealed that other projects have also been subject to similar long-term attacks.

For now, the best we can hope for is increased vigilance, more lucky breaks like that of Andres Freund and perhaps better support/funding for the open source developers.

AI, AI captain

Legal and Political, Security, Technology

Artificial Intelligence is appearing everywhere and it is increasingly difficult to stop it seeping into our lives. It learns and grows by observing everything we do, in our work, in our play, in our conversations, in everything we express to our communities and everything those communities say back to us. We are being watched. Many think it is just a natural progression from what we already created. To me, it is anything but natural.

Spellchecking: an AI precursor

Half a century ago, automatic spell-checking was introduced to word processing systems. Simple pattern matching built into the software enabled it to detect unknown words and suggest similar alternatives. By adding statistical information it could rearrange the alternatives so that the most likely correct word would be suggested first. Expand the statistics to include nearby words and the words typed to date and the accuracy of the spell-checking can become almost prescient. Nevertheless, it is all based on statistical information baked into your software.
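To make that concrete, here is a toy sketch of the idea in Python: candidate words one edit away from a misspelling, ranked by word frequency. The five-word frequency list is obviously made up; real spellcheckers use much larger dictionaries and far richer statistics.

WORD_FREQ = {"the": 5000, "then": 800, "than": 700, "them": 650, "hen": 40}
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    # All strings one edit (delete, transpose, replace, insert) away from word.
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def suggest(word):
    # Known words one edit away, most frequent first.
    candidates = [w for w in edits1(word) if w in WORD_FREQ]
    return sorted(candidates, key=lambda w: WORD_FREQ[w], reverse=True)

print(suggest("thn"))   # ['the', 'then', 'than']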

But where did those statistics come from? We know that over a thousand years ago the military cryptographers were determining word frequency in various languages as an aid to deciphering battlefield communications. Knowledge of letter, word and phrase frequencies was a key component of the effort to defeat the Enigma machine during World War II. So by the time the word processor was commonplace, the statistical basis of spellchecking was also present. It evolved from more than a millennium of analysis, and one could not in any way discern any of the original analysed text from the resulting statistics.

Grammar checking: pseudo-intelligence

In time, spellcheckers were enhanced with the ability to parse sentences and detect syntactic errors. The language models, lexical analysers, pattern matchers and everything else that goes into a grammar checker can be self-contained. The rules and procedures are generally unchanging, though one could gradually build up adjustments to the recorded statistics based on the text previously exposed to the system. It appears somewhat intelligent, but only because there is a level of complexity involved that a human might find challenging.

Predictive text: spooky cleverness

Things started to get interesting when predictive text systems became mainstream, especially among mobile device users for whom text entry was cumbersome. Once again, statistics played a huge role, but over time these systems were enhanced to update themselves based on contemporary analysis. Eventually the emergence of (large) language models “trained” on massive amounts of content (much of it from the Web) enabled these tools to make seemingly mind-reading predictions of the next words you would type. Accepting the predicted text could save time, but sometimes the predictions are wildly off base, or comically distracting. Worse, however, is the risk that as more and more people accept the predicted text, we lose more and more of the unique voice of human writers.
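The statistical heart of that prediction can be illustrated with a toy bigram model in Python (the tiny corpus below is invented; real systems differ mainly in the colossal scale of the models and the training data):

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram frequencies).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev, k=3):
    # Return up to k of the most likely next words after 'prev'.
    return [word for word, _ in following[prev].most_common(k)]

print(predict("the"))   # ['cat', 'mat', 'fish']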

Certain risks surface from the use of predictive text based on public and local content, notably plagiarism and loss of privacy. Unlike the simple letter/word counting of the military cryptographers of the ninth century, today’s writing assistance tools have been influenced by vast amounts of other people’s creative works, far beyond mere words, and their suggestions can be near copies of substantial portions of this material.

While unintended plagiarism is worrying, the potential for one’s own content to become part of an AI’s corpus of knowledge is a major concern. In the AI industry’s endless quest for more training data, every available source is being exploited, whether or not the original creators agree. In many cases the content was created by people long before feeding it to an AI became a realistic possibility. The authors would never have imagined how their work could be used (abused?), and many are no longer with us to voice their opinions on it. If they were asked, that is.

And what of your local content? You might not want to feed that to some AI in the cloud so that it influences what the AI delivers to other people. Maybe it is content that you must protect. Maybe you are both morally and legally obliged to protect it. In that case, knowing that an AI is nearby, you would take precautions not to expose your sensitive content to it. Right?

Embedded AI: the hidden danger

What if the AI were embedded in many of the tools at your disposal? Protecting your sensitive content (legal correspondence, medical reports etc.) from the “eyes” of an AI would be challenging. Your first task would be to make yourself aware of its presence. That, unfortunately, is getting harder every day.

Microsoft introduced Copilot in 2023, both in Windows and in the business versions of their Office suite, meaning that AI is now present in your computer’s operating system and your main productivity tools. Thankfully it’s either an optional feature or a paid-for feature, so you are not forced to use it. But that may change.

A particularly worrying development, and the motivation behind this post, is Adobe’s recent announcement (Feb 2024) of its AI Assistant embedded into Acrobat and Reader. These are the tools that most people use to create and read PDF documents. It will allow the user to easily search through a PDF document for important information (not just simple pattern searching), create short summaries of the content and much more. Adobe states that the new AI is “governed by data security protocols and no customer document content is stored or used for training AI Assistant without their consent”. It’s currently in beta, and when it is finally released it will be a paid-for service.

Your consent regarding the use of AI is all-or-nothing because you accept (or reject) certain terms when you are installing/updating the software. Given how tempting the features are, granting consent could be commonplace. Today you might have nothing sensitive to worry about, so you grant consent. Some time later, when getting one-paragraph summaries of your PDFs seems a natural part of your daily workflow, you might receive something important, sensitive, perhaps something you are legally obliged to protect. You open the PDF and now the AI in the cloud has it too, and there is no way for you to re-cork the genie.

“No AI here”

We are entering choppy waters for sure. Maybe we need something we can add to our content that says “not for AI consumption”? Without such control by authors and readers alike we could be facing a lot more trouble.