Chaos

Legal and Political, LUE

It has been almost 100 days since I last published anything other than short form Mastodon posts and in that brief time it seems that chaos now reigns supreme, so I’m going to propose that the Lyapunov time of society (and politics in particular) is 100 days. OK, sure, that’s nonsense but look around, so is everything else.

Eeeeeeeeee….

LUE

Imagine this: someone clamps a wooden clothes peg onto the little finger of your left hand. (Or right, if you are left-handed.) It doesn’t really hurt, and doesn’t get in your way but it’s unpleasant and you’d rather it wasn’t there. After a few minutes you are engaged in conversation and you forget (almost) that you have the peg on your finger. But every now and then you are distracted by the pain, your attention is broken, you are annoyed.

You try reading a book, eating a meal, watching a programme on TV. You get some long periods where you almost forget the peg, but it’s hard to ignore.

Then at night, when there’s nothing to draw away your attention, the pain of the peg seems worse than it has been all day.

You wake in the morning, and somehow you don’t notice the peg. At least for a few minutes. Then, a twinge, and suddenly you are back to the constant irritation.

You ask for the peg to be removed. You are told “no”. You ask “why?”. They show you your finger, there is no peg there. It’s in your mind. Your brain has undergone some form of “adjustment” and the sensation of the peg on your finger is a manifestation of this. There’s no cure. It’s there forever. Every day. Every night. For the rest of your life. You can perform some actions now and then that will very briefly make the pain worse, but it will never get better.

Now, instead of it being a painful peg on your finger, suppose what went awry in your brain left you with the sensation of a scream, a high-pitched whistle, on the left side, close to your left ear, but on the inside of your head. 8kHz, to be precise. 50-60dB, about as loud as someone talking. Only they’re blowing a whistle. On the inside-left of your head.

That’s me. I suffer from tinnitus and that’s what it is like for me. All the time. Has been this way for decades, gradually getting louder, or so it seems to me.

I’ve been asked what it’s like. To demonstrate, I use a tone generator on my phone, place it on their shoulder and turn it on. Some people can’t hear it, because 8kHz is quite high and the older you get the harder it is to hear high frequencies. Unless, as in my case, the sound isn’t really sound at all, which is why I will always be able to hear it, even if I were to go completely deaf. Those who can hear the tone generator usually ask me to remove it after a few seconds. It’s unpleasant. Like a peg on your little finger. You wouldn’t want that there forever, would you?

Eeeeeeeee……!

Decrypting OpenSSL AES128CTR using Java

Coding, Security

Using OpenSSL I can apply a symmetric 128-bit AES cipher in “counter” (CTR) mode (RFC 3686) using “testpwd” as the password to produce a salted encryption of the word “testing”, encoded in Base64, as follows:

echo -n testing | openssl enc -e -base64 -aes-128-ctr -pbkdf2 -salt -k testpwd
U2FsdGVkX19l6/etNkl585d+Y1XgyPc=

I can then reverse the encryption using the same password as follows:

echo "U2FsdGVkX19l6/etNkl585d+Y1XgyPc=" | openssl enc -d -base64 -aes-128-ctr -pbkdf2 -salt -k testpwd 
testing

Automated decryption

If I ever need to store a value in secret I can encrypt it like this, and decrypt whenever I need it so long as I remember the password. If an automated process in possession of the password wants to know the secret value, it could invoke OpenSSL to get it.

For example, Perl can perform the same decryption via modules that wrap the OpenSSL libraries:

use Crypt::CBC;
use MIME::Base64;
$cipher = Crypt::CBC->new(
  -cipher => 'Crypt::OpenSSL::AES', -chain_mode => 'ctr', -keysize => 16, -pbkdf => 'pbkdf2',
  -pass => 'testpwd'
);
print $cipher->decrypt(decode_base64('U2FsdGVkX19l6/etNkl585d+Y1XgyPc='));

Recently I wanted to do the decryption in Java (v17) without calling OpenSSL directly, without deferring to an OpenSSL wrapper library, and without using something like BouncyCastle. I wanted to do this using off-the-shelf Java. It turned out to be a little more convoluted than I had expected, and rather educational, so I’m sharing the answer here.

The algorithm

Let me first explain a few things about the OpenSSL encryption result (U2FsdGVkX19l6/etNkl585d+Y1XgyPc=) so we can understand how it will be decrypted.

If you repeat the OpenSSL encryption multiple times with the same input and password you will get a different result each time, yet all of these results will decrypt correctly to the same original plaintext. Each result is different because of the “-salt” parameter, which tells OpenSSL to mix some random salt data into the encryption. To decrypt you need to know both the password and the salt, and as you only have the password the salt must be included in the output. U2FsdGVkX19l6/etNkl585d+Y1XgyPc= is a salted result; the random salt used by OpenSSL is in there somewhere.

Interestingly, while every result will be different, they will all start with these characters: U2FsdGVkX1

To make the output legible as text rather than a binary stream it has been encoded in Base64, and if you decode the start of the output you find that it begins: Salted__

That means that the rest of the result (9l6/etNkl585d+Y1XgyPc, ignoring the “=” Base64 padding at the end) must include the salt and the encrypted input.

In fact, to make this a little clearer I originally added the parameters “-v -p” to the OpenSSL command so that it would indicate the internal values involved in the encryption. Here is the actual complete output:

echo -n testing | openssl enc -e -v -p -base64 -aes-128-ctr -pbkdf2 -salt -k testpwd
bufsize=8192
salt=65EBF7AD364979F3
key=A8BF4CE7307B05D7721712CF2EB8770A
iv =1A3E7BC156B33970BA08DA10B5A3B238
U2FsdGVkX19l6/etNkl585d+Y1XgyPc=
bytes read : 7
bytes written: 33

The salt, written out in Hex, is 65EBF7AD364979F3 (8 bytes). The OpenSSL aes-128-ctr algorithm requires an encryption key of 16 bytes, and it also has an Initialisation Vector (IV) of 16 bytes, which should be unpredictable but is not sensitive (unlike the password!). So where did all these values come from, and how were they squeezed into that small output?

The random salt has to be included as-is in the generated encryption output. Let’s examine it in more detail:

Output from OpenSSL in Base64:
U2FsdGVkX19l6/etNkl585d+Y1XgyPc=

As text:
Salted__…unprintable…binary…

As Hex:
53616C7465645F5F65EBF7AD364979F3977E6355E0C8F7

You can see the salt within the Hex of the encryption (marked here in brackets):
53616C7465645F5F[65EBF7AD364979F3]977E6355E0C8F7

So that leaves the remaining part of the Hex to represent the original encrypted version of the plaintext:
977E6355E0C8F7

You will notice this is 14 hex digits, representing 7 bytes, and that the original plaintext (“testing”) was 7 characters. Under the hood, the encryption algorithm is applying XOR operations repeatedly and the eventual result will be the same length as the original. 7 characters in, 7 bytes out.
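As a quick sanity check, that layout can be verified with a few lines of standard Java (a sketch only; the class name is my own, and HexFormat needs Java 17):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import java.util.HexFormat;

public class LayoutCheck {
    public static void main(String[] args) {
        // Decode the full OpenSSL output: 8-byte "Salted__" prefix + 8-byte salt + 7-byte ciphertext
        byte[] all = Base64.getDecoder().decode("U2FsdGVkX19l6/etNkl585d+Y1XgyPc=");
        String prefix     = new String(Arrays.copyOfRange(all, 0, 8), StandardCharsets.US_ASCII);
        byte[] salt       = Arrays.copyOfRange(all, 8, 16);
        byte[] cipherText = Arrays.copyOfRange(all, 16, all.length); // same length as "testing"
        HexFormat hex = HexFormat.of().withUpperCase();
        System.out.println(prefix);                    // Salted__
        System.out.println(hex.formatHex(salt));       // 65EBF7AD364979F3
        System.out.println(hex.formatHex(cipherText)); // 977E6355E0C8F7
    }
}
```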

What about that 16 byte key, where did that come from? In fact, it’s produced by hashing the password with a password-based key derivation function known as PBKDF2 (also known as PBKDF2WithHmacSHA256 in Java, but more on that later). Early key derivation functions used iterations of MD5 as the underlying hashing function (or Message Digest), but current OpenSSL assumes by default that “-md sha256” has been specified and therefore uses the Secure Hashing Algorithm with 256 bits (SHA-256). This is important to know because a lot of the reference/sample material available online still assumes MD5 or SHA1 is lurking beneath the covers. Furthermore, OpenSSL repeats the hashing 10,000 times by default, though you can change this via the “-iter” parameter.

Thus, in order to successfully decrypt the AES128CTR cipher you need to know the following additional things about how the plaintext was encrypted:

  • What hashing function was used? SHA-256
  • How many iterations were applied? 10,000
  • What key derivation was involved? pbkdf2 (e.g. RFC2898, PKCS#5v2.0)

Taking into account the current OpenSSL defaults, which will likely change in the future as we learn more about cryptographic practices, the complete command to generate the cipher (with random salt) would be:

echo -n testing | openssl enc -e -base64 -aes-128-ctr -pbkdf2 -salt -md sha256 -iter 10000 -k testpwd

Finally, what about the Initialisation Vector? This is an unpredictable 16 byte value associated with the key and it should come as no surprise that OpenSSL uses the key derivation function (KDF) to provide the IV. In fact, it applies the KDF to the password+salt to produce 32 bytes, the first 16 of which it uses for the key, and the remaining 16 it uses for the IV.
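This key+IV derivation can be reproduced directly in Java. Using the salt printed by “openssl -p” above, the 32 derived bytes should split into exactly the key and iv values shown earlier (a sketch; the class name is mine):

```java
import java.util.Arrays;
import java.util.HexFormat;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class KdfCheck {
    public static void main(String[] args) throws Exception {
        byte[] salt = HexFormat.of().parseHex("65EBF7AD364979F3"); // salt printed by "openssl -p"
        // PBKDF2-HMAC-SHA256, 10,000 iterations (OpenSSL default), 256 bits:
        // the first 128 become the AES key, the second 128 become the IV.
        byte[] ki = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(new PBEKeySpec("testpwd".toCharArray(), salt, 10000, 256))
                .getEncoded();
        HexFormat hex = HexFormat.of().withUpperCase();
        System.out.println("key=" + hex.formatHex(Arrays.copyOfRange(ki, 0, 16)));  // A8BF4CE7307B05D7721712CF2EB8770A
        System.out.println("iv =" + hex.formatHex(Arrays.copyOfRange(ki, 16, 32))); // 1A3E7BC156B33970BA08DA10B5A3B238
    }
}
```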

With all this in mind, we can now turn our attention to how to unravel an OpenSSL cipher using Java.

Java

The following solution uses Java 17 and the standard off-the-shelf libraries with which it is typically distributed. You should not need to add any additional resources to get this to work.

Here are the standard imports that are used:

import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

The javax.crypto package has been part of Java almost since the beginning, and has evolved along with the major developments in the cryptographic industry. Almost everything you might need is in there. Somewhere. Finding what you need can be a challenge. Finding good documentation more so.

Now some definitions, which you should recognise from the discussion above.

int prefixSize   = "Salted__".length(); // 8
int saltSize     = 8;                   // bytes
int keySize      = 128;                 // bits (16 bytes)
int ivSize       = 128;                 // bits (16 bytes)
int opensslIters = 10000;               // OpenSSL default for "iter"

We assume two String values, encrypted (the Base64 output from OpenSSL) and password (in the examples above this was “testpwd”). Knowing how the data in the cipher is arranged, we can now split it out into the various parts so that we can use it in the decryption algorithm:

byte[] prefixedCipherText = Base64.getDecoder().decode(encrypted);
byte[] saltedCipherText   = Arrays.copyOfRange(prefixedCipherText,prefixSize,prefixedCipherText.length);
byte[] salt               = Arrays.copyOfRange(saltedCipherText,0,saltSize);
byte[] cipherText         = Arrays.copyOfRange(saltedCipherText,salt.length,saltedCipherText.length);
char[] pwdChars           = password.toCharArray();

To use a SHA-256 KDF to generate the bytes of a secret key in Java we need to use an instance of the PBKDF2WithHmacSHA256 algorithm. The 32 generated bytes (256 bits) will actually be the key and the IV needed for the AES decryption. Here’s how we use the above data in the key generator:

byte[] ki = (SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")).generateSecret(
  new PBEKeySpec(pwdChars, salt, opensslIters, keySize+ivSize)
).getEncoded();
byte[] keyBytes = Arrays.copyOfRange(ki,0,keySize/8);
byte[] ivBytes  = Arrays.copyOfRange(ki,keySize/8,(keySize+ivSize)/8);

Now we use Java’s implementation of the AES CTR encryption, without padding.

Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(ivBytes));
String plainText = new String(cipher.doFinal(cipherText), java.nio.charset.StandardCharsets.UTF_8);

That’s it. We now have the original plaintext.
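Putting all of the above together, the whole sequence fits in one small self-contained program (the class and method names are my own invention; the values are the ones used throughout this post):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class OpenSSLDecrypt {
    public static String decrypt(String encrypted, String password) throws Exception {
        byte[] prefixed   = Base64.getDecoder().decode(encrypted);
        byte[] salt       = Arrays.copyOfRange(prefixed, 8, 16);  // skip "Salted__", take 8 salt bytes
        byte[] cipherText = Arrays.copyOfRange(prefixed, 16, prefixed.length);
        // PBKDF2-HMAC-SHA256, 10,000 iterations, 256 bits: 16 bytes of key + 16 bytes of IV
        byte[] ki = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(new PBEKeySpec(password.toCharArray(), salt, 10000, 256))
                .getEncoded();
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(Arrays.copyOfRange(ki, 0, 16), "AES"),
                new IvParameterSpec(Arrays.copyOfRange(ki, 16, 32)));
        return new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(decrypt("U2FsdGVkX19l6/etNkl585d+Y1XgyPc=", "testpwd")); // testing
    }
}
```

Compile and run with `javac OpenSSLDecrypt.java && java OpenSSLDecrypt` and you should see the original plaintext.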

It’s far more involved than the equivalent operation in Perl, or the one-line OpenSSL command. You can also do the decryption in other languages, such as JavaScript/Node, though you may need to install some additional libraries/modules and you’ll likely have many to choose from. There’s no “official” solution in many cases.

Obviously the next steps would be to streamline some of the array operations and upgrade to SHA-512 and more KDF iterations, but you need to make sure that the configurations used when generating the cipher match those applied during decryption. Conveying that additional metadata with your cipher is left as an exercise for the reader :)

Lessons

LUE

It is fair to assume that, having spent almost half a century working with computers, a computer scientist like me would have learned a few lessons along the way. Could I list them all? I wish. (There might be a book in it if I could!) Could I even remember them all? Definitely not, and I’m not sure I would want to. As I mentioned previously, I’ve been involved in a lot of technical transitions but many of my early experiences of the world of computers would now be considered historical curiosities. I learned not to touch certain parts of the innards of my CRT oscilloscope but that lesson has almost no relevance today. Leaving a few unused bytes beyond the end of routines to allow for easy patching was a lesson that was relevant for me in the early 80s, but the last time someone paid me to do a low-level fix needing spare bytes was probably ’84 or ’85 and the last time I ever did any low-level code whatsoever was probably three years later. After that, everything I did involved high-level languages. Why spend time in assembly when a C compiler will produce something as good, possibly better?

Without a doubt, most of the tech lessons I picked up along the way are now practically useless to me. In a way it feels like my head is jammed full of useless information. That’s a lot of years to spend with nothing to show.

On the other hand, I look at current developments, practices, advances and the myriad products of our technological world and realise that at a more abstract level most lessons remain relevant.

Take my oscilloscope lesson. I learned that one the hard way while trying to adjust the sweep control using an uninsulated long-shaft screwdriver. The real lesson, however, is to explore the risks in advance and take appropriate precautions. Another lesson from that experience is that knowledge alone isn’t enough. I knew about the high voltages, but I was focussed on a separate part of the circuitry and so put the nearby exposed danger out of my mind. I was touching the screwdriver shaft to steady it while adjusting something (can’t remember exactly what, a potentiometer perhaps?) when it tipped against the tube circuit. Bam! That certainly got my attention. A few kV will do that to you.

In the spirit of looking back (which is what I have been doing lately), what other high-level lessons have I learned over the decades? I have a few favourites. There’s no point in trying to put these into chronological order as many of these were learned over a long period of time and I cannot possibly remember when the lesson started. So here goes, in no particular order:

Document everything

This may seem obvious. In fact, most of the best lessons seem obvious when you look back, but were not so obvious at the time. One of the things that I discovered about myself early on is that I have a terrible memory. Or, as I like to explain to people, I have a terrible lookup system in my brain. It’s like a library that has lost its Dewey cards, or a book that has lost the index at the back. I will remember things, but first I need some clue, some hook, to get the memory back. To overcome this lookup problem I would make short notes for myself, enough to trigger the memories, but over time, having learned that I don’t know how detailed the notes needed to be, I started to take copious notes. To my surprise I found that I got a lot of pleasure from documenting things. Many, many times over the years my documentation has been key to solving problems, avoiding repetition of past mistakes and the means of educating others. Write it all down, explaining to yourself as if you’d never encountered this before. Where do you start, what are the steps, why are you doing it, what are the danger signs, how do you know you are finished… all the questions that pop into your head the first time around, get them down onto paper so that the next time around you will have the answer right there in front of you.

Another benefit of documenting everything contemporaneously is that it makes you pause to think. Imagine if I had been documenting my efforts to repair my oscilloscope; the danger of the proximity of the exposed high-voltage connectors would have been writ large, and the lesson would have been more theoretical than theatrical.

Knowing the immense benefits of good documentation it really annoys me to see the poor (or missing!) documentation of today’s technology. Indeed, some of the documentation has become a minimalist art form. Think “IKEA build instructions” or “Apple setup guides”. OK, I admit they may be suitable for a lot of people, but in many cases there is no next level of documentation. If you get stuck, your next line of support is to try to speak to a customer support person, or a bot pretending to be a person. I honestly can’t imagine any of those would be helpful.

Speed is not the only metric

During my undergraduate years it was often the case that my contemporaries would boast about their solutions being the fastest. Some problem had been posed (often by a lecturer but we’d create problems of our own too) and there would be an informal competition to come up with the best solution. The measure of success in almost every case was the time taken to solve the problem. Performance equals speed.

The kinds of problems that I eventually found myself working on presented additional challenges. Embedded systems, for example, would have a scarcity of resources, memory in particular. If I needed a solution to work in my embedded system I would not only have to consider performance in terms of speed, but also in the amount of space it occupied. Did I really need the speed? A solution that decoded the data in twice the time but half the memory would still get the job done, and the benefit of the smaller footprint far outweighed the completion time because the saved space allowed other functions to be present. Over the years I’ve found that many metrics can be applied, according to the needs and constraints of the problem at hand. Among these are the following:

  • Speed. Or “time to completion”. This is the obvious one that I include here to get the ball rolling.
  • Space. How much space does the solution occupy? If you had a choice of sacrificing speed for space, would that be a good idea for the problem you are solving? Always worth asking the question.
  • Accuracy. Do you really need a solution that gives you a result to 100 decimal places? Maybe you only need two decimal places, in which case there might be better alternative solutions.
  • Understandability. I invented that word, but you know what I mean. If the solution you have devised is going to be given to someone else, or needs to be maintained by other people in the future, could you sacrifice some of the other performance metrics (speed etc.) to make your solution easier to understand?
  • Accountability. If your solution presents an answer, can you explain why the answer is correct? If a solution relies on some magical incantations, or merely on weights from repeated pattern matching on specific samples, then it doesn’t have the necessary characteristic of accountability even if it appears to offer correct solutions. If proof is needed then all other performance metrics are irrelevant.
  • Portability. Another way of measuring the success of a solution is how well it can be re-implemented in other contexts. That may mean in different programming languages, different hardware architectures, different user environments etc. A solution that relies on specific features/quirks of a runtime environment may be sacrificing portability in favour of speed, space etc.

The overall lesson here is to properly understand the metrics of success. In general that depends on your objectives and the constraints within which you have to devise a solution.

Take a break

I don’t know why (I have theories) but if you’ve been focussed on a problem for a very long time without finding a solution, put it aside for a while. Somewhere in the back of your mind things will come together and when you next approach the problem you may find that the solution is right there in front of you.

For as long as I can remember I have always problem-solved while sleeping. Like many others in the tech space I have worked through the night, sometimes alone, sometimes with a team, but the real overnight problem-solving seems to happen while asleep. There is nothing more satisfying than gradually coming back to consciousness after a night’s sleep to the realisation that you have a solution to the problem. A solution that is so unlike what you previously thought would be the solution that it seems like the real solution is supernatural. It’s not. It’s just you thinking differently.

There’s more to coding than syntax

I can code in dozens of programming languages, a consequence of having many, many years to learn them all. Nevertheless, being able to construct a syntactically correct sequence in a programming language does not necessarily mean you are coding in that particular language. I’ve encountered programs written in C++ that were obviously written as if they were C, PHP that looks like Perl, C# and JavaScript in the style of Java. Instead of the many different syntaxes it is better to pay attention to the language paradigms, such as Object Oriented, Functional, Procedural and Declarative. Most programming languages can be shoe-horned into most of these styles of programming, though each obviously excels at certain approaches. C++ is intended to be Object Oriented, but contains many of the elements needed for procedural programming. JavaScript (or any of its variants, including ECMAScript) can handle pretty much any style of coding and is inherently a prototype-based OO language with exceptional support for Functional styles, but the syntax (curly braces and all) provides a comfortable environment for C/C++/Java coders. This “comfort” can unfortunately lead them to write JS in the style of their preferred language. Is this good or bad? Hard to tell. It could lead them to concentrate only on the characteristics that are familiar from their previous language (e.g. inheritance in C++) and miss the opportunities offered by the new language (e.g. prototypes in JS).

Incremental

LUE

It is generally observed that advances (in tech) happen incrementally. By and large the increments are small and go unnoticed. For the past half century or so I’ve been like a particle in a Brownian motion experiment, constantly nudged in new directions by the latest incremental impact. Out of interest, I’ve tried to recall some of the nudges I’ve felt over the years, and here’s what I’ve come up with:

Analogue to Digital

The 1970s would be when I started my tech journey, and at that time most of the stuff I was tinkering with was made of thermionic valves, resistors as thick as a pencil and chunky capacitors that I was often (rightly) wary of touching. I delved eagerly into the workings of a simple superheterodyne vacuum tube receiver and had learned enough at a young age to be able to repair a small CRT oscilloscope, which served me well for another decade. The first technical increment that made a big impression on me was my first solid-state transistors. These came to me by way of my father who had acquired a piece of Russian equipment from a pilot. The exact provenance I cannot remember, but I certainly recall these small metal-encased three-legged wonders that were far more interesting than the hot glowing triodes I had been used to. These new wonders used voltages that wouldn’t kill me, had switching speeds that were incredible (and visible, now that I had a working oscilloscope) and didn’t break if I dropped them. I lost a good few valves that way. For me, those little Cyrillic-encrusted cylinders were my proper introduction to real electronics.

Discrete to Integrated

Transistor-Transistor Logic (TTL) had been invented just before I was born, and TI’s 74xx series of relatively inexpensive TTL integrated gate packages appeared on the market around the time I started school, but it wasn’t until my teens that I encountered my first 7400 (the 74LS00 to be precise), the quad NAND that was the workhorse of so many early digital circuits. I had already gone through a phase of discrete components including valves and transistors but I was still dabbling in analogue electronics and then the 7400 introduced me to digital. That changed everything.

Circuitry to Code

A neighbour ran an electronics shop and from him I got most of the components to feed my hobby, and in my early teens he introduced me to the next step above the discrete logic circuits I had been working on. I had briefly been given a 4004-based processor – really just a calculator but by my reckoning it was pure magic. From this I learned the essence of low-level programming, but it wouldn’t be until around 1979 that I finally got access to a real computer, a Z80-based Cromemco multi-user system. I already had enough electronics knowledge to be dangerous, and now finally I was coding. I had been nudged from seeing logic as circuitry to seeing it as mathematics (i.e. algorithms).

Mainframe to Personal Computing

By the time I started in college (thinking that I wanted to be a physicist) I was already coding in several high-level languages, and as I was already spending much of my spare time running a club in which I could share that knowledge, I was invited to continue that teaching activity as part of the university’s student computer society. Through the society I got access to a mainframe, and much more. In fact, although I had access to an Apple before I started in college, it was via the computer society that I first encountered a “personal computer”. This green-screened PC was revolutionary and the impact would be felt worldwide.

Supercomputing to Portable

The next nudge for me happened shortly before graduation, when the focus had turned to parallel computing and I was tasked with creating an operating system for a bus-based multiprocessor. That piqued my interest in other such systems, leading me into hypercubes (for which I designed and built some prototype nodes), the Transputer and eventually a funded research project in which I had sole use of a massively parallel DAP supercomputer, which I coded in a variant of Fortran. OK, maybe not that super, as it only had 1024 simple processors but it was quite a radical machine nevertheless. Parallel processing, algorithm design and digital electronics were building blocks of my PhD, and would remain my primary research interest until the close of my academic career when I moved from thinking about large-scale computing to thinking of the smallest: mobile.

At this point I must also mention that there was one other nudge that happened before I moved into industry, that would later have a massive impact on me: the World Wide Web. Networking was a topic in which I had become proficient, to the point of lecturing in the subject and providing national system administration, and I was engaged in the research networks of the early 1990s when the Web was launched and many of us tried out NCSA Mosaic as an alternative to using tools like Gopher to find the research material we wanted. I saw the Web as a useful tool, but had no idea how much of an impact it was going to have on the World, and on me personally.

Desktop to Mobile

The move from academia to industry was a bit of a jolt. Whereas in academia a new idea would stir thoughts of how to share, popularise, teach, publish…, in industry any new idea was captured, contained, documented for the patent/IP people, probed by Sales/Marketing, and promptly shelved in secret because “the market isn’t ready” or something like that. I had gone from thinking in terms of supercomputers and global networks to thinking about small portable computing devices. I had a mobile phone during my time as a national sys-admin, but the only thing “digital” about it was the occasional SMS (often about some networking emergency in the dead of night). People in my new social circle were beginning to see possibilities for digital communications, and I would eventually lend my weight to their effort to create a new business in this emerging space. While with them the next nudge became obvious: the Web was going mobile.

On behalf of my company and colleagues I was part of the World Wide Web Consortium (W3C), and had several senior roles within that organisation over a period of about a decade. In that time the teams around me created powerful adaptive solutions that involved clever on-premises deployments to deliver content to a wide variety of clients, including Web and Web-adjacent (e.g. WAP) devices. These deployments gradually increased in complexity and demanded more technical know-how of our customers than they were willing (or able) to manage, until eventually it became clear that we should host the service on behalf of the customers. We moved from local on-premises into the Cloud.

Local to Cloud

Once again I was back in computational parallelism, this time involving clusters of virtual machines located in various cloud service providers. Clouds offered all kinds of benefits, notably the ease of scaling, fault tolerance, security and deployment efficiency. As my time with the various teams and customers was coming to an end, the backend server stack was still the most complex part of the overall multi-channel solutions. By the time I had launched my tech consultancy, the Web browser ecosystem had become significantly more competent, more efficient and more effective. Thanks to various standardisation efforts, intelligent client-side solutions started to take over the burden previously the preserve of the servers. We were being nudged away from server-side towards client-side.

Backend to Frontend

Client-side frameworks for Web-based solutions really kicked off shortly after HTML5 hit the headlines. Up to that point I was still immersed in things like Servlets, Spring, Wicket, Struts and JSF. The browser still wasn’t seen as a place for anything more complex than form data validation, but the introduction of jQuery and Ajax had shown that cross-browser scripting of the displayed content was viable. So by 2010 the stability of browser standards encouraged the emergence of frontend technologies like AngularJS, Bootstrap, Vue, TypeScript, React and more. Even the backend got some frontend attention with the creation of Node. I moved from the big company scene to independent consultancy around the time that React Native, Angular and Progressive Web Applications were taking off.

Corporate to Independent

This nudge was not so much one of technology itself, but of my relationship with it. Over several decades I have been immersed in technology as a hobbyist, as a student, as a researcher, as an educator, as an inventor, as a CTO, as a standards leader and finally as a consultant. I am fortunate to have amassed experience in a wide range of technologies over the years, and now I continue to put that knowledge to good use for my clients. Initially my clients were as varied as the journey that got me to this point, but gradually I have specialised in the technical needs of companion animal organisations and the veterinary space in general. This is a surprisingly complex space with very specific needs. My first foray into the space was back when hosted services were gathering pace, shortly before the Cloud came on our radars, just to provide some assistance keeping a basic online pet reunification service up-and-running. What started as something I’d do now and then in my spare time, has become my main focus and I’m happy to be able to bring my experience to the table.

What will the next nudge be? Perhaps it is already happening and it’s too early for me to notice. I’m sure it’ll be fun, whatever it is.