Imagine this: someone clamps a wooden clothes peg onto the little finger of your left hand. (Or right, if you are left-handed.) It doesn’t really hurt, and it doesn’t get in your way, but it’s unpleasant and you’d rather it wasn’t there. After a few minutes you are engaged in conversation and you forget (almost) that you have the peg on your finger. But every now and then you are distracted by the pain, your attention is broken, you are annoyed.
You try reading a book, eating a meal, watching a programme on TV. You get some long periods where you almost forget the peg, but it’s hard to ignore.
Then at night, when there’s nothing to draw away your attention, the pain of the peg seems worse than it has been all day.
You wake in the morning, and somehow you don’t notice the peg. At least for a few minutes. Then, a twinge, and suddenly you are back to the constant irritation.
You ask for the peg to be removed. You are told “no”. You ask “why?”. They show you your finger, there is no peg there. It’s in your mind. Your brain has undergone some form of “adjustment” and the sensation of the peg on your finger is a manifestation of this. There’s no cure. It’s there forever. Every day. Every night. For the rest of your life. You can perform some actions now and then that will very briefly make the pain worse, but it will never get better.
Now, instead of it being a painful peg on your finger, suppose what went awry in your brain left you with the sensation of a scream, a high-pitched whistle, on the left side, close to your left ear, but on the inside of your head. 8kHz, to be precise. 50-60dB, about as loud as someone talking. Only they’re blowing a whistle. On the inside-left of your head.
That’s me. I suffer from tinnitus and that’s what it is like for me. All the time. Has been this way for decades, gradually getting louder, or so it seems to me.
I’ve been asked what it’s like. To demonstrate, I use a tone generator on my phone: I place the phone on the asker’s shoulder and turn it on. Some people can’t hear it, because 8kHz is quite high and the older you get the harder it is to hear high frequencies. That escape doesn’t apply when the sound isn’t really sound at all, which is why I will always be able to hear mine, even if I were to go completely deaf. Those who can hear the tone generator usually ask me to remove it after a few seconds. It’s unpleasant. Like a peg on your little finger. You wouldn’t want that there forever, would you?
Using OpenSSL I can apply a symmetric 128-bit AES cipher in “counter” mode (CTR, per RFC 3686), with “testpwd” as the password, to produce a salted encryption of the word “testing”, encoded in Base64. The command goes something like this (note the “-pbkdf2” flag, which will matter later):
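echo -n "testing" | openssl enc -e -aes-128-ctr -salt -pbkdf2 -pass pass:testpwd -base64
U2FsdGVkX19l6/etNkl585d+Y1XgyPc=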
If I ever need to store a value in secret I can encrypt it like this, and decrypt whenever I need it so long as I remember the password. If an automated process in possession of the password wants to know the secret value, it could invoke OpenSSL to get it.
For example, Perl can invoke OpenSSL via its library modules:
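Something along these lines should work, using the Crypt::CBC module (version 3 or later, which understands OpenSSL’s salted format; the exact options are my assumption, so check the module’s documentation):

use Crypt::CBC;
use MIME::Base64 qw(decode_base64);

# Crypt::CBC 3.x speaks OpenSSL's "Salted__" header, CTR chaining and
# PBKDF2 key derivation; its default of 10,000 iterations matches OpenSSL's.
my $cipher = Crypt::CBC->new(
    -pass       => 'testpwd',
    -cipher     => 'Crypt::Cipher::AES',   # from the CryptX distribution
    -keysize    => 16,                     # 128-bit key
    -chain_mode => 'ctr',
    -pbkdf      => 'pbkdf2',
);
print $cipher->decrypt(decode_base64('U2FsdGVkX19l6/etNkl585d+Y1XgyPc='));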
Recently I wanted to do the decryption via Java (v17) without calling OpenSSL directly, deferring to an OpenSSL library or using something like BouncyCastle. I wanted to do this using off-the-shelf Java. It turned out to be a little more convoluted than I had expected, and rather educational, so I’m sharing the answer here.
The algorithm
Let me first explain a few things about the OpenSSL encryption result (U2FsdGVkX19l6/etNkl585d+Y1XgyPc=) so we can understand how it will be decrypted.
If you repeat the OpenSSL encryption multiple times with the same input and password you will get a different result each time. All of these results will decrypt correctly to the same original plaintext input. Each result is different because of the “-salt” parameter, which tells OpenSSL to include some random salt data in the encryption. To decrypt you will need to know both the password and the salt, and as you only have the password, the salt must be included in the output. U2FsdGVkX19l6/etNkl585d+Y1XgyPc= is a salted result. The random salt used by OpenSSL is in there somewhere.
Interestingly, while every result will be different, they will all start with these characters: U2FsdGVkX1
To make the output legible as text, and not something bizarre like a binary stream, it has been encoded in Base64, and if you decode that opening U2FsdGVkX1… blob you find that it says: Salted__
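A quick round-trip confirms it:

echo -n "Salted__" | base64
U2FsdGVkX18=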
That means that the rest of the result (ignoring the “=” Base64 padding at the end), 9l6/etNkl585d+Y1XgyPc, must include both the salt and the encrypted input.
In fact, to make this a little clearer I originally added the parameters “-v -p” to the OpenSSL command so that it would print out the internal values involved in the encryption. The output looks like this (I won’t reproduce the actual key and IV values here):
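salt=65EBF7AD364979F3
key=<32 hex digits, derived from the password and salt>
iv =<32 hex digits, derived alongside the key>
U2FsdGVkX19l6/etNkl585d+Y1XgyPc=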
The salt, written out in Hex, is 65EBF7AD364979F3 (8 bytes). The OpenSSL aes-128-ctr algorithm requires an encryption key of 16 bytes, and it also has an Initialisation Vector (IV) of 16 bytes, which is an unpredictable value but not sensitive (unlike the password!). So where did these big values come from, and how were they all squeezed into that small encryption?
The random salt has to be included as-is in the generated encryption output. Let’s examine it in more detail:
Output from OpenSSL in Base64: U2FsdGVkX19l6/etNkl585d+Y1XgyPc=
As text: Salted__…unprintable…binary…
As Hex: 53616C7465645F5F65EBF7AD364979F3977E6355E0C8F7
You can see the salt within the Hex of the encryption: 53616C7465645F5F 65EBF7AD364979F3 977E6355E0C8F7 (the “Salted__” prefix, then the 8-byte salt, then the rest).
So that leaves the remaining part of the Hex to represent the encrypted plaintext: 977E6355E0C8F7
You will notice this is 14 hex digits, representing 7 bytes, and that the original plaintext (“testing”) was 7 characters. Under the hood, CTR mode turns AES into a stream cipher: the plaintext is XORed with an AES-generated keystream, so the result is exactly the same length as the original. 7 characters in, 7 bytes out.
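In fact, because CTR is just an XOR, you can recover the keystream AES produced for this particular message by XORing the plaintext bytes with the ciphertext bytes:

plaintext:  74 65 73 74 69 6E 67   ("testing" in ASCII)
ciphertext: 97 7E 63 55 E0 C8 F7
keystream:  E3 1B 10 21 89 A6 90   (the XOR of the two rows above)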
What about that 16 byte key, where did that come from? In fact, it’s produced by hashing the password with a password-based key derivation function known as PBKDF2 (selected via the “-pbkdf2” parameter, and known as PBKDF2WithHmacSHA256 in Java, but more on that later). Early key derivation functions used iterations of MD5 as the underlying hashing function (or Message Digest), but current OpenSSL assumes by default that “-md sha256” has been specified and therefore uses the 256-bit Secure Hashing Algorithm (SHA-256). This is important to know because a lot of the reference/sample material available online still assumes MD5 or SHA-1 is lurking beneath the covers. Furthermore, OpenSSL repeats the hashing 10,000 times by default, though you can change this via the “-iter” parameter.
Thus, in order to successfully decrypt the aes-128-ctr cipher you need to know the following additional things about how the plaintext was encrypted:
What hashing function was used? SHA-256
How many iterations were applied? 10,000
What key derivation was involved? PBKDF2 (RFC 2898, a.k.a. PKCS #5 v2.0)
Taking into account the current OpenSSL defaults, which will likely change in the future as we learn more about cryptographic practices, the complete command to generate the cipher (with random salt) would be:
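echo -n "testing" | openssl enc -e -aes-128-ctr -salt -pbkdf2 -md sha256 -iter 10000 -pass pass:testpwd -base64

(This spells out every default explicitly; swap “-e” for “-d” and pipe in the Base64 to reverse the process.)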
Finally, what about the Initialisation Vector? This is an unpredictable 16 byte value associated with the key, and it should come as no surprise that OpenSSL uses the same key derivation function (KDF) to provide the IV. In fact, it applies the KDF to the password+salt to produce 32 bytes: the first 16 it uses for the key, and the remaining 16 it uses for the IV.
With all this in mind, we can now turn our attention to how to unravel an OpenSSL cipher using Java.
Java
The following solution uses Java 17 and the standard off-the-shelf libraries with which it is typically distributed. You should not need to add any additional resources to get this to work.
The javax.crypto package has been part of Java almost since the beginning, and has evolved along with the major developments in the cryptographic industry. Almost everything you might need is in there. Somewhere. Finding what you need can be a challenge. Finding good documentation more so.
Now some definitions, which you should recognise from the discussion above.
int prefixSize = "Salted__".length(); // 8
int saltSize = 8; // bytes
int keySize = 128; // bits (16 bytes)
int ivSize = 128; // bits (16 bytes)
int opensslIters = 10000; // OpenSSL default for "iter"
We assume two String values, encrypted (the Base64 output from OpenSSL) and password (in the examples above this was “testpwd”). Knowing how the data in the cipher is arranged, we can now split it out into the various parts so that we can use it in the decryption algorithm:
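(Variable names here are chosen to match the snippets that follow.)

byte[] raw = Base64.getDecoder().decode(encrypted);    // java.util.Base64
byte[] salt = Arrays.copyOfRange(raw, prefixSize, prefixSize + saltSize);
byte[] cipherText = Arrays.copyOfRange(raw, prefixSize + saltSize, raw.length);
char[] pwdChars = password.toCharArray();              // PBEKeySpec wants a char[]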
To use a SHA-256-based KDF to generate the bytes of a secret key in Java we need an instance of the PBKDF2WithHmacSHA256 algorithm. The 32 generated bytes (256 bits) will provide both the key and the IV needed for the AES decryption. Here’s how we use the above data in the key generator:
// Derive 256 bits in one go: the first 128 become the key, the last 128 the IV
byte[] ki = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256").generateSecret(
        new PBEKeySpec(pwdChars, salt, opensslIters, keySize + ivSize)
    ).getEncoded();
byte[] keyBytes = Arrays.copyOfRange(ki, 0, keySize/8);
byte[] ivBytes  = Arrays.copyOfRange(ki, keySize/8, (keySize + ivSize)/8);
Now we use Java’s implementation of AES in CTR mode, without padding.
Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding"); // CTR is a stream mode, so no padding
cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(ivBytes));
// StandardCharsets.UTF_8 avoids the checked exception that new String(bytes, "UTF-8") throws
String plainText = new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8);
That’s it. We now have the original plaintext.
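To pull it all together, here is everything assembled into a single runnable class (a minimal sketch: hard-wired values and no error handling):

import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class OpenSslDecrypt {
    public static void main(String[] args) throws Exception {
        String encrypted = "U2FsdGVkX19l6/etNkl585d+Y1XgyPc=";
        String password  = "testpwd";

        int prefixSize = "Salted__".length(); // 8
        int saltSize = 8;    // bytes
        int keySize  = 128;  // bits
        int ivSize   = 128;  // bits

        // Split the Base64 payload into its parts
        byte[] raw = Base64.getDecoder().decode(encrypted);
        byte[] salt = Arrays.copyOfRange(raw, prefixSize, prefixSize + saltSize);
        byte[] cipherText = Arrays.copyOfRange(raw, prefixSize + saltSize, raw.length);

        // Derive key + IV from the password and salt, as OpenSSL does
        byte[] ki = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256").generateSecret(
                new PBEKeySpec(password.toCharArray(), salt, 10000, keySize + ivSize)
            ).getEncoded();

        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(Arrays.copyOfRange(ki, 0, keySize/8), "AES"),
                new IvParameterSpec(Arrays.copyOfRange(ki, keySize/8, (keySize + ivSize)/8)));

        System.out.println(new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8)); // "testing"
    }
}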
It’s far more involved than the equivalent operation in Perl, or the one-line OpenSSL command. You can also do the decryption in other languages, such as JavaScript/Node, though you may need to install some additional libraries/modules, and you’ll likely have many to choose from. There’s no “official” solution in many cases.
Obviously the next steps would be to streamline some of the array operations and upgrade to SHA-512 and more KDF iterations, but you need to make sure that the configurations used when generating the cipher match those applied during decryption. Conveying that additional metadata with your cipher is left as an exercise for the reader :)
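For example, an upgraded pairing might look like this (the 100,000 iteration count is just for illustration):

echo -n "testing" | openssl enc -e -aes-128-ctr -salt -pbkdf2 -md sha512 -iter 100000 -pass pass:testpwd -base64

with the Java side switching to SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512") and opensslIters = 100000 to match.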
It is generally observed that advances in tech happen incrementally. By and large the increments are small and go unnoticed. For the past half century or so I’ve been like a particle in a Brownian motion experiment, constantly nudged in new directions by the latest incremental impact. Out of interest, I’ve tried to recall some of the nudges I’ve felt over the years, and here’s what I’ve come up with:
Analogue to Digital
The 1970s would be when I started my tech journey, and at that time most of the stuff I was tinkering with was made of thermionic valves, resistors as thick as a pencil and chunky capacitors that I was often (rightly) wary of touching. I delved eagerly into the workings of a simple superheterodyne vacuum tube receiver and had learned enough at a young age to be able to repair a small CRT oscilloscope, which served me well for another decade. The first technical increment that made a big impression on me came in the form of my first solid-state transistors. These came to me by way of my father, who had acquired a piece of Russian equipment from a pilot. The exact provenance I cannot remember, but I certainly recall these small metal-encased three-legged wonders that were far more interesting than the hot glowing triodes I had been used to. These new wonders used voltages that wouldn’t kill me, had switching speeds that were incredible (and visible, now that I had a working oscilloscope) and didn’t break if I dropped them. I lost a good few valves that way. For me, those little Cyrillic-encrusted cylinders were my proper introduction to real electronics.
Discrete to Integrated
Transistor-Transistor Logic (TTL) had been invented just before I was born, and TI’s 74xx series of relatively inexpensive TTL integrated gate packages appeared on the market around the time I started school, but it wasn’t until my teens that I encountered my first 7400 (the 74LS00 to be precise), the quad NAND that is the workhorse of many early digital circuits. I had already gone through a phase of discrete components including valves and transistors but I was still dabbling in analogue electronics and then the 7400 introduced me to digital. That changed everything.
Circuitry to Code
A neighbour ran an electronics shop and from him I got most of my components to feed my hobby, and in my early teens he introduced me to the next step above the discrete logic circuits I had been working on. I had briefly been given a 4004-based processor – really just a calculator, but by my reckoning it was pure magic. From this I learned the essence of low-level programming, but it wouldn’t be until around 1979 that I finally got access to a real computer, a Z80-based Cromemco multi-user system. I already had enough electronics knowledge to be dangerous, and now finally I was coding. I had been nudged from seeing logic as circuitry to seeing it as mathematics (i.e. algorithms).
Mainframe to Personal Computing
By the time I started in college (thinking that I wanted to be a physicist) I was already coding in several high-level languages, and as I was already spending much of my spare time running a club in which I could share that knowledge, I was invited to continue that teaching activity as part of the university’s student computer society. Through the society I got access to a mainframe, and much more. In fact, although I had access to an Apple before I started in college, it was via the computer society that I first encountered a “personal computer”. This green-screened PC was revolutionary and the impact would be felt worldwide.
Supercomputing to Portable
The next nudge for me happened shortly before graduation, when the focus had turned to parallel computing and I was tasked with creating an operating system for a bus-based multiprocessor. That piqued my interest in other such systems, leading me into hypercubes (for which I designed and built some prototype nodes), the Transputer and eventually a funded research project in which I had sole use of a massively parallel DAP supercomputer, which I coded in a variant of Fortran. OK, maybe not that super, as it only had 1024 simple processors but it was quite a radical machine nevertheless. Parallel processing, algorithm design and digital electronics were building blocks of my PhD, and would remain my primary research interest until the close of my academic career when I moved from thinking about large-scale computing to thinking of the smallest: mobile.
At this point I must also mention that there was one other nudge that happened before I moved into industry, that would later have a massive impact on me: the World Wide Web. Networking was a topic in which I had become proficient, to the point of lecturing in the subject and providing national system administration, and I was engaged in the research networks of the early 1990s when the Web was launched and many of us tried out NCSA Mosaic as an alternative to using tools like Gopher to find the research material we wanted. I saw the Web as a useful tool, but had no idea how much of an impact it was going to have on the World, and on me personally.
Desktop to Mobile
The move from academia to industry was a bit of a jolt. Whereas in academia a new idea would stir thoughts of how to share, popularise, teach, publish…, in industry any new idea was captured, contained, documented for the patent/IP people, probed by Sales/Marketing, and promptly shelved in secret because “the market isn’t ready” or something like that. I had gone from thinking in terms of supercomputers and global networks to thinking about small portable computing devices. I had a mobile phone during my time as a national sys-admin, but the only thing “digital” about it was the occasional SMS (often about some networking emergency in the dead of night). People in my new social circle were beginning to see possibilities for digital communications, and I would eventually lend my weight to their effort to create a new business in this emerging space. While with them the next nudge became obvious: the Web was going mobile.
On behalf of my company and colleagues I was part of the World Wide Web Consortium (W3C), and had several senior roles within that organisation over a period of about a decade. In that time the teams around me created powerful adaptive solutions that involved clever on-premises deployments to deliver content to a wide variety of clients, including Web and Web-adjacent (e.g. WAP) devices. These deployments gradually increased in complexity and demanded more technical know-how of our customers than they were willing (or able) to manage, until eventually it became clear that we should host the service on behalf of the customers. We moved from local on-premises into the Cloud.
Local to Cloud
Once again I was back in computational parallelism, this time involving clusters of virtual machines located in various cloud service providers. Clouds offered all kinds of benefits, notably the ease of scaling, fault tolerance, security and deployment efficiency. As my time with the various teams and customers was coming to an end, the backend server stack was still the most complex part of the overall multi-channel solutions. By the time I had launched my tech consultancy, the Web browser ecosystem had become significantly more competent, more efficient and more effective. Thanks to various standardisation efforts, intelligent client-side solutions started to take over the burden previously the preserve of the servers. We were being nudged away from server-side towards client-side.
Backend to Frontend
Client-side frameworks for Web-based solutions really kicked off shortly after HTML5 hit the headlines. Up to that point I was still immersed in things like Servlets, Spring, Wicket, Struts and JSF. The browser still wasn’t seen as a place for anything more complex than form data validation, but the introduction of jQuery and Ajax had shown that cross-browser scripting of the displayed content was viable. So by 2010 the stability of browser standards encouraged the emergence of frontend technologies like AngularJS, Bootstrap, Vue, TypeScript, React and more. Even the backend got some frontend attention with the creation of Node. I moved from the big company scene to independent consultancy around the time that React Native, Angular and Progressive Web Applications were taking off.
Corporate to Independent
This nudge was not so much one of technology itself, but of my relationship with it. Over several decades I have been immersed in technology as a hobbyist, as a student, as a researcher, as an educator, as an inventor, as a CTO, as a standards leader and finally as a consultant. I am fortunate to have amassed experience in a wide range of technologies over the years, and now I continue to put that knowledge to good use for my clients. Initially my clients were as varied as the journey that got me to this point, but gradually I have specialised in the technical needs of companion animal organisations and the veterinary space in general. This is a surprisingly complex space with very specific needs. My first foray into the space was back when hosted services were gathering pace, shortly before the Cloud came on our radars, just to provide some assistance keeping a basic online pet reunification service up-and-running. What started as something I’d do now and then in my spare time, has become my main focus and I’m happy to be able to bring my experience to the table.
What will the next nudge be? Perhaps it is already happening and it’s too early for me to notice. I’m sure it’ll be fun, whatever it is.
It’s not often that some piece of software makes me miserable, but over the past few weeks I’ve been subjected to an example of exceptionally bad software and I’m near breaking point. The culprit in the spotlight is EzCad3 (pronounced “Easy CAD”), a graphics tool intended to control laser etching/cutting hardware. In fact, the version I have is a slightly customised (i.e. feature-deprived) derivative of the official EzCad3, but I won’t mention the name as the suppliers of this kit are not to blame for the shoddy software. That honour belongs to the Beijing JCZ Technology Company, Ltd, who were founded 20 years ago and should know better by now.
EzCad3 is a 64-bit Windows application and it has reasonable implementations of all the 2D/3D features that you’d expect from something that controls some expensive industrial equipment. All the usual object manipulations, specific object types (barcodes, vectors, bitmaps etc.), extensive 2D hatching (think “planes”, including the “ink” of text) and more. It also has an impressive list of laser equipment it can support, including the fiber laser that I’ve been tasked to automate.
The problem is with the interfaces, both the UI and the SDK. The latter I abandoned because it is far too low-level. I am not trying to reinvent a CAD tool; I just want to pump some content into a template, resize any bits that go out-of-bounds, and start the laser marking process. I don’t want to be down at the level of directly manipulating the head axes, motors, energy source etc. Instead I want the automation up at the CAD level, where I can load objects, set their properties, arrange and orient them, then hit “go”.
Not possible, it would seem. Well, not normally possible, but if you are willing to sell your soul there are ways.
I’ve had to resort to emulating the gestures of a human user: mouse movements, clicks, keyboard interactions etc. This would be less of a nightmare if EzCad3 were consistent in its UI and at least provided a keyboard version of every action that currently requires a mouse. Sadly, it does not. In fact, very few of the mouse operations have keyboard equivalents. Some of the operations have menu equivalents, which can be navigated to via a sequence of keyboard right+down operations, but many are missing. Even using direct access to the underlying Win32 controls doesn’t always work. For example, there’s no way to select a particular object in a CAD file via the keyboard, and sending “select” commands to the ListView control merely causes the items to be highlighted, but crucially not selected. Without a selection, I have no access to the fields that can be used to set properties like the X,Y coordinates. My solution was to simulate a mouse click within the control at the position (possibly off-screen) where the object would be listed.
I have spent weeks creating Win32 automation work-arounds for many of the deficiencies in EzCad3. Today, for example, I found a work-around for the fact that it won’t refresh text from its source file. (Think “mail merge” but with CAD data.) I discovered that if I have a group of text objects that are bound to a source (e.g. a text file) and I apply a hatching to the group, EzCad3 will re-load the content from the text file(s). This is good because then I have the text objects set to the size that the loaded text dictates, and I can inspect the object properties to see if any are wider than the engraving zone, and resize if necessary.
An hour or more can go by while I bash my head against what appears to be an impossible problem, and then by accident I find a way past, only to be hit by the next speed bump.
The journey will end, hopefully soon, but it’s so, so miserable.
(Yes, I might document my findings, but not until I’ve had time to recover.)
There are many ways to make a program/script pause for a few seconds. Here are some of my favourites.
Windows
There are two built-in sleep functions you can include in command scripts (.cmd, .bat and PowerShell):
pause
The pause command pauses until you press a key. It has no option to set the period of time. While pausing, it displays the message “Press any key to continue…” (or “Press Enter to continue…:” in PowerShell).
timeout /nobreak /t 20
This will sleep for 20 seconds, displaying a constantly refreshing message saying “Waiting for N seconds”. With the /nobreak option you have to use Ctrl-C to cancel the timer. If you omit the /nobreak then pressing any key will stop the timer.
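If you have a sleep tool on your PATH (it isn’t part of stock Windows, but comes along with the likes of Cygwin, GnuWin32 and Git for Windows) you can use:

sleep 20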
This will pause for 20 seconds with no on-screen message. Ctrl-C will interrupt. You can use timings like 20s, 20m and 20h to indicate seconds, minutes and hours.
Also in PowerShell you can use the following:
Start-Sleep -Seconds 20
This is much like sleep.exe in that it displays nothing on screen. PowerShell also uses sleep as an alias for Start-Sleep.
Unix/Linux
The sleep tool (/bin/sleep) is available to every command shell in Unix/Linux. The syntax for a 20 second sleep is just this:
sleep 20
This assumes the period is 20s (seconds). It also understands minutes, hours and days, using suffixes m, h and d, though a sleep for several days would be quite unusual! You can also specify more complex periods, such as “sleep 6h 20m 15s” which sleeps for six hours, 20 minutes and 15 seconds.
Pausing until a keypress occurs is a little more complex. This bash one-liner usually works:
read -n1 -r -p "Press any key to continue..." keypress
The key pressed by the user will be available in variable $keypress. If you want something that times out after 20 seconds, use this:
read -t20 -n1 -r -p "Press any key to continue..." keypress
This hack using /usr/bin/timeout is horrible, but it works:
timeout 20s tail -f /dev/null
Scripting
Obviously there are as many ways to sleep within a program as there are programming languages. More, if you include the many feature libraries that accompany these languages. Some languages have built-in sleep functions, and some of these can be accessed directly from the command line or a command-level script. This means that if you know that a certain scripting language is present, regardless of operating system, you have access to a sleep function. Scripting languages generally do not have on-screen message side-effects when sleeping, so if you want a message then output one before you do the sleep.
My favourite scripting language is Perl, and here is how to sleep for 20 seconds from the command line:
perl -e "sleep 20"
If you want to use Perl to pause until the user presses Enter, this should work:
perl -e "<>"
Python is a little more involved. The following sleeps for 20 seconds and can only be interrupted by Ctrl-C:
python3 -c "import time; time.sleep(20)"
You can also try this in Ruby:
ruby -e 'sleep(20)'
Note that most scripting languages can also access the underlying operating system and/or shell so they could invoke the system’s sleep tool, but that means the script is not OS-independent so I won’t discuss those options any further here.