Installing Java Application As a Windows Service

June 26, 2016

It sounds like something you’d never need, but sometimes, when you distribute end-user software, you may need to install a Java program as a Windows service. I had to do it because I developed a tool for civil servants to automatically convert and push their Excel files to the open data portal of my country. The tool has to run periodically, so it’s a prime candidate for a service (which makes the upload possible even if the civil servant forgets about the task altogether; besides, repetitive manual uploads are a waste of time).

Even though there are numerous posts and Stack Overflow answers on the topic, it still took me a lot of time because of minor caveats and one important prerequisite that few people seemed to have: a bundled JRE, so that nobody has to download and install a JRE separately (that would complicate the installation process unnecessarily, and the target audience is not necessarily tech-savvy).

So, starting with a Maven project with jar packaging, I first thought of packaging an exe (with launch4j) and then registering it as a service. The problem with that is that the Java program uses a scheduled executor, so it never exits, which makes running it as a plain process-based service impossible.

So I had to “daemonize” it, using commons-daemon procrun. Before doing that, I had to assemble every component needed into a single target folder – the fat jar (including all dependencies), the JRE, the commons-daemon binaries, and the config file.

You can see the full Maven file here. The relevant bits are (where ${installer.dir} is ${project.basedir}/target/installer):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
    </configuration>
</plugin>
<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <executions>
        <execution>
            <id>assembly</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <finalName>opendata-ckan-pusher</finalName>
                <appendAssemblyId>false</appendAssemblyId>
            </configuration>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.7</version>
    <executions>
        <execution>
            <id>default-cli</id>
            <phase>package</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <copy todir="${installer.dir}/jre1.8.0_91">
                        <fileset dir="${project.basedir}/jre1.8.0_91" />
                    </copy>
                    <copy todir="${installer.dir}/commons-daemon">
                        <fileset dir="${project.basedir}/commons-daemon" />
                    </copy>
                    <copy file="${project.build.directory}/opendata-ckan-pusher.jar" todir="${installer.dir}" />
                    <copy file="${project.basedir}/install.bat" todir="${installer.dir}" />
                    <copy file="${project.basedir}/uninstall.bat" todir="${installer.dir}" />
                    <copy file="${project.basedir}/config/pusher.yml" todir="${installer.dir}" />
                    <copy file="${project.basedir}/LICENSE" todir="${installer.dir}" />
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>

You will notice install.bat and uninstall.bat, which are the files that use commons-daemon to manage the service. The installer creates the service. Commons-daemon has three modes: exe (which allows you to wrap an arbitrary executable), java (which is like exe, but for Java applications) and jvm (which runs the Java application in the same process as prunsrv itself; I’m not sure exactly how, though).

I could have used any of the three options (including the launch4j-created exe), but the jvm mode allows you to have a designated method to control your running application – that’s what the StartClass/StartMethod/StopClass/StopMethod parameters are for. Here’s the whole install.bat:

commons-daemon\prunsrv //IS//OpenDataPusher --DisplayName="OpenData Pusher" --Description="OpenData Pusher"^
     --Install="%cd%\commons-daemon\prunsrv.exe" --Jvm="%cd%\jre1.8.0_91\bin\client\jvm.dll" --StartMode=jvm --StopMode=jvm^
     --Startup=auto --StartClass=bg.government.opendatapusher.Pusher --StopClass=bg.government.opendatapusher.Pusher^
     --StartParams=start --StopParams=stop --StartMethod=windowsService --StopMethod=windowsService^
     --Classpath="%cd%\opendata-ckan-pusher.jar" --LogLevel=DEBUG --LogPath="%cd%\logs" --LogPrefix=procrun.log^
     --StdOutput="%cd%\logs\stdout.log" --StdError="%cd%\logs\stderr.log"
     
     
commons-daemon\prunsrv //ES//OpenDataPusher

A few clarifications:

  • The Jvm parameter points to the jvm.dll of the bundled JRE
  • The StartClass/StartMethod/StopClass/StopMethod parameters point to a designated method for controlling the running application. In this case, starting just calls the main method, and stopping shuts down the scheduled executor, so that the application can exit
  • The classpath parameter points to the fat jar
  • Using %cd% is risky for determining the path to the current directory, but since the end-users will always be starting it from the directory where it resides, it’s safe in this case.

The windowsService method looks like this:

public static void windowsService(String[] args) throws Exception {
    // procrun invokes this method with "start" or "stop" (the StartParams/StopParams values)
    String cmd = "start";
    if (args.length > 0) {
        cmd = args[0];
    }

    if ("start".equals(cmd)) {
        Pusher.main(new String[]{});
    } else {
        // shut down the scheduled executor so the JVM can exit
        executor.shutdownNow();
        System.exit(0);
    }
}
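
For context, here is a minimal sketch of how the main class might set up the scheduled executor that keeps the application alive – the schedule, task and field names are illustrative assumptions, not the actual project code (only windowsService above is real):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Pusher {

    // Static, so the static windowsService method shown above can shut it down on "stop".
    static final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    public static void main(String[] args) throws Exception {
        // The executor thread is non-daemon, so the JVM keeps running after main() returns –
        // which is exactly why the program "never exits" and needs to be run as a service.
        executor.scheduleAtFixedRate(Pusher::convertAndPush, 0, 1, TimeUnit.HOURS);
    }

    private static void convertAndPush() {
        // convert the Excel files and push them to the open data portal (details omitted)
    }
}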

One important note here is the 32-bit/64-bit problem you may run into – the prunsrv.exe binary has to match the architecture of the JVM it loads. That’s why it’s safer to bundle a 32-bit JRE and use the 32-bit (default) prunsrv.exe.

I then had an “installer” folder containing the JRE and commons-daemon folders, the two bat files and the fat jar. I could then package that as a self-extracting archive and distribute it (with a manual, of course). I looked into IzPack as well, but couldn’t figure out how to bundle a JRE (maybe it is possible).

That’s a pretty niche scenario – usually we develop for deployment to a Linux server – but providing local tools written in Java to a big organization may be needed every now and then. In my case the long-running part was a scheduled executor, but it could just as well be an embedded Jetty server that serves a web interface. Why do that instead of providing a URL? In cases where access to the local machine matters. It could even be a distributed search engine (like that) or other p2p software that you want to write in Java.


Why I Prefer Merge Over Rebase

June 17, 2016

There are many ways to work with git. The workflows vary depending on the size of the team, organization, and on the way of working – is it distributed, is it sprint-based, is it a company, or an open-source project, where a maintainer approves pull requests.

You can use vanilla git, or you can use GitHub, Bitbucket, GitLab, Stash. And then on the client side you can use the command line, IDE integration, or stand-alone clients like SourceTree.

The workflows differ mostly in the way you organize your branches and the way you merge them. Do you branch off branches? Do you branch off other people’s branches, which are work-in-progress? Do you push or stay local? Do you use it like SVN (perfectly fine for a single developer on a pet project), or do you delve into more “arcane” features like --force-with-lease?

This is all decided by each team, but I’d like to focus on one very debated topic – rebasing vs merging. While you can get tons of results discussing rebasing vs merging, including the official git documentation, it has become more of a philosophical debate, rather than a practical one.

I recently asked a practical question about a rebase workflow. In short, by default rebasing seems not to favour pushing stuff to the central repo. If you do that before rebasing, you’d always need to force-push. And force-pushing may make things very hard for people whose work is based on your branch. Two questions you are probably already asking:

  • Why do you need to push if something isn’t ready? Isn’t the point of the “D” in “DVCS” to be able to commit locally and push only when ready? Well, even if you don’t use git as SVN, there are still plenty of use-cases for pushing every change to your own remote feature branch – you may be working from different machines, a colleague may want to pick up where you left off (before you leave for holiday or fall sick), not to mention hard drive failures and theft. I think you basically have to push right before you log off, or even more often. The “distributed” part allows for working offline, or even without a central repo (if it goes down), but it is not the major benefit of git.
  • Why would anyone be based on your work-in-progress branch? Because it happens. Sometimes tasks are not split that strictly and have dependencies – you write a piece of functionality, which you then realize should be used by your teammates who work on another task within the same story/feature. You aren’t yet finished (e.g. still polishing, testing), but they shouldn’t wait. Even a single person may want to base his next task on the previous one, while waiting for code review comments. The tool shouldn’t block you from doing this from time to time, even though it may not be the default workflow scenario.

Also, you shouldn’t expect every team member to be a git guru, who rewrites history for breakfast. A basic set of commands (even GUIs) should be sufficient for a git workflow, including the edge cases. Git is complicated and the task of each team is to make it work for them, rather than against them. Probably there is one article for each git command or concept with a title “X considered harmful”, and going through that maze is not trivial for an inexperienced git user. As Linus Torvalds once allegedly said:

Git has taken over where Linux left off separating the geeks into know-nothings and know-it-alls. I didn’t really expect anyone to use it because it’s so hard to use, but that turns out to be its big appeal.

Back to the rebase vs merge – merge (with pull requests) feels natural for the above. You branch often, you push often. Rebase can work in the above use-cases (which I think are necessary). You can force-push after each rebase, and you can make sure your teammates resolve that. But what’s the point?

The practical argument is that the graph that shows the history of the repo is nice and readable. Which I can’t argue with, because I’ve never had a case when I needed a cleaner and better graph. No matter how many merge commits and ugliness there’s in the graph, you can still find your way (if you ever need to). Besides, a certain change can easily be traced even without the graph (e.g. git annotate).

If you are truly certain you can’t go without a pretty graph, and your teammates are all git gurus who can resolve a force-push in minutes, then rebasing is probably fine.

But I think a merge-only workflow is the more convenient way of work that accounts for more real-world scenarios.

I realize this is controversial, and I’m certainly a git n00b (I even use SourceTree rather than the command line for basic commands, duh). But I have used both merge and rebase workflows and I find the merge one more straightforward (after all, force-pushing being part of the regular workflow sounds suspicious?).

Git is the Scala of VCS – it gives you many ways to do something, but there is no “right way”. This isn’t necessarily bad, as indeed there are many different scenarios where git can be used. For the ones I’ve had (a regular project in a regular company, with a regular semi-automated release & deployment cycle, doing regular agile), I’d always go for merge, with pull requests.


E-Government Architecture [presentation]

June 11, 2016

After working on the matter for a year, I gave a presentation at a small conference about the options for the architecture of e-government solutions. A 40-minute talk could not cover everything, and it is presented in the context of Bulgaria (hence the two charts with Cyrillic script in the slides), but I hope it’s useful anyway.

It follows a previous post of mine that proposes an architecture, which I’ve expanded here.

Here are the slides:

The main points are:

  • all data registers must be integrated somehow
  • the integration should preferably not rely on a centralized ESB-like system
  • privacy must be addressed by strict access control and audit logs, including access for citizens to data about who read their data and why, including notifications
  • the technical challenge is only 20%, the rest is legal and organizational

Despite being government-related, it’s actually an interesting technical task that not many have solved properly.


Identity in the Digital World

May 21, 2016

“Identity” is a set of features that allow unique identification of a person and distinguishing them from others. That sounds simple enough, but it turns out to have a lot of implications in the modern, connected, global world.

Identity today is government managed. You are nobody if a government hasn’t confirmed that you are indeed somebody. The procedures in the countries vary, but after you are born, you get issued a birth certificate and your name (and possibly number) are entered into a database (either centralized or decentralized). From then on you have an “identity”, which you can later prove using some sort of a document (ID card, passport, driving license, social security number, etc.)

It is not that the government owns your identity, because you are far more than your ID card, but certain attributes of your identity are recorded by the government, and then it certifies (via a document and the relevant database) that this is indeed you. These attributes include your names, which have been used to identify people since forever, your address, your photo, height, eye color. Possibly your fingerprints and your iris. But we’ll get to these biometric attributes later.

Why is all this important? Except for cases of people living in small isolated tribes, where they probably don’t even need names for identifying others, the so called “civilized world” needs to be able to differentiate one person from another for all sorts of reasons. Is the driver capable of driving, is the pilot capable of flying a plane — they may show a certificate, but is it really them that were certified (“Catch me if you can” shows how serious this can be)? Who owns a given property? Is it this one, claiming to be John Smith, or that one, also claiming to be John Smith? The ownership certificate may be lost, but there is a record somewhere that holds the information. We just have to identify the real John Smith.

Traveling is another case — although rather suboptimal, the current world has countries and borders, and various traveling restrictions. You have to prove that you are you, and that you have the right to travel. You have to prove you are American, or that you have a visa, if you want to enter the United States.

There are many other cases — crime-fighting, getting a bank loan, getting employed, etc.

You may argue that you should be able to be totally anonymous and still do all of the above, but unfortunately, in a global society, fraud is too likely to allow us to deal with anonymous people. By that I’m not saying we should be identified for everything we do — not at all, it should be limited to where it makes practical sense. But there is a sufficient number of these use-cases.

Offline identity is one thing, but there’s also the notion of “online identity” – a way to prove who you are on the internet. That is most often (and rightly so) an anonymous registration process; more rarely it uses some identity provider like Facebook or Twitter (where again, you don’t have to disclose your true identity). But when doing legally significant actions, or when communicating with governments in order to obtain some data or certificates about yourself, the service provider has to be able to prove it is really you. Here comes the “electronic identification” process, which was recently defined in an EU regulation, and which in most cases means you have a government-issued hardware token that only you own and know how to unlock.

But since identity exists, it can be stolen or forged. There is the so-called “identity theft”, and it’s used in multiple ways that are out of the scope of this post. But people do steal others’ identities – online and offline.

One instance of identity theft is using another person’s identity document. Similarly, one can forge an identity document to say whatever they want it to say. And this may lead to dire consequences for unsuspecting citizens. So governments and experts are trying to fight this problem. Let’s take a look at the two distinct use-cases.

Document forgery is addressed by making ever more complicated documents, with all sorts of security features, invisible components, laser engraved elements, using specific laser angles, and so on. This, of course, is imperfect, not only because it is “security through obscurity” (who guarantees that your government won’t leak the “secret sauce” for making its documents, or worse — supply the forgers with the raw materials needed to make a document), but also because a forged document can still pass inspection, as humans are not perfect when inspecting documents. To put it another way — if the one inspecting the document knows what to look for, surely the forger also knows that.

Document theft (including document copying) is addressed by comparing the picture. And that’s about it. If you look similar to someone else and you get hold of his identity document, you can safely pretend to be him for a long time.

None of the solutions seem good enough. So to the rescue come electronic documents. Passports are a somewhat universal identity document, and most passports are now eMRTD (Electronic machine readable travel document). Issues with them aside, the basic idea is that they have some information stored that a) guarantees the document is issued by a trusted authority and b) it belongs to the person holding it.

The first part is guaranteed via a public key infrastructure — the contents of the document are digitally signed by the issuing authority. So nobody can create his own passport or ID card, because he doesn’t have the private key of the issuing authority (and the private key cannot be extracted, because an HSM, where it is stored, doesn’t allow that).

The second part is trickier. It is currently addressed by storing your facial image and fingerprints on the chip and then comparing the image and fingerprints of the holder to the stored ones (remember that the content is certified by a digital signature, which is practically bulletproof for the time being). The facial image part is flawed, and at the moment barely anyone checks the fingerprint part, but this option exists and it is getting more and more traction “with all that terrorism”.

So starting from the somewhat intuitive concept of identity, we’ve come to the point where governments make databases of fingerprints. And then iris data, and DNA (as in Kuwait, for example).

Although everything above sounds logical, the end result is somewhat scary. People’s biometric information being stored in databases, potentially at risk of breaches, potentially misused by governments, sounds dystopian. We are no longer the owners of our identity — someone else has collected our attributes — attributes that do not change throughout our entire life — and stores them for future use. For whatever use. That someone doesn’t have to store them for the sake of identification, as there are technologies that allow storing the data on a card that does the comparison internally, without revealing the stored data. But that option seems to be ignored, strengthening the dystopian feeling.

Recently I’ve been thinking about how to address all of this – how to make sure identity still does its job but without compromising privacy. Two hours after I had some ideas, I spoke with someone with far more experience in identity technology than me, and it turned out he had had quite similar ideas.

And here technology comes into play. We are a combination of our unchangeable traits — fingerprints, iris, DNA. You can differentiate even identical twins based on these attributes. You also have other, more volatile attributes — height, weight, names, address, favorite color even.

All of these represent your identity. And it can be managed by turning the essential, unchangeable parts of it into a key – an anonymous key, derived using a one-way function, a so-called “hash”. After you hash your fingerprints, iris and DNA, you’ll get a long value, e.g. 2fd4e1c67b2d28fced849ee1bb76e7391b93eb12, that represents you (read here about fingerprint hashing).

This will be you and you will be able to prove it, as every time someone needs you to prove your identity, you will get your fingerprints, iris and DNA scanned, and the result of applying the one-way function will be again 2fd4e1c67b2d28fced849ee1bb76e7391b93eb12.

Additionally, you can probably add some “secret” word to that identity. So that your identity is not only what you are (and cannot change), but also what you know. That would mean that nobody can come up with your identity unless you tell them your secret (sounds a little like “A Wizard of Earthsea”).
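
To make the idea more concrete, here is a toy sketch of deriving such an identity string in Java. It assumes – and this is a big assumption – that the biometric readings have already been reduced to stable, canonical byte representations; in reality fingerprint or iris scans are never bit-identical between readings, so techniques like the fingerprint hashing linked above are needed before a plain cryptographic hash can work (the sketch also uses SHA-256 rather than the SHA-1-sized value shown above):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class IdentityKey {

    // Derives an anonymous identity string from the unchangeable traits ("what you are")
    // plus an optional secret ("what you know").
    public static String deriveIdentity(byte[] fingerprintTemplate, byte[] irisTemplate,
                                        byte[] dnaProfile, String secret) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(fingerprintTemplate);
        digest.update(irisTemplate);
        digest.update(dnaProfile);
        if (secret != null) {
            digest.update(secret.getBytes(StandardCharsets.UTF_8));
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString(); // the same inputs always produce the same identity string
    }
}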

Of course, full identification will rarely be required. If you want to buy alcohol, only your age matters; if you want to get a contract for cable internet, only your name and address matter, and so on. For that, sub-identities can exist — they belong to a “parent” identity, but the verifier doesn’t need such a high level of assurance that it is indeed you. The sub-identity can be “just the fingerprints”, or even…a good old identity document. Each sub-identity can prove a set of attributes, certified by an authority — not necessarily a government authority.

Your sub-identity, a set of attributes, can be written on a document — something you carry around that certifies, with a significant level of certainty, that this is indeed you. It will hold your “hash”, so that anyone who wants to do a full check can do so. The other option is the implant. Scary and dystopian, I know, but it is only a little different from an ID card — it, too, is something you carry with you, and have to carry with you. Provided that you control whether someone is allowed to read your implant, it becomes a slightly advanced identity card or driver’s license.

Even when we have an identity string, the related data — owned properties, driving capabilities, travel visas, employment, bank loans — will be stored in databases, where the identity string is the lookup key. These databases are now government owned, but can very well be distributed, e.g. using a blockchain. Nobody can claim he’s you, as he cannot produce the same identity string based on his biometrics. The nodes on the blockchain network can be the implants, which hold encrypted information about you, and only you can decide when to decrypt it. That would make for a distributed human database where one is in full control of his data.

But is this feasible? The complexity of the system, and especially of managing one’s identity, may be too high. We could end up creating a big, complex system, involving implants and biometrics, for solving a problem that is actually a tiny one. This is the first question we should ask before proceeding with such a thing. Not whether governments should manage identities, not whether we should be identifiable, but whether we need a dramatic shift in the current system — or does an electronic ID card with match-on-card (not centrally stored) fingerprints and electronically signed contents solve 99% of the issues?

Although I’m finding it fascinating to envision a technological utopia, with cryptography heavily involved, and privacy guaranteed by technological means, I’m not sure we need that.


Cleanup Temp Files

May 17, 2016

I’ve been spring-cleaning some devices, and the advice in the title, obvious as it sounds, apparently isn’t being followed. I found tons of unused temp files which applications (Android apps, desktop applications and even server application deployments) hadn’t cleaned up. This takes up space and means more manual maintenance.

For server-side applications the impact is probably smaller, as the environment is entirely under your control and you can regularly clean up the data, or even not care, as you regularly re-create the machine (e.g. in an AWS deployment where each upgrade to the system means new machines get spawned and the old ones get deleted). But anyway, if you use temp files (and you use Java), use File.createTempFile(..) and don’t forget to call file.deleteOnExit().
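
For reference, a minimal sketch of that usage (the file name prefix is just an example):

import java.io.File;
import java.io.IOException;

public class TempFileExample {

    public static File newTempFile() throws IOException {
        // created in java.io.tmpdir
        File temp = File.createTempFile("export-", ".tmp");
        // ask the JVM to remove it on normal exit, so it doesn't linger forever
        temp.deleteOnExit();
        return temp;
    }
}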

For client-side applications (smartphone apps or desktop software) the carelessness of not deleting temp files leads to the users’ disappointment at some point in time, when they realize their storage is filled with your useless files. Delete-on-exit works here as well, but maybe you need the files to survive more than one run. So simply have a job, or a startup check, that checks whether temp files are older than a certain period and, if they are, deletes them.
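
Such a startup check can be as simple as the following sketch (assuming the application keeps its temp files in a dedicated directory and that anything older than a given number of days is safe to delete):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.stream.Stream;

public class TempFileCleanup {

    // Deletes regular files in the application's temp directory that are older than maxAgeDays.
    public static void cleanupOldTempFiles(Path tempDir, int maxAgeDays) throws IOException {
        Instant cutoff = Instant.now().minus(maxAgeDays, ChronoUnit.DAYS);
        try (Stream<Path> files = Files.list(tempDir)) {
            files.filter(Files::isRegularFile)
                 .filter(p -> {
                     try {
                         return Files.getLastModifiedTime(p).toInstant().isBefore(cutoff);
                     } catch (IOException e) {
                         return false; // can't read the timestamp – leave the file alone
                     }
                 })
                 .forEach(p -> {
                     try {
                         Files.deleteIfExists(p);
                     } catch (IOException e) {
                         // log and continue; never fail startup because of a leftover temp file
                     }
                 });
        }
    }
}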

The effect of this little thing being omitted by developers is that users have to analyze their storage with special tools (sometimes paid ones) in order to find the “offenders”. And the offenders are not always obvious; besides, users are not necessarily familiar with the concept of a temp file. Even I’m sometimes not sure whether a given file that looks like a temp one isn’t actually necessary for the proper functioning.

Storage is cheap, but good practices should not be abandoned because of that.


Dirty Hacks Are OK

May 12, 2016

In practically every project you’ve used a “dirty hack”: setAccessible(true), sun.misc.Unsafe, changing a final value with reflection, copy-pasting a class from a library to change just one line of wrong code. Even if you haven’t used these directly, a library that you are using most certainly contains some of them.

Whenever we do something like that, we are reminded (by stackoverflow answers and colleagues alike) that this is a hack and it’s not desirable. And that’s ok – the first thing we should think about when using such a hack, is whether there isn’t a better way. A more object-oriented way, a more functional way. A way that the language allows for, but might require a bit more effort. But too often there is no such way, or at least not one that isn’t a compromise with other aspects (code readability, reuse, encapsulation, etc.). And especially in cases where 3rd party libraries are being used and “hacked”.

Vendors are also trying to make us avoid them – changing the access to a field via reflection might not work in some environments (some Java EE cases included) due to a security manager, and one of the most “arcane” hacks – sun.misc.Unsafe – is even going to be deprecated by Oracle.

But since these “hacks” are everywhere, including the Unsafe magic, deprecating or blocking any of them will just make applications stop working. As you can see in the article linked above, practically every project depends on sun.misc.Unsafe. It wouldn’t be an exaggeration to say that such “dirty hacks” are the reason major frameworks and libraries in the Java ecosystem exist at all – Hibernate, Spring and Guava are among the ones that use them heavily.

So deprecating them is not a good idea, but my point here is different. These hacks get things done. They work. With some caveats and risks, they do the job. When the alternative is to fork a 3rd party library and support the fork, or to submit a patch that doesn’t get accepted for a while even though your deadline is soon, these tricks are actually working solutions. They are not “beautiful”, but they’re OK.

Too often 3rd party libraries don’t offer exactly what you need. Either there’s a bug, or some method doesn’t behave according to your expectations. If using setAccessible in order to change a field or invoke a private method works, it’s a better approach than forking (submit an improvement request, of course). But sometimes you have to change the body of a method – for these use-cases I created my quickfix tool a few years ago. It’s dirty, but it does the job, and together with the rest of these hacks, it lets you move forward to delivering actual value, rather than wondering “should I use a visitor pattern here”, or “should we fork this library and support it in our repository and maven repository manager until they accept our pull request and release a new version”, or “should I write this with JNI”, or even “should we do this at all, it’s not possible without a hack”.
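
For illustration, here is a minimal sketch of the setAccessible trick – the 3rd-party class and field are made up for the example, and in a real project the class would come from the library’s jar:

import java.lang.reflect.Field;

public class ReflectionHack {

    public static void main(String[] args) throws Exception {
        // A hypothetical library class that caches its timeout in a private field with no setter.
        ThirdPartyClient client = new ThirdPartyClient();

        Field timeoutField = ThirdPartyClient.class.getDeclaredField("timeoutMillis");
        timeoutField.setAccessible(true);    // the "dirty" part
        timeoutField.set(client, 30_000);    // override the private value without forking the library

        System.out.println(timeoutField.get(client)); // 30000
    }

    // Stand-in for the 3rd-party class.
    static class ThirdPartyClient {
        private int timeoutMillis = 5_000;
    }
}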

I know this is not the best advice I’ve given, and it’s certainly a slippery slope – too much of the “get it done quick and dirty, I don’t care” mentality is surely a disaster. But poison can be a cure in small doses, if applied with full understanding of the issue.


A Beginner’s Guide to Addressing Concurrency Issues

April 20, 2016

Inserts, updates and deletes. Every framework tutorial starts with these and they are seen as the most basic functionality that just works.

But what if two concurrent requests try to modify the same data? Or try to insert data that should be unique? Or what if the inserts and updates have side-effects that have to be stored in other tables (e.g. an audit log)?

“Transactions”, you may say. Well, yes and no. A transaction allows a group of queries to be executed together – they either pass together or fail together. What happens with concurrent transactions depends on a specific property of transactions – their isolation level. And you can read here a very detailed explanation of how all of that works.

If you select the safest isolation levels – serializable (and repeatable read) – your system may become too slow. And depending on the database, transactions that happen at the same time may have to be retried by specific application code. And that’s messy. With other isolation levels you can get lost updates, phantom reads, etc.
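
To make the retrying part concrete, here is a minimal JDBC sketch – it assumes a database that reports serialization failures with SQLSTATE 40001 (as PostgreSQL does) and simply retries the whole transaction a few times:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class SerializableRetry {

    private static final int MAX_RETRIES = 3;

    // Runs the given work in a SERIALIZABLE transaction, retrying on serialization failures.
    public static void runSerializable(DataSource ds, SqlWork work) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection con = ds.getConnection()) {
                con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                con.setAutoCommit(false);
                try {
                    work.execute(con);
                    con.commit();
                    return;
                } catch (SQLException e) {
                    con.rollback();
                    // 40001 = serialization_failure; anything else (or too many attempts) is rethrown
                    if (!"40001".equals(e.getSQLState()) || attempt >= MAX_RETRIES) {
                        throw e;
                    }
                }
            }
        }
    }

    public interface SqlWork {
        void execute(Connection con) throws SQLException;
    }
}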

Even if you get your isolation right, and you properly handle failed transactions, isolation doesn’t solve all concurrency problems. It doesn’t solve the problem of having an application-imposed data constraint (e.g. a uniqueness complex logic that can’t be expressed as a database unique constraint), it doesn’t solve the problem of inserting exact duplicates, it doesn’t solve other application-level concurrency issues, and it doesn’t perfectly solve the data modification issues. You may have to get into database locking, and locking is tedious. What is a write lock, a read lock, what is an exclusive lock, and how not to end up in a deadlock (or a livelock)? I’m sure that even developers with a lot of experience are not fluent with database locks, because you either don’t need them, or you have a bigger problem that you should solve first.

The duplicate submission problem is a bit off-topic, but it illustrates that not all concurrent-request problems can be solved by the database alone. As many people suggest, it is solved by a token that gets generated for each request and stored in the database using a unique constraint. That way two identical inserts (the result of a double submission) cannot both go into the database. This gets a little more complicated with APIs, because you have to rely on the user of the API to provide the proper token (and not generate it on the fly in their back-end). As for uniqueness – every article that I’ve read on the matter concludes that the only proper way to guarantee uniqueness is at the database level, using a unique constraint. But when there are complicated rules for that constraint, you are inclined to check it in the application. And in that case concurrent requests will eventually allow two records with the same values to be inserted.
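
A minimal sketch of the token approach with plain JDBC – the request_tokens table and its primary-key constraint are assumptions made up for the example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DuplicateSubmissionGuard {

    // Returns true if the token is seen for the first time, false if this is a duplicate submission.
    // Assumes a table like: CREATE TABLE request_tokens (token VARCHAR(64) PRIMARY KEY)
    public static boolean registerToken(Connection con, String token) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO request_tokens (token) VALUES (?)")) {
            ps.setString(1, token);
            ps.executeUpdate();
            return true;
        } catch (SQLException e) {
            // SQLSTATE class 23 = integrity constraint violation (e.g. 23505 unique_violation)
            if (e.getSQLState() != null && e.getSQLState().startsWith("23")) {
                return false;
            }
            throw e;
        }
    }
}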

Most of the problems are easy if the application runs on a single machine. You can utilize your language’s concurrency features (e.g. Java locks, concurrent collections) to make sure everything is properly serialized, that duplicates do not happen, etc. However, when you deploy to more than one machine (which you should), the problem becomes a lot harder.
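
For the single-machine case, a per-entity lock is often enough – a minimal sketch (note that it offers no protection at all once a second application instance is running):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

public class PerEntityLocks {

    // Note: entries are never removed; fine for a bounded set of entities, otherwise use a cache.
    private final ConcurrentMap<Long, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Serializes all work on a given entity within this single JVM.
    public void withLock(long entityId, Runnable work) {
        ReentrantLock lock = locks.computeIfAbsent(entityId, id -> new ReentrantLock());
        lock.lock();
        try {
            work.run();
        } finally {
            lock.unlock();
        }
    }
}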

So what are the approaches to address concurrency issues, apart from transactions? There are many, and here are a few of them (in no meaningful order).

  • There is Hazelcast, which lets you use distributed locks – the whole cluster follows the Lock semantics as if it were a single machine. That is language-specific, and setting up a Hazelcast cluster for just a few use-cases (because not all of your requests will need that) may be too much
  • You can use a message queue – push all requests to a message queue that is processed by a single (async) worker. That may be useful in some cases, and impractical in others (if you have to return some immediate response to the user, for example)
  • You can use Akka and its clustering capabilities – it guarantees that an actor (think “service”) is processing only one message at a time. But using Akka for everything may not be a good idea, because it completely changes the paradigm, it is harder to read and trace, harder to debug, and is platform-specific (only JVM languages can make use of it).
  • You can use database-specific application-level locks. That’s something quite useful, even though it is entirely RDBMS-dependent. Postgres has advisory locks, MySQL has get_lock, others probably have something similar. The idea here is that you use the database as your distributed lock mechanism. The locks are managed by the application and don’t even need to have anything to do with your tables – you just ask for a lock for, say, (entityType, entityId), and then no other application thread can enter a given piece of code unless it successfully obtains that database lock. It is kind of like the Hazelcast approach, but you get it “for free” with the database (a minimal sketch follows after this list). Then you can have, for example, a @Before (Spring) aspect that attaches to service methods and does the locking appropriate for the current application use-case, without using table locks.
  • You can use a CRDT. It’s a data structure whose operations are commutative and idempotent – no matter in what order the operations are applied, it ends up in the same state. It’s explained in more detail in this presentation. How a CRDT maps to a relational database is an interesting question I don’t have an answer to, but the point is that if your operations are idempotent, you will probably have fewer issues.
  • Using the “insert-only” model. Databases like Datomic are using it internally, but you can use it with any database. You have no deletes, no updates – just inserts. Updating a record is inserting a new record with the “version” increased. That again relies on database features to make sure you don’t end up with two records with the same version, but you never lose data (concurrent updates will make it so that one is “lost”, because it’s not the latest version, but it’s stored and can be reverted to). And you get an audit log for free.
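
Here is the promised sketch of the advisory-lock approach, PostgreSQL-specific and using the two-key pg_advisory_xact_lock variant; the lock is tied to the current transaction and released automatically when it ends:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AdvisoryLockExample {

    // Blocks until a transaction-scoped advisory lock for (entityType, entityId) is obtained,
    // so no other application instance can run the protected code for the same entity concurrently.
    public static void lockEntity(Connection con, int entityType, int entityId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement("SELECT pg_advisory_xact_lock(?, ?)")) {
            ps.setInt(1, entityType);
            ps.setInt(2, entityId);
            ps.execute();
        }
    }
}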

The overall problem is how to serialize requests without losing performance. And all the various lock mechanisms and queues, including non-blocking I/O, address that. But what makes the task easier is having a data model that does not care about concurrency. If the latter is applicable, always go for it.

Whole books have been written on concurrency, and I realize such a blog post is rather shallow by definition, but I hope I’ve at least given a few pointers.


How To Read Your Passport With Android

April 5, 2016

As I’ve been researching machine readable travel documents, I decided to do a little proof-of-concept on reading ePassports using an NFC-enabled smartphone (Android).

The result is on GitHub, and is based on the jMRTD library, which provides all the necessary low-level details.

As I pointed out in my previous article, the standards for ePassports have evolved a lot throughout the years – from no protection, to BAC, to EACv1, EACv2 and SAC (which replaces BAC). Security is still doubtful, as most passports and inspection systems require backward compatibility with BAC. That’s slowly going away, but even when BAC goes away, it will be sufficient to enter the CAN (Card Access Number) for the PACE protocol, so the app will still work with minor modifications.

What the app does is:

  1. Establishes NFC communication
  2. Authenticates to the passport using the pre-entered passport number, date of birth and expiry date (hardcoded in the app at the moment). Note that the low security of the protocol is due to the low entropy of this combination, and brute force is an option, as passports cannot be locked after successive failures.
  3. Reads mandatory data groups – all the personal information present in the passport, including the photo. In the example code only the first data group (DG1) is read, and the personal identifier is shown on the screen. The way to read data groups is as follows:
    InputStream is = ps.getInputStream(PassportService.EF_DG1);
    DG1File dg1 = (DG1File) LDSFileUtil.getLDSFile(PassportService.EF_DG1, is);
    
  4. Performs chip authentication – the first step of EAC, which makes sure that the chip is not cloned – it requires proof of ownership of a private key, which is stored in the protected area of the chip.

The code has some questionable coding practices – e.g. the InputStream handling (the IDE didn’t initially allow me to use Java 7, and I didn’t try much harder), but I hope they’ll be fixed if used in real projects.

One caveat – for Android there’s a need for SpongyCastle (which is a port of the BouncyCastle security provider). However it is not enough, so both have to be present for certain algorithms to be supported. Unfortunately, jMRTD has a hardcoded reference to BouncyCastle in one method, which leads to the copy-pasted method for chip authentication.

There is one more step of EAC – terminal authentication, which would allow the app to read the fingerprints (yup, sadly there are fingerprints there). However, EAC makes that harder to do. I couldn’t actually test it properly, because the chip rejects verifying even valid certificates, but anyway, let me explain. EAC relies on a big infrastructure (PKI) where each participating country has a Document Verifier CA, whose root certificate is signed by all other participating countries (as shown here). Then each country issues short-lived (1 day) certificates signed by the DVCA, which are used in the inspection systems (border police and automatic gates). The certificate chain now contains all countries’ root certificates, followed by the DVCA certificate, followed by the inspection system certificate. The chip has to verify that this chain is valid (by verifying that each signature on a certificate is indeed performed by the private key of the issuer). The chip itself has the root certificate of its own country, so it has the root of the chain and can validate it (which is actually the first step). Finally, in order to make sure that the inspection system certificate is really owned by the party currently performing the protocol, the chip sends a challenge to be signed by the terminal.

So, unless a collision is found and a fake certificate is attached to the chain, you can’t easily perform “terminal authentication”. Well, unless a key pair leaks from some inspection system somewhere in the world. Then, because the chip does not have a clock, even though the certificates are short-lived, they would still allow reading the fingerprints, because the chip can’t know they are expired (it syncs the time with each successful certificate validation, but that only happens when going through border control at airports). Actually, you could also try to spam the chip with a huge chain, and it will at some point “crash”, and maybe it will do something that it wouldn’t normally do, like release the fingerprints. I didn’t do that for obvious reasons.

But the point of the app is not to abuse the passports – there may be legitimate use-cases to allow reading the data from them, and I hope my sample code is useful for that purpose.


Software Can’t Live On Its Own

March 30, 2016

We’re building software in the hope that some day we’ll leave it and it will live on its own, or with minor supervision. But the other day, when my father asked me to dig up an old website, I did some thinking and realized that software on auto-pilot is almost never the case.

Software is either being supported, or is abandonware, or is too simple. We constantly have to “fix” something on each piece of software. Basically, picking up an old project and running it is rather hard – it would most probably require upgrading a ton of components. For example:

  • Fixing edge cases, bugs, security issues. The software environment is dynamic, and no complex software is without uncovered edge cases. Security issues arise constantly and have to be patched. Unless we find a way to write bugless software with perfect security, we have to support all these.
  • Breaking upgrades:
    • browsers are being upgraded constantly, and old websites probably won’t work. Protocols remain backward compatible for a while, but then support is discontinued and one has to upgrade. Operating systems introduce breaking changes to software running on them – one clear example is Android, where with each major version something doesn’t work anymore (because it was deprecated 2 versions prior) or has to be done in a different way. We have to be there and tweak our code to accommodate these upgrades.
    • Frameworks and languages get upgrades as well – and sometimes we can’t even build our legacy software anymore. Even if we can, the target environment may not support our old versions. The aforementioned site was written in PHP 4. Shared hosting providers no longer offer PHP 4, so will that site work? It will probably need tweaks.
    • Changes in 3rd party APIs. If you rely on something like a facebook API, or a Google API, chances are your 3-year-old project will no longer work.
  • New use-cases – the real world is dynamic, and software that supports some real-world activity has to change with it. Some features become obsolete, new features are needed. Vendors like to advertise “draw-it-yourself” tools that create new forms and business processes without any technical expertise, but that rarely works properly.
  • Visual design becomes outdated. Remember Web King? Maybe that was the design of 1995, but not anymore. We’ve gone through waves and waves of new design trends, and often it’s not okay to look outdated.

A piece of software is not like a building – you can’t build it once and have it live for decades with just occasional repairs. And it is not like a kitchen appliance – you can’t just replace it with a newer version.

It isn’t like a building, because it’s too complex (not diminishing the role of real architects, but they have a limited set of use cases). And it isn’t like kitchen appliances, because kitchen appliances don’t have data.

And actually, data migration is one of the reasons legacy software exists – migrating it to something new is hard, as the data has to be fitted into a new structure and a new database. Even a simple migration from an older database version to a newer one is hard. Migrating structures and even use-cases is horrible. I won’t even mention triggers and stored procedures that have to be migrated across vendors, and so on.

So yes, keeping an old piece of software running requires a lot of effort; migrating it to a newer and better piece of software is often a doomed project and you’re stuck with your existing system forever.

That means there is a whole big branch of the IT market that focuses on exactly that – providing software to clients and then keeping them bound to that software forever, with regular updates and support. There is another type of company, where things are more straightforward – the “single product as a service” companies. The cool web 2.0 startups are mostly single-product-as-a-service companies, and if the company dies, the product dies with it. If the company manages to make some money, you don’t care… until it dies, and then your migration to a new piece of software is the promised nightmare.

Leaving simple software aside (my computoser has been running unsupervised for 2 years already – not that it’s simple, but its complexity is confined to the algorithm; I also heard that the software for the trash cleaning company I wrote when I was 16 is still in use in my hometown), everything else needs constant caring. And given that more and more software is being built, this leads us to the sad realization that we’ll have to support a lot of software. More and more of a programmer’s work will be caring for what’s already been built, rather than building something new. And on one hand that’s sad. It means software for many is not “craftsmanship”, not “science”, not “making cool things”. It is mundane support and gradual extension of old, clunky bulks of code.

Unless we learn to build self-supporting software. Software that automatically overcomes OS upgrades, framework and protocol upgrades. Software that allows extending without writing code (which many systems claim to do even now, but very few actually do). Until that time, I’m afraid we’re stuck with supporting our current projects, and in the best case – extending them to fit new needs and customers.


Take a Step Back

March 19, 2016

A software project can become “legacy” just three months after its inception. I’ve recently seen many projects that look OK on the surface but are in fact so “broken” that they have to be rewritten. Well, they work, but continuing to support them is a pain. And everyone has probably seen at least one such project, where you want to just throw it away and start from scratch. The problem is, starting from scratch won’t guarantee a good outcome either. Why do things go that way?

I think it’s because developers tend to solve problems one at a time, reaching a “local maximum”. These don’t have to be “quick dirty fixes” – even a reasonable-sounding fix for the particular problem at hand may yield de-facto legacy code. And if we view software development as constant problem fixing (as even introducing new features consists of fixing problems along the way), we have a problem.

My approach to that process is to take a step back and ask the question “is this really the problem I’m solving, or is there a bigger underlying problem”. And sometimes there is. There are some clear indications that there is such a problem. “Code smell” is a popular term for that, but I’d like to extend it – sometimes it’s not the thing that you do that makes things smell, but rather something done before makes you take stupid decisions later. Sometimes these decisions don’t even look wrong in the context that you’ve created with your previous decisions, but they are certainly wrong. And you can use them as indicators. Some examples:

If you have to copy-paste some piece of code to another part of the project, and that’s your best option, something’s wrong with the code. You should take a step back and refactor, rather than copy-pasting yet another piece.

If you have to rely on a full manual test to figure out that your application is not broken, the quick and easy “fix” for the problem is to just get a QA to manually test it. If you do that, instead of adding tests, quality degrades over time.

If you have to use business logic to overcome data model or infrastructure deficiencies, the easy fix is to just add a couple of if’s here and there. Then six months later you have unreadable code, full of bits irrelevant to the actual business logic. Instead, fix the data model or your infrastructure (in a wider sense, e.g. framework configuration).

If, given a bug report, tracing the program flow requires knowing where things are rather than finding them, it means the project is not well structured. Yes, you have probably been working on it for a year now and you know where things are, but finding stuff using search (or call and class hierarchies) is the proper way to go – even for people experienced with the project (not to mention newcomers).

If the addition of a data field or a component requires changes in the whole project, rather than just an isolated part of the project, then each new addition creates more complexity and more potential failures. E.g. pseudo-plugin systems that require changing the core with each plugin.

The list can go on, but the point is clear – if faced with an option to do something the wrong way, take a step back and rethink whether the problem should exist in the first place. Otherwise each fix becomes technical debt. And in three months you have a “legacy” project.
