Biometric Identification [presentation]

August 21, 2016

Biometric identification is getting more common – borders, phones, doors. But I argue that it is not, by itself, a good approach. I tried to explain this in a short talk, and here are the slides.

Biometric features can’t be changed, can’t be revoked – they are there forever. If someone gets hold of them (and that happens sooner or later), we are screwed. And now that we use our fingerprints to unlock our phones, for example, and at the same time we use our phone as the universal “2nd factor” for most online services, including e-banking in some cases, fraud is waiting to happen (or already happening).

As Bruce Schneier said after an experiment that used gummi bears to fool fingerprint scanners:

The results are enough to scrap the systems completely, and to send the various fingerprint biometric companies packing

On the other hand, it is not that useful or pleasant to use biometric features for identification – just typing a PIN is just as good (but we can change the PIN).

I’ve previously discussed the risks related to electronic passports, which have fingerprint images in clear form and are read without a PIN through a complex certificate management scheme. The bottom line is, they can leak from your passport without you noticing (if the central databases don’t leak before that). Fortunately, there are alternatives that would still guarantee that the owner of the passport is indeed the one it was issued to, and that it’s not fake.

But anyway, I think biometric data can have some future applications. Near the end of the presentation I try to imagine how it can be used for a global, distributed, anonymous electronic identification scheme. But the devil is always in the details. And so far we have failed with the details.

Writing Laws Is Quite Like Programming

August 7, 2016

In the past year I’ve taken the position of an adviser in the cabinet of a deputy prime minister and as a result of that I had the opportunity to draft legislation. I’ve been doing that with a colleague, both of us with strong technical backgrounds, and it turned out we are not bad at it. Most of “our” laws passed, including the “open source law”, the electronic identification act, and the e-voting amendments to the election code (we were, of course, helped by legal professionals in the process, much like a junior dev is helped by a senior one).

And law drafting turned out to have much in common with programming – as a result “our” laws were succinct, well-structured and “to the point”, covering all use-cases. At first it may sound strange that people not trained in the legal profession would be able to do it at all, but writing laws is actually “legal programming”. Here’s what the two processes have in common:

  • Both rely on a formalized language. Programming languages are stricter, but “legalese” is also quite formalized and certain things are normally worded in a predefined way – in a sense, there are “keywords”.
  • There is a specification on how to use the formalized language and how it should behave. The “Law for normative acts” is the JLS (Java language specification) for law drafting – it defines what is allowed, how laws should be structured and how they should refer to each other. It also defines the process of law-making.
  • Laws have a predefined structure, just as a class file, for example. There are sections, articles, references and modification clauses for other laws (much like invoking a state-changing function on another object).
  • Minimizing duplication is a strong theme in both law drafting and programming. Instead of copy-pasting shared code / sections, you simply refer to it by its unique identifier. You do that in a single law as well as across multiple laws, thus reusing definitions and statements.
  • Both define use-cases – a law tries to cover all the edge cases for a set of use-cases related to a given matter, much like programming. Laws, of course, also define principles, which are arguably their more important feature, but the definition of use-cases is pretty ubiquitous.
  • Both have if-clauses and loops. You literally say “in case of X, do Y”. And you can say “for all X, do Y”. Which is of course logical, as these programming constructs come from the real world.
  • There are versions and diffs. After it appears for the first time (“is pushed to the legal world”) every change is in the form of an amendment to the original text, in a quite formalized “diff” structure. Adding or removing articles, replacing words, sentences or whole sections. You can then replay all the amendments on top of the original document to find the current active law. Sounds a lot like git.
  • There are “code reviews” – you send your draft to all the other institutions and their experts give you feedback, which you can accept or reject. Then the “pull request” is merged into master by the parliament.
  • There is a lot of “legacy code”. There are laws from 50 years ago that have rarely been amended and you have to cope with them.

And you end up with a piece of “code” that either works, or doesn’t solve the real-world problems and has to be fixed/amended. With programming it’s the CPU (and possibly a virtual machine) that carries out the instructions, and with laws it’s the executive branch (and in some cases – the judiciary).

It may seem like the whole legal framework can be written in a rules engine or in Prolog. Well, it can’t, because of the principles it defines and the interpretation (moral and ethical) that judges have to do. But that doesn’t negate the similarities in the process.

There is one significant difference though. In programming we have a lot of tools to make our lives easier. Build tools, IDEs, (D)VCS, issue tracking systems, code review systems. Legal experts have practically none. In most cases they use Microsoft Word, sometimes even without “Track changes”. They get the current version of the text from legal information systems or in many cases even from printed versions of the law. Collaboration is a nightmare, as Word documents are flying around via email. The more tech-savvy may opt for a shared document in Google Docs or Office365, but that’s rare. People have to manually write the “diff” based on track changes, and then manually apply the diff to get the final consolidated version. The process of consultation (“code review”) is based on sending paper mails and getting paper responses. Not to mention that once the draft gets to parliament, there are work groups and committees that make the process even more tedious.

Most of that can be optimized and automated. The UK, for example, has taken some steps forward with legislation.gov.uk, where each legal text is stored using LegalXML (afaik), so at least references and versioning can be handled easily. But legal experts that draft legislation would love to have the tools that we, programmers, have. They just don’t know they exist. The whole process, from idea, through work groups, through consultation, and multiple readings in parliament, can be electronic. A GitHub for laws, if you wish, with good client-side tools to collaborate on the texts, to autocomplete references and to give you fine-tuned search. We have actually defined such a “thing” to be built in two years, and it will have to be open source, so even though the practices and rules vary from country to country, I hope it will be possible to reuse it.

In conclusion, I think programming (or software engineering, actually), with its well defined structures and processes, can not only help in many diverse environments, but can also give you ideas on how to optimize them.

Custom Audit Log With Spring And Hibernate

July 18, 2016

If you need to have automatic auditing of all database operations and you are using Hibernate… you should use Envers or Spring Data JPA auditing. But if for some reason you can’t use Envers, you can achieve something similar with hibernate event listeners and spring transaction synchronization.

First, start with the event listener. You should capture all insert, update and delete operations. But there’s a tricky bit – if you need to flush the session for any reason, you can’t directly execute that logic with the session that is passed to the event listener. In my case I had to fetch some data, and hibernate started throwing exceptions at me (“id is null”). Multiple sources confirmed that you should not interact with the database in the event listeners. So instead, you should store the events for later processing. And you can register the listener as a spring bean as shown here.

@Component
public class AuditLogEventListener
        implements PostUpdateEventListener, PostInsertEventListener, PostDeleteEventListener {

    @Override
    public void onPostDelete(PostDeleteEvent event) {
        AuditedEntity audited = event.getEntity().getClass().getAnnotation(AuditedEntity.class);
        if (audited != null) {
            AuditLogServiceData.getHibernateEvents().add(event);
        }
    }

    @Override
    public void onPostInsert(PostInsertEvent event) {
        AuditedEntity audited = event.getEntity().getClass().getAnnotation(AuditedEntity.class);
        if (audited != null) {
            AuditLogServiceData.getHibernateEvents().add(event);
        }
    }

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        AuditedEntity audited = event.getEntity().getClass().getAnnotation(AuditedEntity.class);
        if (audited != null) {
            AuditLogServiceData.getHibernateEvents().add(event);
        }
    }

    @Override
    public boolean requiresPostCommitHanding(EntityPersister persister) {
        return true; // Envers sets this to true only if the entity is versioned. So figure out for yourself if that's needed
    }
}

Notice the AuditedEntity – it is a custom marker annotation (retention=runtime, target=type) that you can put on top of your entities.
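
For reference, a minimal sketch of such a marker annotation (the name matches the one used above, the rest is up to you) could be:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Marker annotation for entities whose changes should be audited.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface AuditedEntity {
}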

To be honest, I didn’t fully follow how Envers does the persisting, but as I also have spring at my disposal, in my AuditLogServiceData class I decided to make use of spring:

/**
 * Stores audit log information. It records all changes to the entities in
 * spring transaction synchronization resources, which are in turn stored as
 * {@link ThreadLocal} variables for each thread. Each thread/transaction is
 * using its own copy of this data.
 */
public class AuditLogServiceData {
    private static final String HIBERNATE_EVENTS = "hibernateEvents";
    private static final String AUDIT_LOG_ACTOR = "auditLogActor";
    @SuppressWarnings("unchecked")
    public static List<Object> getHibernateEvents() {
        if (!TransactionSynchronizationManager.hasResource(HIBERNATE_EVENTS)) {
            TransactionSynchronizationManager.bindResource(HIBERNATE_EVENTS, new ArrayList<>());
        }
        return (List<Object>) TransactionSynchronizationManager.getResource(HIBERNATE_EVENTS);
    }

    public static Long getActorId() {
        return (Long) TransactionSynchronizationManager.getResource(AUDIT_LOG_ACTOR);
    }

    public static void setActor(Long value) {
        if (value != null) {
            TransactionSynchronizationManager.bindResource(AUDIT_LOG_ACTOR, value);
        }
    }

    public static void clear() {
        // unbind all resources bound above, as spring does not do that automatically
        if (TransactionSynchronizationManager.hasResource(HIBERNATE_EVENTS)) {
            TransactionSynchronizationManager.unbindResource(HIBERNATE_EVENTS);
        }
        if (TransactionSynchronizationManager.hasResource(AUDIT_LOG_ACTOR)) {
            TransactionSynchronizationManager.unbindResource(AUDIT_LOG_ACTOR);
        }
    }
}

In addition to storing the events, we also need to store the user that is performing the action. In order to get that, we provide a method-parameter-level annotation to designate the parameter holding the current user. The annotation in my case is called AuditLogActor (retention=runtime, target=parameter).
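
A minimal sketch of that annotation (again, just a marker) could be:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Marks the method parameter that holds the id of the user performing the audited action.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
public @interface AuditLogActor {
}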

Now what’s left is the code that will process the events. We want to do this prior to committing the current transaction. If the transaction fails upon commit, the audit entry insertion will also fail. We do that with a bit of AOP:

@Aspect
@Component
class AuditLogStoringAspect extends TransactionSynchronizationAdapter {

    @Autowired
    private ApplicationContext ctx; 
    
    @Before("execution(* *.*(..)) && @annotation(transactional)")
    public void registerTransactionSynchronization(JoinPoint jp, Transactional transactional) {
        Logger.log(this).debug("Registering audit log tx callback");
        TransactionSynchronizationManager.registerSynchronization(this);
        MethodSignature signature = (MethodSignature) jp.getSignature();
        int paramIdx = 0;
        for (Parameter param : signature.getMethod().getParameters()) {
            if (param.isAnnotationPresent(AuditLogActor.class)) {
                AuditLogServiceData.setActor((Long) jp.getArgs()[paramIdx]);
            }
            paramIdx ++;
        }
    }

    @Override
    public void beforeCommit(boolean readOnly) {
        Logger.log(this).debug("tx callback invoked. Readonly= " + readOnly);
        if (readOnly) {
            return;
        }
        for (Object event : AuditLogServiceData.getHibernateEvents()) {
           // handle events, possibly using instanceof
        }
    }

    @Override
    public void afterCompletion(int status) {
        // we have to unbind all resources as spring does not do that automatically
        AuditLogServiceData.clear();
    }
}

In my case I had to inject additional services, and spring complained about mutually dependent beans, so instead I used applicationContext.getBean(FooBean.class). Note: make sure your aspect is caught by spring – either by auto-scanning, or by explicitly registering it with xml/java-config.
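
If you go for java-config, a minimal sketch (the package name is just an example) would be something like:

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
@EnableAspectJAutoProxy
@ComponentScan("com.company.audit") // hypothetical package containing the aspect, the listener and the service data class
public class AuditLogConfiguration {
}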

So, a call that is audited would look like this:

@Transactional
public void saveFoo(FooRequest request, @AuditLogActor Long actorId) { .. }

To summarize: the hibernate event listener stores all insert, update and delete events as spring transaction synchronization resources. An aspect registers a transaction “callback” with spring, which is invoked right before each transaction is committed. There all events are processed and the respective audit log entries are inserted.

This is a very basic audit log; it may have issues with collection handling, and it certainly does not cover all use cases. But it is way better than manual audit log handling, and in many systems an audit log is mandatory functionality.

Spring-Managed Hibernate Event Listeners

July 15, 2016

Hibernate offers event listeners as part of its SPI. You can hook your listeners to a number of events, including pre-insert, post-insert, pre-delete, flush, etc.

But sometimes in these listeners you want to use spring dependencies. I’ve written previously on how to do that, but hibernate has been upgraded and now there’s a better way (and the old way isn’t working in the latest versions because of missing classes).

This time it’s simpler. You just need a bean that looks like this:

@Component
public class HibernateListenerConfigurer {
    
    @PersistenceUnit
    private EntityManagerFactory emf;
    
    @Inject
    private YourEventListener listener;
    
    @PostConstruct
    protected void init() {
        SessionFactoryImpl sessionFactory = emf.unwrap(SessionFactoryImpl.class);
        EventListenerRegistry registry = sessionFactory.getServiceRegistry().getService(EventListenerRegistry.class);
        registry.getEventListenerGroup(EventType.POST_INSERT).appendListener(listener);
        registry.getEventListenerGroup(EventType.POST_UPDATE).appendListener(listener);
        registry.getEventListenerGroup(EventType.POST_DELETE).appendListener(listener);
    }
}

It is similar to this stackoverflow answer, which however won’t work because it also relies on deprecated classes.

You can also inject a List<..> of listeners (the hibernate listener interfaces don’t share a common parent, but you can define your own).
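
For example, a hypothetical common interface (the name is mine, not from hibernate) could look like this, and the configurer can then loop over an injected List<EntityChangeListener> and register each one:

import org.hibernate.event.spi.PostDeleteEventListener;
import org.hibernate.event.spi.PostInsertEventListener;
import org.hibernate.event.spi.PostUpdateEventListener;

/**
 * Common parent for all custom listeners, so that they can be injected together as a List.
 */
public interface EntityChangeListener extends PostInsertEventListener,
        PostUpdateEventListener, PostDeleteEventListener {
}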

As pointed out in the SO answer, you can’t store new entities in the listener, though, so it’s no use injecting a DAO, for example. But it may come in handy to process information that does not rely on the current session.

Installing Java Application As a Windows Service

June 26, 2016

It sounds like something you’d never need, but sometimes, when you distribute end-user software, you may need to install a java program as a Windows service. I had to do it because I developed a tool for civil servants to automatically convert and push their Excel files to the opendata portal of my country. The tool has to run periodically, so it’s a prime candidate for a service (which would make the upload possible even if the civil servant forgets about this task altogether, and besides, repetitive manual upload is a waste of time).

Even though there are numerous posts and stackoverflow answers on the topic, it still took me a lot of time because of minor caveats and one important prerequisite that few people seemed to have – a bundled JRE, so that nobody has to download and install a JRE (that would complicate the installation process unnecessarily, and the target audience is not necessarily tech-savvy).

So, with a maven project with jar packaging, I first thought of packaging an exe (with launch4j) and then registering it as a service. The problem with that is that the java program uses a scheduled executor, so it never exits, which makes starting it as a process impossible.

So I had to “daemonize” it, using commons-daemon procrun. Before doing that, I had to assemble every component needed into a single target folder – the fat jar (including all dependencies), the JRE, the commons-daemon binaries, and the config file.

You can see the full maven file here. The relevant bits are (where ${installer.dir} is ${project.basedir}/target/installer):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
    </configuration>
</plugin>
<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <executions>
        <execution>
            <id>assembly</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <finalName>opendata-ckan-pusher</finalName>
                <appendAssemblyId>false</appendAssemblyId>
            </configuration>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.7</version>
    <executions>
        <execution>
            <id>default-cli</id>
            <phase>package</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <copy todir="${installer.dir}/jre1.8.0_91">
                        <fileset dir="${project.basedir}/jre1.8.0_91" />
                    </copy>
                    <copy todir="${installer.dir}/commons-daemon">
                        <fileset dir="${project.basedir}/commons-daemon" />
                    </copy>
                    <copy file="${project.build.directory}/opendata-ckan-pusher.jar" todir="${installer.dir}" />
                    <copy file="${project.basedir}/install.bat" todir="${installer.dir}" />
                    <copy file="${project.basedir}/uninstall.bat" todir="${installer.dir}" />
                    <copy file="${project.basedir}/config/pusher.yml" todir="${installer.dir}" />
                    <copy file="${project.basedir}/LICENSE" todir="${installer.dir}" />
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>

You will notice the install.bat and uninstall.bat, which are the files that use commons-daemon to manage the service. The installer creates the service. Commons-daemon has three modes: exe (which allows you to wrap an arbitrary executable), java (which is like exe, but for java applications) and jvm (which runs the java application in the same process; I don’t know exactly how, though).

I could have used any of the three options (including the launch4j-created exe), but the jvm mode allows you to have a designated method to control your running application. The StartClass/StartMethod/StopClass/StopMethod parameters are for that. Here’s the whole install.bat:

commons-daemon\prunsrv //IS//OpenDataPusher --DisplayName="OpenData Pusher" --Description="OpenData Pusher"^
     --Install="%cd%\commons-daemon\prunsrv.exe" --Jvm="%cd%\jre1.8.0_91\bin\client\jvm.dll" --StartMode=jvm --StopMode=jvm^
     --Startup=auto --StartClass=bg.government.opendatapusher.Pusher --StopClass=bg.government.opendatapusher.Pusher^
     --StartParams=start --StopParams=stop --StartMethod=windowsService --StopMethod=windowsService^
     --Classpath="%cd%\opendata-ckan-pusher.jar" --LogLevel=DEBUG --LogPath="%cd%\logs" --LogPrefix=procrun.log^
     --StdOutput="%cd%\logs\stdout.log" --StdError="%cd%\logs\stderr.log"
     
     
commons-daemon\prunsrv //ES//OpenDataPusher

A few clarifications:

  • The Jvm parameter points to the jvm.dll
  • The StartClass/StartMethod/StopClass/StopMethod point to a designated method for controlling the running application. In this case, starting would just call the main method, and stopping would shutdown the scheduled executor, so that the application can exit
  • The classpath parameter points to the fat jar
  • Using %cd% is risky for determining the path to the current directory, but since the end-users will always be starting it from the directory where it resides, it’s safe in this case.

The windowsService method looks like this:

public static void windowsService(String[] args) throws Exception {
    String cmd = "start";
    if (args.length > 0) {
        cmd = args[0];
    }

    if ("start".equals(cmd)) {
        Pusher.main(new String[]{});
    } else {
        executor.shutdownNow();
        System.exit(0);
    }
}

One important note here is the 32-bit/64-bit problem you may have. That’s why it’s safer to bundle a 32-bit JRE and use the 32-bit (default) prunsrv.exe.

I then had an “installer” folder with jre and commons-daemon folders, two bat files and one fat jar. I could then package that as a self-extracting archive and distribute it (with a manual, of course). I looked into IzPack as well, but couldn’t find how to bundle a JRE (maybe you can).

That’s a pretty niche scenario – usually we develop for deploying to a Linux server, but providing local tools for a big organization using Java may be needed every now and then. In my case the long-running part was a scheduled executor, but it can also run a jetty server that serves a web interface. Why would it do that instead of providing a URL? In cases where access to the local machine matters. It can even be a distributed search engine (like that) or other p2p software that you want to write in Java.

Why I Prefer Merge Over Rebase

June 17, 2016

There are many ways to work with git. The workflows vary depending on the size of the team, organization, and on the way of working – is it distributed, is it sprint-based, is it a company, or an open-source project, where a maintainer approves pull requests.

You can use vanilla-git, you can use GitHub, BitBucket, GitLab, Stash. And then on the client side you can use the command line, IDE integration, or stand-alone clients like SourceTree.

The workflows differ mostly in the way you organize your branches and the way you merge them. Do you branch off branches? Do you branch off other people’s branches, which are work-in-progress? Do you push or stay local? Do you use it like SVN (perfectly fine for a single developer on a pet project), or do you delve into more “arcane” features like --force-with-lease?

This is all decided by each team, but I’d like to focus on one very debated topic – rebasing vs merging. While you can get tons of results discussing rebasing vs merging, including the official git documentation, it has become more of a philosophical debate, rather than a practical one.

I recently asked a practical question about a rebase workflow. In short, by default rebasing seems not to favour pushing stuff to the central repo. If you do that before rebasing, you’d always need to force-push. And force-pushing may make it very hard for people that are based on your branch. Two questions that you are already asking:

  • Why do you need to push if something isn’t ready? Isn’t it the point of the “D” in “DVCS” to be able to commit locally and push only when ready? Well, even if you don’t use git as SVN, there are still plenty of use-cases for pushing every change to your own feature branch remote – you may be working from different machines, a colleague may want to pick up where you left off (before leaving for holiday or falling sick), and there are even hard drive failures and theft. I think basically you have to push right before you log off, or even more often. The “distributed” part allows for working offline, or even without a central repo (if it goes down), but it is not the major benefit of git.
  • Why would anyone be based on your work-in-progress branch? Because it happens. Sometimes tasks are not split that strictly and have dependencies – you write a piece of functionality, which you then realize should be used by your teammates who work on another task within the same story/feature. You aren’t yet finished (e.g. still polishing, testing), but they shouldn’t wait. Even a single person may want to base his next task on the previous one, while waiting for code review comments. The tool shouldn’t block you from doing this from time to time, even though it may not be the default workflow scenario.

Also, you shouldn’t expect every team member to be a git guru, who rewrites history for breakfast. A basic set of commands (even GUIs) should be sufficient for a git workflow, including the edge cases. Git is complicated and the task of each team is to make it work for them, rather than against them. Probably there is one article for each git command or concept with a title “X considered harmful”, and going through that maze is not trivial for an inexperienced git user. As Linus Torvalds once allegedly said:

Git has taken over where Linux left off separating the geeks into know-nothings and know-it-alls. I didn’t really expect anyone to use it because it’s so hard to use, but that turns out to be its big appeal.

Back to the rebase vs merge – merge (with pull requests) feels natural for the above. You branch often, you push often. Rebase can work in the above use-cases (which I think are necessary). You can force-push after each rebase, and you can make sure your teammates resolve that. But what’s the point?

The practical argument is that the graph that shows the history of the repo is nice and readable. Which I can’t argue with, because I’ve never had a case when I needed a cleaner and better graph. No matter how many merge commits and ugliness there’s in the graph, you can still find your way (if you ever need to). Besides, a certain change can easily be traced even without the graph (e.g. git annotate).

If you are truly certain you can’t go without a pretty graph, and your teammates are all git gurus who can resolve a force-push in minutes, then rebasing is probably fine.

But I think a merge-only workflow is the more convenient way of work that accounts for more real-world scenarios.

I realize this is controversial, and I’m certainly a git n00b (I even use SourceTree rather than the command line for basic commands, duh). But I have used both merge and rebase workflows and I find the merge one more straightforward (after all, force-pushing being part of the regular workflow does sound suspicious, doesn’t it?).

Git is the scala of VCS – it gives you many ways to do something, but there is no “right way”. This isn’t necessarily bad, as indeed there are many different scenarios where git can be used. For the ones I’ve had (regular project in a regular company, with a regular semi-automated release & deployment cycle, doing regular agile), I’d always go for merge, with pull requests.

E-Government Architecture [presentation]

June 11, 2016

After working on the matter for a year, I did a presentation at a small conference about the options for the architecture of e-government solutions. A 40-minute talk could not cover everything, and it is presented in the context of Bulgaria (hence the two graphs with Cyrillic script in the slides), but I hope it’s useful anyway.

It follows a previous post of mine that proposes an architecture, which I’ve expanded here.

Here are the slides:

The main points are:

  • all data registers must be integrated somehow
  • the integration should preferably not rely on a centralized ESB-like system
  • privacy must be addressed by strict access control and audit logs, including access for citizens to data about who read their data and why, including notifications
  • the technical challenge is only 20%, the rest is legal and organizational

Despite being government-related, it’s actually an interesting technical task that not many have solved properly.

Identity in the Digital World

May 21, 2016

“Identity” is a set of features that allow unique identification of a person and distinguishing them from others. That sounds simple enough, but it turns out to have a lot of implications in the modern, connected, global world.

Identity today is government managed. You are nobody if a government hasn’t confirmed that you are indeed somebody. The procedures vary across countries, but after you are born, you get issued a birth certificate and your name (and possibly a number) is entered into a database (either centralized or decentralized). From then on you have an “identity”, which you can later prove using some sort of a document (ID card, passport, driving license, social security number, etc.)

It is not that the government owns your identity, because you are far more than your ID card, but certain attributes of your identity are recorded by the government, and then it certifies (via a document and the relevant database) that this is indeed you. These attributes include your names, which have been used to identify people since forever, your address, your photo, height, eye color. Possibly your fingerprints and your iris. But we’ll get to these biometric attributes later.

Why is all this important? Except for cases of people living in small isolated tribes, where they probably don’t even need names for identifying others, the so called “civilized world” needs to be able to differentiate one person from another for all sorts of reasons. Is the driver capable of driving, is the pilot capable of flying a plane — they may show a certificate, but is it really them that were certified (“Catch me if you can” shows how serious this can be)? Who owns a given property? Is it this one, claiming to be John Smith, or that one, also claiming to be John Smith? The ownership certificate may be lost, but there is a record somewhere that holds the information. We just have to identify the real John Smith.

Traveling is another case — although rather suboptimal, the current world has countries and borders, and various traveling restrictions. You have to prove that you are you, and that you have the right to travel. You have to prove you are American, or that you have a visa, if you want to enter the United States.

There are many other cases — crime-fighting, getting a bank loan, getting employed, etc.

You may argue that you should be able to be totally anonymous and still do all of the above, but unfortunately, in a global society, fraud is too likely to allow us to deal with anonymous people. By that I’m not saying we should be identified for everything we do — not at all, it should be limited to where it makes practical sense. But there is a sufficient number of these use-cases.

Offline identity is one thing, but there’s also the notion of “online identity” – a way to prove who you are on the internet. That is most often (and rightly so) an anonymous registration process; rarely it uses some identity provider like Facebook or Twitter (where again, you don’t have to disclose your true identity), but when doing legally significant actions, or when communicating with governments in order to obtain some data or certificates about yourself, the service provider has to be able to prove it is really you. Here comes the “electronic identification” process, which was recently defined in an EU regulation, and which in most cases means you have a government-issued hardware token that only you own and know how to unlock.

But since identity exists, it can be stolen or forged. There is the so called “identity theft” and it’s used in multiple ways that are out of the scope of this post. But people do steal others identity — online, and offline.

One instance of identity theft is using another person’s identity document. Similarly, one can forge an identity document to say whatever they want it to say. And this may lead to dire consequences for unsuspecting citizens. So government and experts are trying to fight this problem. Let’s take a look at the two distinct use-cases.

Document forgery is addressed by making ever more complicated documents, with all sorts of security features, invisible components, laser engraved elements, using specific laser angles, and so on. This, of course, is imperfect, not only because it is “security through obscurity” (who guarantees that your government won’t leak the “secret sauce” for making its documents, or worse — supply the forgers with the raw materials needed to make a document), but also because a forged document can still pass inspection, as humans are not perfect when inspecting documents. To put it another way — if the one inspecting the document knows what to look for, surely the forger also knows that.

Document theft (including document copying) is addressed by comparing the picture. And that’s about it. If you look similar to someone else and you get his identity document, you can safely pretend to be him for a long time.

None of the solutions seem good enough. So to the rescue come electronic documents. Passports are a somewhat universal identity document, and most passports are now eMRTD (Electronic machine readable travel document). Issues with them aside, the basic idea is that they have some information stored that a) guarantees the document is issued by a trusted authority and b) it belongs to the person holding it.

The first part is guaranteed via a public key infrastructure — the contents of the document are digitally signed by the issuing authority. So nobody can create his own passport or ID card, because he doesn’t have the private key of the issuing authority (and the private key cannot be extracted, because an HSM, where it is stored, doesn’t allow that).

The second part is trickier. It is currently addressed by storing your facial image and fingerprints on the chip and then comparing the image and fingerprints of the holder to the stored ones (remember that the content is certified by a digital signature, which is practically bulletproof for the time being). The facial image part is flawed, and at the moment barely anyone checks the fingerprint part, but this option exists and it is getting more and more traction “with all that terrorism”.

So starting from the somewhat intuitive concept of identity, we’ve come to the point where governments make databases of fingerprints. And then iris data, and DNA (as in Kuwait, for example).

Although everything above sounds logical, the end result is somewhat scary. People’s biometric information being stored in databases, potentially at risk of breaches, potentially misused by governments, sounds dystopian. We are no longer the owners of our identity — someone else has collected our attributes — attributes that do not change throughout our entire life — and stores them for future use. For whatever use. That someone doesn’t have to store them for the sake of identification, as there are technologies that allow storing the data on a card that does the comparison internally, without revealing the stored data. But that option seems to be ignored, strengthening the dystopian feeling.

Recently I’ve been thinking on how to address all of these. How to make sure identity still does its job but without compromising privacy. Two hours after I had some ideas, I spoke with someone with far more experience in identity technology than me, and it turned out he had had quite similar ideas.

And here technology comes into play. We are a combination of our unchangeable traits — fingerprints, iris, DNA. You can differentiate even identical twins based on these attributes. You also have other, more volatile attributes — height, weight, names, address, favorite color even.

All of these represent your identity. And it can be managed by turning the essential, unchangeable parts of it into a key. An anonymous key, that is derived using a one-way function, a so called “hash”. After you hash your fingerprints, iris and DNA, you’ll get a long value, e.g. 2fd4e1c67b2d28fced849ee1bb76e7391b93eb12, that represents you (read here about fingerprint hashing).

This will be you and you will be able to prove it, as every time someone needs you to prove your identity, you will get your fingerprints, iris and DNA scanned, and the result of applying the one-way function will be again 2fd4e1c67b2d28fced849ee1bb76e7391b93eb12.

Additionally, you can probably add some “secret” word to that identity. So that your identity is not only what you are (and cannot change), but also what you know. That would mean that nobody can come up with your identity unless you tell them your secret (sounds a little like “A Wizard of Earthsea”).
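
Just to illustrate the idea (and nothing more), here is a sketch of deriving such an identity string, assuming the biometric readings can be reduced to stable, canonical byte representations (which in reality is the hard part – see the fingerprint hashing reference above):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class IdentityString {

    // fingerprintTemplate, irisTemplate and dnaProfile are assumed to be canonical,
    // reproducible representations of the biometric features; secret is "what you know"
    public static String derive(byte[] fingerprintTemplate, byte[] irisTemplate,
            byte[] dnaProfile, String secret) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(fingerprintTemplate);
        digest.update(irisTemplate);
        digest.update(dnaProfile);
        digest.update(secret.getBytes(StandardCharsets.UTF_8));
        // hex-encode the one-way result – this is the anonymous identity string
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}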

Of course, full identification will rarely be required. If you want to buy alcohol, only your age matters; if you want to get a contract for cable internet, only your name and address matter, and so on. For that, sub-identities can exist — they belong to a “parent” identity, but the verifier doesn’t need such a high level of assurance that it is indeed you. The sub-identity can be “just the fingerprints”, or even…a good old identity document. Each sub-identity can prove a set of attributes, certified by an authority — not necessarily a government authority.

Your sub-identity, a set of attributes, can be written on a document — something you carry around that certifies, with a significant level of certainty, that this is indeed you. It will hold your “hash”, so that anyone who wants to do a full check, can do so. The other option is the implant. Scary and dystopian, I know. It seems just a little different than an ID card — it is something you carry with you, and you have to carry with you. Provided that you control whether someone is allowed to read your implant, it becomes a slightly advanced identity card or a driver’s license.

Even when we have an identity string, the related data — owned properties, driving capabilities, travel visas, employment, bank loans — will be stored in databases, where the identity string is the lookup key. These databases are now government owned, but can very well be distributed, e.g. using a blockchain. Nobody can claim he’s you, as he cannot produce the same identity string based on his biometrics. The nodes on the blockchain network can be the implants, which hold encrypted information about you, and only you can decide when to decrypt it. That would make for a distributed human database where one is in full control of his data.

But is this feasible? The complexity of the system, and especially of managing one’s identity, may be too high. We can create a big, complex system, involving implants and biometrics, for solving a problem that is actually a tiny one. This is the first question we should ask before proceeding to such a thing. Not whether governments should manage identities, not whether we should be identifiable, but whether we need a dramatic shift in the current system. Or does an electronic ID card with match-on-card (not centrally stored) fingerprints and electronically signed contents solve 99% of the issues?

Although I’m finding it fascinating to envision a technological utopia, with cryptography heavily involved, and privacy guaranteed by technological means, I’m not sure we need that.

Cleanup Temp Files

May 17, 2016

I’ve been spring-cleaning some devices, and apparently the advice in the title is not as obvious as it seems. I found tons of unused temp files which applications (android apps, desktop applications and even server application deployments) haven’t cleaned. This is taking up space and it means more manual maintenance.

For server-side applications the impact is probably smaller, as the environment is entirely under your control and you can regularly clean up the data, or even not care, as you regularly re-create the machine (e.g. in an AWS deployment where each upgrade to the system means new machines get spawned and the old ones – deleted). But anyway, if you use temp files (and Java), use File.createTempFile(..) and don’t forget to call file.deleteOnExit().
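
For example (a trivial sketch):

import java.io.File;
import java.io.IOException;

public class TempFileExample {
    public static void main(String[] args) throws IOException {
        // created in the default temp directory (java.io.tmpdir)
        File temp = File.createTempFile("report-", ".tmp");
        // the JVM will attempt to delete it on normal shutdown
        temp.deleteOnExit();
        // ... write to and read from the file here ...
    }
}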

For client-side applications (smartphone apps or desktop software) the carelessness of not deleting temp files leads to the users’ disappointment at some point in time, when they realize their storage is filled with your useless files. The delete-on-exit approach works again, but maybe you need the files to survive more than one run. So simply have a job, or a startup check, that checks whether temp files are older than a certain period, and if they are – deletes them.
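
Such a startup check can be as simple as the following sketch (the directory and the retention period are, of course, application-specific):

import java.io.File;
import java.util.concurrent.TimeUnit;

public class TempFileCleanup {

    private static final long MAX_AGE_MILLIS = TimeUnit.DAYS.toMillis(7);

    // call this on application startup or from a scheduled job
    public static void cleanup(File tempDir) {
        File[] files = tempDir.listFiles();
        if (files == null) {
            return;
        }
        long now = System.currentTimeMillis();
        for (File file : files) {
            if (file.isFile() && now - file.lastModified() > MAX_AGE_MILLIS) {
                file.delete(); // ignoring the result for brevity
            }
        }
    }
}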

The effect of this little thing being omitted by developers is that users have to analyze their storage with special tools (sometimes – paid) in order to find the “offenders”. And the offenders are not always obvious, and besides – users are not necessarily familiar with the concept of a temp file. Even I’m sometimes not sure whether a given file that looks like a temp one isn’t actually necessary for proper functioning.

Storage is cheap, but good practices should not be abandoned because of that.

Dirty Hacks Are OK

May 12, 2016

In practically every project you’ve used a “dirty hack”: setAccessible(true), sun.misc.Unsafe, changing a final value with reflection, copy-pasting a class from a library to change just one line of wrong code. Even if you haven’t done so directly, a library that you are using almost certainly contains some of these.

Whenever we do something like that, we are reminded (by stackoverflow answers and colleagues alike) that this is a hack and it’s not desirable. And that’s ok – the first thing we should think about when using such a hack, is whether there isn’t a better way. A more object-oriented way, a more functional way. A way that the language allows for, but might require a bit more effort. But too often there is no such way, or at least not one that isn’t a compromise with other aspects (code readability, reuse, encapsulation, etc.). And especially in cases where 3rd party libraries are being used and “hacked”.

Vendors are also trying to make us avoid them – changing the access to a field via reflection might not work in some environments (some JavaEE cases included), due to a security manager. And one of the most “arcane” hacks – sun.misc.Unsafe is even going to be deprecated by Oracle.

But since these “hacks” are everywhere, including the Unsafe magic, deprecating or blocking any of them will just make the applications stop working. As you can see in the article linked above, practically every project depends on sun.misc.Unsafe. It wouldn’t be an overstatement to say that such “dirty hacks” are the reason major frameworks and libraries in the Java ecosystem exist at all – hibernate, spring and guava are among the ones that use them heavily.

So deprecating them is not a good idea, but my point here is different. These hacks get things done. They work. With some caveats and risks, they do the task. If the alternative is to fork a 3rd party library and support the fork, or to suggest a patch that doesn’t get accepted for a while when your deadline is soon, these tricks are actually working solutions. They are not “beautiful”, but they’re OK.

Too often 3rd party libraries don’t offer exactly what you need. Either there’s a bug, or some method doesn’t behave according to your expectations. If using setAccessible in order to change a field or invoke a private method works – it’s a better approach than forking (submit an improvement request, of course). But sometimes you have to change a method body – for these use cases I created my quickfix tool a few years ago. It’s dirty, but it does the job, and together with the rest of these hacks, lets you move forward to delivering actual value, rather than wondering “should I use a visitor pattern here”, or “should we fork this library and support it in our repository and maven repository manager until they accept our pull request and release a new version”, or “should I write this with JNI”, or even “should we do this at all, it’s not possible without a hack”.
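
As an illustration, here is what the setAccessible trick typically looks like (the “3rd party” class here is a made-up stand-in):

import java.lang.reflect.Method;

public class SetAccessibleExample {

    // stand-in for a 3rd party class whose behaviour you need to tweak
    static class ThirdPartyService {
        private void recalculate() {
            System.out.println("recalculated");
        }
    }

    public static void main(String[] args) throws Exception {
        ThirdPartyService service = new ThirdPartyService();
        // the "dirty hack": invoke a private method via reflection
        Method method = ThirdPartyService.class.getDeclaredMethod("recalculate");
        method.setAccessible(true);
        method.invoke(service);
    }
}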

I know this is not the best advice I’ve given, and it’s certainly a slippery slope – too much of the “get it done quick and dirty, I don’t care” mentality is surely a disaster. But poison can be a cure in small doses, if applied with full understanding of the issue.
