A Beginner’s Guide to Addressing Concurrency Issues

April 20, 2016

Inserts, updates and deletes. Every framework tutorial starts with these and they are seen as the most basic functionality that just works.

But what if two concurrent requests try to modify the same data? Or try to insert the same data that should be unique? Or what if the inserts and updates have side effects that have to be stored in other tables (e.g. an audit log)?

“Transactions”, you may say. Well, yes and no. A transaction allows a group of queries to be executed together – they either succeed together or fail together. What happens with concurrent transactions depends on a specific property of transactions – their isolation level. A very detailed explanation of how all of that works can be found here.

If you select the safest isolation levels – serializable (and repeatable read) – your system may become too slow. And depending on the database, concurrent transactions may fail and have to be retried by specific application code. And that’s messy. With the weaker isolation levels you can get lost updates, phantom reads, etc.

Even if you get your isolation right, and you properly handle failed transactions, isolation doesn’t solve all concurrency problems. It doesn’t solve the problem of having an application-imposed data constraint (e.g. complex uniqueness logic that can’t be expressed as a database unique constraint), it doesn’t solve the problem of inserting exact duplicates, it doesn’t solve other application-level concurrency issues, and it doesn’t perfectly solve the data modification issues. You may have to get into database locking, and locking is tedious. What is a write lock, a read lock, what is an exclusive lock, and how do you not end up in a deadlock (or a livelock)? I’m sure that even developers with a lot of experience are not fluent with database locks, because you either don’t need them, or you have a bigger problem that you should solve first.

The duplicate submission problem is a bit off-topic, but it illustrates that not all concurrent request problems can be solved by the database alone. As many people suggest, it is solved by a token that gets generated for each request and stored in the database under a unique constraint. That way two identical inserts (the result of a double submission) cannot both go into the database. This gets a little more complicated with APIs, because you have to rely on the user of the API to provide the proper token (and not generate it on the fly in their back-end). As for uniqueness – every article that I’ve read on the matter concludes that the only proper way to guarantee uniqueness is at the database level, using a unique constraint. But when there are complicated rules for that constraint, you are inclined to check them in the application. And in that case concurrent requests will eventually allow two records with the same values to be inserted.
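
As a rough illustration of the token approach – a minimal sketch, assuming a hypothetical request_tokens table with a unique constraint on its token column, and Spring’s JdbcTemplate:

import org.springframework.dao.DuplicateKeyException;
import org.springframework.jdbc.core.JdbcTemplate;

public class SubmissionTokenService {

    private final JdbcTemplate jdbc;

    public SubmissionTokenService(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    /**
     * Tries to consume the one-time token issued when the form was rendered. The UNIQUE
     * constraint on request_tokens.token guarantees that of two concurrent submissions
     * carrying the same token, only one insert can succeed.
     */
    public boolean consumeToken(String token) {
        try {
            jdbc.update("INSERT INTO request_tokens (token) VALUES (?)", token);
            return true;  // first submission wins and gets processed
        } catch (DuplicateKeyException ex) {
            return false; // duplicate submission, reject or ignore it
        }
    }
}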

Most of the problems are easy if the application runs on a single machine. You can utilize your language’s concurrency features (e.g. Java locks, concurrent collections) to make sure everything is properly serialized, that duplicates do not happen, etc. However, when you deploy to more than one machine (which you should), the problem becomes a lot harder.

So what are the approaches to address concurrency issues, apart from transactions? There are many, and here are a few of them (in no meaningful order).

  • There is Hazelcast, which lets you use distributed locks – the whole cluster follows the Lock semantics as if it were a single machine. That is language-specific, and setting up a Hazelcast cluster for just a few use cases (because not all of your requests will need it) may be too much
  • You can use a message queue – push all requests to a message queue that is processed by a single (async) worker. That may be useful in some cases, and impractical in others (if you have to return some immediate response to the user, for example)
  • You can use Akka and its clustering capabilities – it guarantees that an actor (think “service”) is processing only one message at a time. But using Akka for everything may not be a good idea, because it completely changes the paradigm, it is harder to read and trace, harder to debug, and is platform-specific (only JVM languages can make use of it).
  • You can use database-specific application-level locks. That’s something quite useful, even though it is entirely RDBMS-dependent. Postgres has advisory locks, MySQL has get_lock, others probably have something similar. The idea here is that you use the database as your distributed lock mechanism. The locks are managed by the application, and don’t even need to have anything to do with your tables – you just ask for a lock for, say, (entityType, entityId), and then no other application thread can enter a given piece of code unless it successfully obtains that database lock. It is kind of like the Hazelcast approach, but you get it “for free” with the database. Then you can have, for example, a @Before (spring) aspect that attaches to service methods and does the locking appropriate for the current application use-case, without using table locks (see the sketch after this list).
  • You can use a CRDT. It’s a data structure whose operations are commutative and idempotent – no matter in what order (or how many times) the operations are applied, it ends up in the same state. It’s explained in more detail in this presentation. How a CRDT maps to a relational database is an interesting question I don’t have an answer to, but the point is that if your operations are idempotent, you will probably have fewer issues.
  • You can use the “insert-only” model. Databases like Datomic use it internally, but you can use it with any database. You have no deletes, no updates – just inserts. Updating a record is inserting a new record with the “version” increased. That again relies on database features to make sure you don’t end up with two records with the same version, but you never lose data (a concurrent update will make it so that one record is “lost” in the sense that it is not the latest version, but it is stored and can be reverted to). And you get an audit log for free.
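
To illustrate the database-lock option from the list above – a minimal sketch, assuming PostgreSQL and Spring’s JdbcTemplate. The (entityType, entityId) pair is reduced to the two int keys that pg_advisory_lock expects, so a hash collision would only mean occasional extra serialization; in a real project you would likely hide this behind the @Before aspect mentioned above:

import java.sql.Connection;
import java.sql.Statement;

import org.springframework.jdbc.core.JdbcTemplate;

public class AdvisoryLockTemplate {

    private final JdbcTemplate jdbc;

    public AdvisoryLockTemplate(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    /**
     * Runs the given action while holding a PostgreSQL advisory lock identified by
     * (entityType, entityId). Advisory locks are session-scoped, so the lock, the action
     * and the unlock are all executed on the same connection.
     */
    public void doLocked(String entityType, long entityId, Runnable action) {
        int typeKey = entityType.hashCode();
        int idKey = Long.hashCode(entityId);
        jdbc.execute((Connection con) -> {
            try (Statement st = con.createStatement()) {
                st.execute("SELECT pg_advisory_lock(" + typeKey + ", " + idKey + ")");
                try {
                    action.run(); // other instances asking for the same lock block here
                } finally {
                    st.execute("SELECT pg_advisory_unlock(" + typeKey + ", " + idKey + ")");
                }
            }
            return null;
        });
    }
}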

The overall problem is how to serialize requests without losing performance. And all the various lock mechanisms and queues, including non-blocking IO, address that. But what makes the task easier is having a data model that does not care about concurrency. If the latter is applicable, always go for it.

Whole books have been written on concurrency, and I realize such a blog post is rather shallow by definition, but I hope I’ve at least given a few pointers.


How To Read Your Passport With Android

April 5, 2016

As I’ve been researching machine readable travel documents, I decided to do a little proof-of-concept on reading ePassports using an NFC-enabled smartphone (Android).

The result is on GitHub, and is based on the jMRTD library, which provides all the necessary low-level details.

As I pointed out in my previous article, the standards for ePassports have evolved a lot throughout the years – from no protection, to BAC, to EACv1, EACv2 and SAC (which replaces BAC). Security is still doubtful, as most of the passports and inspection systems require backward compatibility with BAC. That’s slowly going away, but even when BAC goes away, it will be sufficient to enter the CAN (Card Access Number) for the PACE protocol, so the app will still work with minor modifications.

What the app does is:

  1. Establishes NFC communication
  2. Authenticates to the passport using the pre-entered passport number, date of birth and expiry date (hardcoded in the app at the moment); a minimal BAC sketch is shown after this list. Note that the low security of the protocol is due to the low entropy of this combination, and brute force is an option, as passports cannot be locked after successive failures.
  3. Reads mandatory data groups – all the personal information present in the passport, including the photo. In the example code only the first data group (DG1) is read, and the personal identifier is shown on the screen. The way to read data groups is as follows:
    // "ps" is the PassportService instance after the BAC step above
    InputStream is = ps.getInputStream(PassportService.EF_DG1);
    DG1File dg1 = (DG1File) LDSFileUtil.getLDSFile(PassportService.EF_DG1, is);
    
  4. Performs chip authentication – the first step of EAC, which makes sure that the chip is not cloned – it requires proof of ownership of a private key, which is stored in the protected area of the chip.
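
For reference, the BAC step from point 2 boils down to roughly the following with jMRTD (a sketch – the exact way the underlying CardService is opened over Android’s IsoDep varies between jMRTD/scuba versions, and the MRZ values are placeholders):

import org.jmrtd.BACKey;
import org.jmrtd.BACKeySpec;
import org.jmrtd.PassportService;

// "ps" is a PassportService opened on top of the NFC connection (IsoDep -> scuba CardService)
BACKeySpec bacKey = new BACKey(
        "XX1234567",  // document number (placeholder)
        "800101",     // date of birth, YYMMDD (placeholder)
        "250101");    // date of expiry, YYMMDD (placeholder)
ps.doBAC(bacKey);     // derives the session keys from the MRZ data and switches to secure messaging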

The code has some questionable coding practices – e.g. the InputStream handling (the IDE didn’t initially allow me to use Java 7, and I didn’t try much harder), but I hope they’ll be fixed if used in real projects.

One caveat – on Android there’s a need for SpongyCastle (which is a port of the BouncyCastle security provider). However, it is not enough on its own, so both have to be present for certain algorithms to be supported. Unfortunately, jMRTD has a hardcoded reference to BouncyCastle in one method, which is why the chip authentication method is copy-pasted in the sample.

There is one more step of EAC – terminal authentication, which would allow the app to read the fingerprints (yup, sadly there are fingerprints there). However, EAC makes it harder to do that. I couldn’t actually test it properly, because the chip rejects verifying even valid certificates, but anyway, let me explain. EAC relies on a big infrastructure (PKI) where each participating country has a Document Verifier CA, whose root certificate is signed by all other participating countries (as shown here). Each country then issues short-lived (1 day) certificates signed by the DVCA, which are used in the inspection systems (border police and automatic gates). The certificate chain thus contains the country’s root certificate, followed by the DVCA certificate, followed by the inspection system certificate. The chip has to verify that this chain is valid (by verifying that each signature on a certificate is indeed performed by the private key of the issuer). The chip itself has the root certificate of its own country, so it has the root of the chain and can validate it (which is actually the first step). Finally, in order to make sure that the inspection system certificate is really owned by the party currently performing the protocol, the chip sends a challenge to be signed by the terminal.

So, unless a collision is found and a fake certificate is attached to the chain, you can’t easily perform “terminal authentication”. Well, unless a key pair leaks from some inspection system somewhere in the world. Then, because the chip does not have a clock, even though the certificates are short-lived, they would still allow reading the fingerprints, because the chip can’t know they are expired (it syncs the time with each successful certificate validation, but that only happens when going through border control at airports). Actually, you could also try to spam the chip with a huge chain, and it will at some point “crash”, and maybe it will do something that it wouldn’t normally do, like release the fingerprints. I didn’t do that for obvious reasons.

But the point of the app is not to abuse the passports – there may be legitimate use-cases to allow reading the data from them, and I hope my sample code is useful for that purpose.


Software Can’t Live On Its Own

March 30, 2016

We’re building software in the hope that some day we’ll leave it and it will live on its own. Or with minor supervision. But the other day, when my father asked me to dig up an old website, I did some thinking and realized that software on auto-pilot is almost never the case.

Software is either being supported, or is abandonware, or is too simple. We constantly have to “fix” something on each piece of software. Basically, picking up an old project and running it is rather hard – it would most probably require upgrading a ton of components. For example:

  • Fixing edge cases, bugs, security issues. The software environment is dynamic, and no complex software is without uncovered edge cases. Security issues arise constantly and have to be patched. Unless we find a way to write bugless software with perfect security, we have to support all these.
  • Breaking upgrades:
    • Browsers are being upgraded constantly, and old websites probably won’t work. Protocols remain backward compatible for a while, but then support is discontinued and one has to upgrade. Operating systems introduce breaking changes to software running on them – one clear example is Android, where with each major version something doesn’t work anymore (because it was deprecated two versions prior) or has to be done in a different way. We have to be there and tweak our code to accommodate these upgrades.
    • Frameworks and languages get upgrades as well – and sometimes we can’t even build our legacy software anymore. Even if we can, the target environment may not support our old versions. The aforementioned site was written in PHP 4. Shared hosting providers no longer offer PHP 4, so will that site work? It will possibly need tweaks.
    • Changes in 3rd party APIs. If you rely on something like a facebook API, or a Google API, chances are your 3-year-old project will no longer work.
  • New use-cases – the real world is dynamic, and software that supports some real-world activity has to change with it. Some features become obsolete, new features are needed. Vendors like to advertise “draw-it-yourself” tools that create new forms and business processes without any technical expertise, but that rarely works properly.
  • Visual design becomes outdated. Remember Web King? Maybe that was the design of 1995, but not anymore. We’ve gone through waves and waves of new design trends, and often it’s not okay to look outdated.

A piece of software is not like a building – you can’t build it once and have it live for decades with just occasional repairs. And it is not like a kitchen appliance – you can’t just replace it with a newer version.

It isn’t like a building, because it’s too complex (not diminishing the role of real architects, but they have a limited set of use cases). And it isn’t like kitchen appliances, because kitchen appliances don’t have data.

And actually, data migration is one of the reasons legacy software exists – migrating it to something new is hard. One has to fit the data into a new structure and into a new database. Even a simple migration from an older database version to a newer one is hard. Migrating structures and even use cases is horrible. I won’t even mention triggers and stored procedures that have to be migrated across vendors, and so on.

So yes, keeping an old piece of software running requires a lot of effort; migrating it to a newer and better piece of software is often a doomed project and you’re stuck with your existing system forever.

That means there is a whole big branch of the IT market that focuses on that – providing software to clients and then keeping them bound to that software forever, with regular updates and support. There is another type of company, where things are more straightforward – the “single product as a service” companies. The cool web 2.0 startups are mostly single-product-as-a-service companies, and if the company dies, the product dies with it. If the company manages to make some money, you don’t care…until it dies, and then your migration to a new piece of software is the promised nightmare.

Leaving simple software aside (my Computoser has been running unsupervised for two years already; not that it’s simple, but its complexity is confined to the algorithm; I heard that the software for the trash cleaning company that I wrote when I was 16 is still in use in my hometown), everything else needs constant caring. And given that more and more software is being built, this leads us to the sad realization that we’ll have to support a lot of software. More and more of a programmer’s work will be caring for what’s already been built, rather than building something new. And on one hand that’s sad. It means software for many is not “craftsmanship”, not “science”, not “making cool things”. It is the mundane support and gradual extension of old, clunky bulks of code.

Unless we learn to build self-supporting software. Software that automatically overcomes OS upgrades, framework and protocol upgrades. Software that allows extending without writing code (which many systems claim to do even now, but very few actually do). Until that time, I’m afraid we’re stuck with supporting our current projects, and in the best case – extending them to fit new needs and customers.


Take a Step Back

March 19, 2016

A software project can become “legacy” just three months after its inception. I’ve recently seen many projects that look OK on the surface but are in fact so “broken” that they have to be rewritten. Well, they work, but continuing to support them is a pain. And everyone has probably seen at least one such project, where you want to just throw it away and start from scratch. The problem is – starting from scratch won’t guarantee a good outcome either. Why do things go that way?

I think it’s because developers tend to solve problems one at a time, reaching a “local maximum”. The fixes don’t need to be “quick and dirty” – even a reasonable-sounding fix for the particular problem at hand may yield de-facto legacy code. And if we view software development as constant problem fixing (as even introducing new features consists of fixing problems along the way), we have a problem.

My approach to that process is to take a step back and ask the question “is this really the problem I’m solving, or is there a bigger underlying problem”. And sometimes there is. There are some clear indications that there is such a problem. “Code smell” is a popular term for that, but I’d like to extend it – sometimes it’s not the thing that you do that makes things smell, but rather something done before that makes you take stupid decisions later. Sometimes these decisions don’t even look wrong in the context that you’ve created with your previous decisions, but they are certainly wrong. And you can use them as indicators. Some examples:

If you have to copy-paste some piece of code to another part of the project, and that’s your best option, something’s wrong with the code. You should take a step back and refactor, rather than copy-pasting yet another piece.

If you have to rely on a full manual test to figure out that your application is not broken, the quick and easy “fix” for the problem is to just get a QA to manually test it. If you do that instead of adding tests, quality degrades over time.

If you have to use business logic to overcome data model or infrastructure deficiencies, the easy fix is to just add a couple of if’s here and there. Then six months later you have unreadable code, full of bits irrelevant to the actual business logic. Instead, fix the data model or your infrastructure (in a wider sense, e.g. framework configuration).

If, given a bug report, tracing the program flow requires knowing where things are, rather than finding them, it means the project is not well structured. Yes, you have probably been working on it for a year now and you know where things are, but finding stuff using search (or call and class hierarchies) is the proper way to go – even for people experienced with the project (not to mention newcomers).

If the addition of a data field or a component requires changes in the whole project, rather than just an isolated part of the project, then each new addition creates more complexity and more potential failures. E.g. pseudo-plugin systems that require changing the core with each plugin.

The list can go on, but the point is clear – if faced with an option to do something the wrong way, take a step back and rethink whether the problem should exist in the first place. Otherwise each fix becomes technical debt. And in three months you have a “legacy” project.


Pretty Print JSON Per Request With Spring MVC

February 22, 2016

You will find a lot of posts and stackoverflow answers telling you how to pretty-print JSON responses. But sometimes you may need to tune the “prettiness” per request.

The use case for this is when you are using tools like curl or RESTClient to interact with the system and you want human-readable output. Of course, if you need human-readable output only for debug purposes, you should really consider whether you need JSON at all, or you should use some binary format. But let’s assume you need JSON. And that you’d rather get it pretty-printed, rather than use an external tool to prettify it afterwards.

The basic idea is to enable pretty-printing with either a GET parameter, or preferably with an Accept header like application/json+pretty. With Spring MVC that is not supported out of the box. You’d need to create a class like this:

/**
 * A subclass of the MappingJackson2HttpMessageConverter that accepts the application/json+pretty content type
 * in order to enable per-request prettified JSON responses
 *
 * @author bozho
 *
 */
public class PrettyMappingJackson2HttpMessageConverter extends MappingJackson2HttpMessageConverter {

  /**
   * Construct a new {@link MappingJackson2HttpMessageConverter} using default configuration
   * provided by {@link Jackson2ObjectMapperBuilder}
   */
  public PrettyMappingJackson2HttpMessageConverter() {
    super();
    objectMapper.enable(SerializationFeature.INDENT_OUTPUT);
    setSupportedMediaTypes(Lists.newArrayList(new MediaType("application", "json+pretty", DEFAULT_CHARSET)));
  }
}

Then in your spring-mvc XML configuration (or its Java config counterpart, sketched below the XML) you should register this as a message converter:

<mvc:message-converters>
    <bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter" />
    <!-- Handling Accept: application/json+pretty -->
    <bean class="com.yourproject.util.PrettyMappingJackson2HttpMessageConverter" />
</mvc:message-converters>
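
The Java config counterpart might look roughly like this (a sketch; the WebConfig class name is arbitrary, and it uses the Spring 4-era WebMvcConfigurerAdapter):

import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    // Overriding configureMessageConverters replaces the default converters;
    // use extendMessageConverters instead if you want to keep them.
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.add(new MappingJackson2HttpMessageConverter());
        // Handles Accept: application/json+pretty
        converters.add(new PrettyMappingJackson2HttpMessageConverter());
    }
}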

If you have a separately defined ObjectMapper and want to pass it to the pretty converter, you should override the other constructor (accepting an object mapper), and use the .copy() method before enabling the INDENT_OUTPUT.
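
That constructor might look roughly like this (a sketch, to be added to the same class as above):

/**
 * Reuses an externally configured ObjectMapper; copy() is used so that INDENT_OUTPUT
 * is enabled only for this converter and not for the shared mapper instance
 */
public PrettyMappingJackson2HttpMessageConverter(ObjectMapper objectMapper) {
  super(objectMapper.copy().enable(SerializationFeature.INDENT_OUTPUT));
  setSupportedMediaTypes(Lists.newArrayList(new MediaType("application", "json+pretty", DEFAULT_CHARSET)));
}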

And then you’re done. You can switch between regular (non-indented) and pretty output by setting the Accept header to application/json+pretty.


Setting Up Distributed Infinispan Cache with Hibernate and Spring

February 17, 2016

A pretty typical setup – a spring/hibernate application that requires a distributed cache. But it turns out to be not so trivial to set up.

You obviously need a cache. There are options to do that with EhCache, Hazelcast, Infinispan, memcached, Redis, AWS’s ElastiCache and some others. However, EhCache supports only replicated and not distributed caches, and Hazelcast does not yet work with the latest version of Hibernate. Infinispan and Hazelcast support consistent hashing, so the entries live only on specific instance(s), rather than having a full copy of the whole cache on the heap of each instance. ElastiCache is AWS-specific, so Infinispan seems the most balanced option for the spring/hibernate setup.

So, let’s first set up the hibernate 2nd level cache. The official documentation for Infinispan is not the top google result – it is usually either very old documentation, or documentation that is two versions old. You’d better open the latest one from the homepage.

Some of the options below are rather “hidden”, and I couldn’t find them easily in the documentation or in existing “how-to”s.

First, add the relevant dependencies to your dependency manager configuration. You’d need infinispan-core, infinispan-spring and hibernate-infinispan. Then in your configuration file (whichever it is – in my case it is jpa.xml, a spring file that defines the JPA properties) configure the following:

<prop key="hibernate.cache.use_second_level_cache">true</prop>
<prop key="hibernate.cache.use_query_cache">true</prop>
<prop key="hibernate.cache.region.factory_class">org.hibernate.cache.infinispan.InfinispanRegionFactory</prop>
<prop key="hibernate.cache.inifinispan.statistics">true</prop>
<prop key="hibernate.cache.infinispan.cfg">infinispan.xml</prop>
<prop key="hibernate.cache.infinispan.query.cfg">distributed-query</prop>

These settings enable the 2nd level cache and the query cache, using the default region factory (we’ll see why that may need to be changed to a custom one later), enable statistics, point to an infinispan.xml configuration file and change the default name for the query cache in order to be able to use a distributed one (by default it’s “local-cache”). Of course, you can externalize all these to a .properties file.

Then, at the root of your classpath (src/main/resources) create infinispan.xml:

<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:8.1 http://www.infinispan.org/schemas/infinispan-config-8.1.xsd
                            urn:infinispan:config:store:jdbc:8.0 http://www.infinispan.org/schemas/infinispan-cachestore-jpa-config-8.0.xsd"
    xmlns="urn:infinispan:config:8.1">
    <jgroups>
        <stack-file name="external-file" path="${jgroups.config.path:jgroups-defaults.xml}" />    
    </jgroups>
    <cache-container default-cache="default" statistics="true">
        <transport stack="external-file" />
        <distributed-cache-configuration name="entity" statistics="true" />
        <distributed-cache-configuration name="distributed-query" statistics="true" />
    </cache-container>
</infinispan>

This expects -Djgroups.config.path to be passed to the JVM to point to a jgroups configuration. Depending on whether you use your own setup or AWS, there are multiple options. Here you can find config files for EC2, Google cloud, and basic UDP and TCP mechanism. These should be placed outside the project itself, because locally you most likely don’t want to use S3_PING (S3 based mechanism for node detection), and values may vary between environments.

If you need statistics (and it’s good to have them) you have to enable them both at cache-container level and at cache-level. I actually have no idea what the statistics option in the hibernate properties is doing – it didn’t change anything for me.

Then you define each of your caches. Your entities should be annotated with something like

 
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "user")
public class User { .. }

And then Infinispan creates the caches automatically. They can all share some default settings, and these defaults are defined for the cache named “entity”. Took me a while to find that out, and I finally got an answer on stackoverflow. The last thing is the query cache (using the name we defined in the hibernate properties). Note the “distributed-cache-configuration” elements – that way you explicitly say “this (or all) cache(s) must be distributed” (they will use the transport mechanism specified in the jgroups file). You can configure defaults in a jgroups-defaults.xml and point to it as shown in the above example, if you don’t want to force developers to specify the JVM arguments.

You can define entity-specific properties using <distributed-cache-configuration name="user" /> for example (check the autocomplete from the XSD to see what configuration options you have). And XML is a pretty convenient config DSL, isn’t it?

So far, so good. Now our cache will work both locally and on AWS (EC2, S3), provided we configure the right access keys. Technically, it may be a good idea to have different infinispan.xml files for local and production use, and to define <local-cache> by default rather than a distributed one, because with the TCP or UDP settings you may end up in a cluster with other teammates on the same network (though I’m not sure about that, it may present some unexpected issues).

Now, spring. If you were to only set up spring, you’d create a bean with a SpringEmbeddedCacheManagerFactoryBean, pass classpath:infinispan.xml as the resource location, and it would work. And you can still do that, if you want completely separate cache managers. But cache managers are tricky. I’ve given an outline of the problems with EhCache, and here we have to do some workarounds in order to have a cache manager shared between hibernate and spring. Whether that’s a good idea – it depends. But even if you need separate cache managers, you may need a reference to the hibernate underlying cache manager, so part of the steps below are still needed. A problem with using separate caches is the JMX name they get registered under, but I guess that can be configured as well.

So, if we want a shared cache manager, we have to create subclasses of the two factory classes:

/**
 * A region factory that exposes the created cache manager as a static variable, so that
 * it can be reused in other places (e.g. as spring cache)
 * 
 * @author bozho
 *
 */
public class SharedInfinispanRegionFactory extends InfinispanRegionFactory {

	private static final long serialVersionUID = 1126940233087656551L;

	private static EmbeddedCacheManager cacheManager;
	
	public static EmbeddedCacheManager getSharedCacheManager() {
		return cacheManager;
	}
	
	@Override
	protected EmbeddedCacheManager createCacheManager(ConfigurationBuilderHolder holder) {
		EmbeddedCacheManager manager = super.createCacheManager(holder);
		cacheManager = manager;
		return manager;
	}
	
	@Override
	protected EmbeddedCacheManager createCacheManager(Properties properties, ServiceRegistry serviceRegistry)
			throws CacheException {
		EmbeddedCacheManager manager = super.createCacheManager(properties, serviceRegistry);
		cacheManager = manager;
		return manager;
	}
}

Yup, a static variable. Tricky, I know, so be careful.

Then we reuse that for spring:

/**
 * A spring cache factory bean that reuses a previously instantiated infinispan embedded cache manager
 * @author bozho
 *
 */
public class SharedInfinispanCacheManagerFactoryBean extends SpringEmbeddedCacheManagerFactoryBean {
        private static final Logger logger = ...;
	@Override
	protected EmbeddedCacheManager createBackingEmbeddedCacheManager() throws IOException {
		EmbeddedCacheManager sharedManager = SharedInfinispanRegionFactory.getSharedCacheManager();
		if (sharedManager == null) {
			logger.warn("No shared EmbeddedCacheManager found. Make sure the hibernate 2nd level "
					+ "cache provider is configured and instantiated.");
			return super.createBackingEmbeddedCacheManager();
		}
		
		return sharedManager;
	}
}

Then we change the hibernate.cache.region.factory_class property in the hibernate configuration to our new custom class, and in our spring configuration file we do:

<bean id="cacheManager" class="com.yourcompany.util.SharedInfinispanCacheManagerFactoryBean" />
<cache:annotation-driven />

The spring cache is used with a method-level @Cacheable annotation that allows us to cache method calls, and we can also access the CacheManager via simple injection.
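
As a quick illustration (a sketch – UserService, UserDao and the "users" cache name are hypothetical; the cache name would map to an Infinispan cache, so it should have a corresponding cache configuration as shown earlier):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    private final UserDao userDao; // hypothetical DAO, stands for any expensive lookup

    public UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    // The result is stored in the "users" cache (backed by the shared Infinispan cache manager);
    // repeated calls with the same id skip the method body until the entry is evicted.
    @Cacheable("users")
    public User getUser(Long id) {
        return userDao.findById(id);
    }
}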

Then the “last” part is to check if it works. Even if your application starts OK and looks to be working fine, you should run your integration or selenium test suite and check the statistics via JMX. You may even have tests that use the MBeans to fetch certain stats data about the caches to make sure they are being used. And/or you can write an integration test that injects the CacheManager and uses the StandardCacheEntryImpl it returns to compare the version property after subsequent operations, to see if the cache is properly updated.

Overall, it shouldn’t take much time to set the whole thing up, and then later even to replace with another implementation if necessary.


Issues With Electronic Machine Readable Travel Documents

February 3, 2016

Most of us have passports, and most of these passports are by now equipped with chips that store some data, including fingerprints. But six months ago I had no idea how that operates. Now that my country is planning to roll out new identity documents, I had to research the matter.

The chip (which is a smartcard) in the passports has a contactless interface. That means RFID, 13.56 MHz (like NFC). Most typical uses of smartcards require PIN entry from the owner. But the point with eMRTD (Electronic Machine Readable Travel Documents) is different – they have to be read by border control officials and they have to allow quickly going through Automatic Border Control gates/terminals. Typing a PIN would allegedly slow down the process, and besides, not everyone will remember their PIN. So the ICAO had to invent some standard and secure way to allow gates to read the data, but at the same time prevent unauthorized access (e.g. someone “sniffing” around with some device).

And they thought they did. A couple of times. First the mechanism was BAC (Basic Access Control). When you open your passport on the photo page and place it in the e-gate, it reads the machine-readable zone (MRZ) with OCR and gets the passport number, birth date and expiry date from there. The combination of those serves as a key that is used to authenticate to the chip in order to read the data. The security issues with that are obvious, but I will leave the details to be explained by this paper.

Then, they figured, they could improve the previously insecure e-passports, and they introduced EAC (Extended Access Control). That includes short-lived certificates on the gates, and the chip inside the passport verifies those certificates (card-verifiable certificates). Only then can the gate read the data. You can imagine that requires a big infrastructure – every issuing country has to support a PKI, countries have to cross-sign their “document verifier certificates”, and all of those should be in a central repository, where the gates pull the certificates from. Additionally, these certificates should be very short-lived in order to reduce the risk of a leaked certificate. Such complexity, of course, asks for trouble. The first version of EAC was susceptible to a number of attacks, so they introduced EACv2, which mostly covers the attacks on v1, except for a few small details: chips must be backward-compatible with BAC (because some gates may not support EAC). Another thing is that since the passport chip has no real clock, it updates the time after a successful validation with a gate. But if a passport is not used for some period of time, expired (and possibly leaked) certificates can be used to get the data from the chip anyway. All of the details and issues of EACv1 and EACv2 are explained in this paper.

Since BAC is broken due to the low entropy, SAC (Supplemental Access Control) was created, using the PACE (v2) protocol. It is a password-authenticated key agreement protocol – roughly Diffie-Hellman plus mutual authentication. The point is to generate a secret with high entropy based on a small password. The password is either a PIN, or a CAN (Card Access Number) printed in the MRZ of the passport. (I think this protocol can be used to secure regular communication with a contactless reader, if used with a PIN.) The algorithm has two implementations: GM (General Mapping) and IM (Integrated Mapping). The latter, however, uses a patented Map2Point algorithm, and if it becomes widely adopted, it is a bomb waiting to explode.

The whole story above is explained in this document. In addition, there is the BioPACE algorithm which includes biometric validation on the terminal (i.e. putting your finger for unlocking the chip), but (fortunately) that is not adopted anywhere (apart from Spain, afaik).

Overall, after many years and many attempts, the ICAO protocols seem to still have doubtful security. Although much improvement has been made, the original idea of allowing a terminal to read data without requiring action and knowledge from the holder necessarily leads to security issues. Questions arise about brute-forcing as well – either an attacker can keep hammering the chip with authentication attempts, or, if there is a lockout mechanism, he can deliberately lock it after several unsuccessful attempts.

And if you think passports have issues, let me mention ID cards. Some countries make their ID cards ICAO-compliant in order to allow citizens to use them instead of passports (in the EU, for example, the ID card is a valid travel document). Leaving aside the question “why would a Schengen citizen even need to go through border control in Europe”, there are some more issues: the rare usage of the cards brings back the EACv2 vulnerability mentioned above. The MRZ is visible without the owner having to open anything on a photo page – this means anyone who gets a glimpse of the ID card knows your CAN and can then authenticate as if they were a terminal. And while passports are carried around only when you travel abroad, ID cards are carried at all times, increasing the risks of personal and biometric data leakage many times over. Possibly these issues are the reason that by 2014 only Germany and Spain had e-gates that support ID cards as eMRTD. Currently there is the ABC4EU project, which is aimed at defining common standards and harmonizing the e-gates infrastructure, so in 5-6 years there may be more e-gates supporting ID cards, and therefore more ID cards conforming to ICAO.

Lukas Grunwald has called all of the above “Security by politics” in his talk at DEF CON last year. He reveals practical issues with the eMRTD, including attacks not only on the chips, but on the infrastructure as well.

Leaking data, including biometric data, to strangers on the metro who happen to have a “listening” device is a huge issue. Stainless steel wallets shielding from radio signals will probably become more common, at least with more technical people. Others may try to microwave their ID cards, like some Germans have done.

It’s not about the automatic control, some say, it’s about the security of the document itself, and by that – the security of everyone. If your fingerprints are signed by your country, surely nobody can create a fake document. First, even when there are checks on the biometrics (photo, iris, fingerprints), they are far from perfect. Also, in order to identify a fake passport, you have to check the fingerprints of everyone. Which they do in the US, but they don’t rely on the ones on the passport – they specifically take your fingerprints when issuing a visa. And reasonably so – in the ICAO system, if the root certificate of any country gets compromised, it can be used to sign fake passports (rogue states aside, are we certain that all countries have proper security around their CA? I’m not). And besides – are fake passports really the threat? Even if passports are ultra-secure (which they aren’t), attackers don’t attack the strongest part of a security system – they attack the weakest part. For example unguarded borders. Arriving by car or bus (where comparing fingerprints is rather impractical). Or, actually, working with people that already have valid passports, like most of the terrorists in recent attacks.

But apparently the “political will” is aimed at ensuring the false sense of security, and at convenience at the airport, allowing for less queues and less human border control officers, while getting all possible data about the citizen. Currently all of that appears to be at the expense of information security, but can it be different? Having an RFID chip in your document is always a risk (banks allow contactless payments up to a given limit, and they accept the risk themselves). But if we eliminate all the data from the passport/ID card, and leave simply a “passport number” to be read, it may be useless to attackers (currently the eMRTD have names, address, birth date, photo, fingerprints).

There is a huge infrastructure already in place, and it operates in batch mode – i.e. rotating certificates at regular intervals. But the current state of technology allows for near-real-time querying – e.g. you go to the gate, present your eMRTD, it reads your passport number and sends a query to the passport database of the issuing country, which returns the required data as a response. If that is at all needed – the country you are entering can simply store the passport numbers that entered, together with the picture of the citizen, and later obtain the required data in batches. If batches suffice, data on the chip may still be present, but encrypted with the issuer’s public key and sent for decryption. This “issuer database” approach has its own implications – if every visit to a foreign country triggers a check in their national database, that may be used to easily trace a citizen’s movements. While national passport databases exist, forming a huge global database is too scary. (Not) logging validation attempts in national databases may be regulated and audited, but that increases the complexity of the whole system. But I think this is the direction things should move in – having only a “key” in the passport, and the data in central, (allegedly) protected databases. Note that e-gates normally do picture verification, so the photo might still have to be stored on the passport. (Note: I discovered this proposal for an online verification protocol after writing this post.)

Technical issues aside, when getting our passports, and more importantly – our ID cards, we must be allowed to make an informed choice – do we want to bear the security risks for the sake of the convenience of not waiting in queues (although queues form on e-gates as well), or we don’t care about automatic border control and we’d rather keep our personal and biometric data outside the RFID chip. For EU ID cards I would even say the default option must be the latter.

And while I’m not immediately concerned about an Orwellian (super)state tracking all your movements through a mandatory RFID document (or even an implant), not addressing these issues may lead to one some day (or has already led to one in less democratic countries that have RFID ID cards), and at the very least – to a lot of fraud. For that reason “security by politics” must be avoided. I just don’t know how. Probably on an EU level?


Microservices Use Cases

January 19, 2016

A few months ago I wrote a piece in defence of monoliths and then gave a talk about it. Overall, one should not jump to microservices, because the overhead and risk are much higher than any professed benefits. But there I left out some legitimate use cases for microservices.

These use cases may not be “typical” microservices, but they mostly conform to the notion of a separate, stand-alone deployment of independent functionality.

The most obvious use cases are those of a CPU- or RAM-intensive part of the application. That normally goes into a separate deployment, offering an interface to the rest of the application.

First, it’s easy to spawn multiple instances of a stateless, CPU-intensive microservice, on demand. They may even be “workers” that process a given spike and then die, including a fork-join setup. And they shouldn’t make the rest of the application get stuck because of their processing requirements – they should be separated.

There are services that consume a lot of RAM (e.g. text analysis tools that include big gazetteers, trained models, natural language processing pipelines) that are impractical to be run every time a developer starts the application he’s working on. They are even problematic to redeploy and restart in a production environment. And if they change rarely, it’s justified to separate them.

What’s common in those above is that they do not have a database. They expose their processing functionality but do not store anything (apart from some caching). So there is no complexity in coordinating database transactions, for example.

Another “partial” use case is having multiple teams working on the same product. That looks applicable to all projects out there – thousands of Facebook developers are working on just Facebook, for example. First, it isn’t really applicable that broadly – many non-billion-dollar, non-billion-user companies actually dedicate one or just a few teams to a project. And even Facebook actually has many projects (mobile, ads, chat, photos, news feed). And those are not “micro” services. They are full-featured products that happen to integrate with the rest in some way. But back to the use case – sometimes microservices may give multiple teams increased flexibility. That very much depends on the domain, though. And it’s not impossible for two teams to work on the same monolith, with due process.

Universally, if you are sure that the network and coordination overhead of microservices will be negligible compared to the amount of work being done and the flexibility, then they are a valid approach. But I believe that’s rare. Martin Fowler talks about complexity vs productivity, so, in theory, if you know in advance how complex your project is going to be, maybe you have a valid microservices use case.

Separating a piece of functionality into a service of its own and communicating with it through web services should not be something that deserves so much attention. But apparently we have to say “no, it’s not for every project” and “yes, the approach is not dumb by itself, there are cases when it’s useful”.


Testing: Appetite Comes With Eating

January 11, 2016

I’ve written a lot about testing. Some tips on integration tests, some how-tos, some general opinions about tests. But I haven’t told my “personal story” about testing.

Why tests are needed should be obvious by now. It’s not all about finding bugs (because then you can use an excuse like “QAs will find them anyway”), it’s about having a codebase that can remain stable with changes. And it’s about writing better code, because testable code is cleaner.

I didn’t always write tests. Well, at least not the right amount. I had read a lot about testing, about the benefits of testing, about test-first / test-driven, about test coverage. And it seemed somewhat distant. The CRUD-like business logic seemed unworthy of testing. A few if-statements here, a few database queries there, what’s to be tested?

There are companies where tests are “desirable”, “optional”, “good, but maybe not now”. There are times when marking a test with @Ignore looks OK. And although that always bites you in the end, you can’t get yourself motivated to get your coverage up.

Yup, I’ve been there. I wrote tests “every now and then”, and knew how to test, but it wasn’t my “nature”. But I’m “clean” now – not only at work, but also in side projects. I think I have a somewhat different mentality now – “how do I test that” and “how do I write that in order to be able to test it”.

I won’t go into the discussion of whether “test-first” is better. I don’t do it – I’ve done it, but I don’t find it that important, provided you have the right mindset towards your code. The fact that you write your tests after the code doesn’t mean the code isn’t written with the tests in mind.

How did that happen? I didn’t have a failed project because of lack of tests, and I didn’t go on a soul-searching trip to find out that I have to write tests to achieve inner peace. I think it’s a combination of several factors.

First, it’s the company/team culture. The team that I’m in now has the right practical approach to tests – it doesn’t have to be 100% coverage, but it has to cover all edge cases – we even have a task in most stories that makes us explicitly think of any possible edge cases. Even if you want to write tests, if nobody around you is writing them, you get demotivated. But when everyone around you is doing it, it becomes a habit.

Then there’s experience. After years and years of reading about the benefits and seeing the problems of not having tests, and seeing that even your mere 25% coverage has given you some freedom and that the tested pieces just look better, one eventually does it. It’s the way of things.

And finally, it’s about what the French express as “appetite comes with eating”. The more you write tests, the more you want to write them.


General Performance Tips

December 28, 2015

Performance is a mystical thing our systems must have. But as with most things in software engineering, there is no clearly defined set of steps that have to be followed in order to have a performant system. It depends on the architecture, on the network, on the algorithms, on the domain problem, on the chosen technologies, on the database, etc.

Apart from applying common-sense driven development, I have “collected” some general tips on how problems with performance can usually be addressed.

But before that I have to make a clarification. A “performance problem” is not only about problems that you realize after you run your performance tests or after you deploy to production. Not all optimization is premature, so most of these “tips” must be applied in advance. Of course, you should always monitor, measure and try to find bottlenecks in a running system, but you should also think ahead.

The first thing is using a cache. Anything that gets accessed many times but doesn’t change that often must be cached. If it’s a database table, the query should be cached. If a heavy method is invoked many times, it can be cached. Static web resources must be cached. On an algorithmic level, memoization is a useful technique. So how to do the caching? It depends. An ORM can provide the relevant cache infrastructure for database queries, spring has method-level cache support, web frameworks have resource caches. Some distributed cache (memcached/redis/ElastiCache) can be set up, but that may be too much of an effort. Sometimes it’s better and easier to have a local cache. Guava has a good cache implementation, for example.
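
Such a local Guava cache might look like this (a sketch – Product and ProductDao are hypothetical placeholders for whatever expensive lookup you are caching):

import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class ProductCache {

    private final ProductDao productDao; // hypothetical DAO for the expensive lookup

    private final LoadingCache<Long, Product> cache = CacheBuilder.newBuilder()
            .maximumSize(10_000)                       // bound the heap usage
            .expireAfterWrite(10, TimeUnit.MINUTES)    // simple time-based invalidation
            .build(new CacheLoader<Long, Product>() {
                @Override
                public Product load(Long id) {
                    return productDao.findById(id);    // called only on a cache miss
                }
            });

    public ProductCache(ProductDao productDao) {
        this.productDao = productDao;
    }

    public Product getProduct(Long id) {
        return cache.getUnchecked(id); // loads on miss, otherwise returns the cached value
    }
}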

Of course, as Phil Karlton once said, “There are only two hard things in Computer Science: cache invalidation and naming things”. So a cache comes with a “mental” cost: how and when should the cache be invalidated? So don’t just cache everything – figure out where there’s benefit. In many cases that is quite obvious.

The second tip is to use queues (and that does not contradict my claim that you probably don’t need an MQ). It can be an in-memory queue, or it can be a full-blown MQ system. In any case, if you have a heavy operation that has to be performed, you can just queue all the requests for that operation. Users will have to wait, but sometimes that doesn’t matter. For example, Twitter can generate your entire twitter archive. That takes a while, as it has to go through a lot of records and aggregate them. My guess is that they use a queue for that – all requests for archive generation are queued. When your turn comes and your request is processed, you get an email. Queuing should not be overused, though. Simply having an expensive operation doesn’t mean a queue solves it.
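
A minimal sketch of the in-memory variant (ArchiveService, generateArchive and sendDownloadLink are hypothetical names, loosely following the archive example above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ArchiveService {

    // A single worker thread: all archive requests are queued and processed one at a time,
    // so the expensive generation never competes with user-facing requests for resources.
    private final ExecutorService queue = Executors.newSingleThreadExecutor();

    public void requestArchive(long userId) {
        queue.submit(() -> {
            byte[] archive = generateArchive(userId); // hypothetical expensive operation
            sendDownloadLink(userId, archive);        // hypothetical notification, e.g. by email
        });
    }

    private byte[] generateArchive(long userId) { /* ... */ return new byte[0]; }

    private void sendDownloadLink(long userId, byte[] archive) { /* ... */ }
}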

The third tip is background calculation. Some data you have to show to your users doesn’t have to be generated in real time. So you can have a background task that does its job periodically, instead of having the user wait for the result in a very long request. For example, music generation in my Computoser takes a lot of time (due to the mp3 generation), so I can’t just generate tracks upon request. But there’s a background process that generates tracks and serves a newly generated track to each new visitor.
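
A minimal sketch of such a background task with Spring’s @Scheduled (TrackPool and TrackGenerator are hypothetical, loosely mirroring the Computoser example; scheduling also has to be enabled with @EnableScheduling on a configuration class):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class TrackPoolRefiller {

    private final TrackPool trackPool;       // hypothetical pool of pre-generated tracks
    private final TrackGenerator generator;  // hypothetical, wraps the slow mp3 generation

    public TrackPoolRefiller(TrackPool trackPool, TrackGenerator generator) {
        this.trackPool = trackPool;
        this.generator = generator;
    }

    // Runs every two minutes in the background; visitors are served an already
    // generated track instead of waiting for generation inside their own request.
    @Scheduled(fixedDelay = 2 * 60 * 1000)
    public void refill() {
        if (trackPool.size() < 10) {
            trackPool.add(generator.generateTrack());
        }
    }
}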

The previous two tips were more about making heavy operations not look slow, rather than actually optimizing them. But they are also about not using too much server resources for achieving the required task.

Database optimization is next. Quite obvious, you may say, but actually – no. Especially when using an ORM, many people have no idea what happens underneath (hint: it’s not the ORM’s fault). I’ve seen a production system with literally no secondary indexes, for example. It was fine until there were millions of records, but it gradually became unusable (why it wasn’t fixed is a different story). So, yes, create indexes. Use EXPLAIN to see how your queries are executed, and see if there are any unnecessary full table scans.

Another tip that I’ve already written about is using the right formats for internal communication. Schemes like Thrift, Avro, protobuf, messagepack, etc. exist for exactly this reason. If your systems/services have to communicate internally, you don’t want XML, if there’s another format that takes 20% of the space and uses 30% of the CPU to serialize/deserialize. These things accumulate at scale.

The final tip is “Don’t do stupid things”, and it’s harder than it sounds. It is a catch-all tip, but sometimes when you look at your code from the side, you want to slap yourself. Have you just written an O(n²) array search? Have you just called an external service a thousand times where you could’ve cached the result the first time? Have you forgotten to add an index? Such obviously stupid things lurk in every project. So in order to minimize the stupid things being done, do code reviews. Code reviews are not premature optimization either.

Will applying these tips mean your system performs well? Not at all. But it’s a good start.
