Issues With Electronic Machine Readable Travel Documents

February 3, 2016

Most of us have passports, and most of these passports are by now equipped with chips that store some data, including fingerprints. But until six months ago I had no idea how any of that works. Now that my country is planning to roll out new identity documents, I had to research the matter.

The chip (which is a smartcard) in the passport has a contactless interface. That means RFID at 13.56 MHz (like NFC). Most typical uses of smartcards require the owner to enter a PIN. But the point of eMRTDs (Electronic Machine Readable Travel Documents) is different – they have to be read by border control officials and they have to allow quickly going through Automatic Border Control gates/terminals. Typing a PIN would allegedly slow the process down, and besides, not everyone would remember their PIN. So the ICAO had to invent a standard and secure way to let gates read the data, while at the same time preventing unauthorized access (e.g. someone “sniffing” around with some device).

And they thought they did. A couple of times. The first mechanism was BAC (Basic Access Control). When you open your passport on the photo page and place it on the e-gate, the gate reads the machine-readable zone (MRZ) with OCR and gets the passport number, birth date and expiry date from there. That combination serves as a key that is used to authenticate to the chip in order to read the data. The security issues with that are obvious, but I will leave the details to be explained by this paper.
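
To see just how little entropy there is, here is a minimal Java sketch of the BAC key-seed derivation as described in ICAO Doc 9303 (the method names are mine, the document number is assumed to be already padded to nine characters as in the MRZ, and the subsequent 3DES key derivation is omitted):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class BacKeySeed {

    // all the key material comes from three fields printed on the document,
    // each followed by its MRZ check digit
    public static byte[] deriveSeed(String documentNumber, String dateOfBirth,
            String dateOfExpiry) throws Exception {
        String mrzInformation = documentNumber + checkDigit(documentNumber)
                + dateOfBirth + checkDigit(dateOfBirth)     // dates are YYMMDD
                + dateOfExpiry + checkDigit(dateOfExpiry);
        byte[] hash = MessageDigest.getInstance("SHA-1")
                .digest(mrzInformation.getBytes(StandardCharsets.US_ASCII));
        // the first 16 bytes of the hash seed the 3DES encryption and MAC keys
        return Arrays.copyOf(hash, 16);
    }

    // ICAO 9303 check digit: weights 7, 3, 1 repeated over the character values
    private static char checkDigit(String field) {
        int[] weights = {7, 3, 1};
        int sum = 0;
        for (int i = 0; i < field.length(); i++) {
            char c = field.charAt(i);
            int value = Character.isDigit(c) ? c - '0'
                    : Character.isLetter(c) ? Character.toUpperCase(c) - 'A' + 10
                    : 0; // the '<' filler counts as zero
            sum += value * weights[i % 3];
        }
        return (char) ('0' + sum % 10);
    }
}

A passport number, a birth date and an expiry date simply don’t add up to much of a key, which is the core problem with BAC.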

Then they figured they could improve the previously insecure e-passports, and they introduced EAC (Extended Access Control). That involves short-lived certificates on the gates, which the chip inside the passport verifies (card-verifiable certificates). Only then can the gate read the data. You can imagine that requires a big infrastructure – every issuing country has to run a PKI, countries have to cross-sign their “document verifier certificates”, and all of those have to sit in a central repository from which the gates pull them. Additionally, these certificates should be very short-lived in order to reduce the risk of a leaked certificate. Such complexity, of course, asks for trouble. The first version of EAC was susceptible to a number of attacks, so EACv2 was introduced. It covers most of the attacks on v1, except for a few details: chips must remain backward-compatible with BAC (because some gates may not support EAC). Another thing is that, since the passport chip has no real clock, it updates its notion of time only after a successful validation with a gate. So if a passport is not used for a period of time, expired (and possibly leaked) certificates can still be used to get the data from the chip. All of the details and issues of EACv1 and EACv2 are explained in this paper.

Since BAC is broken due to its low entropy, SAC (Supplemental Access Control) was created, using the PACE (v2) protocol. It is a password-authenticated key agreement protocol – roughly Diffie-Hellman plus mutual authentication. The point is to generate a high-entropy shared secret based on a small password. The password is either a PIN, or a CAN (Card Access Number) printed on the document. (I think this protocol could also be used to secure regular communication with a contactless reader, if used with a PIN.) The protocol has two mapping variants: GM (Generic Mapping) and IM (Integrated Mapping). The latter, however, uses a patented Map2Point algorithm, and if it becomes widely adopted, it is a bomb waiting to explode.

The whole story above is explained in this document. In addition, there is the BioPACE variant, which includes biometric validation at the terminal (i.e. putting your finger on a reader to unlock the chip), but (fortunately) that has not been adopted anywhere (apart from Spain, afaik).

Overall, after many years and many attempts, the ICAO protocols still seem to have doubtful security. Although much has improved, the original idea – allowing a terminal to read the data without requiring any action or knowledge from the holder – necessarily leads to security issues. Questions arise around brute-forcing as well: if the chip doesn’t limit attempts, an attacker can hammer it with requests; if it locks itself after several unsuccessful attempts, an attacker can deliberately lock it and deny its owner the use of it.

And if you think passports have issues, let me mention ID cards. Some countries make their ID cards ICAO-compliant in order to allow citizens to use them instead of passports (in the EU, for example, the ID card is a valid travel document). Leaving aside the question “why would a Schengen citizen even need to go through border control in Europe”, there are more issues: the rare usage of the cards brings back the EACv2 expired-certificate problem described above. The MRZ is visible without the owner having to open anything to a photo page – meaning anyone who gets a glimpse of the ID card knows your CAN and can then authenticate as if they were a terminal. And while passports are carried around only when you travel abroad, ID cards are carried at all times, multiplying the risk of personal and biometric data leakage. Possibly these issues are the reason that by 2014 only Germany and Spain had e-gates that support ID cards as eMRTDs. Currently there is the ABC4EU project, aimed at defining common standards and harmonizing the e-gate infrastructure, so in 5-6 years there may be more e-gates supporting ID cards, and therefore more ID cards conforming to ICAO.

Lukas Grunwald has called all of the above “security by politics” in his talk at DEF CON last year, where he reveals practical issues with eMRTDs, including attacks not only on the chips but on the infrastructure as well.

Leaking data, including biometric data, to strangers on the metro who happen to have a “listening” device is a huge issue. Stainless steel wallets shielding from radio signals will probably become more common, at least with more technical people. Others may try to microwave their ID cards, like some Germans have done.

But apparently the “political will” is aimed at ensuring convenience at the airport – fewer queues and fewer human border control officers – while collecting all possible data about the citizen. Currently all of that appears to come at the expense of information security, but can it be different? Having an RFID chip in your document is always a risk (banks allow contactless payments up to a given limit, and they accept that risk themselves). But if we remove all the data from the passport/ID card and leave just a “passport number” to be read, it may be useless to attackers (currently eMRTDs hold names, address, birth date, photo, fingerprints).

There is a huge infrastructure already in place, and it operates in batch mode – i.e. rotating certificates at regular intervals. But the current state of technology allows near-real-time querying – e.g. you go to the gate, present your eMRTD, the gate reads your passport number and sends a query to the passport database of the issuing country, which returns the required data in its response. If that is needed at all – the country you are entering could simply store the passport numbers that entered, together with a picture of the traveller, and obtain the required data later, in batches. If batches suffice, the data on the chip may still be present, but encrypted with the issuer’s public key, so that it has to be sent to the issuer for decryption. This “issuer database” approach has its own implications – if every visit to a foreign country triggers a lookup in a national database, that could be used to easily trace a citizen’s movements. National passport databases already exist, but forming a huge global one is too scary. (Not) logging validation attempts in national databases could be regulated and audited, but that increases the complexity of the whole system. Still, I think this is the direction things should move in – having only a “key” in the passport, and the data in central, (allegedly) protected databases. Note that e-gates normally do picture verification, so the photo might still have to be stored on the passport.

Technical issues aside, when getting our passports – and more importantly, our ID cards – we must be allowed to make an informed choice: do we want to bear the security risks for the sake of the convenience of not waiting in queues (although queues form at e-gates as well), or do we not care about automatic border control and would rather keep our personal and biometric data off the RFID chip. For EU ID cards I would even say the latter must be the default option.

And while I’m not immediately concerned about an Orwellian (super)state tracking all your movements through a mandatory RFID document (or even an implant), not addressing these issues may lead to one some day (or already has, in less democratic countries that have RFID ID cards), and at the very least to a lot of fraud. For that reason “security by politics” must be avoided. I just don’t know how. Probably at an EU level?


Microservices Use Cases

January 19, 2016

A few months ago I wrote a piece in defence of monoliths and then gave a talk about it. Overall, one should not jump to microservices, because the overhead and risk are much higher than any professed benefits. But there I left out some legitimate use cases for microservices.

These use cases may not be “typical” microservices, but they mostly conform to the notion of a separate, stand-alone deployment of independent functionality.

The most obvious use cases are CPU-intensive or RAM-intensive parts of the application. Those normally go into a separate deployment, offering an interface to the rest of the application.

First, it’s easy to spawn multiple instances of a stateless, CPU-intensive microservice on demand. They may even be “workers” that process a given spike and then die, possibly in a fork-join setup. And they shouldn’t make the rest of the application grind to a halt because of their processing requirements – so they should be separated.

There are services that consume a lot of RAM (e.g. text analysis tools that include big gazetteers, trained models, natural language processing pipelines) and are impractical to run every time a developer starts the application he’s working on. They are even problematic to redeploy and restart in a production environment. And if they change rarely, it’s justified to separate them.

What the cases above have in common is that these services do not have a database. They expose their processing functionality but do not store anything (apart from some caching). So there is no complexity in coordinating database transactions, for example.

Another “partial” use case is having multiple teams working on the same product. At first glance that looks applicable to every project out there – thousands of Facebook developers are working on just Facebook, for example. But first, it isn’t: many non-billion-dollar, non-billion-user companies actually dedicate one team, or a small number of teams, to a project. And even Facebook actually has many projects (mobile, ads, chat, photos, news feed). And those are not “micro” services – they are full-featured products that happen to integrate with the rest in some way. But back to the use case – sometimes microservices may give multiple teams increased flexibility. That very much depends on the domain, though. And it’s not impossible for two teams to work on the same monolith, with due process.

Universally, if you are sure that the network and coordination overhead of microservices will be negligible compared to the amount of work being done and the flexibility gained, then they are a valid approach. But I believe that’s rare. Martin Fowler talks about complexity vs productivity, so, in theory, if you know in advance how complex your project is going to be, maybe you have a valid microservices use case.

Separating a piece of functionality into a service of its own and communicating with it through web services should not be something that deserves this much attention. But apparently we have to say both “no, it’s not for every project” and “yes, the approach is not dumb in itself – there are cases when it’s useful”.


Testing: Appetite Comes With Eating

January 11, 2016

I’ve written a lot about testing. Some tips on integration tests, some how-tos, some general opinions about tests. But I haven’t told my “personal story” about testing.

Why tests are needed should be obvious by now. It’s not all about finding bugs (because then you could use an excuse like “the QAs will find them anyway”), it’s about having a codebase that can remain stable through changes. And it’s about writing better code, because testable code is cleaner.

I didn’t always write tests. Well, at least not the right amount. I had read a lot about testing, about the benefits of testing, about test-first / test-driven, about test coverage. And it seemed somewhat distant. The CRUD-like business logic seemed unworthy of testing. A few if-statements here, a few database queries there – what’s there to test?

There are companies where tests are “desirable”, “optional”, “good, but maybe not now”. There are times when marking a test with @Ignore looks OK. And although that always bites you in the end, you can’t get yourself motivated to get your coverage up.

Yup, I’ve been there. I wrote tests “every now and then”, and I knew how to write them, but it wasn’t my “nature”. But I’m “clean” now – not only at work, but also in side projects. I think I have a somewhat different mentality now – “how do I test that” and “how do I write that in order to be able to test it”.

I won’t go into the discussion of whether “test-first” is better. I don’t do it – I’ve done it, but I don’t find it that important, provided you have the right mindset towards your code. The fact that you write your tests after the code doesn’t mean the code isn’t written with the tests in mind.

How did that happen? I didn’t have a failed project because of lack of tests, and I didn’t go on a soul-searching trip to find out that I have to write tests to achieve inner peace. I think it’s a combination of several factors.

First, it’s the company/team culture. The team I’m in now has the right practical approach to tests – it doesn’t have to be 100% coverage, but it has to cover all edge cases – we even have a task in most stories that makes us explicitly think of any possible edge cases. Even if you want to write tests, if nobody around you is writing them, you get demotivated. But when everyone around you is doing it, it becomes a habit.

Then there’s experience. After years and years of reading about the benefits and seeing the problems of not having tests, and seeing that even your mere 25% coverage has given you some freedom, and that the tested pieces just look better, one eventually just does it. It’s the way of things.

And finally, it’s about what the French express as “appetite comes with eating”. The more you write tests, the more you want to write them.


General Performance Tips

December 28, 2015

Performance is a mystical thing our systems must have. But as with most things in software engineering, there is no clearly defined set of steps to follow in order to have a performant system. It depends on the architecture, on the network, on the algorithms, on the domain problem, on the chosen technologies, on the database, etc.

Apart from applying common-sense driven development, I have “collected” some general tips on how problems with performance can usually be addressed.

But before that I have to make a clarification: a “performance problem” is not only something you realize after you run your performance tests or after you deploy to production. Not all optimization is premature, so most of these “tips” should be applied in advance. Of course, you should always monitor, measure and try to find bottlenecks in a running system, but you should also think ahead.

The first thing is using a cache. Anything that gets accessed many times but doesn’t change that often must be cached. If it’s a database query, its results can be cached. If a heavy method is invoked many times with the same input, its result can be cached. Static web resources must be cached. On an algorithmic level, memoization is a useful technique. So how do you do the caching? It depends. An ORM can provide the relevant cache infrastructure for database queries, Spring has method-level cache support, web frameworks have resource caches. A distributed cache (memcached/Redis/ElastiCache) can be set up, but that may be too much effort. Sometimes it’s better and easier to have a local cache. Guava has a good cache implementation, for example.
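
For example, a local Guava cache takes only a few lines to set up (UserProfile and userProfileDao below are hypothetical placeholders for whatever is expensive to load):

import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// keep at most 10,000 entries, each expiring 10 minutes after being written
LoadingCache<Long, UserProfile> profileCache = CacheBuilder.newBuilder()
        .maximumSize(10_000)
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .build(new CacheLoader<Long, UserProfile>() {
            @Override
            public UserProfile load(Long userId) {
                // invoked only on a cache miss
                return userProfileDao.findById(userId); // hypothetical DAO call
            }
        });

// callers then simply do profileCache.getUnchecked(userId)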

Of course, as Phil Karlton once said, “There are only two hard things in Computer Science: cache invalidation and naming things.” So caching comes with a “mental” cost: how and when should the cache be invalidated? Don’t just cache everything, then – figure out where there’s benefit. In many cases that is quite obvious.

The second tip is to use queues (and that does not contradict my claim that you probably don’t need an MQ). It can be an in-memory queue, or it can be a full-blown MQ system. In any case, if you have a heavy operation to perform, you can queue all the requests for that operation. Users will have to wait, but sometimes that doesn’t matter. For example, Twitter can generate your entire twitter archive. That takes a while, as it has to go through a lot of records and aggregate them. My guess is that they use a queue for that – all requests for archive generation are queued, and when your turn comes and your request is processed, you get an email. Queuing should not be overused, though. Simply having an expensive operation doesn’t mean a queue solves it.
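
As an illustration, the in-memory variant can be as simple as a BlockingQueue with a single consumer thread (the archive-generation details below are hypothetical and simplified):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ArchiveQueue {

    // requests for the heavy operation are only enqueued here; the caller returns immediately
    private final BlockingQueue<Long> requests = new LinkedBlockingQueue<>();

    public void requestArchive(long userId) {
        requests.add(userId);
        // respond with "we'll email you when your archive is ready"
    }

    // a single background consumer works through the queue at its own pace
    public void startConsumer() {
        Thread consumer = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    long userId = requests.take();        // blocks until something is queued
                    generateArchiveAndSendEmail(userId);  // the actual heavy work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    private void generateArchiveAndSendEmail(long userId) {
        // hypothetical: aggregate the user's records, build the archive, send the email
    }
}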

The third tip is background calculation. Some of the data you have to show to your users doesn’t have to be generated in real time, so you can have a background task that does the job periodically, instead of having the user wait for the result in a very long request. For example, music generation in my Computoser takes a lot of time (due to the mp3 generation), so I can’t just generate tracks upon request. But there’s a background process that generates tracks and serves a newly generated track to each new visitor.
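
With Spring, for instance, that boils down to a scheduled method (assuming scheduling is enabled with @EnableScheduling; the track pool and the generation method here are hypothetical):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class TrackPreGenerator {

    // a small pool of ready-to-serve tracks, topped up in the background
    private final Queue<Track> trackPool = new ConcurrentLinkedQueue<>();

    // runs every five minutes, so no visitor ever waits for the slow generation step
    @Scheduled(fixedDelay = 5 * 60 * 1000)
    public void topUpPool() {
        while (trackPool.size() < 50) {
            trackPool.add(generateNewTrack());
        }
    }

    // called from the controller serving new visitors
    public Track nextTrack() {
        return trackPool.poll();
    }

    private Track generateNewTrack() {
        // hypothetical: the expensive generation work goes here
        return new Track();
    }

    // hypothetical placeholder type
    public static class Track {
    }
}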

The previous two tips were more about making heavy operations not look slow, rather than actually optimizing them. But they are also about not using too much server resources for achieving the required task.

Database optimization is next. Quite obvious, you may say, but actually – no. Especially when using an ORM, many people have no idea what happens underneath (hint: it’s not the ORM’s fault). I’ve seen a production system with literally no secondary indexes, for example. It was fine until there were millions of records, but it gradually became unusable (why it wasn’t fixed is a different story). So, yes, create indexes. Use EXPLAIN to see how your queries are executed and check for unnecessary full table scans.

Another tip that I’ve already written about is using the right formats for internal communication. Formats like Thrift, Avro, Protocol Buffers, MessagePack, etc. exist for exactly this reason. If your systems/services have to communicate internally, you don’t want XML if there’s another format that takes 20% of the space and 30% of the CPU to serialize/deserialize. These things add up at scale.
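
As a rough sketch of what that looks like with MessagePack (assuming the msgpack-java core library is on the classpath; the fields are made up), writing and reading a couple of values is straightforward and the result is a fraction of the equivalent XML:

import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import org.msgpack.core.MessageUnpacker;

// serialize on the sending service
MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
packer.packLong(42L);          // e.g. a track id
packer.packString("A minor");  // e.g. a scale name
packer.close();
byte[] bytes = packer.toByteArray();

// deserialize on the receiving service, reading the fields in the same order
MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(bytes);
long trackId = unpacker.unpackLong();
String scale = unpacker.unpackString();
unpacker.close();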

The final tip is “don’t do stupid things”, and it’s harder than it sounds. It is a catch-all tip, but sometimes when you step back and look at your code, you want to slap yourself. Have you just written an O(n²) array search? Have you just called an external service a thousand times where you could have cached the result the first time? Have you forgotten to add an index? Such obviously stupid things lurk in every project. So, in order to minimize the stupid things being done, do code reviews. Code reviews are not premature optimization either.

Will applying these tips mean your system performs well? Not at all. But it’s a good start.


TLS Client Authentication

December 15, 2015

I decided to do a prototype for an electronic identification scheme, so I investigated how to do TLS client authentication with a Java/Spring server-side (you can read on even if you’re not a Java developer – most of the post is java-agnostic).

Why TLS client authentication? Because that’s the most standard way to authenticate a user who owns a certificate (on a smartcard, for example). Of course, smartcard certificates are not the only application – organizations may issue internal certificates to users, which they store on their machines. The point is to have an authentication mechanism that is more secure than a simple username/password pair. There is a usability problem, especially with smartcards, but that’s beyond the scope of this post.

So, with TLS clientAuth, in addition to the server identity being verified by the client (via the server certificate), the client’s identity is also verified by the server. This means the client has a certificate issued by an authority that the server explicitly trusts. Roughly speaking, the client has to digitally sign a challenge in order to prove that it owns the private key corresponding to the certificate it presents. (This process is also known as “mutual authentication”.)

There are two ways to approach this. The first, and most intuitive, is to configure Tomcat (or your servlet container) directly. The Spring Security X.509 authentication page gives the Tomcat configuration at the bottom. The “keystore” is the store that holds the server certificate (plus its private key), and the “truststore” is the store that holds the root certificate of the authority used to sign the client certificates.

However, that configuration is applicable only if you have a single servlet container instance exposed to your users. In production, though, you’ll most likely have a number of instances/nodes running your application behind a load balancer, and TLS is usually terminated at the load balancer, which then forwards the decrypted requests to the servlet container over a plain HTTP connection. In that case, your options are either to not terminate TLS at the load balancer, which is most likely not a good idea, or to somehow forward the client certificate from the load balancer to your node.

I’ll use nginx as an example. Generating the keypairs, certificates, certificate signing requests, signed certificates and keystores is worth a separate post; I’ve outlined what’s needed here. You need openssl, keytool/Portecle and a bunch of commands. For production, of course, it’s even more complicated, because for the server certificate you’d need to send a CSR to a CA. Having done all that, your nginx configuration should contain something like:

server {
   listen 443 ssl;
   server_name yourdomain.com;

   ssl_certificate server.cer;
   # that's the private key
   ssl_certificate_key server.key;
   # that holds the certificate of the CA that signed the client certificates that you trust. =trustStore in tomcat
   ssl_client_certificate ca.pem;
   # this indicates whether client authentication is required, or optional (clientAuth="true" vs "want" in tomcat)
   ssl_verify_client on;

   location / {
      # proxy_pass configuration goes here, including X-Forwarded-For headers. Note: take extra care not to forward forged X-Client-Certificate headers
      proxy_set_header X-Client-Certificate $ssl_client_cert;
   }
}

That way the client certificate will be forwarded as a header (as advised here). This looks like a hack, and it probably is, because the client certificate is not exactly a small string. But that’s the only way I can think of. Here is how to do something similar with Apache.

There is one small issue with this, however (and it’s the same for the Tomcat-only solution) – if you enable client authentication for your entire domain, you can’t have fully unprotected pages. Even if authentication is optional (“want”), the browser dialog (from which the user selects a certificate) will still be triggered, no matter which page the user opens first. The good thing is that a user without a certificate will still be able to browse pages that are not explicitly protected in code. But for a person who has a certificate, opening the home page would pop up the dialog, even though he might not want to authenticate. There is something that can be done to handle that.

I’ve actually seen this done “per page” with Perl, but I’m not sure it can be done with a Java setup. Well, it can, if you don’t use a servlet container and handle the TLS handshakes yourself, but that’s not desirable.

Normally, you need the browser authentication dialog only for a single URL – “/login”, or, as in my case with my fork of the OpenID Connect implementation MitreID, the “/authenticate” endpoint (the user gets redirected to the identity provider’s /authenticate URL, where normally he’d have to enter a username/password, but in this case he just selects the proper certificate). What can be done is to serve that particular endpoint from a subdomain. That would mean having another “server” section in the nginx configuration for the subdomain, with ssl_verify_client on, while the regular domain remains without any client certificate verification. That way, only requests to the subdomain will trigger client authentication.

Now, how do we do the actual authentication? The OpenID Connect implementation mentioned above uses Spring Security, but it can be anything. My implementation supports both cases described above (Tomcat only, and nginx+Tomcat). That makes the application load-balancer-aware, but you can safely choose one or the other approach and drop the other half of the code.

For the single-Tomcat approach, the X509Certificate is obtained simply with these lines:

    // the attribute holds the client certificate chain (may be null if none was presented)
    X509Certificate[] certs = (X509Certificate[]) request
        .getAttribute("javax.servlet.request.X509Certificate");
    // check that it's not null/empty and take the first element – the user certificate

For the nginx-in-front approach, it’s a bit more complicated. We have to get the header, transform it back into proper PEM form and then parse it. (Note that I’m not using the Spring Security X.509 filter, because it supports only the single-Tomcat approach.)

String certificateHeader =
    request.getHeader("X-Client-Certificate");
if (certificateHeader == null) {
    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    return;
}
// the load balancer (e.g. nginx) forwards the certificate
// in a header, replacing the new lines with whitespace
// (2 or more). Also replace tabs, which nginx
// may sometimes send instead of whitespace
String certificateContent = certificateHeader
     .replaceAll("\\s{2,}", System.lineSeparator())
     .replaceAll("\\t+", System.lineSeparator());
// PEM content is plain ASCII, so ISO-8859-1 is a safe charset here
CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509");
X509Certificate userCertificate = (X509Certificate) certificateFactory
    .generateCertificate(new ByteArrayInputStream(
        certificateContent.getBytes(StandardCharsets.ISO_8859_1)));

The “hackiness” is now obvious – it comes from the way nginx sends the certificate: PEM-encoded, but on one line. Fortunately, the original lines are separated by some sort of whitespace (one time it was spaces, another time tabs, on a Windows machine), so we can restore the original PEM format (even without necessarily knowing that a PEM line is 64 characters). It may be that other versions of nginx, or other servers, don’t insert any whitespace at all, in which case the splitting into 64-character lines has to be done manually. Then we use an X.509 certificate factory to create a certificate object.

That’s basically it. Then we can use this clever “trick” to extract the CN (Common name), or any other uniquely identifying field, from the certificate, and use it to load the corresponding user record from our database.
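
I won’t reproduce the linked trick here, but one way to do it with standard JDK classes looks roughly like this:

import java.security.cert.X509Certificate;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

// parses the subject DN (e.g. "CN=John Doe,O=Example") and returns the CN, or null
public static String extractCommonName(X509Certificate certificate)
        throws InvalidNameException {
    LdapName subject = new LdapName(certificate.getSubjectX500Principal().getName());
    for (Rdn rdn : subject.getRdns()) {
        if (rdn.getType().equalsIgnoreCase("CN")) {
            return rdn.getValue().toString();
        }
    }
    return null;
}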

That’s it – or at least what I got from my proof of concept. It’s a niche use case, and smartcard-to-computer communication is a big usability issue, but for national secure e-id schemes, for e-banking and for internal applications it’s probably not a bad idea.


Electronic Identification

December 1, 2015

At the conference where I spoke about e-voting, I had another talk, focused on e-identification (which is a mandatory step for e-voting).

It is in a Bulgarian and EU context, given the new EU regulation that aims at cross-border e-identification. Nearly two years ago I ran a campaign for that, not knowing the European Parliament was already discussing the matter.

In short, e-identification is the means to prove your real identity online (to both the public and the private sector).

That sounds very convenient. But I know many people are concerned about privacy, and they should be. Not having a national ID card and not having an e-identification scheme is not the solution, though. The US and the UK don’t have ID cards or even a citizen database (which most ex-socialist countries do have), and yet US and UK citizens are among those under the highest levels of surveillance.

On the other hand, the practical advantages of having a way to prove your identity online, especially when working with the public sector (but not only), are not to be ignored. Therefore, here are my slides:

You will notice that privacy is addressed in a very concrete way – it is not that the government doesn’t have access to one’s data: it already has it, through all of its databases that store records about properties and cars owned, current address, driver’s licenses, etc. Privacy is addressed by giving control to the citizen. He can see (and be notified) each and every time data about him is accessed in a given database (register). The citizen also has control over (including the ability to delete) the data about his e-id usage. And if one doesn’t want to have an e-id, he can declare that and the chip will stay empty.

How is it guaranteed that this happens? Through our proposed law that mandates that all government software is open source, in a public repo.

Of course, that doesn’t guarantee that they aren’t running a special version that gives the NSA counterparts undetectable access to one’s data. But that can happen regardless of the identification process or the connectivity between databases. And technically competent people know that simply having a chip in your card doesn’t let the government track you – it can’t “phone home”, it can’t connect to a cell tower, etc. If it is contactless, supports a PIN-less readable section and the range is big enough, someone with the right certificates on a reader could read the e-id from a distance. But then what – he’ll end up with a meaningless UUID.

By all means, we should demand that the government not abuse the information it has about us, and we should not allow that information to leak uncontrollably to the private sector. And we must think about the ways the system could be abused. That is what our proposal is about.

The technical details – how a smartcard will be configured, whether it will be contact, dual-interface (or contactless only, like in Germany), and how fraud will be detected and prevented – are a matter of a technical discussion we have already started.

I believe we can have security, privacy and comfort (usability) at the same time. And for that we don’t need to “just trust the government/company X”. We should trust the technology, though.


A Problem With Convention-Over-Configuration

November 15, 2015

Convention-over-configuration is a convenient thing. Instead of writing tons of configuration in XML/YAML/JSON/whatever, you simply know that something will have a given default value. For example, a RESTful endpoint URL may default to /class-name/method-name, or a join table may be named mainEntity_joinField. A “view” can be populated by default with the input parameters of the controller method, or with other values.

And this is all very nice – we don’t want to annotate each field of a Java bean, and we don’t want to configure our Maven directory structure explicitly. But there is a “darker” side to convention-over-configuration, and it shows whenever knowledge of the default behaviour is needed in order to understand the code.

Imagine you open a project for the first time. I had to do that recently – as a government adviser I sometimes have to do a quick fix or a quick investigation of some piece of software that would otherwise require a tender, a contract and at least four months to handle. I agree it’s a rare use case, but bear with me.

If, for example, a web framework automatically includes /views/header.ext into your views, and you try to find out “where the hell is this menu item coming from”, you may have a hard time. If you try to figure out which controller handles the /foo/bar/baz URL, and you find no configuration or mapping file, nor any part of the URL via the search functionality, you’re lost.

Convention-over-configuration can roughly be split into three groups: straightforward removal of boilerplate code, specific configuration logic, and stuff that doesn’t matter when investigating the project. But there is no obvious line between them. The fields of a Java bean can obviously be referred to by name in a JSP, and Spring beans are automatically named using the uncapitalized class name. It doesn’t matter whether Maven has the Java classes in src/main/java or in src/java – you’ll click through the folders anyway. But if there is specific logic mapping URLs to controller methods, then you have to read up on the framework being used. In rare cases like mine you may not even know which framework is being used, so you have to find that out first.
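
To take the most harmless example from above – the Spring bean naming convention is tiny, but you still need to know it in order to connect an implicit bean name (in a log, an XML file or a @Qualifier) back to a class (class names here are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

// by convention this bean is named "invoiceGenerator" (uncapitalized class name);
// nothing in the codebase states that explicitly
@Component
public class InvoiceGenerator {
}

// (in another file) the implicit name is all you have to go on
@Component
public class BillingService {

    @Autowired
    public BillingService(@Qualifier("invoiceGenerator") InvoiceGenerator generator) {
        // ...
    }
}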

That’s a problem, in a sense – in order to understand the program flow, you need to know the framework details.

I know this is rarely a big problem – normally you join a team which already knows the framework, and if you find yourself wondering “how is that configured”, you can always ask someone. But as a general piece of advice – try not to use convention-over-configuration with complicated, specific logic. It may save a few keystrokes, but typing is not what takes time in software development. And any complicated convention-over-configuration logic makes the project harder to read and navigate.


E-voting [presentation]

November 13, 2015

Last week I gave a talk at the OpenFest conference about (remote) e-voting (or internet voting/i-voting). The talk was not in English, but here are my translated slides:

I’ve addressed some of the topics from the presentation in a previous post.

Overall, it’s a hard problem with a lot of issues to be addressed, and it must be treated seriously. However, we must do it sooner or later, as it will allow for a more dynamic and direct democracy.


Setting Up CloudFront With S3

November 1, 2015

Yesterday I decided to set up CloudFront for Computoser. I store all the generated tracks in AWS S3, and every time a track was played or downloaded, a request was made to S3. Not that the traffic is that much, but it still sounded like a good idea to use a CDN (CloudFront) – it would save a little bit of money (not that the current bill is anything big) and it would make downloads faster across the globe. To be honest, I didn’t do it for either of these reasons – I was just curious how a CloudFront setup would work.

There is enough documentation on “how to set up CloudFront with S3”, and besides, the UI in the AWS console is pretty straightforward – you create a “distribution”, under “Origin” you specify your S3 bucket, and that’s it. Of course, you can use your own server as the origin server if you don’t store the content in S3.

Then you wait around 10-15 minutes, and things should work – i.e. when you access http://randomkey.cloudfront.net/foo.png, it should be served. But for me it wasn’t – the response was “Access denied”, which meant the bucket policy had to be changed (as described here):

{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AllowPublicRead",
    "Effect": "Allow",
    "Principal": { "AWS": "*" },
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::bucket/*" ]
  }]
}

Then the application had to be configured to use CloudFront. This can be done in two ways:

  • In the view – in your pages you can set the CloudFront root and make all references to CDN-available resources absolute
  • In a controller (or anything back-end) – if it is not about static resources but (as in my case) about files that are generated and stored by the application, then configure the CloudFront path and, instead of fetching from S3, redirect to the CloudFront URL (a minimal sketch follows below).
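
For the latter, a rough Spring MVC sketch (the property name and the URL structure are made up) could look like this:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class TrackDownloadController {

    // hypothetical property holding the distribution domain,
    // e.g. https://randomkey.cloudfront.net
    @Value("${cdn.root}")
    private String cdnRoot;

    @RequestMapping("/track/{id}/download")
    public String downloadTrack(@PathVariable("id") long id) {
        // instead of streaming the file from S3, send the client to the CDN copy
        return "redirect:" + cdnRoot + "/tracks/" + id + ".mp3";
    }
}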

Both approaches can be useful and easy to roll out. For the former to work out of the box, you’d need some pre-existing handling mechanism for static resources (e.g. a ${staticRoot} prefix, or a custom tag). It is generally a good idea to have a proper static resources setup, regardless of whether you use a CDN or not.

But for bigger systems a CDN is useful and, apparently, easy to set up.


In Defence of Monoliths

October 22, 2015

The first microservices talk I attended was a year and a half ago. My first reaction was “why is that something new?”. Then I realized it was already getting overhyped, so I listened to some more talks and read a few more articles, so that I could have a good reason not to like the hype.

What microservices are is probably defined here, or by Martin Fowler, or in any of the first Google results for “microservices”. It is basically splitting your functionality into separately deployable modules that communicate with each other in order to complete a business goal, with each microservice limited to a small, well-defined scope. Product purchasing is one microservice, user registration is another, “current promotions” is yet another, and so on. Or they can be even more fine-grained – that appears to be debatable.

And whenever I encounter a new “thing”, I try to answer the questions “what’s the point” and “does this apply to me”. I’d like to point out that I’m not the kind of person who rejects anything new just because things can be done “the old way” – but there’s a fine line between a good new technology/architecture and hype. And besides, microservices are nothing new. I remember, several years back, we had an application split into several parts that communicated via web services. When I joined, I refactored that into a single “monolith”, which improved response times by around 20%. It turned out we had never needed the split.

And that’s what I’m going to write about – the fact that you probably don’t need microservices. Martin Fowler has phrased this very well:

…don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services

There, done. Now go build a monolith. But microservices advocates wouldn’t agree, and will point out all sorts of benefits of a microservices architecture (or will point out that your system is too complex, so you have to use microservices – and pay them for consultancy). So let’s examine a few alleged advantages that microservices have (taken, for example, from around 30:57 of this video). The question I’m going to ask for each is – can this easily be done in a monolith? (I have to clarify that by “monolith” I mean what is generally perceived as the opposite of microservices – e.g. one codebase, one deployment.)

  • modeled around the business domain – absolutely. You can structure your packages and runtime dependencies around the business domain.
  • culture of automation – that has nothing to do with the architecture – you can automate the deployment of any application. (We are doing an automated blue-green deployment for a monolith, for example).
  • hide implementation details – that’s what object-oriented programming is about. Your classes, and your packages, hide their implementation details and expose interfaces. Microservices bring nothing to that (except the network overhead). You can even still have multiple projects, built as dependencies for the main project.
  • decentralize all things – well, the services are still logically coupled, no matter how you split them. One depends on the other. In that sense, “decentralized” is just a word that sounds good, but in practice means nothing in this particular context. And it is maybe synonymous with the next point.
  • deployed independently, and monitored independently. That alone doesn’t give you anything over a monolith, where you can gather metrics (e.g. with statsd) or get profiling and performance information about each class or package.
  • isolated failures – now that’s potentially a good thing. If one module “fails”, you can just display “sorry, this functionality doesn’t work right now” and handle the failure. A monolith, however, doesn’t have to fail completely either. It is the details of the failure that matter, and I’ve never seen any detailed explanation. A server goes down? Well, you have a highly-available cluster for that, regardless of how your code is structured.

Some more loosely defined benefits, like “easy to modify” and “easy to understand”, are also claimed. But again, a well written, structured and tested monolith can be just as easy to understand and modify.

Basically, a lot of common sense, software engineering, continuous integration/delivery and infrastructure management best practices are claimed as a bonus of microservices, while in fact they work perfectly fine with a monolith.

The ability to degrade gracefully is possibly an important aspect of microservices, but again, you can handle it in a monolith as well – it requires a little bit of extra code, e.g. “feature if”s that are toggled in case of failures (a rough sketch follows below). But that’s nothing compared to the extra effort you have to put in to get a working microservices application.
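
To make the “feature if” idea concrete, here is a rough, hypothetical sketch (FeatureToggles, RecommendationService and Product are made-up collaborators):

import java.util.Collections;
import java.util.List;

public class RecommendationsFacade {

    private final FeatureToggles featureToggles;               // hypothetical toggle store
    private final RecommendationService recommendationService; // hypothetical heavy module

    public RecommendationsFacade(FeatureToggles toggles, RecommendationService service) {
        this.featureToggles = toggles;
        this.recommendationService = service;
    }

    // the "feature if": if the recommendation module is failing, degrade gracefully
    // instead of failing the whole page
    public List<Product> recommendationsFor(long userId) {
        if (!featureToggles.isEnabled("recommendations")) {
            return Collections.emptyList(); // the page simply renders without the widget
        }
        try {
            return recommendationService.recommendFor(userId);
        } catch (RuntimeException e) {
            featureToggles.disableTemporarily("recommendations"); // flip the toggle on failure
            return Collections.emptyList();
        }
    }
}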

And that extra effort is a lot – you have to coordinate your services. You have to decide what to do with common data, and the usual suggestion is “duplicate it”. If two microservices need some common data, they can’t just use the same shared database – each microservice has its own database, so it has to duplicate the data. Which sounds easy, unless you have to keep that data in sync. Which you always have to do. And when you have to keep duplicated data in sync across five microservices, the overhead possibly exceeds any possible advantage.

Another big problem is transactions. Either you don’t need transactions (in which case – lucky you), or you end up with (if I may quote Martin Kleppmann) an “ad-hoc, informally-specified, bug-ridden slow implementation of half of transactions” (here’s the whole talk about transactions).

The microservices communication overhead is also not to be underestimated, including the network overhead and all the serialization and deserialization. And my controversial advice to use a fast, binary format for internal communication, rather than JSON/XML, is rarely seen in practice.

So, I would again recommend to follow Martin Fowler’s advice and just stay with a monolith. Do follow best practices, of course, including:

  • Modularity – separate your logic into well defined modules (this will be easier with Project Jigsaw), define your classes’ public interfaces carefully, and use loose coupling within your codebase
  • Continuous delivery and automation – automate everything, deploy often
  • Be highly available – make your application horizontally scalable, be as stateless as possible, and make failures undetectable by end users

But don’t believe it when someone tells you these best practices are features of the microservices architecture. They aren’t, and they can be achieved pretty easily in a monolith, without the side effects – you don’t have to think about keeping duplicated data in sync, about network overhead, about writing a half-assed two-phase commit mechanism, about coordinating the services, etc. And if you really think you have to do “microservices”, be sure to have pretty good reasons for it.

P.S. I gave a talk on the topic; here are the slides.
