Open-Sourcing My Music Composition Algorithm

August 19, 2014

Less than two years ago I wrote about the first version of my algorithm for music composition. Since then computoser.com has received some interest and the algorithm has been incrementally improved.

Now, on my birthday, I decided it’s time to make it open-source. So it’s on GitHub.

It contains both the algorithm and the supporting code to run it on a website (written with spring and hibernate). The algorithm itself is in the com.music package, everything else is in subpackages, so it’s easy to identify it.

It isn’t a perfect piece of code, but I think it’s readable, if you happen to know some music theory. I am now preparing a paper to present my research (as some research is involved in the creation) as well as how the algorithm functions. Opening the code is part of the preparation for the paper – it will be noted there as a reference implementation.

The license is AGPL – as far as I know, that should not allow closed-source use of my algorithm on the server-side.

I don’t think making it open-source is such a significant step, but I hope it will somehow help algorithmic music composition advance further than it is today.

Get Rid of the URL Pollution

August 13, 2014

You want to copy the URL of a nice article/video/picture you’ve just opened and send it to friends in skype chats, whatsapp, other messengers or social networks. And you realize the URL looks like this:

http://somesite.com/articles/title-of-the-article?utm_campaign=fsafser454fasfdsaffffas&utm_bullshit=543fasdfafd534254543&somethingelse=uselessstuffffsafafafad&utm_source=foobar

What are these parameters that pollute the URL? The above example uses some of the Google Analytics parameters (utm*), but other analytics tools use the same approach. And probably other tools as well. How are these parameters useful? They tell Google Analytics (which is run with javascript) details about the current campaign, probably where the user is coming from, and other stuff I and especially users don’t really care about.

And that’s ugly. I myself always delete the meaningless parts of the URL, so that in the end people see only “http://somesite.com/articles/title-of-the-article”. But that’s me – a software engineer, who can distinguish the useless parts of the URL. Not many people can, and even fewer are bothered to cut parts of the URL, which results in looong and ugly URLs being pasted around. Why is that bad?

  • website owners have put effort into making their URLs pretty. With “URL pollution” that effort goes to waste.
  • defeating the purpose of the parameters – when you copy-paste such a URL, all the people that open it may be counted as, for example, coming from a specific AdWords campaign. Or from a source that’s actually wrong (because they got the URL in skype, for example, but utm_source is ‘facebook’)
  • lower likelihood of clicking on a hairy URL with meaningless stuff in it (at least I find myself more hesitant)

If you have a website, what can you do about this URL pollution, without breaking your analytics tool? You can get rid of them with javascript:

    window.history.replaceState(null, null, 
        window.location.href.replace("utm_source=....", ""));

This won’t trigger fake analytics results (for GA, at least, as it requires manual work to trigger it after pushState). Now there are three questions: how to get the exact parameters, when to run the above code, and is it worth it?

You can get all parameters (as shown here) and then either remove some blacklisted ones (utm_source, utm_campaign, etc.), or remove all except your whitelisted parameters. If your application isn’t using GET parameters at all, that’s easy. If it is, then keeping the whitelist in sync would be tedious, so probably go for the blacklist.

When should you do that? A little after the page loads, once the analytics tool has done its job. When exactly that is – I don’t know. Maybe on window.load, maybe you have to wait for a second and then remove the parameters. You’d have to experiment.

And is it worth it? I think yes. Fewer useless parameters, less noise, nicer, friendlier URLs (that’s why you spent time prettifying them, right?), and fewer incorrect analytics results due to copy-pasted long URLs.

And I have a request to Google and all other providers of similar tools – please clean up your “mess” after you read it, so that we don’t have to do it ourselves.

Generating equals(..), hashCode() and toString()

August 10, 2014

You most probably need to override hashCode(), equals(..) and toString() – I won’t go into details about when and why, but you need that (ok, just a reminder – always implement hashCode and equals together, and you most likely need to implement these methods if you are going to look up objects of a given class in a HashMap or an ArrayList). And you have plenty of options to do it:

  • Manually implement the methods – that’s sort-of ok for toString() and quite impractical for hashCode() and equals(..). Unless you are pretty certain that you want a custom, well-considered hash function, you should rely on another, more practical mechanism.
  • Use the IDE – all IDEs can generate the three methods, asking you to specify the fields you want to base them on. The hash function is usually good enough, and the rest just saves you from the headache of writing boilerplate comparisons, ifs and elses. But when you add a field, you shouldn’t forget to regenerate the methods.
  • commons-lang – there’s EqualsBuilder, HashCodeBuilder and ToStringBuilder there, which help you write the methods quickly, either with manual append(field).append(field), or with reflection, e.g. reflectionEquals(..). Adding a field again requires modifications, and it’s easy to forget that.
  • guava – very similar to commons-lang, with all the pros and cons. Guava has Objects and MoreObjects, with helper functions for equals(..) and hashCode and a builder for toString() – you still have to manually add/compare each field you want to include.
  • project lombok – it plugs into the compiler and turns some annotations into actual implementations, sparing you from writing the boilerplate code completely. For example, if you annotate the class with @EqualsAndHashCode, Lombok will generate the two methods with all the fields in the class (you can customize that). The other annotations are @ToString, @Value (for immutables), @Data (for value-objects). You just have to put a jar on your compile time classpath, and it should work. (See the sketch after this list.)
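
To illustrate the Lombok option, here’s a minimal sketch – the User class and its fields are made up for the example:

import lombok.EqualsAndHashCode;
import lombok.ToString;

// a made-up class to illustrate the Lombok annotations
@EqualsAndHashCode(of = {"id", "email"}) // equals(..) and hashCode() based only on these fields
@ToString(exclude = "password")          // the generated toString() leaves the password out
public class User {
    private Long id;
    private String email;
    private String password;
}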

Which of these should you use? I generally exclude the manual approach, as well as guava and commons-lang – they require too much manual work for a task that you shouldn’t need to care about in 99% of the cases. The reflection option with commons-lang sounds interesting, but also sounds like performance overhead.

I’ve always used the IDE – the only downside is that you have to regenerate the methods. Sometimes you may forget, and that may yield unexpected behaviour. But apart from that, it’s a quick and robust approach.

Project lombok seems to eliminate the risk of forgetting to regenerate, but that sometimes has another side effect – you may not want to automatically include all new fields, and you can forget to exclude them. But my personal reluctance to use lombok is based on a sort of superstition – it does “black magic” by plugging into the compiler. It does work, but you don’t know exactly how it manages to handle the eclipse compiler, javac and the IntelliJ compiler; will it always work with maven, including your CI environment? Will it work through a major/minor compiler version upgrade? Obviously it does, and I have no rational argument against it. And it has some more useful features as well.

So, it’s up to you to pick either of the two approaches. But do not implement it manually, and I don’t think the helper functions/builders are that practical.

Suggestion for Spam Filters

August 4, 2014

One of the issues with spam is false positives. “Did you check your spam folder” is often a question to ask if your email is not received on the other end.

I’m not a machine learning expert and I’ve never made a spam filter, and I only know the naive Bayes approach. So this suggestion is not a machine-learning “breakthrough”. But what I know about classification algorithms is that they usually provide a likelihood of an item being in one group or another. Some items are not identified as spam with absolute certainty – they are 51% likely to be spam, for example.

My suggestion is: for borderline items (with lower certainty that they should be classified as spam), the spam filter should send an email to the sender, indicating that his message was considered spam. A genuine sender will probably take additional steps, like sending another short email or calling/messaging the recipient (‘click here to confirm you are not spam’ won’t work, because it will easily be automated).
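
To make the idea more concrete, here’s a rough sketch – the class, the thresholds and the helper methods are made up for illustration, not taken from any real spam filter:

// a made-up handler illustrating the suggestion; the thresholds are arbitrary
public class BorderlineSpamHandler {

    interface Email { String getSender(); } // placeholder type for an incoming message

    public void handle(Email email, double spamProbability) {
        if (spamProbability >= 0.9) {
            // high certainty - regular spam handling
            moveToSpamFolder(email);
        } else if (spamProbability >= 0.5) {
            // borderline - still classified as spam, but the sender is notified,
            // so a genuine sender can follow up via another channel
            moveToSpamFolder(email);
            notifySender(email.getSender(), "Your message was classified as spam");
        } else {
            deliverToInbox(email);
        }
    }

    // the methods below depend on the particular mail system
    private void moveToSpamFolder(Email email) { /* ... */ }
    private void notifySender(String sender, String text) { /* ... */ }
    private void deliverToInbox(Email email) { /* ... */ }
}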

It’s rather a usability suggestion than a technical one, and I’m sure there are some issues that I’m missing. But I thought it’s at least worth sharing.

RabbitMQ in Multiple AWS Availability Zones

July 17, 2014

When working with AWS, in order to have a highly-available setup, one must have instances in more than one availability zone (AZ ≈ data center). If one AZ dies (which may happen), your application should continue serving requests.

It’s simple to set up your application nodes in multiple AZs (if they are properly written to be stateless), but it’s trickier for databases, message queues and everything that has state. So let’s see how to configure RabbitMQ. The first steps are relevant not only to RabbitMQ, but to any persistent data solution.

First (no matter whether using CloudFormation or manual setup), you must:

  • Have a VPC. It might be possible without a VPC, but I can’t guarantee that, especially regarding the DNS hostnames discussed below
  • Declare private subnets (for each AZ)
  • Declare the RabbitMQ autoscaling group (recommended to have one) to span multiple AZs, using:
            "AvailabilityZones" : { 
              "Fn::GetAZs" : {
                "Ref": "AWS::Region"
              }
            }
            
  • Declare the RabbitMQ autoscaling group to span multiple subnets using the VPCZoneIdentifier property
  • Declare the LoadBalancer in front of your RabbitMQ nodes (that is the easiest way to ensure even distribution of load to your Rabbit cluster) to span all the subnets
  • Declare LoadBalancer to be "CrossZone": true

Then comes the specific RabbitMQ configuration. Generally, you have two options: clustering and federation.

Clustering is not recommended over a WAN, but the connection between availability zones can be viewed (maybe a bit optimistically) as a LAN. (This detailed post assumes otherwise, but this thread hints that using a cluster over multiple AZs is fine.)

With federation, you declare your exchanges to send all messages they receive to another node’s exchange. This is pretty useful in a WAN, where network disconnects are common and speed is not so important. But it may still be applicable in a multi-AZ scenario, so it’s worth investigating. Here is an example, with exact commands to execute, of how to achieve that, using the federation plugin. The tricky part with federation is auto-scaling – whenever you need to add a new node, you should modify (some of) your existing nodes configuration in order to set the new node as their upstream. You may also need to allow other machines to connect as guest to rabbitmq ([{rabbit, [{loopback_users, []}]}] in your rabbitmq conf file), or find a way to configure a custom username/password pair for federation to work.

With clustering, it’s a bit different, and in fact simpler to setup. All you have to do is write a script to automatically join a cluster on startup. This might be a shell script or a python script using the AWS SDK. The main steps in such a script (which, yeah, frankly, isn’t that simple), are:

  • Find all running instances in the RabbitMQ autoscaling group (using the AWS API filtering options)
  • If this is the first node (the order is random and doesn’t matter), assume it’s the “seed” node for the cluster and all other nodes will connect to it
  • If this is not the first node, connect to the first node (using rabbitmqctl join_cluster rabbit@{node}), where {node} is the instance private DNS name (available through the SDK)
  • Stop RabbitMQ while doing all the configuration, and start it after you are done

In all cases (clustering or federation), RabbitMQ relies on domain names. The easiest way to make it work is to enable DNS hostnames in your VPC: "EnableDnsHostnames": true. There’s a little hack here when it comes to joining a cluster – the AWS API may return the fully qualified domain name, which includes something like “.eu-west-1.compute.internal” in addition to the ip-xxx-xxx-xxx-xxx part. So when joining the RabbitMQ cluster, you should strip this suffix, otherwise it doesn’t work.
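
The steps above can be scripted in many ways; here is a rough sketch of the join script in Java (instead of the shell or Python script mentioned above), using the AWS SDK for Java and shelling out to rabbitmqctl. The autoscaling group name and the way the current instance id is obtained are assumptions, and error handling is omitted:

import java.util.List;
import java.util.stream.Collectors;

import com.amazonaws.services.autoscaling.AmazonAutoScaling;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
import com.amazonaws.services.autoscaling.model.DescribeAutoScalingGroupsRequest;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;

public class JoinRabbitCluster {

    public static void main(String[] args) throws Exception {
        AmazonAutoScaling autoScaling = AmazonAutoScalingClientBuilder.defaultClient();
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // 1. find the in-service instances of the RabbitMQ autoscaling group
        //    ("rabbitmq-group" is a made-up name)
        List<String> instanceIds = autoScaling
                .describeAutoScalingGroups(new DescribeAutoScalingGroupsRequest()
                        .withAutoScalingGroupNames("rabbitmq-group"))
                .getAutoScalingGroups().get(0).getInstances().stream()
                .filter(i -> "InService".equals(i.getLifecycleState()))
                .map(i -> i.getInstanceId())
                .sorted() // a stable order, so every node agrees which one is "first"
                .collect(Collectors.toList());

        // the current instance id, e.g. passed in after reading the instance metadata
        String myInstanceId = args[0];

        // 2. the first node is the "seed" of the cluster - it has nothing to join
        if (instanceIds.get(0).equals(myInstanceId)) {
            return;
        }

        // 3. get the seed node's private DNS name and strip the
        //    ".<region>.compute.internal" suffix - the cluster join needs the short name
        String seedDns = ec2
                .describeInstances(new DescribeInstancesRequest()
                        .withInstanceIds(instanceIds.get(0)))
                .getReservations().get(0).getInstances().get(0).getPrivateDnsName();
        String shortName = seedDns.split("\\.")[0]; // ip-xxx-xxx-xxx-xxx

        // 4. stop the RabbitMQ application, join the cluster, start it again
        exec("rabbitmqctl", "stop_app");
        exec("rabbitmqctl", "join_cluster", "rabbit@" + shortName);
        exec("rabbitmqctl", "start_app");
    }

    private static void exec(String... command) throws Exception {
        new ProcessBuilder(command).inheritIO().start().waitFor();
    }
}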

The end result should be a cluster where, if a node dies and another one is spawned (by the auto-scaling group), the cluster continues to function properly.

Comparing the two approaches with PerfTest yields better throughput for the clustering option – about a third fewer messages were processed with federation, and latency was also a bit higher. The tests should be executed from an application node towards the RabbitMQ ELB (otherwise you are testing just one node). You can get PerfTest and execute it with something like this (where the amqp address is the DNS name of the RabbitMQ load balancer):

wget http://www.rabbitmq.com/releases/rabbitmq-java-client/v3.3.4/rabbitmq-java-client-bin-3.3.4.tar.gz
tar -xvf rabbitmq-java-client-bin-3.3.4.tar.gz
cd rabbitmq-java-client-bin-3.3.4
sudo sh runjava.sh com.rabbitmq.examples.PerfTest -x 10 -y 10 -z 10 -h amqp://internal-foo-RabbitMQEl-1GM6IW33O-1097824.eu-west-1.elb.amazonaws.com:5672

Which of the two approaches you pick depends on your particular case, but I would generally recommend the clustering option. It’s a bit more performant and a bit easier to set up and support in a cloud environment, with nodes spawning and dying often.

The Cloud Beyond the Buzzword [presentation]

July 14, 2014

The other day I gave a presentation about “The Cloud”. I talked about buzzwords, incompetence, classification, and most importantly – embracing failure.

Here are the slides (the talk was not in English). I didn’t have time to go into too much detail, but I hope it’s a nice overview.

You Probably Don’t Need a Message Queue

July 3, 2014

I’m a minimalist, and I don’t like to complicate software too early and unnecessarily. And adding components to a software system is one of the things that adds a significant amount of complexity. So let’s talk about message queues.

Message queues are systems that let you have a fault-tolerant, distributed, decoupled, etc., etc. architecture. That sounds good on paper.

Message queues may fit in several use-cases in your application. You can check this nice article about the benefits of MQs to see what some of those use-cases might be. But don’t be hasty in picking an MQ because “decoupling is good”, for example. Let’s use an example – you want your email sending to be decoupled from your order processing. So you post a message to a message queue, then the email processing system picks it up and sends the emails. How would you do that in a monolithic, single-classpath application? Just make your order processing service depend on an email service, and call sendEmail(..) rather than sendToMQ(emailMessage). If you use an MQ, you define a message format to be recognized by the two systems; if you don’t use an MQ, you define a method signature. What is the practical difference? Not much, if any.
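
In code, the “coupled” alternative is just a regular dependency and a method call – a minimal sketch, with all class and method names made up for the example:

// made-up classes illustrating the direct-call alternative to posting a message
class Order { String customerEmail; }

interface EmailService {
    void sendOrderConfirmation(Order order);
}

public class OrderService {

    private final EmailService emailService;

    public OrderService(EmailService emailService) {
        this.emailService = emailService;
    }

    public void processOrder(Order order) {
        // ... order processing logic ...
        emailService.sendOrderConfirmation(order); // instead of sendToMQ(emailMessage)
    }
}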

But then you probably want to be able to add another consumer that does something additional with a given message? That might indeed happen, but it’s not the case for the regular project out there. And even if it is, it’s not worth it compared to adding just another method call. Coupled – yes. But not inconveniently coupled.

What if you want to handle spikes? Message queues give you the ability to put requests in a persistent queue and process all of them. And that is a very useful feature, but again it’s limited based on several factors – are your requests processed in the UI background, or do they require an immediate response? The servlet container thread pool can be used as a sort-of queue – responses will be served eventually, but the user will have to wait (if the thread acquisition timeout is too small, requests will be dropped, though). Or you can use an in-memory queue for the heavier requests (that are handled in the UI background). And note that by default your MQ might not be highly-available. E.g. if an MQ node dies, you lose messages. So that’s not a benefit over an in-memory queue in your application node.
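
A minimal sketch of such an in-memory queue, using a bounded thread pool – the pool and queue sizes are arbitrary examples:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class HeavyRequestQueue {

    // 4 worker threads and up to 10000 queued requests held in memory
    private final ExecutorService executor = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(10000));

    public void submit(Runnable heavyRequest) {
        // if the queue is full, this throws RejectedExecutionException -
        // i.e. the spike is bigger than what we chose to buffer
        executor.submit(heavyRequest);
    }
}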

Which leads us to asynchronous processing – this is indeed a useful feature. You don’t want to do some heavy computation while the user is waiting. But you can use an in-memory queue, or simply start a new thread (a-la Spring’s @Async annotation). Here comes another aspect – does it matter if a message is lost? If your application node, processing the request, dies, can you recover? You’ll be surprised how often it doesn’t actually matter, and you can function properly without guaranteeing all messages are processed. So, just asynchronously handling heavier invocations might work well.
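
For example, with Spring’s @Async – the service and method here are made up, and @EnableAsync needs to be present in the configuration:

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

// a made-up service illustrating @Async; requires @EnableAsync in the Spring configuration
@Service
public class ReportService {

    // executed in a separate thread from Spring's task executor; the caller returns
    // immediately, and the work is simply lost if the node dies mid-way
    @Async
    public void generateHeavyReport(long orderId) {
        // heavy computation goes here
    }
}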

Even if you can’t afford to lose messages, for the use-case where a message is put into a queue in order for another component to process it, there’s still a simple solution – the database. You put a row with a processed=false flag in the database. A scheduled job runs, picks up all unprocessed rows and processes them asynchronously. Then, when processing is finished, it sets the flag to true. I’ve used this approach a number of times, including in large production systems, and it works pretty well.
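
A rough sketch of that approach, assuming Spring and Spring Data JPA – the entity, the repository and the sending logic are made up for the example, and @EnableScheduling is assumed to be configured:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

// a made-up scheduled job illustrating the database-as-queue approach;
// EmailTask is assumed to be a JPA entity with a boolean "processed" column, and
// EmailTaskRepository a Spring Data repository declaring List<EmailTask> findByProcessedFalse()
@Component
public class EmailTaskProcessor {

    @Autowired
    private EmailTaskRepository repository;

    @Scheduled(fixedDelay = 30000) // poll every 30 seconds (arbitrary interval)
    @Transactional
    public void processPendingTasks() {
        List<EmailTask> pending = repository.findByProcessedFalse();
        for (EmailTask task : pending) {
            sendEmail(task);         // the actual processing
            task.setProcessed(true); // flushed on commit, so the row is not picked up again
        }
    }

    private void sendEmail(EmailTask task) {
        // actual email sending omitted
    }
}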

And you can still scale your application nodes endlessly, as long as you don’t have any persistent state in them. Regardless of whether you are using an MQ or not. (Temporary in-memory processing queues are not persistent state).

Why am I trying to give alternatives to common usages of message queues? Because if chosen for the wrong reason, an MQ can be a burden. They are not as easy to use as it sounds. First, there’s a learning curve. Generally, the more separate integrated components you have, the more problems may arise. Then there’s setup and configuration. E.g. when the MQ has to run in a cluster, in multiple data centers (for HA), that becomes complex. High availability itself is not trivial – it’s normally not turned on by default. And how does your application node connect to the MQ? Via a refreshing connection pool, using a short-lived DNS record, via a load balancer? Then your queues have tons of configurations – what’s their size, what’s their behaviour (should consumers explicitly acknowledge receipt, should they explicitly acknowledge failure to process messages, should multiple consumers get the same message or not, should messages have a TTL, etc.). Then there’s the network and message transfer overhead – especially given that people often choose JSON or XML for transferring messages. If you overuse your MQ, it adds latency to your system. And last, but not least – it’s harder to track the program flow when analyzing problems. You can’t just see the “call hierarchy” in your IDE, because once you send a message to the MQ, you need to go and find where it is handled. And that’s not always as trivial as it sounds. You see, it adds a lot of complexity and things to take care of.

Certainly MQs are very useful in some contexts. I’ve been using them in projects where they were really a good fit – e.g. we couldn’t afford to lose messages and we needed fast processing (so polling the database wasn’t an option). I’ve also seen them used in non-trivial scenarios, where we were using them to consume messages on a single application node, regardless of which node posts the message (pub/sub). And you can also check this stackoverflow question. And maybe you really need to have multiple languages communicate (but don’t want an ESB), or maybe your flow is getting so complex that adding a new method call instead of a new message consumer is overkill.

So all I’m trying to say here is the trite truism “you should use the right tool for the job”. Don’t pick a message queue if you haven’t identified a real use for it that can’t be easily handled in a different, easier to set up and maintain manner. And don’t start with an MQ “just in case” – add it whenever you realize the actual need for it. Because probably, in the regular project out there, a message queue is not needed.

How to Handle Incompetence?

June 25, 2014

We’ve all had incompetent colleagues. People that tend to write bad code, make bad decisions or just can’t understand some of the concepts in the project(s). And it’s never trivial to handle this scenario.

Obviously, the easiest solution is to ignore it. And if you are not a team lead (or something similar), you can probably pretend that the problem doesn’t exist (and occasionally curse and refactor some crappy code).

There are two types of incompetent people: those who know they are not that good, and those who are clueless about their incompetence.

The former are usually junior and mid-level developers, and they are expected to be less experienced. With enough coaching and kindly pointing out their mistakes, they will learn. This is what all of us have gone through.

The latter are the harder breed. They are the “senior” developers that have become senior only due to the number of years they’ve spent in the industry, regardless of their actual skills or contribution. They tend to produce crappy code and misunderstand assignments, but on the other hand reject (kindly or more aggressively) any attempt to be educated. Because they’re “senior”, and who are you to argue with them? In extreme cases this may be accompanied by an inferiority complex, which in turn may result in clumsy attempts to prove they are actually worthy. In other cases it may involve pointless discussions on topics they do not want to admit they are wrong about, just because admitting that would mean they are inferior. They will often use truisms and general statements instead of real arguments, in order to show they actually understand the matter and it’s you that’s wrong. E.g. “we must do things the right way”, “we must follow best practices”, “we must do more research before making this decision”, and so on. In a way, it’s not exactly their incompetence that is the problem, it’s their attitude and their skewed self-image. But enough layman psychology. What can be done in such cases?

A solution (depending on the labour laws) is to just lay them off. But with a tight market, approaching deadlines, and company hierarchy and rules, that’s probably not easy. And such people can still be useful. It’s just that “utilizing” them is tricky.

The key is – minimizing the damage they do without wasting the time of other team members. Note that “incompetent” doesn’t mean “can’t do anything at all”. It’s just not up to the desired quality. Here’s an incomplete list of suggestions:

  • code reviews – you should absolutely have these, even if you don’t have incompetent people. If a piece of code is crappy, you can say that in a review.
  • code style rules – you should have something like checkstyle or PMD rule set (or whatever is relevant to your language). And it won’t be offensive when you point out warnings from style checks.
  • pair programming – often simple code-style checks can’t detect bad code, and especially a bad approach to a problem. And it may be “too late” to point that out in a code review (there is never a “too late” time for fixing technical debt, of course). So do pair programming. If the incompetent person is not the one writing the code, his pair of eyes may still be useful for spotting mistakes. If he is the one writing the code, then the other team member might catch a wrong approach early and discuss it.
  • don’t let them take important decisions or work on important tasks alone; in fact, this should be true even for the best developer out there – having more people involved in a discussion is often productive

Did I just make some obvious engineering process suggestions? Yes. And they would work in most cases, resolving the problem smoothly. Just don’t make a drama out of it and don’t point fingers…

…unless it’s too blatant. If the guy is both incompetent and with an intolerable attitude, and the team agrees on that, inform management. You have a people-problem then, and you can’t solve it using a good process.

Note that the team should agree. But what do you do if you are alone in a team of incompetent people, or the competent people are too unmotivated to take care of the incompetent ones? Leave. That’s not a place for you.

I probably didn’t say anything useful. But the “moral” is – don’t point fingers; enforce good engineering practices instead.

Make Tests Fail

June 12, 2014

This is about a simple testing technique that is probably obvious, but I’ll share it anyway.

In case you are not following TDD, when writing tests, make them fail, in order to be sure you are testing the right thing. You can make them fail either by changing some preconditions (the “given” or “when” parts, if you like), or by changing something small in the code. After you make them fail, you revert the failing change and don’t commit it.

Let me try to give an example of why this matters.

Suppose you want to test that a service triggers some calculation only in case a set of rules is in place (using Mockito to mock dependencies and verify that they are invoked):

@Test
public void testTriggeringFoo() {
   Foo foo = mock(Foo.class);
   StubConfiguration config = new StubConfiguration();
   config.enableFoo();
   Service service = new Service(foo, config);
   service.processOptionallyTriggeringFoo();
   verify(foo).calculate(); //verify the foo calculation is invoked
}

That test passes, and you are happy. But it must fail if you do not call enableFoo(). Comment that out and run it again – if it passes again, there’s something wrong and you should investigate.

The obvious question here is – shouldn’t you have a negative test case instead, i.e. a test for the opposite behaviour – that if you don’t enable foo, calculate() is not called? Sometimes, yes. But sometimes it’s not worth having the negative test case. And sometimes it’s not about the functionality that you are testing.
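
For reference, such a negative test case for the example above would look something like this (same hypothetical classes, using Mockito’s never()):

@Test
public void testNotTriggeringFooWhenDisabled() {
   Foo foo = mock(Foo.class);
   StubConfiguration config = new StubConfiguration();
   // enableFoo() is deliberately not called
   Service service = new Service(foo, config);
   service.processOptionallyTriggeringFoo();
   verify(foo, never()).calculate(); // the foo calculation must not be invoked
}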

Even if your code is working, your mocks and stubs might not be implemented correctly, and you may think you are testing something that you aren’t actually testing. That’s why making a test fail while writing it is not about the code you are testing – it’s about your test code. In the above example, if StubConfiguration ignores enableFoo() but has the flag set to true by default, then the test won’t fail even without the enableFoo() call. But in that case the test is not useful at all – it always passes. And when you refactor your code later, and the condition is no longer met, your test won’t indicate that.

So, make sure your tests and your test infrastructure are actually testing the code the way you intend them to, by making the test fail.

An Architecture for E-Voting

May 27, 2014

E-voting is a hot topic in my country, and has been discussed a lot everywhere. Since we are already using the internet and touch-screen technologies in our everyday lives, why not apply that to voting? And not for the sake of technology itself, but in order to prevent technical mistakes and election fraud, and make it easier for citizens to cast their vote and make the elections generally cheaper.

There are many concerns, some of which are relevant, including security, single points of failure, privacy, etc. Some experts claim it is impossible to make it secure enough, and that paper ballots must be used forever. On the other hand, there are several companies producing voting machines, and multiple attempts have been made to introduce e-voting, very few of which were successful. A recent audit of the Estonian e-voting system also showed some drawbacks, although the system has been in use for a while without major issues.

I’ve been thinking about and discussing the details of how a system for electronic voting could be implemented, with the following main requirements:

  • the results cannot be tampered with – neither by an attacker, nor by the election authorities
  • open source – relying on closed source and private audits is “security through obscurity”
  • everyone can vote – there should be no technical limitation to voting – people without internet and without profound technology skills should be able to cast a vote
  • guaranteed anonymity – nobody should be able to see how a person voted
  • only one vote per person – the system must be able to ensure that a person hasn’t voted more than once
  • people should be able to vote without going to a particular location
  • nobody should be able to replace a person’s vote
  • no special skills for the voting staff – ideally, voting machines should be started with one click and handle everything by themselves
  • guaranteed to work with power or internet outages

The requirements are more or less clear, but implementing them is tough.

In order to guarantee that nobody can change the results, the only solution that is secure enough would be a distributed one. No single database is secure enough to prevent malicious attempts. That’s why a distributed vote database has to be used. Without being an expert in the field, I think the bitcoin blockchain gives us what we need – all nodes participating in the elections will have enough data about the results, so that even if half of them are compromised or taken out, the rest can reconstruct the exact results. It might not be the exact same implementation, but we can view each vote as a separate transaction. Communication between devices is secured by the appropriate protocols, of course.

Open source is a requirement, so that everyone can be sure there is no sneaky code of the form “if (party == ‘foo’) then votes += 2” – verified, for example, with a checksum of the currently deployed build on each device. It is true that only software engineers will be able to understand how the process works, while now everyone knows how the paper ballot is cast; but currently even fewer people know how the paper ballots are collected and counted and how the results are calculated – there’s enough “magic” happening already, from the point of view of the average voter.

Everyone should be able to vote if a simple tablet/tablet-like device is placed in the voting booth. A friend of mine, who is a field linguist, once told me that the indigenous people he’s working with love using his tablet, so anyone can use a clean touch-screen interface with clear indication of the choices. Usability is a major concern of course, and lots of usability and A/B testing has to be done, but that is doable.

Guaranteeing anonymity is one of the toughest problems. In my proposal for unified electronic identification I pointed out that there is a solution to that problem, and it’s called “anonymous credentials”. Here is an introduction to the technology. I understand how it works, but not as good as I would need to explain it. But in short, the owner of the credentials generates a token, that is used to represent him to the election authorities. The token cannot be linked to the owner, but contains enough information for the election authorities to verify if that person has the right to vote, and that he hasn’t voted already (here, the “election authority” is an automated system). The introductory article describes pretty well all aspects needed, including the “one-time spending” (4.1). What I can add is that the system can obtain some metadata about the voter – age group, gender, city, for statistical purposes (though sometimes in small town people can be traced based on a few details).

A good implementation of anonymous credentials handles both the “anonymity” and the “one-time” voting requirements, provided each citizen has only one “digital credential”. This is guaranteed in an offline process – if all citizens have a mandatory ID card that contains their digital credentials, then the identity of the person is verified once by the issuing authority and can later be used in elections (and many other government services). And before the fear of Big Brother gets you, re-read the previous paragraph for why the government can’t track you even if you have an ID card with a digital element in it.

Having the digital credentials, the voter is no longer tied to a particular voting location – people on business trips, temporarily living abroad, handicapped, or in any other way unable or unwilling to be present at the voting station on election day/week, will still be able to vote on the internet, provided they have a reader for their card.

Having said that, client-side security must be taken into account as well – the blockchain guarantees that the data is secure once transmitted and that results can’t be changed, but (as shown in a recent audit of the Estonian e-voting) there may be client-side attacks. What happens if the computer of the voter (or worse – the tablet at the voting station) is infected by malicious software? This is the case where a real security expert should step in, and many cases should be considered, because I can only suggest general principles. Of course, the identification card is protected by a PIN, and the reader can have a simple external keyboard to prevent a trojan horse from casting a vote on behalf of the voter. And having a secure smart-card (or smart-card-like) device makes sure that when you cast a vote, nobody can intercept and replace it. But whether malicious code can interfere with the communication of the device, e.g. by preventing the vote from being cast, needs further research. I think it is possible to be secure enough to prevent fraud on a large scale.

The staff that facilitates the voting process would need to switch the terminals (tablets) on, and that’s all. Since voting is activated by a card, they don’t need to manually activate it. All they have to do is make sure nobody steals a device, but that’s simple – a sound can be played if the device is disconnected or moved, for example (a technique used in many shops nowadays). The start and the end of the election day can probably be signalled by all members of the section commission putting their digital cards in the reader.

And the final point is edge cases. What happens if the power is down? Well, batteries should last sufficiently long. And portable battery chargers can be distributed as well. What happens if the internet is down? And what about voting stations that don’t have access to the internet? If the internet goes down, results can be cached locally until the internet is back. “Paper trail” is something that can be used as a backup – each vote is printed and stored (automatically) in a box, and in case there are problems with the technology, we revert to the old-school way. And even if there is no cable/ADSL internet, or it goes down, 3G/GPRS is normally available (a contract with the mobile carriers has to be signed for the elections, of course, but bureaucracy is offtopic).

So, the solution outlined above depends on having a card, on complex software, on further client security investigation and also needs a lot of logistics considerations – for delivering and connecting the devices, contracts, etc. Regardless of all these ifs, it seems like technology is giving us a way to do elections digitally, and we should put some effort in that direction. Companies providing e-voting solutions can do that, but they should not rely on closed-source software, and would better rely on commodity hardware, making their business model a bit different.

And last, but not least – a lot of government and societal effort will be needed as well, even after the technology is in place.
