I Stopped Contributing To Stackoverflow, But It’s Not Declining

September 26, 2016

“The decline of Stackoverflow” is now trending on reddit, and I started this post as a comment in the thread, but it got too long.

I’m in the top 0.01% (which means rank #34), but I have contributed almost nothing in the past 4 years. Why I stopped is perhaps part of the explanation of why “the decline of stackoverflow” isn’t actually happening.

The article in question describes the experience of a new user as horrible – you can’t easily ask a question without having it downvoted, marked as a duplicate, or commented on in a negative way. The overall opinion (of the article and the reddit thread) seems to be that SO’s “elite” (the moderators) has become too self-important and acts on a whim in the name of an alleged “purity” of the site.

But that’s not how I see it, even though I haven’t been active since “the good old days”. This Hacker News comment puts it very well:

StackOverflow is a machine designed to do one thing: make it so that, for any given programming question, you will get a search engine hit on their site and find a good answer quickly. And see some ads.
That’s really it. Everything it does is geared toward that, and it does it quite well.
I have lots of SO points. A lot of them have come from answering common, basic questions. If you think points exist to prove merit, that’s bad. But if you think points exist to show “this person makes the kind of content that brings programmers to our site and makes them happy”, it’s good. The latter is their intent.

So why did I stop contributing? There were too many repetitive questions/themes, poorly worded questions, too many “homework” questions, and too few meaningful, thought-provoking ones. I’ve always said that I answer stackoverflow questions not because I know all the answers, but because I know a little more about the subject than the person asking. And those questions seemed to give way (in terms of percentage) to “null pointer exception”, “how to fix this [40 lines pasted] code” and “Is it better to have X than Y [in a context that only I know and I’m not telling you]”. (And here’s why I don’t agree that “it’s too hard to provide an answer on time”: if it’s not one of the “obvious” questions, you have plenty of time to provide an answer.)

And if we get back to the HN quote – the purpose of the site is to provide answers to questions. If the questions are already answered (and practically all of the basic ones are), you should have found the answer rather than asking it again. Because of that, non-trivial questions sometimes get mistaken for “oh, not another null pointer exception”, in which case I’ve actively pointed that out and voted to reopen. But that’s rare. All the examples in the “the decline of stackoverflow” article and in the reddit thread are, I believe, edge cases (and one is a possible “homework question”). Maybe these “edge cases” are more prevalent now than when I was active, but I think the majority of the new questions still come from people too lazy to google one or two different wordings of their problem. Which is why I even summarized the basic steps of investigating a problem before asking on SO.

So I wouldn’t say the moderators are self-made tyrants that are hostile to anyone new. They just have a knee-jerk reaction when they see yet-another-duplicate-or-homework-or-subjective question.

And that’s not simply for the sake of purity – the purpose of the site is to provide answers. If the same question exists in 15 variations, you may not find the best answer (it has happened to me – I find three questions that for some reason aren’t marked as duplicates; one contains just a few bad answers, and another has the solution. If Google happens to place the former on top, one may think it’s actually a hard question).

There are always “the trolls”, of course – I have been serially downvoted (so all questions about serial downvoting are duplicates), and I have even had personal trolls who write comments on all my recent answers. But… that’s the internet. Those get filtered quickly, no need to get offended or to think that “the community is too hostile”.

In the past week I’ve been writing a WordPress plugin as a side project. I haven’t programmed in PHP in 4 years and I’ve never written a WordPress plugin before. I had a lot of questions, but guess what – all of them were already answered, either on stackoverflow, or in the documentation, or in some blogpost. We shouldn’t assume our question is unique and rush to ask it.

On the other hand, even the simplest questions are not closed just because they are simple. One of my favourite examples is the question of whether you need a null check before using instanceof. My answer is number 2, with a sarcastic comment that this could be tested in an IDE in a minute. And a very good comment points out that it takes less than that to get the answer on Stackoverflow.

It may seem that most of the questions are already answered now. And that’s probably true for the general questions, for the popular technologies. Fortunately our industry is not static and there are new things all the time, so stackoverflow is going to serve those.

It’s probably a good idea to have different rules/thresholds for popular tags (technologies) and less popular ones. If there’s a way to differentiate trivial from non-trivial questions, answers to the non-trivial ones could be rewarded with more reputation. But I don’t think radical changes are needed. It is inevitable that after a certain “saturation point” there will be fewer contributors and more readers.

Bottom line:

  • I stopped contributing because it wasn’t that challenging anymore and there were too many similar, easy questions.
  • Stackoverflow is not declining, it is serving its purpose quite well.
  • Mods are not evil jerks that just hate you for not knowing something.
  • Stackoverflow is a little more boring for contributors now than it was before (which is why I gradually stopped answering), simply because most of the general questions have already been answered. The niche ones and the ones about new technologies remain, though.

Traditional Web Apps And RESTful APIs

September 23, 2016

When we are building web applications these days, it is considered a best practice to expose all our functionality as a RESTful API and then consume it ourselves. This usually goes with a rich front-end using heavy javascript, e.g. Angular/Ember/Backbone/React.

But a heavy front-end doesn’t seem like a good default – applications that require the overhead of a conceptually heavy javascript framework are actually not in the majority. The web, although much more complicated, is still not just about single-page applications. Not to mention that if you are writing a statically-typed backend, you would either need a dedicated javascript team (not necessarily a good idea, especially in small companies/startups), or you have to write in that … not-so-pleasant language. And honestly, my browser is hurting from all that unnecessary javascript everywhere, but that’s a separate story.

The other option for consuming your own RESTful API is to have a “web” module that calls your “backend” module. That may be a good idea, especially if you have different teams with different specialties, but the introduction of so much communication overhead for the sake of the separation seems like something one should think twice about before doing. Not to mention that in reality release cycles are usually tied anyway, as you need extra effort to keep the “web” and “backend” in proper sync (the “web” not requesting services that the “backend” doesn’t have yet, or the “backend” not providing a modified response model that the “web” doesn’t expect).

As in my defence of monoliths, I’m obviously leaning towards a monolithic application. I won’t repeat the other post, but the idea is that an application can be modular even if it’s run in a single runtime (e.g. a JVM). Have your “web” package, have your “services” package, and these can be developed independently, even as separate (sub-) projects that compile into a single deployable artifact.

So if you want to have a traditional web application – request/response, a little bit of ajax, but no heavy javascript fanciness and no architectural overhead, and you still want to expose your service as a RESTful API, what can you do?

Your web layer – the controllers, working with request parameters coming from form submissions and rendering a response using a template engine – normally communicates with your service layer. So for your web layer, the service layer is just an API, consumed via method calls inside the JVM. But that’s not the only way that service layer can be used. Frameworks like Spring MVC, Jersey, etc., allow you to annotate any method and expose it as a RESTful service. Normally it is accepted that a service layer is not exposed as a web component, but it can be. So – you consume the service layer API via method calls, and everyone else consumes it via HTTP. The same definitions, the same output, the same security. And you won’t need a separate pass-through layer in order to have a RESTful API.
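
Here is a minimal sketch of the idea, assuming Spring MVC (the GreetingService name and the /api/greetings mapping are made up for illustration) – the same bean is injected into the template-rendering controllers and called directly, while external clients reach it over HTTP:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// The "service" doubles as a REST endpoint: the web layer calls getGreeting(..)
// as a plain method, API clients call GET /api/greetings/{name}
@RestController
@RequestMapping("/api/greetings")
public class GreetingService {

    @GetMapping("/{name}")
    public String getGreeting(@PathVariable("name") String name) {
        return "Hello, " + name;
    }
}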

In theory that sounds good. In practice, the annotations that turn the method into an endpoint may introduce problems – is serialization/deserialization working properly, are the headers handled correctly, is authentication correct? And you won’t know whether these work if you are using the methods only inside a single JVM. Yes, you will know they work correctly in terms of business logic, but the RESTful-enabling part may behave differently.

That’s why you need full coverage with acceptance tests – something like cucumber/JBehave to test all your exposed endpoints. That way you’ll be sure that both the RESTful aspects and the business logic work properly. It’s something that should be there anyway, so it’s not extra overhead.
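
For illustration, a JBehave step that exercises the hypothetical greetings endpoint from the sketch above over real HTTP might look like this (the URL, port and step wording are made up; error handling omitted):

import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class GreetingSteps {

    private int responseCode;

    @When("I request a greeting for $name")
    public void requestGreeting(String name) throws Exception {
        // hit the deployed application over HTTP, so the RESTful aspects are covered too
        URL url = new URL("http://localhost:8080/api/greetings/" + name);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        responseCode = connection.getResponseCode();
    }

    @Then("the greeting endpoint responds successfully")
    public void assertSuccess() {
        assertEquals(200, responseCode);
    }
}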

Another issue is that you may want to deploy your API separately from your main application – https://yoursite.com and https://api.yoursite.com. You may want to have just the API running in one cluster, and your application running in another. And that’s no issue – you can simply disable the “web” part of your application with a configuration switch and deploy the very same artifact multiple times.
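
One possible way to implement such a switch, assuming Spring profiles (the com.example.web package is just a placeholder for wherever the template-rendering controllers live):

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Loaded only when the "web" profile is active, so the same artifact can be
// deployed as API-only (profile off) or as the full application (profile on)
@Configuration
@Profile("web")
@ComponentScan("com.example.web")
public class WebModuleConfiguration {
}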

I have to admit I haven’t tried that approach, but it looks like a simple way that would still cover all the use-cases properly.


The Right To Be Forgotten In Your Application

September 13, 2016

You’ve probably heard about “the right to be forgotten” according to which Google has to delete search results about you, if you ask them to.

According to the EU’s new General Data Protection Regulation, the right to be forgotten means that a data subject (user) can request the deletion of their data from any data controller (which includes websites), and the data controller must delete the data without undue delay. Whether it’s a social network profile, items sold in online shops/auctions, location data, properties offered for sale, or even forum comments.

Of course, it is not that straightforward, as it depends on the contractual obligations of the user (even with implicit contracts), and the Regulation lists a number of cases, some of which are very broad, in which the right to be forgotten applies. But the bottom line is that such functionality has to be supported.

I’ll quote the relevant bits from the Regulation:

Article 17
1. The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay where one of the following grounds applies:
(a) the personal data are no longer necessary in relation to the purposes for which they were collected or otherwise processed;
(b) the data subject withdraws consent on which the processing is based according to point (a) of Article 6(1), or point (a) of Article 9(2), and where there is no other legal ground for the processing;
….
2. Where the controller has made the personal data public and is obliged pursuant to paragraph 1 to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data.

This “legalese” basically means that your application should have a “forget me” functionality that deletes the user data entirely. No “deleted” or “hidden” flags, no “but our business is based on your data”, no “this will break our application”.

Note also that if your application pushes data to a 3rd party service (uploads images to youtube, pictures to imgur, syncs data with salesforce, etc.), you have to send deletion requests to these services as well.

The Regulation will be in force in 2018, but it’s probably a good idea to keep it in mind earlier. Not just because there’s a Directive already in force and courts already have decisions in that direction, but also because when building your system, you have to keep that functionality working. And since in most cases all data is linked to a user in your database, it is most likely considered “personal data” under the broad definition in the Regulation.

Technically, this can be achieved with ON CASCADE DELETE in relational databases, with cascade=ALL in ORMs, or by manual application-layer deletion. The manual deletion needs supporting and extending whenever you add new entities/tables, but it is safer than a generic cascade deletion. And as mentioned above, that may not be enough – your third-party integrations should also support deletion. Most 3rd party APIs have that functionality, so your “/forget-me” endpoint handler would probably look something like this:

@Transactional
public void forgetUser(UUID userId) {
   User user = userDao.find(userId);
   // if cascading is considered unsafe, delete entity by entity
   fooDao.deleteFoosByUser(user);
   barDao.deleteBarsByUser(user);
   userDao.deleteUser(user);
   thirdPartyService1.deleteUser(user.getThirdPartyService1Token());
   thirdPartyService2.deleteUser(user.getThirdPartyService2Token());
   // if some components of your deployment rely on processing events
   eventQueue.publishEvent(new UserDeletionEvent(user));
}

That code may be executed asynchronously. It can also be run as part of a “forgetting” scheduled job – users submit their deletion requests and the job picks them up. The Regulation is not strict about real-time deletion, but it should happen “without undue delay”, so “once a month” is not an option.
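
A sketch of the scheduled variant, assuming Spring’s @Scheduled and a hypothetical DeletionRequestDao that stores pending “forget me” requests (the hourly interval is just an example that still qualifies as “without undue delay”):

import java.util.List;
import java.util.UUID;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ForgetMeJob {

    private final DeletionRequestDao deletionRequestDao; // hypothetical DAO holding pending requests
    private final UserService userService;               // hypothetical service exposing forgetUser(..) from above

    public ForgetMeJob(DeletionRequestDao deletionRequestDao, UserService userService) {
        this.deletionRequestDao = deletionRequestDao;
        this.userService = userService;
    }

    @Scheduled(fixedRate = 60 * 60 * 1000) // run every hour
    public void processPendingRequests() {
        List<UUID> pending = deletionRequestDao.findPendingUserIds();
        for (UUID userId : pending) {
            userService.forgetUser(userId);
            deletionRequestDao.markProcessed(userId);
        }
    }
}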

My point is – you should think of that feature when designing your application, so that it doesn’t turn out that it’s impossible to delete data without breaking everything.

And you should think about that not (just) because some EU regulation says so, but because users’ data is not your property. Yes, the user has decided to push stuff to your database, but you don’t own that data. So if a user decides they no longer want you to hold any data about them – you are ethically (and now legally) bound to comply. You can bury the feature ten screens and two password forms deep, but it had better be there.

Whether that’s practical – I think it is. It comes in handy for acceptance tests, for example, which you can run against production (without relying on hardcoded user profiles). It is also not that hard to support a deletion feature, and it allows you to keep a flexible data model.

Whether the Regulation will be properly applied depends on many factors, but the technical one may be significant with legacy systems. And as every system becomes “legacy” after six months, we should be talking about it.


Why I Introduced Scala In Our Project

August 29, 2016

I don’t like Scala. And I think it has some bad and very ugly aspects that make it a poor choice for mainstream development.

But I recently introduced it in our project anyway. Not only that, but the team has no experience with Scala. And I’ll try to explain why that is not a bad idea.

  • First and most important – I followed my own advice and introduced it only in a small, side module. We didn’t have acceptance tests and we desperately needed some, so the JBehave test module was a good candidate for a scala project.
  • Test code is somewhat different from “regular” code – it is okay for it to be less consistent, more “sketchy” and to contain hacks. On the other hand, it could benefit from the expressiveness and lower verbosity of Scala, as tests are generally tough to read, especially in their setup phase. So Scala seems like a good choice – you can quickly write concise test code without worrying too much that you are shooting yourself in the foot in the long term, or that multiple team members do things in different ways (as Scala generously allows). Also, in tests you don’t have to face a whole stack of frameworks and therefore all the language concepts at once (like implicits, partial functions, case objects, currying, etc.)
  • I didn’t choose a scripting language (or groovy in particular), because I have a strong preference for statically typed languages (yeah, I know groovy has had that option for a couple of years now). It had to be a JVM language, because we want to reuse some common libraries of ours.
  • It is good for people to learn new programming languages every now and then, as it widens their perspective. Even if that doesn’t change their language of choice.
  • Learning Scala I think can lead to better understanding and using of Java 8 lambdas, as in Scala you can experience them “in their natural habitat”.
  • I have previous experience with Scala, so there is someone on the team who can help.

(As a side-note – IntelliJ’s Scala support is now better than the last time I used it, unlike the Eclipse-based Scala IDE, which is still broken.)

If writing acceptance test code in Scala turns out to be as easy as I imagine it, then it will mean we’ll have more and better acceptance tests, which is the actual goal. And it would mean we are using the right tool for the job, rather than a hammer.


Biometric Identification [presentation]

August 21, 2016

Biometric identification is getting more common – borders, phones, doors. But I argue that it is not, by itself, a good approach. I tried to explain this in a short talk, and here are the slides.

Biometric features can’t be changed, can’t be revoked – they are there forever. If someone gets hold of them (and that happens sooner or later), we are screwed. And now that we use our fingerprints to unlock our phones, for example, and at the same time we use our phone as the universal “2nd factor” for most online services, including e-banking in some cases, fraud is waiting to happen (or already happening).

As Bruce Schneier said after an experiment that used gummi bears to fool fingerprint scanners:

The results are enough to scrap the systems completely, and to send the various fingerprint biometric companies packing

On the other hand, it is not that useful or pleasant to use biometric features for identification – typing a PIN is just as good (and we can change a PIN).

I’ve previously discussed the risks related to electronic passports, which contain fingerprint images in clear form and are read without a PIN, through a complex certificate management scheme. The bottom line is, they can leak from your passport without you noticing (if the central databases don’t leak before that). Fortunately, there are alternatives that would still guarantee that the owner of the passport is indeed the one it was issued to, and that it’s not fake.

But anyway, I think biometric data can have some future applications. Near the end of the presentation I try to imagine how it could be used for a global, distributed, anonymous electronic identification scheme. But the devil is always in the details. And so far we have failed with the details.


Writing Laws Is Quite Like Programming

August 7, 2016

In the past year I’ve taken the position of an adviser in the cabinet of a deputy prime minister, and as a result I had the opportunity to draft legislation. I’ve been doing that with a colleague, both of us with strong technical backgrounds, and it turned out we are not bad at it. Most of “our” laws passed, including the “open source law”, the electronic identification act, and the e-voting amendments to the election code (we were, of course, helped by legal professionals in the process, much like a junior dev is helped by a senior one).

And law drafting turned out to have much in common with programming – as a result, “our” laws were succinct, well-structured and “to the point”, covering all use-cases. At first it may sound strange that people not trained in the legal profession would be able to do it at all, but writing laws is actually “legal programming”. Here’s what the two processes have in common:

  • Both rely on a formalized language. Programming languages are stricter, but “legalese” is also quite formalized and certain things are normally worded in a predefined way – in a sense, there are “keywords”.
  • There is a specification on how to use the formalized language and how it should behave. The “Law for normative acts” is the JLS (Java language specification) of law-drafting – it defines what is allowed, how laws should be structured and how they should refer to each other. It also defines the process of law-making.
  • Laws have a predefined structure, just as a class file, for example. There are sections, articles, references and modification clauses for other laws (much like invoking a state-changing function on another object).
  • Minimizing duplication is a strong theme in both law-drafting and programming. Instead of copy-pasting shared code / sections, you simply refer to it by its unique identifier. You do that in a single law as well as across multiple laws, thus reusing definitions and statements.
  • Both define use-cases – a law tries to cover all the edge cases for a set of use-cases related to a given matter, much like a program. Laws, of course, also define principles, which is arguably their more important feature, but the definition of use-cases is pretty ubiquitous.
  • Both have if-clauses and loops. You literally say “in case of X, do Y”. And you can say “for all X, do Y”. Which is of course logical, as these programming constructs come from the real world.
  • There are versions and diffs. After a law appears for the first time (“is pushed to the legal world”), every change is in the form of an amendment to the original text, in a quite formalized “diff” structure: adding or removing articles, replacing words, sentences or whole sections. You can then replay all the amendments on top of the original document to get the currently active law. Sounds a lot like git.
  • There are “code reviews” – you send your draft to all the other institutions and their experts give you feedback, which you can accept or reject. Then the “pull request” is merged into master by the parliament.
  • There is a lot of “legacy code”. There are laws from 50 years ago that have rarely been amended and you have to cope with them.

And you end up with a piece of “code” that either works, or doesn’t solve the real-world problems and has to be fixed/amended. With programming it’s the CPU, and possibly a virtual machine, that carries out the instructions; with laws it’s the executive branch (and in some cases – the judicial).

It may seem like the whole legal framework can be written in a rules engine or in Prolog. Well, it can’t, because of the principles it defines and the interpretation (moral and ethical) that judges have to do. But that doesn’t negate the similarities in the process.

There is one significant difference, though. In programming we have a lot of tools to make our lives easier. Build tools, IDEs, (D)VCS, issue tracking systems, code review systems. Legal experts have practically none. In most cases they use Microsoft Word, sometimes even without “Track changes”. They get the current version of the text from legal information systems or, in many cases, from printed versions of the law. Collaboration is a nightmare, as Word documents are flying around via email. The more tech-savvy may opt for a shared document in Google Docs or Office365, but that’s rare. People have to manually write the “diff” based on track changes, and then manually apply the diff to get the final consolidated version. The process of consultation (“code review”) is based on sending paper mail and getting paper responses. Not to mention that once the draft gets to parliament, there are work groups and committees that make the process even more tedious.

Most of that can be optimized and automated. The UK, for example, has made some steps forward with legislation.gov.uk, where each legal text is stored using LegalXML (afaik), so at least references and versioning can be handled easily. But legal experts that draft legislation would love to have the tools that we, programmers, have. They just don’t know they exist. The whole process, from idea, through work groups and consultation, to the multiple readings in parliament, can be electronic. A GitHub for laws, if you wish, with good client-side tools to collaborate on the texts, to autocomplete references and to give you fine-tuned search. We have actually defined such a “thing” to be built in two years, and it will have to be open source, so even though the practices and rules vary from country to country, I hope it will be possible to reuse it.

In conclusion, I think programming (or rather, software engineering), with its well-defined structures and processes, can not only help in many diverse environments, but can also give you ideas on how to optimize them.


Custom Audit Log With Spring And Hibernate

July 18, 2016

If you need to have automatic auditing of all database operations and you are using Hibernate…you should use Envers or spring data jpa auditing. But if for some reasons you can’t use Envers, you can achieve something similar with hibernate event listeners and spring transaction synchronization.

First, start with the event listener. You should capture all insert, update and delete operations. But there’s a tricky bit – if you need to flush the session for any reason, you can’t directly execute that logic with the session that is passed to the event listener. In my case I had to fetch some data, and hibernate started throwing exceptions at me (“id is null”). Multiple sources confirmed that you should not interact with the database in the event listeners. So instead, you should store the events for later processing. And you can register the listener as a spring bean as shown here.

@Component
public class AuditLogEventListener
        implements PostUpdateEventListener, PostInsertEventListener, PostDeleteEventListener {

    @Override
    public void onPostDelete(PostDeleteEvent event) {
        AuditedEntity audited = event.getEntity().getClass().getAnnotation(AuditedEntity.class);
        if (audited != null) {
            AuditLogServiceData.getHibernateEvents().add(event);
        }
    }

    @Override
    public void onPostInsert(PostInsertEvent event) {
        AuditedEntity audited = event.getEntity().getClass().getAnnotation(AuditedEntity.class);
        if (audited != null) {
            AuditLogServiceData.getHibernateEvents().add(event);
        }
    }

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        AuditedEntity audited = event.getEntity().getClass().getAnnotation(AuditedEntity.class);
        if (audited != null) {
            AuditLogServiceData.getHibernateEvents().add(event);
        }
    }

    @Override
    public boolean requiresPostCommitHanding(EntityPersister persister) {
        return true; // Envers sets this to true only if the entity is versioned. So figure out for yourself if that's needed
    }
}

Notice the AuditedEntity – it is a custom marker annotation (retention=runtime, target=type) that you can put on top of your entities.
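
For completeness, the marker annotation itself is trivial – something along these lines:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks entity classes whose changes should be recorded in the audit log
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface AuditedEntity {
}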

To be honest, I didn’t fully follow how Envers does the persisting, but as I also have spring at my disposal, in my AuditLogServiceData class I decided to make use of spring:

/**
 * Stores audit log information for the current transaction. It records all
 * changes to the entities in spring transaction synchronization resources,
 * which are in turn stored as {@link ThreadLocal} variables for each thread.
 * Each thread/transaction uses its own copy of this data.
 */
public class AuditLogServiceData {
    private static final String HIBERNATE_EVENTS = "hibernateEvents";
    // transaction-bound resource key for the acting user
    private static final String AUDIT_LOG_ACTOR = "auditLogActor";
    @SuppressWarnings("unchecked")
    public static List<Object> getHibernateEvents() {
        if (!TransactionSynchronizationManager.hasResource(HIBERNATE_EVENTS)) {
            TransactionSynchronizationManager.bindResource(HIBERNATE_EVENTS, new ArrayList<>());
        }
        return (List<Object>) TransactionSynchronizationManager.getResource(HIBERNATE_EVENTS);
    }

    public static Long getActorId() {
        return (Long) TransactionSynchronizationManager.getResource(AUDIT_LOG_ACTOR);
    }

    public static void setActor(Long value) {
        if (value != null) {
            TransactionSynchronizationManager.bindResource(AUDIT_LOG_ACTOR, value);
        }
    }

    public static void clear() {
        // unbind all resources so the next transaction on this thread starts clean
        TransactionSynchronizationManager.unbindResourceIfPossible(HIBERNATE_EVENTS);
        TransactionSynchronizationManager.unbindResourceIfPossible(AUDIT_LOG_ACTOR);
    }
}

In addition to storing the events, we also need to store the user that is performing the action. In order to get that, we need a method-parameter-level annotation to designate which parameter holds the acting user. The annotation in my case is called AuditLogActor (retention=runtime, target=parameter).
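
Its definition would be just as simple:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks the method parameter that holds the id of the acting user
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
public @interface AuditLogActor {
}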

Now what’s left is the code that will process the events. We want to do this prior to committing the current transaction. If the transaction fails upon commit, the audit entry insertion will also fail. We do that with a bit of AOP:

@Aspect
@Component
class AuditLogStoringAspect extends TransactionSynchronizationAdapter {

    @Autowired
    private ApplicationContext ctx; 
    
    @Before("execution(* *.*(..)) && @annotation(transactional)")
    public void registerTransactionSynchronization(JoinPoint jp, Transactional transactional) {
        Logger.log(this).debug("Registering audit log tx callback");
        TransactionSynchronizationManager.registerSynchronization(this);
        MethodSignature signature = (MethodSignature) jp.getSignature();
        int paramIdx = 0;
        for (Parameter param : signature.getMethod().getParameters()) {
            if (param.isAnnotationPresent(AuditLogActor.class)) {
                AuditLogServiceData.setActor((Long) jp.getArgs()[paramIdx]);
            }
            paramIdx ++;
        }
    }

    @Override
    public void beforeCommit(boolean readOnly) {
        Logger.log(this).debug("tx callback invoked. Readonly= " + readOnly);
        if (readOnly) {
            return;
        }
        for (Object event : AuditLogServiceData.getHibernateEvents()) {
           // handle events, possibly using instanceof
        }
    }

    @Override
    public void afterCompletion(int status) {
	// we have to unbind all resources as spring does not do that automatically
        AuditLogServiceData.clear();
    }
}

In my case I had to inject additional services, and spring complained about mutually dependent beans, so I instead used applicationContext.getBean(FooBean.class). Note: make sure your aspect is caught by spring – either by auto-scanning, or by explicitly registering it with xml/java-config.

So, a call that is audited would look like this:

@Transactional
public void saveFoo(FooRequest request, @AuditLogActor Long actorId) { .. }

To summarize: the hibernate event listener stores all insert, update and delete events as spring transaction synchronization resources. An aspect registers a transaction “callback” with spring, which is invoked right before each transaction is committed. There all events are processed and the respective audit log entries are inserted.

This is a very basic audit log; it may have issues with collection handling, and it certainly does not cover all use cases. But it is way better than manual audit log handling, and in many systems an audit log is mandatory functionality.


Spring-Managed Hibernate Event Listeners

July 15, 2016

Hibernate offers event listeners as part of its SPI. You can hook your listeners to a number of events, including pre-insert, post-insert, pre-delete, flush, etc.

But sometimes in these listeners you want to use spring dependencies. I’ve written previously on how to do that, but hibernate has been upgraded and now there’s a better way (and the old way no longer works in the latest versions because of missing classes).

This time it’s simpler. You just need a bean that looks like this:

@Component
public class HibernateListenerConfigurer {
    
    @PersistenceUnit
    private EntityManagerFactory emf;
    
    @Inject
    private YourEventListener listener;
    
    @PostConstruct
    protected void init() {
        SessionFactoryImpl sessionFactory = emf.unwrap(SessionFactoryImpl.class);
        EventListenerRegistry registry = sessionFactory.getServiceRegistry().getService(EventListenerRegistry.class);
        registry.getEventListenerGroup(EventType.POST_INSERT).appendListener(listener);
        registry.getEventListenerGroup(EventType.POST_UPDATE).appendListener(listener);
        registry.getEventListenerGroup(EventType.POST_DELETE).appendListener(listener);
    }
}

It is similar to this stackoverflow answer, which however won’t work, because it also relies on deprecated classes.

You can also inject a List<..> of listeners (they don’t share a common interface, but you can define your own).

As pointed out in the SO answer, you can’t store new entities in the listener, so it’s no use injecting a DAO, for example. But it may come in handy for processing information that does not rely on the current session.
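
For reference, a minimal YourEventListener could be a plain Spring bean implementing one of Hibernate’s listener interfaces – a sketch (the logging is just a placeholder):

import org.hibernate.event.spi.PostInsertEvent;
import org.hibernate.event.spi.PostInsertEventListener;
import org.hibernate.persister.entity.EntityPersister;
import org.springframework.stereotype.Component;

@Component
public class YourEventListener implements PostInsertEventListener {

    @Override
    public void onPostInsert(PostInsertEvent event) {
        // process information that does not rely on the current session,
        // e.g. log it or push it to a queue
        System.out.println("Inserted: " + event.getEntity());
    }

    @Override
    public boolean requiresPostCommitHanding(EntityPersister persister) {
        return false;
    }
}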


Installing Java Application As a Windows Service

June 26, 2016

It sounds like something you’d never need, but sometimes, when you distribute end-user software, you may need to install a java program as a Windows service. I had to do it because I developed a tool for civil servants to automatically convert and push their Excel files to the opendata portal of my country. The tool has to run periodically, so it’s a prime candidate for a service (which would make the upload possible even if the civil servant forgets about this task altogether, and besides, repetitive manual upload is a waste of time).

Even though there are numerous posts and stackoverflow answers on the topic, it still took me a lot of time because of minor caveats and one important prerequisite that few people seemed to have – a bundled JRE, so that nobody has to download and install a JRE (that would complicate the installation process unnecessarily, and the target audience is not necessarily tech-savvy).

So, having a maven project with jar packaging, I first thought of packaging it as an exe (with launch4j) and then registering it as a service. The problem with that is that the java program uses a scheduled executor, so it never exits, which makes starting it as a process impossible.

So I had to “daemonize” it, using commons-daemon procrun. Before doing that, I had to assemble every component needed into a single target folder – the fat jar (including all dependencies), the JRE, the commons-daemon binaries, and the config file.

You can see the full maven file here. The relevant bits are (where ${installer.dir} is ${project.basedir}/target/installer):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
    </configuration>
</plugin>
<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <executions>
        <execution>
            <id>assembly</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <finalName>opendata-ckan-pusher</finalName>
                <appendAssemblyId>false</appendAssemblyId>
            </configuration>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.7</version>
    <executions>
        <execution>
            <id>default-cli</id>
            <phase>package</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <copy todir="${installer.dir}/jre1.8.0_91">
                        <fileset dir="${project.basedir}/jre1.8.0_91" />
                    </copy>
                    <copy todir="${installer.dir}/commons-daemon">
                        <fileset dir="${project.basedir}/commons-daemon" />
                    </copy>
                    <copy file="${project.build.directory}/opendata-ckan-pusher.jar" todir="${installer.dir}" />
                    <copy file="${project.basedir}/install.bat" todir="${installer.dir}" />
                    <copy file="${project.basedir}/uninstall.bat" todir="${installer.dir}" />
                    <copy file="${project.basedir}/config/pusher.yml" todir="${installer.dir}" />
                    <copy file="${project.basedir}/LICENSE" todir="${installer.dir}" />
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>

You will notice the install.bat and uninstall.bat, which are the files that use commons-daemon to manage the service. The installer creates the service. Commons-daemon has three modes: exe (which allows you to wrap an arbitrary executable), Java (which is like exe, but for java applications) and jvm (which runs the java application in the same process; I don’t know exactly how, though).

I could use all three options (including the launch4j-created exe), but the jvm mode allows you to have a designated method to control your running application. The StartClass/StartMethod/StopClass/StopMethod parameters are for that. Here’s the whole install.bat:

commons-daemon\prunsrv //IS//OpenDataPusher --DisplayName="OpenData Pusher" --Description="OpenData Pusher"^
     --Install="%cd%\commons-daemon\prunsrv.exe" --Jvm="%cd%\jre1.8.0_91\bin\client\jvm.dll" --StartMode=jvm --StopMode=jvm^
     --Startup=auto --StartClass=bg.government.opendatapusher.Pusher --StopClass=bg.government.opendatapusher.Pusher^
     --StartParams=start --StopParams=stop --StartMethod=windowsService --StopMethod=windowsService^
     --Classpath="%cd%\opendata-ckan-pusher.jar" --LogLevel=DEBUG --LogPath="%cd%\logs" --LogPrefix=procrun.log^
     --StdOutput="%cd%\logs\stdout.log" --StdError="%cd%\logs\stderr.log"
     
     
commons-daemon\prunsrv //ES//OpenDataPusher

A few clarifications:

  • The Jvm parameter points to the jvm dll
  • The StartClass/StartMethod/StopClass/StopMethod point to a designated method for controlling the running application. In this case, starting would just call the main method, and stopping would shutdown the scheduled executor, so that the application can exit
  • The classpath parameter points to the fat jar
  • Using %cd% is risky for determining the path to the current directory, but since the end-users will always be starting it from the directory where it resides, it’s safe in this case.

The windowsService method looks like this:

public static void windowsService(String[] args) throws Exception {
    String cmd = "start";
    if (args.length > 0) {
        cmd = args[0];
    }

    if ("start".equals(cmd)) {
        Pusher.main(new String[]{});
    } else {
        executor.shutdownNow();
        System.exit(0);
    }
}
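
For context, the long-running part could look something like this (a sketch – the interval and the pushFiles method are illustrative; the non-daemon executor thread is what keeps the JVM alive until shutdownNow() is called):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Pusher {

    static ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    public static void main(String[] args) {
        // convert and upload the Excel files once an hour
        executor.scheduleAtFixedRate(Pusher::pushFiles, 0, 1, TimeUnit.HOURS);
    }

    private static void pushFiles() {
        // conversion and upload logic omitted
    }

    // the windowsService(String[]) method from above lives in this class as well
}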

One important note here is the 32-bit/64-bit problem you may have. That’s why it’s safer to bundle a 32-bit JRE and use the 32-bit (default) prunsrv.exe.

I then had an “installer” folder with the jre and commons-daemon folders, two bat files and one fat jar. I could then package that as a self-extracting archive and distribute it (with a manual, of course). I looked into IzPack as well, but couldn’t find how to bundle a JRE (maybe you can).

That’s a pretty niche scenario – usually we develop for deployment to a Linux server, but providing local tools for a big organization using Java may be needed every now and then. In my case the long-running part was a scheduled executor, but it can also run a jetty server that serves a web interface. Why would it do that, instead of providing a URL? In cases where access to the local machine matters. It can even be a distributed search engine (like that) or other p2p software that you want to write in Java.


Why I Prefer Merge Over Rebase

June 17, 2016

There are many ways to work with git. The workflows vary depending on the size of the team, organization, and on the way of working – is it distributed, is it sprint-based, is it a company, or an open-source project, where a maintainer approves pull requests.

You can use vanilla-git, you can use GitHub, BitBucket, GitLab, Stash. And then on the client side you can use the command line, IDE integration, or stand-alone clients like SourceTree.

The workflows differ mostly in the way you organize your branches and the way you merge them. Do you branch off branches? Do you branch off other people’s branches that are work in progress? Do you push or stay local? Do you use it like SVN (perfectly fine for a single developer on a pet project), or do you delve into more “arcane” features like --force-with-lease?

This is all decided by each team, but I’d like to focus on one hotly debated topic – rebasing vs merging. While you can find tons of articles discussing rebasing vs merging, including the official git documentation, it has become more of a philosophical debate than a practical one.

I recently asked a practical question about a rebase workflow. In short, by default rebasing does not seem to favour pushing stuff to the central repo. If you push before rebasing, you’d always need to force-push afterwards. And force-pushing may make life very hard for people who are based on your branch. Two questions you are probably already asking:

  • Why do you need to push if something isn’t ready? Isn’t the point of the “D” in “DVCS” to be able to commit locally and push only when ready? Well, even if you don’t use git as SVN, there are still plenty of use-cases for pushing every change to your own remote feature branch – you may be working from different machines, a colleague may want to pick up where you left off (before you leave for holiday or fall sick), and there are hard drive failures and theft. I think you basically have to push right before you log off, or even more often. The “distributed” part allows for working offline, or even without a central repo (if it goes down), but it is not the major benefit of git.
  • Why would anyone be based on your work-in-progress branch? Because it happens. Sometimes tasks are not split that strictly and have dependencies – you write a piece of functionality, which you then realize should be used by your teammates who work on another task within the same story/feature. You aren’t yet finished (e.g. still polishing, testing), but they shouldn’t wait. Even a single person may want to base his next task on the previous one, while waiting for code review comments. The tool shouldn’t block you from doing this from time to time, even though it may not be the default workflow scenario.

Also, you shouldn’t expect every team member to be a git guru who rewrites history for breakfast. A basic set of commands (or even GUIs) should be sufficient for a git workflow, including the edge cases. Git is complicated, and the task of each team is to make it work for them, rather than against them. There is probably an article titled “X considered harmful” for each git command or concept, and going through that maze is not trivial for an inexperienced git user. As Linus Torvalds once allegedly said:

Git has taken over where Linux left off separating the geeks into know-nothings and know-it-alls. I didn’t really expect anyone to use it because it’s so hard to use, but that turns out to be its big appeal.

Back to rebase vs merge – merge (with pull requests) feels natural for the above. You branch often, you push often. Rebase can work in the above use-cases (which I think are necessary), too: you can force-push after each rebase, and you can make sure your teammates resolve that. But what’s the point?

The practical argument for rebase is that the graph showing the history of the repo is nice and readable. Which I can’t argue with, because I’ve never had a case where I needed a cleaner and better graph. No matter how many merge commits and how much ugliness there is in the graph, you can still find your way (if you ever need to). Besides, a certain change can easily be traced even without the graph (e.g. with git annotate).

If you are truly certain you can’t go without a pretty graph, and your teammates are all git gurus who can resolve a force-push in minutes, then rebasing is probably fine.

But I think a merge-only workflow is the more convenient way of working, one that accounts for more real-world scenarios.

I realize this is controversial, and I’m certainly a git n00b (I even use SourceTree rather than the command line for basic commands, duh). But I have used both merge and rebase workflows and I find the merge one more straightforward (after all, force-pushing being part of the regular workflow sounds suspicious, doesn’t it?).

Git is the scala of VCS – it gives you many ways to do something, but there is no “right way”. This isn’t necessarily bad, as indeed there are many different scenarios where git can be used. For the ones I’ve had (regular project in a regular company, with a regular semi-automated release & deployment cycle, doing regular agile), I’d always go for merge, with pull requests.
