Hubot: how to pipeline your commands?

If you are used to creating new commands on your Hubot server, this syntax will look familiar:

robot.hear(/say hello/i, function(res) {/*...*/});
robot.respond(/how are you/i, function(res) {/*...*/});

When you want to combine them into one, the easy way out is to create a separate command like this one:

robot.respond(/hello, how are you/i, function(res) {/*...*/});

Things become more difficult when you need to combine a set of different instructions. In theory, it means you would have to create n^k different commands.
I wanted to find out how I could turn the commands into a full pipeline with as few changes as possible.
And I found an answer (maybe not the best one):
insert a pipeline object into your code.

Here is how your existing code changes:

Instead of this:

robot.hear(/say hello/i, function(res) {/*...*/});
robot.respond(/how are you/i, function(res) {/*...*/});

you will write this :

pipeline.hear(/say hello/i, function(robot, res, resolve) {/*...*/;resolve();});
pipeline.respond(/how are you/i, function(robot, res, resolve) {/*...*/;resolve();});

It is important to call resolve() at the end of each handler so the pipeline knows when to start the next step.

Your module.exports should look like this one:
module.exports = pipeline.buildAnswer('(my-commands)')

There you are: your commands are now all part of your pipeline. Enjoy.
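For reference, here is a minimal sketch of what such a pipeline object might look like (hypothetical — the actual implementation behind buildAnswer is not shown here): it records every handler, then chains them with promises so that each step starts only once the previous one has called resolve().

```javascript
// Hypothetical sketch of the pipeline object: handlers are recorded, then
// chained sequentially; each step starts when the previous one resolves.
function createPipeline() {
  const steps = [];
  const record = (regex, handler) => steps.push({ regex, handler });

  return {
    hear: record,
    respond: record,
    // Returns the usual "module.exports = function(robot) {...}" entry point.
    buildAnswer(trigger) {
      return (robot) => {
        robot.respond(new RegExp(trigger, 'i'), (res) => {
          // Chain every recorded step into one sequential promise chain.
          steps.reduce(
            (chain, step) => chain.then(
              () => new Promise((resolve) => step.handler(robot, res, resolve))),
            Promise.resolve()
          );
        });
      };
    },
  };
}
```

The sketch collapses hear and respond into one queue for brevity; a real implementation would keep the distinction when registering with the robot.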

ES6 (or JavaScript): how to loop on the same promise

In the era of promises, who likes to wait for an uncertain amount of time?

A promise is the key to hope.
No need for setTimeout(function(){}, XXXXX) milliseconds: you never know whether you waited too long, or, even worse, not long enough.

Actually, you can tell your program to wait for a promise, then check a condition, then chain a new promise if the condition is not met. The only subtlety is to wrap both the condition test and the return value inside functions; here is an example:

aPromise.then((pollResponse) => {
  const promiseWhile = (condition, oldResponse, promiseGenerator) => {
    if (!condition(oldResponse)) {
      // wrap the final value in a function so it is not unwrapped by .then()
      return typeof oldResponse === 'function' ? oldResponse : () => oldResponse
    }
    return promiseGenerator().then((newResponse) => promiseWhile(condition, newResponse, promiseGenerator))
  }

  const promiseGenerator = (execute, oldResponse) => {
    return promiseWhile((newResponse) => newResponse.code === 'PENDING', oldResponse,
      () => execute().then((newResponse) => promiseGenerator(execute, newResponse)))
  }

  return promiseGenerator(() => query(`/myApi/pollStatus`), pollResponse)
})
.then((pollResponse) => {
  const result = pollResponse()
  console.log(result.code) // 'COMPLETE'
})

Given a method called query, which returns either {code: 'PENDING'} or {code: 'COMPLETE'}, this code runs query repeatedly and stops when the value of code is no longer 'PENDING', without ever waiting for an arbitrary amount of time.

If the condition is no longer true, it returns a function whose result is the value to fetch.
Otherwise it chains the previously resolved promise with a new promise that calls query and checks the same condition again.

That is all.

There is also a lighter version that does not return a value, and where query is immediate (it actually runs inside condition()); unfortunately you have to set a small interval to avoid saturating your event loop:

const waitWhile = (condition, resolve) => {
  if (!condition()) {
    setTimeout(() => waitWhile(condition, resolve), 1)
  } else {
    resolve()
  }
}
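A usage sketch (the done flag and the delay are made up for the example; the function is repeated so the snippet runs standalone): wrap waitWhile in a promise and resolve once the condition holds.

```javascript
const waitWhile = (condition, resolve) => {
  if (!condition()) {
    setTimeout(() => waitWhile(condition, resolve), 1)
  } else {
    resolve()
  }
}

// Hypothetical flag, flipped later by some other piece of code.
let done = false
setTimeout(() => { done = true }, 20)

// The promise resolves as soon as the condition becomes true.
new Promise((resolve) => waitWhile(() => done, resolve))
  .then(() => console.log('condition met'))
```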

git branches: the easy way out (not for purists)

Bouncing on an old article about git, here are some more commands to help you work with git efficiently…
Warning: this is useful when you work with centralized git version control, with the work stored on the 'origin' remote server.

singleCommit = "!f() { git pull origin develop --no-edit && git push && git reset --soft origin/master && git commit -m \"$1\" && git push -f; }; f"
singleCommitAs = "!f() { git pull origin develop --no-edit && git push && git reset --soft origin/master && git commit -m \"$2\" --author \"$1\" && git push -f; }; f"
singleCommitFrom = "!f() { git pull origin $1 --no-edit && git push && git reset --soft $1 && git commit -m \"$2\" && git push -f; }; f"
addToSingleCommit = "!f() { git add \"$1\" && git singleCommit $(git log --format=%B -n 1); }; f"

git singleCommit "message": squashes all the commits of the current branch that are ahead of master into one, and updates the current branch
git singleCommitAs "someone" "message": same as git singleCommit, but lets you commit on behalf of someone else
git singleCommitFrom "branch" "message": same as git singleCommit, but synchronizes the work with a branch other than master
git addToSingleCommit "path": adds path to version control, grabs the last commit message, then squashes all the commits ahead of master into one.

Android: one or two flashy tips

Ever wanted to use the same FloatingActionButton for several purposes without popping/replacing it?
If you need to change the color of your button smoothly, think of a color fading animation.
This can be useful when you want to recycle a "play" button as a "stop" button.

You can save your breath implementing this animation by using this code:
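The original snippet is not reproduced here, but the principle is simple: interpolate each ARGB channel over time. On Android this is what ArgbEvaluator does, typically driven by a ValueAnimator whose update listener applies the blended color to the FAB's background tint. Here is a dependency-free sketch of the channel math (ColorFade is a hypothetical helper name):

```java
// Hypothetical helper: the per-channel linear interpolation that Android's
// ArgbEvaluator performs for a color fade.
final class ColorFade {
    static int blend(int from, int to, float fraction) {
        int a = channel(from >> 24, to >> 24, fraction);
        int r = channel(from >> 16, to >> 16, fraction);
        int g = channel(from >> 8, to >> 8, fraction);
        int b = channel(from, to, fraction);
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    // Interpolate one 8-bit channel between the two colors.
    private static int channel(int from, int to, float fraction) {
        int f = from & 0xff, t = to & 0xff;
        return (int) (f + fraction * (t - f));
    }
}
```

Feeding fractions from 0 to 1 into blend() fades the first color into the second, one animation frame at a time.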

What if you have a material design app with a tab layout (several tabs), and you want to make it less linear by scrolling the FABs in an original way?

You can add a page change listener to the ViewPager and implement an animation for each active tab:
In this example, we have three tabs. In the first tab, the fab pops over the screen; in the second one, it scrolls slower than the screen; and in the third one, it scrolls up from the bottom right corner of the screen.

Feel free to use these snippets. If you want to see these animations in action, just download my app.

Thanks for reading, and happy 2017.

Java: how to speak about events in a material world

The title of this article does not sound IT-related on purpose, but it is about Java: I just wanted it to appeal to common sense.

Most people writing software today only speak about what we have (the objects), and not about what we know (the events). And sometimes they refuse to consider the meaning of that knowledge (with its history) in favor of its mere content.

But sometimes it can be useful.

"History is not important" means "Giving the money to the wrong person is not important". Why do I say that?
If we write a bank check of $100 to a Mr John and a malicious person then adds an 's' to the recipient, it becomes Mr Johns. We give the money anyway, and we don't know that we gave it to the wrong person… In fact, our bank account does not notice the problem: we are simply debited $100. We were attacked, and we just lost the data about the attack.

What is the problem behind this? We lose track of known data just because the state has changed.
How to deal with it? It is quite simple: past history does not change.

Therefore, storing past data ensures that everything gets tracked. The bank check is photocopied before and after each event (like someone appending an 's' to a family name), so we can validate that nothing wrong happened during the process by scanning the bank check's history.
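As a toy sketch of this idea in plain Java (not the JaVers API — just the "photocopy before every change" principle):

```java
import java.util.*;

// Toy event store: every change first snapshots ("photocopies") the state,
// so the whole history of a property can be scanned afterwards.
final class AuditedCheck {
    private final Map<String, String> state = new HashMap<>();
    private final List<Map<String, String>> snapshots = new ArrayList<>();

    void set(String property, String value) {
        snapshots.add(new HashMap<>(state)); // photocopy before the change
        state.put(property, value);
    }

    String current(String property) {
        return state.get(property);
    }

    // Scan the history: every distinct value a property has taken, oldest first.
    List<String> historyOf(String property) {
        List<String> values = new ArrayList<>();
        for (Map<String, String> snapshot : snapshots) {
            String v = snapshot.get(property);
            if (v != null && (values.isEmpty() || !v.equals(values.get(values.size() - 1)))) {
                values.add(v);
            }
        }
        String now = state.get(property);
        if (now != null && (values.isEmpty() || !now.equals(values.get(values.size() - 1)))) {
            values.add(now);
        }
        return values;
    }
}
```

With such a store, the fraudulent rename is no longer invisible: the old recipient is still in the history.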

How to? You can implement it on your own, but that is a lot of boilerplate to code.
I have tried the JaVers library and wrote a very small webapp maintaining an in-memory user API.

Things stay simple: keep persisting your data as you would in a non-event-sourced program, and at the last moment let JaVers hook directly into the repository, as follows:
@JaversSpringDataAuditable
interface PersonRepository extends MongoRepository<Person, String> {}

You are now able to query data as usual, but also to examine the changes in the data and read old snapshots.

Then, reading the current state of the bank check is a plain repository read, and reading its old values is just as easy, through the JaVers snapshots.

…And you can figure out that someone replaced "Mr John" with "Mr Johns" by reading the JaVers change log!
The response will look like this:
1. ValueChange{
property:'to', oldVal:'Mr John', newVal:'Mr Johns'}

Then you will go beyond the objects, be less materialist, and care only about the facts… which is what really matters to your users.

Android: what if "git push" became your Play Store publish command?

Ever wondered why you need to repeat these tasks every time (shown in the three pictures below)?
apk from intellij


Not only do you risk losing time, you can also make mistakes through a lack of concentration (publishing to prod instead of alpha), or even forget to publish intermediary updates of your app. Your users might lose patience while waiting for fixes or features.

Now think about how devops teams work. The developers change a piece of code, ensure nothing is broken, then let the software factory prepare, package, deploy and monitor the program.

How to achieve this, without money or time?
Start by using:

  • a source control management service: either a private one, BitBucket, or GitHub
  • an application lifecycle management tool: Maven or Gradle
  • a continuous integration service: sure, there is Jenkins, but you can also use a free cloud CI
  • a Play Store account, of course.

The example below uses GitHub + Maven + Travis:

  • go to your Play Store settings > API Access > Create a service account. Open this item in the Google Developers Console and download the .p12 file
  • make sure you know where the .keystore file used to sign your APK is. If you don't have one yet, please visit the Android developers help
  • add your repository to Travis and configure a .travis.yml file in the root folder of your repository
  • install the Travis client with gem install travis
  • base64-encode both the keystore and the .p12 file, then concatenate them (cat key.p12.b64 account.keystore.b64 > credentials.b64). Do not place this file in your repo; move it to the parent folder instead.
  • ask Travis to add it without compromising security (travis encrypt-file ../credentials.b64 --add). You can then add credentials.b64.enc to your repository without fear.
  • have a look at the resulting Travis build file: the openssl command has been added for us, but the build must also be given the keystore and key passwords.
  • there is a tip to unpack the single .enc file back into both the keystore and the .p12 file (.travis.yml lines 19-20): split the file on the == token to know where the p12 file ends and where the keystore begins, base64-decode both parts, and you are set…
  • on the Maven command line, we run the tests (org.jacoco:jacoco-maven-plugin:prepare-agent test org.jacoco:jacoco-maven-plugin:report org.eluder.coveralls:coveralls-maven-plugin:report), then the packaging phase (android:manifest-merger install android:zipalign), then the publishing phase (android:publish-apk), and we pass the keystore and key passwords as parameters, straight from Travis' hidden environment variables
  • now we just need to set up the android-maven-plugin so it knows how it must be configured.
    • The Android manifest versionCode must be incremented for each publish, so we rely on the $TRAVIS_BUILD_NUMBER value as the counter.
    • We ask the plugin to override the manifest from the repository (by specifying the output file name of the manifest-merger target).
    • We must fill in the service account e-mail and the path to the .p12 file.
    • We must provide a path to the keystore, with the following secrets given as properties: keystorepass, keypass.
    • Have a look at the pom.xml file to know more about the challenge.
  • publish a first version of your app if that is not already done (through the classic workflow, with your IDE or bash commands)
  • change any file in your git repository, commit the changes, push them…
  • go to your Travis job, and watch it work
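Putting the steps above together, such a .travis.yml could look roughly like this (a sketch only — the encrypted key names, file paths and Maven goals must match your own setup, and the exact split command depends on how the two base64 files were concatenated):

```yaml
language: android
before_install:
  # decrypt the combined credentials file (the key names are generated
  # by `travis encrypt-file` and are specific to your repository)
  - openssl aes-256-cbc -K $encrypted_0a1b2c3d4e5f_key -iv $encrypted_0a1b2c3d4e5f_iv -in credentials.b64.enc -out credentials.b64 -d
  # split on the '==' base64 padding: the .p12 part ends there,
  # the keystore part follows (the tip from lines 19-20)
  - csplit credentials.b64 '/==/+1'
  - base64 --decode xx00 > ../key.p12
  - base64 --decode xx01 > ../account.keystore
script:
  # tests and coverage, then packaging, then publishing; the secrets come
  # from Travis' hidden environment variables
  - mvn org.jacoco:jacoco-maven-plugin:prepare-agent test org.jacoco:jacoco-maven-plugin:report org.eluder.coveralls:coveralls-maven-plugin:report
  - mvn android:manifest-merger install android:zipalign android:publish-apk -Dkeystorepass=$keystorepass -Dkeypass=$keypass
```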

Wait about 10-15 minutes, and your app is deployed on the Play Store automatically.

From now on, each time you change something in your codebase, it will be published on the Play Store… painlessly.
Now you can say: to deploy on the Play Store, just git push.

Let your code know… your code

Has this ever happened to you?

… you work on a legacy project, and there is this method called `getAccount` which sounds simple.
But this method is a compound one.
1°) It actually calls a database A to fetch a user Id.
2°) Then completes the data with a B Web service based on LDAP
3°) Then completes it with some extra data based on the account preferences from a service C
4°) It updates the database with the found data (yes, "getAccount" writes something… in a legacy project, you will certainly see that)
5°) It returns all the data.

What if you look at getAccount from the caller's point of view? As the caller of the method, you could even do this:

if (getAccount(user) != null) {
    String userName = getAccount(user).getName();
}

Ugh… Did you realize that these few lines will call the external services 3 * 2 = 6 times? That is awful for performance, and it can be harmful (side effects) if getAccount is not idempotent (which is likely in a legacy project).

What you need is not (really) refactoring. The technical debt is too tough to attack right now. What you need IS knowledge about the code. How can you know what is happening in it? Think painless: think about code that understands code…

I have created what I call a DiveDeepAnalyzer. Its aim is to find out which sensitive methods are called, directly or indirectly, by each method.

Write down the list of sensitive calls (web services, LDAP, databases, caches, drivers, hardware), and the algorithm will annotate each and every method that may end up using these calls.
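The heart of such an analyzer is transitive propagation over the call graph. A minimal sketch, in plain Java and independent of Spoon (the method and class names here are made up):

```java
import java.util.*;

// Sketch of the core propagation step: a method is flagged as sensitive
// when it reaches a sensitive call, directly or through other methods.
final class SensitiveCallFinder {
    static Set<String> flag(Map<String, List<String>> callGraph, Set<String> sensitive) {
        Set<String> flagged = new HashSet<>(sensitive);
        boolean changed = true;
        while (changed) { // fixed point: propagate flags up the call graph
            changed = false;
            for (Map.Entry<String, List<String>> method : callGraph.entrySet()) {
                if (!flagged.contains(method.getKey())) {
                    for (String callee : method.getValue()) {
                        if (flagged.contains(callee)) {
                            flagged.add(method.getKey());
                            changed = true;
                            break;
                        }
                    }
                }
            }
        }
        return flagged;
    }
}
```

Each flagged method would then receive the annotation mentioned above; Spoon's job in the real analyzer is to build the call graph and rewrite the sources.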

How to?
1°) Copy this class into your project,
2°) change the params in the main method (you will need to exclude the DiveDeepAnalyzer class itself from the scan),
3°) run the class.

This analyzer uses Spoon, an open-source source-code analysis library made by a French research lab (Inria).

My Git hint: a feature per commit

Have you had a look at your project's git log? It is often an enumeration of changes which are sometimes not explained correctly: "bugfixes", "some changes", "work in progress", "pull request review".

wip git log

The log you would want to see is a list of all the features recently added from the JIRA (or any other issue tracker) board.
Moreover, having each feature grouped into one commit is a way to know all of the changes necessary for each issue (because a git diff will reveal all the changes of a feature at once).

Here is how it can be done :

– As a setup, create this command alias in your .gitconfig file (I assume here that your main branch is called "master".)

singleCommit = "!f() { git pull origin master --no-edit && git push && git reset --soft origin/master && git commit -m \"$1\" && git push -f; }; f"

– create a git branch from "master"
– commit everything you want inside this branch; don't hesitate to commit or push a correction for any mistake you find.
– create a pull request if you need to, review the comments, commit / push, and so on.
– when everything is ready and the code can be merged, find the JIRA code (or your issue tracker's issue id) (we will call it MYPROJECT-XXXX)
– issue this command: git singleCommit "MYPROJECT-XXXX a comment for this feature" (change this text to the aim, or the title, of the feature)
– if the command fails, your branch is not synced with master. Resolve the conflicts, git add . and git commit, then go back to the previous instruction.
– now you can merge. There will be ONE commit for the whole content of the branch.

Jackson: make the most of POJOs versus immutable objects

There is a simple dilemma that comes to my mind:
-> I have an object, right… class A { String b; int c }
-> It must be filled automatically, by unmarshalling it from the JSON string {"b":"val1","c":2}
-> I want it to be immutable, because I don't want my input altered, nor to trust some insecure hashCode/equals methods.

Therefore: I need a deserializable class, with final fields, a comprehensive constructor, and a builder pattern.

That sounds awkward with Jackson. But is it possible? Yes it is, and I'll prove it.
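As an illustration of the target shape, here is a sketch in plain Java. With Jackson you would put @JsonCreator on the constructor and @JsonProperty on its parameters (or, alternatively, @JsonDeserialize(builder = A.Builder.class) on the class with @JsonPOJOBuilder on the builder); the annotations are shown as comments so the snippet compiles without the dependency.

```java
// Immutable, Jackson-friendly shape: final fields, one comprehensive
// constructor, and a builder for manual construction. The Jackson
// annotations are indicated in comments.
final class A {
    private final String b; // final fields: no setters anywhere
    private final int c;

    // @JsonCreator — Jackson would call this constructor directly
    A(/* @JsonProperty("b") */ String b, /* @JsonProperty("c") */ int c) {
        this.b = b;
        this.c = c;
    }

    String getB() { return b; }
    int getC() { return c; }

    // builder pattern for readable manual construction
    static final class Builder {
        private String b;
        private int c;
        Builder b(String b) { this.b = b; return this; }
        Builder c(int c) { this.c = c; return this; }
        A build() { return new A(b, c); }
    }
}
```

The fields being final, the unmarshalled instance can never be altered after construction, which is exactly the guarantee we were after.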

Neat, but let's complicate this attempt a bit. Suppose we have a polymorphic object to deserialize. It is a paradigmatic clash between Java and JSON: Java is typed, JSON does not care.
Let's declare it in a pseudo-DSL: class TripSegment { TransportMode mode; SegmentDetails detailsOfTheSegment }
with several details classes like this one: class LocalTransitDetails implements SegmentDetails { String lineCode; DateTime timeOfDeparture; DateTime timeOfArrival; String departureStationName; String arrivalStationName; }

We want neither to create a TripSegmentBean as a POJO, nor to write a non-Jackson deserializer… So how to?
Here is how I successfully answered that problem:

Now I can handle my unmarshalled value object as if I had built it manually. Forget about Dozer or BeanUtils; let's concentrate on the workflow.


Would you enjoy being able to cURL something within one command line in Java?

It sometimes seems annoying to instantiate an HttpPost, or even an HttpClientBuilder with its fluent interface.

What I suggest is that you take my code sample, enhance it, and turn it into your own customized cURL command. You can even add custom arguments to it, so the commands get shorter to type.

How to use it? Like normal curl!
Response curlResponse = curl(" -X'POST' -H'Content-Type:application/x-www-form-urlencoded' -d 'token=mytoken&channel=mychannel' ''");

There you are, with a curl response that you can easily read through the HttpClient API.
Where is this code sample ? There :
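To illustrate, here is a hypothetical sketch of the parsing half that such a helper needs; the execution half (not shown) would hand these pieces over to the HttpClient API.

```java
import java.util.*;
import java.util.regex.*;

// Hypothetical sketch: parse a curl-style command string into its pieces
// (method, headers, body, URL). CurlParser is a made-up name.
final class CurlParser {
    final String method;
    final Map<String, String> headers = new LinkedHashMap<>();
    final String body;
    final String url;

    CurlParser(String command) {
        String m = first("-X'([^']*)'", command);
        method = (m != null) ? m : "GET"; // curl's default method
        Matcher h = Pattern.compile("-H'([^:']+):([^']*)'").matcher(command);
        while (h.find()) {
            headers.put(h.group(1), h.group(2));
        }
        body = first("-d '([^']*)'", command);
        url = first("'([^']*)'\\s*$", command); // last quoted token is the URL
    }

    private static String first(String regex, String s) {
        Matcher m = Pattern.compile(regex).matcher(s);
        return m.find() ? m.group(1) : null;
    }
}
```

Custom shortcut flags can then be added by rewriting them into their long form before parsing.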


BTW, this blog is now 8 years old, and this is the 200th post… way to go.