git branches : easy way out (Not for purists)

Following up on this old article about git (http://libe.toile-libre.org/?p=1318), here are a few more commands to help you work with git efficiently…
Warning : this is useful when you work with a centralized git version control, where the work is stored on the ‘origin’ remote server.


[alias]
singleCommit = "!f() { git pull origin develop --no-edit && git push && git reset --soft origin/master && git commit -m \"$1\" && git push -f; }; f"
singleCommitAs = "!f() { git pull origin develop --no-edit && git push && git reset --soft origin/master && git commit -m \"$2\" --author \"$1\" && git push -f; }; f"
singleCommitFrom = "!f() { git pull origin $1 --no-edit && git push && git reset --soft $1 && git commit -m \"$2\" && git push -f; }; f"
addToSingleCommit = "!f() { git add \"$1\" && git singleCommit \"$(git log --format=%B -n 1)\"; }; f"

git singleCommit « message » : groups together all the commits made in the current branch, ahead of master, into a single one, and force-pushes the result
git singleCommitAs « someone » « message » : same as git singleCommit, with the ability to commit on someone else's behalf
git singleCommitFrom « branch » « message » : same as git singleCommit, but synchronizes the work with a branch other than master
git addToSingleCommit « path » : adds path to the version control, grabs the last commit message, then groups all the commits ahead of master together into one commit.

Android : one or two flashy tips

Ever wanted to use the same FloatingActionButton for several purposes without popping/replacing it ?
If you need the button color to change smoothly, think of a color-fading animation.
This can be useful if you want to recycle a « play » button as a « stop » button.

You can save yourself the effort of implementing this animation by using this code : https://gist.github.com/anonymous/0e985307c0e1e11c589b8932b2913fd9#file-fromcolortocolor-java
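If you prefer to see the idea inline, here is a minimal sketch of such a fade, using the standard ValueAnimator and ArgbEvaluator (the fab id and the color resources are made up for the example) :

// fade the fab background from one color to another (e.g. play -> stop)
final FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab); // hypothetical id
final int colorFrom = ContextCompat.getColor(this, R.color.playGreen); // hypothetical colors
final int colorTo = ContextCompat.getColor(this, R.color.stopRed);
ValueAnimator animation = ValueAnimator.ofObject(new ArgbEvaluator(), colorFrom, colorTo);
animation.setDuration(300); // milliseconds
animation.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
    @Override
    public void onAnimationUpdate(ValueAnimator animator) {
        // apply the current intermediate color to the fab
        fab.setBackgroundTintList(ColorStateList.valueOf((Integer) animator.getAnimatedValue()));
    }
});
animation.start();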

What if you have a material design app with a tab layout (several tabs), and you want to make it less linear by scrolling the fabs in an original way ?

You can attach a page listener to the ViewPager and implement a different animation for each currently active tab : https://gist.github.com/anonymous/0e985307c0e1e11c589b8932b2913fd9#file-onpagelistener-java
In this example, we have three tabs. In the first tab, the fab pops over the screen. In the second one, the fab scrolls slower than the screen, and in the third one, it scrolls up from the bottom right corner of the screen.
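As a rough sketch of that listener (the tab indices and the fab variable are assumptions here ; the real animations are in the gist) :

viewPager.addOnPageChangeListener(new ViewPager.SimpleOnPageChangeListener() {
    @Override
    public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
        if (position == 0) {
            fab.setTranslationX(0); // first tab : the fab simply pops over the screen
        } else if (position == 1) {
            fab.setTranslationX(-positionOffsetPixels / 2f); // second tab : parallax, slower than the page
        }
    }
    @Override
    public void onPageSelected(int position) {
        if (position == 2) {
            // third tab : slide the fab up from the bottom right corner
            fab.setTranslationY(fab.getHeight() * 2);
            fab.animate().translationY(0).setDuration(300).start();
        }
    }
});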

Feel free to use these snippets. If you want to see these animations in action, just download my app at https://play.google.com/store/apps/details?id=org.toilelibre.libe.athg2sms.

Thanks for reading me and happy 2017.

Java : How to speak about events in a material world

The title of this article deliberately does not sound IT-related, even though it speaks about Java. I just wanted it to appeal to common sense.

Most people writing software today only speak about what we have (the objects), and not about what we know (the events). And sometimes they refuse to consider the meaning of that knowledge (with its history) in favor of its content alone.


But sometimes that history is precisely what is useful.

« History is not important » means « Giving the money to the wrong person is not important ». Why do I say that ?
If we write a bank check of $100 to a Mr John and then a malicious person adds an ‘s’ to the recipient, it becomes Mr Johns. We give the money anyway, and we don't know that we gave it to the wrong person… In fact, our bank account does not notice the problem, we are simply debited $100. We were attacked and we just lost all data about the attack.

What is the underlying problem ? We lose track of known data just because the state has changed.
How to deal with it ? Well, it is quite simple : past history won't change.

Therefore, storing the past data ensures everything gets tracked. The bank check is photocopied before and after each event (like someone appending an ‘s’ to a family name). Thus we can validate that nothing wrong happened during the process by scanning the bank check history.

How to do it ? You can implement it on your own, but that is a lot of boilerplate code.
I have tried the JaVers library and wrote a very small webapp maintaining an in-memory user api (https://github.com/libetl/test-with-javers)

Things stay simple : keep persisting your data as you would in a non-event-sourced program, and at the last moment let JaVers plug into the repository as follows (the @JaversSpringDataAuditable annotation is what wires the auditing in) :
@JaversSpringDataAuditable
interface PersonRepository extends MongoRepository<Person, String> {}

You are now able to query data as usual, but also examine the changes in data and read some old snapshots : (https://github.com/libetl/test-with-javers/blob/master/src/main/java/org/toilelibre/libe/person/PersonResource.java).
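For the record, here is roughly what the reading side looks like with the JaVers JQL API (a sketch ; javers is the injected bean and Person is the audited entity of the linked project) :

import org.javers.core.Javers;
import org.javers.core.diff.Change;
import org.javers.core.metamodel.object.CdoSnapshot;
import org.javers.repository.jql.QueryBuilder;
import java.util.List;

public class PersonHistory {
    private final Javers javers;

    public PersonHistory(Javers javers) { this.javers = javers; }

    // every state the person went through, newest first
    public List<CdoSnapshot> snapshots(String personId) {
        return javers.findSnapshots(QueryBuilder.byInstanceId(personId, Person.class).build());
    }

    // every individual property change, e.g. 'to' : 'Mr John' -> 'Mr Johns'
    public List<Change> changes(String personId) {
        return javers.findChanges(QueryBuilder.byInstanceId(personId, Person.class).build());
    }
}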

Then, reading the current state of the bank check is just a GET on http://myapi.com/bankcheck/01theid23
Reading the old values of the bank check is as easy as GETting http://myapi.com/bankcheck/01theid23/snapshots/1

…And you can figure out that someone replaced « Mr John » by « Mr Johns » by reading http://myapi.com/bankcheck/01theid23/changes/1 !
The response will look like this :
1. ValueChange{
globalId:'com.myapi.bankcheck/01theid23',
property:'to', oldVal:'Mr John', newVal:'Mr Johns'}

Then you will go beyond the objects themselves, be less materialistic, and care only about the facts… which is what really matters to your users.

Android : what if « git push » becomes your Play Store publish command

Ever wondered why you need to repeat these tasks every single time (in the screenshots below) ?
[screenshots : building the APK from IntelliJ, then uploading it to the Play Store by hand]

Not only do you risk wasting time, but you can also make mistakes through a lack of concentration (publishing to prod instead of alpha), or even forget to publish intermediate updates of your app. Your users might lose patience while waiting for fixes or features.

Now think about how devops teams work. The developers change a piece of code, then ensure nothing is broken, then let the software factory prepare, package, deploy and monitor the program.

How to achieve this, with neither money nor time ?
Start by using :

  • a source control management system : either a private one, or BitBucket, or GitHub.
  • an Application Lifecycle Management tool : maven or gradle
  • a Continuous Integration service : sure there is Jenkins, but you can use a free cloud CI : http://travis-ci.org
  • a Play Store account, of course.


This example below will use GitHub + maven + Travis :

  • go to your play store params > API Access > Create a service account. Display this item in the Google Developers Console and download the .p12 file
  • make sure you know where your .keystore file is to sign your APK. If you haven’t got one yet, please visit the android developers help https://developer.android.com/studio/publish/app-signing.html
  • add your repository to travis and configure a .travis.yml file in the root folder of your repository
  • install the travis client with gem install travis
  • base64 encode both the keystore and the p12 file, then concat them together (cat key.p12.b64 account.keystore.b64 > credentials.b64). Do not place the file in your repo, instead move it to the parent folder.
  • ask travis to add it securely without compromising the security (travis encrypt-file ../credentials.b64 --add). You can then add credentials.b64.enc to your repository without fear.
  • have a look at this travis build file : https://github.com/libetl/singIn/blob/alpha/.travis.yml. It contains the openssl command that travis asked us to add, but the ALM tool must also be made aware of the keystore and keypass passwords.
  • There is a trick to unpack the single .enc file into both the keystore and the .p12 file (.travis.yml lines 19-20) : split the file on the == token to know where the p12 file ends and where the keystore begins, base64 decode them both, and you are set…
  • on the maven command line, we run the tests (org.jacoco:jacoco-maven-plugin:prepare-agent test org.jacoco:jacoco-maven-plugin:report org.eluder.coveralls:coveralls-maven-plugin:report), then the packaging phase (android:manifest-merger install android:zipalign), then the publishing phase (android:publish-apk), and we pass the keystore and keypass secrets as parameters right from the travis hidden vars settings
  • now we just need to set up the android-maven-plugin accordingly.
    • The android manifest versionCode must be incremented for each publish, therefore we rely on the $TRAVIS_BUILD_NUMBER value for the counter.
    • We ask the plugin to override the manifest from the repository (by specifying the output file name of the manifest-merger target).
    • We must fill in the service account e-mail and the path to the .p12 file
    • We must provide a path to the keystore, with the following secrets given as properties : keystorepass, keypass
    • Have a look at this pom.xml file to know more about the challenge : https://github.com/libetl/singIn/blob/alpha/pom.xml
  • publish a first version of your app if it is not already done (by the classic workflow with your IDE or bash commands)
  • change any file on your git repository, commit the changes, push them…
  • go to your travis app job, and see it working : https://travis-ci.org/libetl/singIn/builds/171238769

Wait about 10-15 minutes, and your app is deployed on the play store automatically :
[screenshot : the new version published automatically on the Play Store]

From now on, each time you change something in your codebase, it will be published on the Play Store… painlessly.
Now you can say : to deploy on the Play Store, just do git push

Let your code know… your code

Has this ever happened to you ?

… you work on a legacy project, and there is this method called `getAccount` which sounds simple.
But this method is a compound one.
1°) It actually calls a database A to fetch a user Id.
2°) Then completes the data with a B Web service based on LDAP
3°) Then completes it with some extra data based on the account preferences from a service C
4°) It updates the database with the found data (yes, « getAccount » writes something… in a legacy project, you will certainly see that)
5°) It returns all the data.

Now what if you look at getAccount from the caller's point of view ? As the caller, you could even write this :

if (getAccount (user) != null) {
    String userName = getAccount (user).getName ();
}

Ugh… Did you realize that these few lines will call the external services 3 * 2 = 6 times (three services per call, two calls)… That is awful for performance, and can be harmful (= with side effects) if getAccount is not idempotent (which is likely in a legacy project)

What you need is not (really) refactoring. The technical debt is too tough to be attacked right now. What you need IS knowledge about the code. How can you know what is happening in the code ? Think painless : use code to understand code…

I have created what I call a DiveDeepAnalyzer. Its aim is to understand which sensitive methods are called, directly or indirectly, by each method.

https://gist.github.com/libetl/00bc6079b3dd91af55bb6cf8229e942a

Write down the list of sensitive calls (webservices, LDAP, databases, cache, drivers, hardware), and the algorithm will annotate each and every method that is likely to use these calls.

How to ?
1°) Copy this class in your project,
2°) change the params in the main method (you will need to exclude the DiveDeepAnalyzer class itself from the scan)
3°) run the class.

This analyzer uses Spoon, an open-source library made by a French research lab (Inria). You can find more about Spoon at this address : http://spoon.gforge.inria.fr/
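To give an idea of what the gist does, here is a stripped-down sketch of the scan with Spoon (direct calls only ; the real class also propagates the information transitively, and the signature list is a made-up example) :

import spoon.Launcher;
import spoon.reflect.CtModel;
import spoon.reflect.code.CtInvocation;
import spoon.reflect.declaration.CtMethod;
import spoon.reflect.visitor.filter.TypeFilter;
import java.util.Arrays;
import java.util.List;

public class SensitiveCallScanner {
    // hypothetical sensitive method names (adapt to your webservices, LDAP, databases…)
    private static final List<String> SENSITIVE = Arrays.asList(
            "executeQuery", "search", "postForObject");

    public static void main(String[] args) {
        Launcher launcher = new Launcher();
        launcher.addInputResource("src/main/java"); // scan the whole source tree
        CtModel model = launcher.buildModel();

        // for each method, list the sensitive invocations it contains directly
        for (CtMethod<?> method : model.getElements(new TypeFilter<>(CtMethod.class))) {
            for (CtInvocation<?> call : method.getElements(new TypeFilter<>(CtInvocation.class))) {
                if (SENSITIVE.contains(call.getExecutable().getSimpleName())) {
                    System.out.println(method.getSignature() + " -> " + call.getExecutable().getSignature());
                }
            }
        }
    }
}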

My Git hint : a feature per commit

Have you had a look at your project's git log ? It is often an enumeration of changes which are sometimes poorly explained… « bugfixes », « some changes », « work in progress », « pull request review ».

[screenshot : a git log full of « work in progress » commits]

What you would want to see instead is a log of all the features recently added, matching the JIRA (or any other issue tracker) board.
Moreover, having each feature grouped in one commit is a way to know all the changes required by each issue (a single git diff will reveal all the changes of a feature at once)

Here is how it can be done :

– As a setup, create this command alias in your .gitconfig file (I assume here that your main branch is called « master »)

[alias]
singleCommit = "!f() { git pull origin master --no-edit && git push && git reset --soft origin/master && git commit -m \"$1\" && git push -f; }; f"

– create a git branch from « master »
– commit everything you want inside this branch, don’t hesitate to commit or push any correction for a mistake you found.
– create a pull request if you need to, review the comments, commit / push, and so on.
– when everything is ready and the code is ready to be merged, find the JIRA code (or your issue tracker issue id) (we will name it MYPROJECT-XXXX)
– issue this command : git singleCommit "MYPROJECT-XXXX a comment for this feature" (change this text to the aim, or the title of the feature)
– if the command fails, your branch is not synced with master. Resolve the conflicts, git add . and git commit, then go back to the previous step.
– now you can merge. There will be ONE commit for all the content of the branch.

Jackson : Make the most of POJO versus Immutable Objects

Here is a simple dilemma that keeps coming to my mind :
-> I have an object, right… class A { String b; int c }
-> It must be filled automatically by unmarshalling the json string {"b":"val1","c":2}
-> I want it to be immutable, because I don't want my input to be altered, nor to trust some insecure hashcode/equals methods.

Therefore : I must have a deserializable class with final fields, an all-args constructor, and a builder pattern.

That sounds awkward with jackson. But is it possible ? Yes, it is, and I’ll prove it.
https://gist.github.com/libetl/853029faf7999c98159f36d1c229c961#file-a-java
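Just to summarize the trick outside the gist : final fields plus a @JsonCreator constructor are enough for jackson to build the object without any setter. A minimal sketch :

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class A {
    private final String b;
    private final int c;

    @JsonCreator
    public A(@JsonProperty("b") String b, @JsonProperty("c") int c) {
        this.b = b;
        this.c = c;
    }

    public String getB() { return b; }
    public int getC() { return c; }

    public static void main(String[] args) throws Exception {
        // the input is never altered, and A has no setter at all
        A a = new ObjectMapper().readValue("{\"b\":\"val1\",\"c\":2}", A.class);
        System.out.println(a.getB() + " / " + a.getC()); // val1 / 2
    }
}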

Neat, but let's complicate this attempt a bit. Suppose we have a polymorphic object to deserialize. It is a paradigmatic issue between Java and Json : Java is typed, Json does not care.
Let's declare this in pseudo DSL : class TripSegment { TransportMode mode; SegmentDetails detailsOfTheSegment }
with enum TransportMode { FLIGHT, TRAIN, BOAT, CAR_SHARING, LOCAL_TRANSIT }
and with several details classes like this one : class LocalTransitDetails implements SegmentDetails { String lineCode; DateTime timeOfDeparture; DateTime timeOfArrival; String departureStationName; String arrivalStationName; }

We want neither to create a TripSegmentBean POJO, nor to write a non-jackson deserializer… So how ?
Here is how I managed to answer that problem :
https://gist.github.com/libetl/853029faf7999c98159f36d1c229c961#file-tripsegment-java
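For comparison, here is one possible way to wire it with plain jackson annotations, using the mode field as an external type discriminator (a sketch ; FlightDetails is a hypothetical sibling of LocalTransitDetails, and the gist shows my actual solution) :

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

public final class TripSegment {
    public final TransportMode mode;

    // the sibling "mode" value tells jackson which SegmentDetails subclass to build
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
                  include = JsonTypeInfo.As.EXTERNAL_PROPERTY, property = "mode")
    @JsonSubTypes({
        @JsonSubTypes.Type(value = LocalTransitDetails.class, name = "LOCAL_TRANSIT"),
        @JsonSubTypes.Type(value = FlightDetails.class, name = "FLIGHT") // hypothetical class
    })
    public final SegmentDetails detailsOfTheSegment;

    @JsonCreator
    public TripSegment(@JsonProperty("mode") TransportMode mode,
                       @JsonProperty("detailsOfTheSegment") SegmentDetails detailsOfTheSegment) {
        this.mode = mode;
        this.detailsOfTheSegment = detailsOfTheSegment;
    }
}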

Now I can handle my unmarshalled value object as if I had built it manually. Forget about dozer or BeanUtils, and let's concentrate on the workflow.

curl4j

Would you enjoy being able to cURL something within one command line in java ?

It sometimes seems annoying to instantiate an HttpPost, or even an HttpClientBuilder with its fluent interface.

What I suggest is to take my code sample, enhance it, and turn it into your own customized cURL command. You can even add custom arguments to it, so the commands will be shorter to type.

How to use it ? Like a normal curl !
Response curlResponse = curl(" -X'POST' -H'Content-Type:application/x-www-form-urlencoded' -d 'token=mytoken&channel=mychannel' 'https://slack.com/api/chat.postMessage'");

Here you are, with a curl response that you are able to read easily with the HttpClient API.
Where is this code sample ? There : https://gist.github.com/libetl/edd72cca5464aa403395029430360344
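Assuming the curl method returns an Apache HttpClient response, as the gist suggests (the exact return type is an assumption, check the gist for the real signature), reading it boils down to something like this :

// curl( ) statically imported from the gist's class
org.apache.http.HttpResponse curlResponse = curl(
        " -X'POST' -H'Content-Type:application/x-www-form-urlencoded'"
        + " -d 'token=mytoken&channel=mychannel' 'https://slack.com/api/chat.postMessage'");
int status = curlResponse.getStatusLine().getStatusCode(); // e.g. 200
String body = org.apache.http.util.EntityUtils.toString(curlResponse.getEntity()); // the raw payload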

EDIT : 2016-07-22 : https://github.com/libetl/curl and https://mvnrepository.com/artifact/org.toile-libre.libe/curl

BTW, this blog is now 8 years old, and this is the 200th post… way to go.

Convert List(a,b,c,d) into Map(a->b,c->d) in Java 8

Let input be a List of String ; you want it to become a Map<String, String> where even-indexed elements are the keys and odd-indexed elements are the values. This works only when all the elements are unique (toMap rejects duplicate keys, and indexOf would return the wrong position otherwise).

Map<String, String> mappedValues = input.stream()
        .collect(Collectors.toMap(Function.identity(), input::indexOf))
        .entrySet().stream()
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                entry -> entry.getValue() == input.size() - 1 ? -1 : entry.getValue() + 1))
        .entrySet().stream()
        .filter(entry -> entry.getValue() % 2 == 1)
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                entry -> input.get(entry.getValue())));
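A quick sanity check with hypothetical values :

List<String> input = Arrays.asList("a", "b", "c", "d");
// run the one-liner above, then :
System.out.println(mappedValues); // prints {a=b, c=d} (entry order may vary)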

My opinion on the « Best-so-far » architecture

This post is not to talk about something I have made, but to talk about something I have lived with.

1. What is it ?

« Best-so-far » (or BSF) means to work with the latest pieces of software each and every day.

It relies on several IT infrastructure components : a software factory, a source code management, a build scheduler and an integration monitoring.

The software factory

It is a piece of software whose role is to collect all the libraries across a company and to release the binaries for the employees, partners and customers.
It can be implemented by a storage access on a platform, or a machine, or a shared drive.
It also contains archived binaries of the previous versions of each component (for redundancy purposes), but is mostly focused on providing the latest framework chunks, with the latest features and the latest bugfixes.
All of these binaries can be used dynamically to build and release other binaries, and so on…

The SCM (source code management)

It is a set of tools to streamline branch management within a team (allowing several users to commit on the same project with the fewest possible conflicts).
Most teams like to use standard tools such as git, svn, mercurial, cvs, ClearCase, perforce or bazaar. Some others wanted to create their own SCM tool (thinking of adl, http://www.maruf.ca/files/caadoc/CAAWsmQuickRefs/).
It is often managed by having one single integration branch where every team member has to merge in after finishing a task.

The build scheduler

It has two modules : a batch whose role is to execute some actions on some files, and a UI to help the users administer the jobs to be run each day.
It takes the latest version from « the SCM », builds it, and installs the packages on « the software factory ». It also records some details about the errors found while trying to build or release something.

The integration monitoring

It is a UI tool with red and green lights informing all the employees that a build failed or succeeded. The most common standard tools are Jenkins, Bamboo, CircleCI, CruiseControl, or Drone. Some companies prefer to create their own, with better or worse usability (thinking of ReleaseWork). The integration monitoring collects data from « the SCM », « the build scheduler » and « the software factory » to give an insight into what is working and what is not.

2. Why do we use it ?

The thinking behind this implementation is to wonder how different dev teams across a company can work on the same product without suffering from waiting for each other. The expectation is to « break the silos », and to get the teams into a rhythm.
Putting a dev team downstream of another in terms of software dependencies pushes the dependent team to update its code as soon as possible. For example, a team building a polygon extrusion lib will update its dependencies every day to benefit from the work of the maths library team. As a consequence, each lib will ship with the « Best-so-far » state of every other lib when it comes to releasing to the employees, partners and customers. And so on…

3. Is it good ? I don’t know. Is it easy ? NO, it is NOT.

This is where I will express my opinion of that practice.
I have worked for a company that has used this practice since its creation. The « break the silos » fantasy is far away from reality.
I have never seen a company with such a partitioned way of thinking.
What went wrong is that the work streams managing the frameworks were separated from those managing the BSF architecture.
As a result, a lot of improvements in this architecture landed in total contradiction with the dev teams' practices :
– The team handling the integration monitoring is pressured to release the build. The team members try to identify who is late by looking at who has worked on « the SCM ». They identify culprits, without any consideration for team responsibility. This team is not appreciated, because the developers see it as a bathroom attendant. « Have you fixed your issue ? The next release must be submitted by noon. I won't leave the office until it is fixed and working »
– The team working on « the SCM » delivers features that are useful to some users. Not all. All the other teams are forced to work on ONE SCM that does not fit their needs (if the company uses cvs instead of git, no one is allowed to work with git, and the Internet proxy must forbid any request sent to github or bitbucket).
– « The software factory » uses a common repository which must work with any kind of project lifecycle tool. You must be able to package Java classes, C++ projects or DSL scripts at once and to send them to the same storage. Do we know a project lifecycle tool that works with anything ? Yes, but it is really primitive : « make » can do the job. That was the most awkward answer I could ever think of. To understand it, just imagine that I work on a Java project but I have to build and release my software using the « make » C++ tool. And last but not least, just imagine that the chosen storage can be as clever as a Windows share (that is, a Samba share) !
– « The build scheduler » is maybe the tool which is the least risky to maintain, because the responsibility lies with the dev teams. It basically consists in launching some tasks and displaying the logs. Here, nothing is specific to an environment : Windows shells are launched, that is all. Neither any JUnit parsing, nor any performance report.

Last but not least,

what is the most disturbing item for developers in the BSF way ?

→ You are not going to believe it, but it is the principle itself. Working on snapshots is at once a good and a bad practice. Imagine that your company makes yogurts. It wants to rely on a single milk provider, considered the most talented one locally. « We don't need fallback providers, we use the best one, the best-so-far ». Now imagine that a brutal and unexpected disease kills the whole livestock in one night. How is the company going to adapt to that situation ?
The same goes for a company with a « Best-so-far » architecture. You cannot work with any other provider, and you cannot work with older but functional versions of the frameworks.
The activity stops in the hope that the situation will get better. In my opinion, that explains why my previous company decided to release one product version per year at most.