First of all, forgive me for this, but I am furious. Listen up, people who work for temp agencies and related folks: I am not going to take any more of those tests that supposedly measure real programming knowledge. They are not realistic; they are usually based on the typical little programs we wrote at university to learn how to program. They do not interest me, they bore me, and worst of all, they measure nothing at all.

To demonstrate my skills I have my GitHub account, which holds the code for what I know how to do and like most, namely, programming backend systems that run on servers 24/7/365. What is that? Think of all the apps out there, the ones we install on our Android or iPhone devices: those apps need to communicate with at least one server to do their job. At least the more complex applications do, because not all of them need to communicate with the outside world. What I know how to do, better or worse than others, is program applications for the application server(s).

I can also program iOS applications. In theory, knowing Java, I could program for Android, but I own an iPhone 4. I understand HTML5/CSS3/jQuery code and I love the open source movement, so I will be more receptive to offers if I am going to work with open source code, although I am not a zealot: if something closed source is better than the open source alternative, I acknowledge it and use it. In fact, for day-to-day work I use a late 2013 MacBook Pro Retina, and I also use Ubuntu or Red Hat when I want to work on something related to big data, such as programming map reduce tasks on Apache Hadoop and/or Apache Spark.




The idea behind this project is to provide an example of a secured web service, built in the REST architectural style with the command and composite command patterns, connected to both a MongoDB instance and a MySQL instance: the best of both worlds.
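As a rough illustration of the composite command idea mentioned above (a hedged sketch with hypothetical names, not the project's actual classes), a composite command simply runs its child commands in order:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the command / composite command patterns;
// the names are illustrative, not the project's actual classes.
interface Command {
    void execute();
}

class CompositeCommand implements Command {
    private final List<Command> children = new ArrayList<Command>();

    void add(Command command) {
        children.add(command);
    }

    @Override
    public void execute() {
        // a composite command executes its children in order
        for (Command command : children) {
            command.execute();
        }
    }
}
```

The nice property is that callers treat a single command and a whole batch of commands uniformly, through the same interface.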

There are at least two ways in the Java world to connect to a MongoDB instance: you can choose the spring-data-mongodb project, or Morphia. Both are very easy to use; you only need to create an interface that extends the right base type and that's it!

Using the Morphia way, you have to declare an interface like this:

package com.aironman.sample.dao;

import org.bson.types.ObjectId;

import com.aironman.sample.dao.model.Employee;

/**
 * Date: 12 June 2014
 * @author Konrad Malawski
 * @author Alonso Isidoro
 */
public interface EmployeeDao extends org.mongodb.morphia.dao.DAO<Employee, ObjectId> {
}

And its implementation file:

package com.aironman.sample.dao;

import org.bson.types.ObjectId;
import org.mongodb.morphia.Morphia;
import org.mongodb.morphia.dao.BasicDAO;

import com.aironman.sample.dao.EmployeeDao;
import com.aironman.sample.dao.model.Employee;
import com.mongodb.Mongo;

/**
 * Date: 12 June 2014
 * @author Konrad Malawski
 * @author Alonso Isidoro
 */
public class EmployeeDaoMorphiaImpl extends BasicDAO<Employee, ObjectId> implements EmployeeDao {

    public EmployeeDaoMorphiaImpl(Morphia morphia, Mongo mongo, String dbName) {
        super(mongo, morphia, dbName);
    }
}

Super easy!
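Wiring it all together might look like this (a hedged sketch: the database name and the Employee setters are hypothetical; only the DAO wiring mirrors the classes above):

```java
import org.mongodb.morphia.Morphia;

import com.mongodb.MongoClient;

// Hypothetical wiring sketch: "sampledb" and the Employee setters are
// assumptions, not taken from the project.
public class MorphiaWiring {

    public static void main(String[] args) throws Exception {
        Morphia morphia = new Morphia();
        morphia.map(Employee.class); // register the mapped entity

        // MongoClient extends the legacy Mongo class the DAO constructor expects
        MongoClient mongo = new MongoClient("localhost", 27017);

        EmployeeDao employeeDao = new EmployeeDaoMorphiaImpl(morphia, mongo, "sampledb");

        Employee employee = new Employee();
        employee.setFirstName("Ada");
        employee.setLastName("Lovelace");
        employeeDao.save(employee); // BasicDAO already provides save, get, find, delete...
    }
}
```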

What if you want to use the spring-data-mongodb project? An interface, and that is all!

package com.aironman.sample.mongo.repository;

import org.springframework.data.mongodb.repository.MongoRepository;

import com.aironman.sample.mongo.documents.Role;

public interface RoleRepository extends MongoRepository<Role, String> {
}

And what about JPA?

package com.aironman.sample.dao;

import org.springframework.data.repository.CrudRepository;

import com.aironman.sample.dao.model.User;

/**
 * User: aironman
 * Date: 4 June 2014
 */
public interface UserDao extends CrudRepository<User, Long> {
}
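A hedged usage sketch (the service class and its names are hypothetical; Spring Data generates the repository implementation at runtime, so no implementation class is ever written by hand):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.aironman.sample.dao.UserDao;
import com.aironman.sample.dao.model.User;

// Hypothetical service, just to show how the generated repository is consumed
@Service
public class UserService {

    @Autowired
    private UserDao userDao;

    public User register(User user) {
        // CrudRepository already provides save, findOne, findAll, delete...
        return userDao.save(user);
    }
}
```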


The most important thing when using this NoSQL technology is to design the MongoDB document wisely. It is stored in JSON format, don't forget that, and depending on the wrapper technology chosen, Spring or Morphia, the way to build one differs. For example, the Morphia document:

The Employee class, modeled with Morphia:

@Entity(value = "employees", noClassnameStored = true)
public class Employee {

    @Id
    private ObjectId id;

    private String firstName;
    private String lastName; // value types are automatically persisted

    Long salary; // only non-null values are stored

    @Embedded
    Address address;

    @Reference
    Employee manager; // refs are stored*, and loaded automatically

    @Reference
    List<Employee> underlings; // interfaces are supported

//    @Serialized
//    EncryptedReviews encryptedReviews; // stored in one binary field

    @Property("started")
    Date startDate; // fields can be renamed

    @Property("left")
    Date endDate;

    @Indexed
    boolean active = false; // fields can be indexed for better performance

    @NotSaved
    String readButNotStored; // fields can be read, but not saved

    @Transient
    int notStored; // fields can be ignored (load/save)

    transient boolean stored = true; // not @Transient, will be ignored by Serialization/GWT for example

    // getters and setters
}



Now a Spring Data document class:

@Document
public class Role {

    @Id
    private String id;

    public Role() {
    }

    public Role(String id) {
        this.id = id;
    }

    // getters, setters, hashCode and equals methods...
}


What are the differences? The annotation: org.springframework.data.mongodb.core.mapping.Document for spring-data and org.mongodb.morphia.annotations.Entity for Morphia, that's all.

The JPA POJO used in this example is the User class, with a different @Entity annotation.


@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String firstName;
    private String lastName;
    private String email;

    // getters and setters
}


That is the difficult part; enjoy the rest!



The source code is located in



Last week I did an interview with a big video game company, King, probably *the* casual video games company. The point is they want somebody with strong backend skills, so here I am, I thought! I have some skills in the backend layer; I know the Spring framework very well, ORM, SQL, NoSQL, performance, multithreading, asynchronous tasks, big data technology, etc. Those were my thoughts: I had an opportunity, but they demand know-how about PicoContainer. Bad luck, Alonso…

Well, now I know that I need to learn something about PicoContainer, so stay tuned for the next post about this interesting technology.


Currently I am still available for hire.

This is a draft of my next task: periodically ask Twitter, through the twitter4j API, for relevant things; to begin with, my timeline and the trending topics. I am going to create a web service and then integrate that functionality with a RabbitMQ topic using Spring Integration and a WebSocket managed by a controller, so I can display the relevant info in real time in a browser.
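The polling part might be sketched with twitter4j like this (a sketch, not the final web service: credentials are expected in twitter4j.properties, and WOEID 1 means worldwide trends):

```java
import twitter4j.Status;
import twitter4j.Trend;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;

public class TwitterProbe {

    public static void main(String[] args) throws TwitterException {
        // reads consumer key/secret and access token/secret from twitter4j.properties
        Twitter twitter = TwitterFactory.getSingleton();

        // my home timeline
        for (Status status : twitter.getHomeTimeline()) {
            System.out.println("@" + status.getUser().getScreenName() + ": " + status.getText());
        }

        // trending topics; 1 is the WOEID for worldwide
        for (Trend trend : twitter.getPlaceTrends(1).getTrends()) {
            System.out.println(trend.getName());
        }
    }
}
```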

Stay tuned!

Update, 21 May 2014

Twitter has a very restrictive policy about the use of its API, a usual matter, but I consider it very restrictive because I am getting some weird exceptions. A few days ago I did not get any of these, but now I think I am banned! grrrrr


Failed to delete status: 401:Authentication credentials ( were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.

message – Could not authenticate you

code – 32


401:Authentication credentials ( were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.

message – Could not authenticate you

code – 32


Relevant discussions can be found on the Internet at: or

TwitterException{exceptionCode=[c8fb4e9c-7bffc794], statusCode=401, message=Could not authenticate you, code=32, retryAfter=-1, rateLimitStatus=null, version=3.0.6-SNAPSHOT}

at twitter4j.HttpClientImpl.request(

at twitter4j.HttpClientWrapper.request(

at twitter4j.HttpClientWrapper.get(

at twitter4j.TwitterImpl.get(

at twitter4j.TwitterImpl.showUser(

at twitter4j.examples.user.ShowUser.main(




Finally I can continue with this post: a sample with a big data technology, for example, a Java map reduce task running on Apache Hadoop.

First of all, you need to install Hadoop, and I have to say that it is not trivial. Depending on your OS, you may install it with apt, yum, brew, etc., or, like I did, by downloading a VMware image with all the necessary stuff. There are some providers, like Cloudera or IBM BigInsights. I chose the latter because I learned big data concepts through an initiative from IBM.

Once you have downloaded the BigInsights VMware image, you can boot it, log in with biadmin/biadmin and click on the Start BigInsights button; after a few minutes, Hadoop will be up and running. Go to http://bivm:8080/data/html/index.html#redirect-welcome in the VM's Firefox and you can see it.

Once you have a Hadoop cluster to play with, it is time to code something. But first, you need to analyze the text. I put in a small sample here, but real data sets are terabytes, exabytes or more: thousands of billions of lines in this format:

id ; Agente Registrador ; Total dominios;
1  ; 1&1 Internet       ; 382.972;
36 ; WEIS CONSULTING    ; 4.154;


This is the mapper, the purpose of the mapper is to create a list with keys and values.


import java.io.IOException;

import org.apache.commons.lang.math.NumberUtils;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DominiosRegistradorMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {

    private static final String SEPARATOR = ";";

    public void map(LongWritable key, Text value, Context context) throws IOException,
            InterruptedException {
        /*
         * Each input line looks like:
         *
         * id ; Agente Registrador ; Total dominios;
         * 1  ; 1&1 Internet       ; 382.972;
         * 36 ; WEIS CONSULTING    ; 4.154;
         */
        final String[] values = value.toString().split(SEPARATOR);
        final String agente = format(values[1]);
        final String totalDominios = format(values[2]);
        // only emit a pair when the third column is really a number (skips the header line)
        if (NumberUtils.isNumber(totalDominios)) {
            context.write(new Text(agente), new DoubleWritable(NumberUtils.toDouble(totalDominios)));
        }
    }

    private String format(String value) {
        return value.trim();
    }
}




This is the reducer:

import java.io.IOException;
import java.text.DecimalFormat;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class DominiosRegistradorReducer extends Reducer<Text, DoubleWritable, Text, Text> {

    private final DecimalFormat decimalFormat = new DecimalFormat("#.###");

    public void reduce(Text key, Iterable<DoubleWritable> totalDominiosValues, Context context)
            throws IOException, InterruptedException {
        double maxTotalDominios = 0.0;
        for (DoubleWritable totalDominiosValue : totalDominiosValues) {
            final double total = totalDominiosValue.get();
            // I need to keep the largest number of domains seen for this agent
            maxTotalDominios = Math.max(maxTotalDominios, total);
        }
        context.write(key, new Text(decimalFormat.format(maxTotalDominios)));
    }
}



This is the main class:

import java.io.IOException;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class App extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("DominiosRegistradorManager required params: {input file} {output dir}");
            return -1;
        }

        deleteOutputFileIfExists(args);

        final Job job = new Job(getConf(), "DominiosRegistradorManager");
        job.setJarByClass(App.class);

        job.setMapperClass(DominiosRegistradorMapper.class);
        job.setReducerClass(DominiosRegistradorReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DoubleWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);

        return 0;
    }

    private void deleteOutputFileIfExists(String[] args) throws IOException {
        final Path output = new Path(args[1]);
        FileSystem.get(output.toUri(), getConf()).delete(output, true);
    }

    public static void main(String[] args) throws Exception {
        ToolRunner.run(new App(), args);
    }
}



Now that you have a glimpse of the code, you can download it and import it into your Eclipse. Once imported, you need to create a jar. With that jar and the cluster online you are almost ready to launch the code, but first you probably need to import the huge data file into your cluster. I recommend using the browser for that: click on Start BigInsights if you have not done so already, open the BigInsights web console and click on Files. On the left you can see an HDFS tree, that is, the Hadoop file system. Expand it until /user/biadmin/ and create a directory, for example inputMR, so that you see /user/biadmin/inputMR in your tree. You must upload the example file to that directory. You need to create an outputMR directory as well.

[biadmin@bivm ~]$ hadoop jar nameOfYourJar.jar /user/biadmin/inputMR /user/biadmin/outputMR
14/05/12 12:09:24 INFO input.FileInputFormat: Total input paths to process : 2
14/05/12 12:09:24 WARN snappy.LoadSnappy: Snappy native library is available
14/05/12 12:09:24 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/05/12 12:09:24 INFO snappy.LoadSnappy: Snappy native library loaded
14/05/12 12:09:24 INFO mapred.JobClient: Running job: job_201405121126_0059
14/05/12 12:09:25 INFO mapred.JobClient: map 0% reduce 0%
14/05/12 12:09:31 INFO mapred.JobClient: map 50% reduce 0%
14/05/12 12:09:34 INFO mapred.JobClient: map 100% reduce 0%
14/05/12 12:09:43 INFO mapred.JobClient: map 100% reduce 100%
14/05/12 12:09:44 INFO mapred.JobClient: Job complete: job_201405121126_0059
14/05/12 12:09:44 INFO mapred.JobClient: Counters: 29
14/05/12 12:09:44 INFO mapred.JobClient: Job Counters
14/05/12 12:09:44 INFO mapred.JobClient: Data-local map tasks=2
14/05/12 12:09:44 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=8827
14/05/12 12:09:44 INFO mapred.JobClient: Launched map tasks=2
14/05/12 12:09:44 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/05/12 12:09:44 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/05/12 12:09:44 INFO mapred.JobClient: Launched reduce tasks=1
14/05/12 12:09:44 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10952
14/05/12 12:09:44 INFO mapred.JobClient: File Input Format Counters
14/05/12 12:09:44 INFO mapred.JobClient: Bytes Read=197
14/05/12 12:09:44 INFO mapred.JobClient: File Output Format Counters
14/05/12 12:09:44 INFO mapred.JobClient: Bytes Written=19
14/05/12 12:09:44 INFO mapred.JobClient: FileSystemCounters
14/05/12 12:09:44 INFO mapred.JobClient: HDFS_BYTES_READ=413
14/05/12 12:09:44 INFO mapred.JobClient: FILE_BYTES_WRITTEN=76101
14/05/12 12:09:44 INFO mapred.JobClient: FILE_BYTES_READ=50
14/05/12 12:09:44 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=19
14/05/12 12:09:44 INFO mapred.JobClient: Map-Reduce Framework
14/05/12 12:09:44 INFO mapred.JobClient: Virtual memory (bytes) snapshot=3867070464
14/05/12 12:09:44 INFO mapred.JobClient: Reduce input groups=2
14/05/12 12:09:44 INFO mapred.JobClient: Combine output records=4
14/05/12 12:09:44 INFO mapred.JobClient: Map output records=4
14/05/12 12:09:44 INFO mapred.JobClient: CPU time spent (ms)=1960
14/05/12 12:09:44 INFO mapred.JobClient: Map input records=2
14/05/12 12:09:44 INFO mapred.JobClient: Reduce shuffle bytes=56
14/05/12 12:09:44 INFO mapred.JobClient: Combine input records=4
14/05/12 12:09:44 INFO mapred.JobClient: Spilled Records=8
14/05/12 12:09:44 INFO mapred.JobClient: SPLIT_RAW_BYTES=216
14/05/12 12:09:44 INFO mapred.JobClient: Map output bytes=36
14/05/12 12:09:44 INFO mapred.JobClient: Reduce input records=4
14/05/12 12:09:44 INFO mapred.JobClient: Physical memory (bytes) snapshot=697741312
14/05/12 12:09:44 INFO mapred.JobClient: Total committed heap usage (bytes)=746494976
14/05/12 12:09:44 INFO mapred.JobClient: Reduce output records=2
14/05/12 12:09:44 INFO mapred.JobClient: Map output materialized bytes=56
[biadmin@bivm ~]$

If you see something like this, congrats! Your map reduce task is already done, and the results are in /user/biadmin/outputMR.

the source is located in

the data is taken from



I like science in general, and space technologies in particular, so it was clear that my first step would be to use the latest know-how to find out where the International Space Station is.

The idea is simple: I need to feed a RabbitMQ server with STOMP support with the JSON provided by an application server running a web service, and then the client needs to subscribe to a specific topic in order to print the data. The code is quite simple, so feel free to download it and share.

If you are thinking that this project is too similar to the previous one, well, yes, it is similar. The difference is that this web service sits behind a secure sockets layer, so we need to import the cert file into our J2EE application server.

Please read mkyong for the details; it is very important because you can avoid man-in-the-middle attacks, or at least minimize the possible problem.

The code is located in

The idea of this project is to learn how to achieve the effect of real-time data appearing in the web browser without needing to refresh it. How is that achieved? With WebSocket technology, the Spring framework, a RabbitMQ message broker with STOMP support, and a bit of HTML5 and jQuery.

The example project is based on knowing where the buses of Dublin city are at any given moment. Luckily, Dublin has every bus connected to the internet so that every customer can get online during the trip, and each bus reports its position to a central system. That system is available through a REST API provided by Dublin city. So where is the trick? The trick is that you need to poll the system periodically for new data, enqueue the data to the RabbitMQ STOMP broker, and with a bit of jQuery code the effect is done. I think the code is self-explanatory, so you can download it.
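The server side of that periodic polling can be sketched roughly like this (hedged: the endpoint URL, the topic name and the fixed delay are made-up placeholders, not the real Dublin API; SimpMessagingTemplate relays the payload to the STOMP broker):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Controller;
import org.springframework.web.client.RestTemplate;

// Hypothetical sketch: the URL and topic name are placeholders
@Controller
public class BusPositionPublisher {

    @Autowired
    private SimpMessagingTemplate template;

    private final RestTemplate restTemplate = new RestTemplate();

    // poll the REST api every 5 seconds and push the raw JSON to the STOMP topic
    @Scheduled(fixedDelay = 5000)
    public void pollAndPublish() {
        String json = restTemplate.getForObject("http://example.org/dublin-bus-positions", String.class);
        template.convertAndSend("/topic/buses", json);
    }
}
```

On the browser side, a small jQuery/STOMP client subscribed to /topic/buses repaints the positions as each message arrives, which is what produces the no-refresh effect.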



The next project is related to the location of the International Space Station, stay tuned.

