Recently I was working for a bank where I was supposed to be doing Big Data and distributed systems work, but it ended up as maintenance of a horrible monolith that called for euthanasia and purifying fire, or at least an aggressive refactoring with the right scissors.
The fact is that they ignored me, so I took the opportunity to ask where the door was: I felt they had misled me in the initial interview, and staying would have been tremendously counterproductive for mental health, both mine and my younger colleagues'.
The only thing I managed to get out of it was an Apache Flink course I took on Udemy. I created a repository on GitHub that I'm sharing with everyone.

I'll take another occasion to comment on the examples, since there is a lot to dig into when working with Flink as opposed to Apache Spark. Right now I think that working with Flink is like working with Spark in its early iterations: more spartan, with fewer features. Where is the machine learning package? Or why can't I join between tables if I can work with files as if they were tables?
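To illustrate what I mean by "working with files as if they were tables", here is a minimal sketch using Flink's Table API in batch mode, registering a CSV file through the filesystem connector and querying it with SQL. The table name, schema, and file path are made up for the example; whether joins between such tables work smoothly depends on the Flink version and execution mode you are on.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FileAsTableExample {

    public static void main(String[] args) {
        // Batch-mode table environment (recent Flink versions)
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.inBatchMode());

        // Register a CSV file as a table via the filesystem connector.
        // Schema and path are hypothetical, for illustration only.
        tEnv.executeSql(
                "CREATE TABLE transactions (" +
                "  account_id BIGINT," +
                "  amount DOUBLE" +
                ") WITH (" +
                "  'connector' = 'filesystem'," +
                "  'path' = 'file:///tmp/transactions.csv'," +
                "  'format' = 'csv'" +
                ")");

        // Query the file as if it were a regular table
        tEnv.executeSql(
                "SELECT account_id, SUM(amount) AS total " +
                "FROM transactions GROUP BY account_id")
            .print();
    }
}
```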
