Lately I’ve been looking into the best way for an event-oriented architecture (with the Confluent platform, for example) to get predictions on images. That, besides fighting with Kubernetes and the Confluent platform itself, which, you know, gives you a Kafka/ZooKeeper cluster packaged and super easy to run, while encouraging you to use that platform as a database.

I don’t see much use in treating Kafka as a database, as the Confluent people encourage you to do; that is, they tell you to use the platform with Kafka Streams to run SQL queries over the topic data as it arrives. I’m not sold on that solution, to begin with because the data in a broker is volatile by nature. Yes, you can retain it for a while, a month, several maybe, but what about everything you potentially need to keep when it comes to doing Business Intelligence?

I don’t think it gives me that guarantee, but you have to admit some usefulness can be gotten out of the platform: at the very least it saves all the work that used to go into setting up a Kafka cluster, with its Kafka brokers and its ZooKeeper for high availability. You can also use Kafka Streams to transform the data as it appears in a topic: map it, reduce it, transform it into something else, or run it through an SQL statement, then put the result into another topic, or save it in a real database…
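Kafka Streams itself is a JVM library, but the consume-transform-produce pattern behind it is easy to picture. Here is a minimal sketch of the same idea in Python with the confluent-kafka client; the topic names raw-events and transformed-events are made up for the example:

```python
import json

from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "transformer",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})

consumer.subscribe(["raw-events"])  # hypothetical input topic

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # The "map" step: derive a new record from the incoming one.
    transformed = {"id": event["id"], "path": event["filepath"].lower()}
    # Write the result into another topic, hypothetical as well.
    producer.produce("transformed-events", json.dumps(transformed))
    producer.flush()
```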

At least the platform solves that whole part for you.

Now, the crazy idea. I want an event-oriented architecture in which an event is, for example, the arrival of a new image in a folder, with its filepath as a minimum. For this initial idea I picture the event publishers as drones that take photos and send them to a server; the appearance of each photo would trigger a Kafka producer to push it into a topic. Specifically, it would put into the topic at least the filepath of the image stored in a file system, the date, and the identifier of the event…
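The producer side could be a sketch as simple as this: watch a folder and, for every new photo, publish an event carrying the filepath, the date and an identifier. The folder, the topic name and the naive polling loop are all placeholders:

```python
import json
import time
import uuid
from pathlib import Path

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
WATCHED = Path("/data/incoming")  # hypothetical drop folder for drone photos
seen = set()

while True:
    for image in WATCHED.glob("*.jpg"):
        if image in seen:
            continue
        seen.add(image)
        # Minimal event payload: filepath, date, event identifier.
        event = {
            "event_id": str(uuid.uuid4()),
            "filepath": str(image),
            "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
        producer.produce("image-events", json.dumps(event))  # hypothetical topic
    producer.flush()
    time.sleep(1)
```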

Then, once the existence of that file in HDFS or similar is secured, a consumer of that topic reads the filepath and invokes a gRPC server with TensorFlow embedded in it, or coupled to the web requests.

The TensorFlow server already has a trained model to give you a prediction of the object; the server receives requests and returns the prediction. Then the result of that request is written to another, output topic, to do anything with it: save it to disk, feed a dashboard, create another event that goes somewhere else, etc…
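Put together, the consumer side might look something like the sketch below. It assumes a TensorFlow Serving instance on localhost:8500, a model named mymodel whose serving signature takes an image tensor of raw bytes, and the topic names from the previous sketches; all of those are assumptions:

```python
import json

import grpc
import tensorflow as tf
from confluent_kafka import Consumer, Producer
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")  # TF Serving gRPC port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

consumer = Consumer({"bootstrap.servers": "localhost:9092",
                     "group.id": "predictor",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["image-events"])
producer = Producer({"bootstrap.servers": "localhost:9092"})

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())

    # Read the image the event points at (local FS here; HDFS would go
    # through a client library instead).
    with open(event["filepath"], "rb") as f:
        image_bytes = f.read()

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "mymodel"              # hypothetical model name
    request.model_spec.signature_name = "serving_default"
    request.inputs["image"].CopyFrom(                # hypothetical input name
        tf.make_tensor_proto(image_bytes, shape=[1]))

    result = stub.Predict(request, timeout=10.0)
    # result.outputs is a map of output tensors; real code would decode
    # the one it cares about instead of stringifying it.
    prediction = {"event_id": event["event_id"], "outputs": str(result.outputs)}
    producer.produce("predictions", json.dumps(prediction))  # output topic
    producer.flush()
```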

The fact is, I don’t like the part about needing the predictive model embedded inside a web server, be it gRPC or Spring Boot, because that forces me through a bottleneck. First, I’m limited by the number of requests the web server can handle. Second, it’s hard to see how to update a pretrained model in real time as new images arrive. The idea is that, as you get more images, the potential bias due to their number and quality should decrease.

Although it has advantages, because the creation of that model is usually entrusted to a data scientist: after all, he trains it and generates a .pb file, tied to a TensorFlow gRPC server, and in the end you can pack all of that into a Docker container, so you could even have a swarm of Docker containers behind a proxy (Kubernetes) serving the requests coming from the TensorFlow client embedded in the Kafka consumer. But wouldn’t there be a better way?

I was thinking that the ideal would be for this predictive model to be trained in transit, in real time. That is, as new images arrive, use them at the same time to generate, in another process, another predictive model that tries to get a better score; when you measure an objective score better than the previous model’s, you make the change.
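The promotion logic for that champion/challenger swap can be tiny. In the sketch below, the candidate training and the scoring are stand-ins for whatever you use; the only real convention is that TensorFlow Serving watches its model base path and serves the highest numeric version subdirectory it finds there:

```python
import shutil
from pathlib import Path

MODEL_BASE = Path("/models/mymodel")  # hypothetical TF Serving base path

def latest_version(base: Path) -> int:
    """TF Serving convention: numeric version subdirectories under the base path."""
    versions = [int(p.name) for p in base.iterdir() if p.name.isdigit()]
    return max(versions, default=0)

def maybe_promote(candidate_dir: Path, candidate_score: float,
                  champion_score: float) -> bool:
    """Copy the candidate SavedModel in as a new version only if it scores better."""
    if candidate_score <= champion_score:
        return False
    next_version = latest_version(MODEL_BASE) + 1
    shutil.copytree(candidate_dir, MODEL_BASE / str(next_version))
    return True
```

When maybe_promote returns True, "making the change" is just that directory copy.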

And here is the part that doesn’t sit well, because if the model is now embedded in Docker containers, potentially several of them, then, first, the service would have to keep working at all times, that is, no downtime; something feasible, because you already have containers running with an acceptable uptime. So you would need, only when necessary of course, to lower the number of instantiated containers to recover resources; then the code running the TensorFlow server would ask an external service which is the latest model generated, pull it over the network, load it, and end up embedded in the gRPC TensorFlow server. Finally, the new jar has to be compiled, packaged into a Docker container Kubernetes-style, and uploaded back to the cluster.

Right now I have one of those dockerized TensorFlow gRPC server containers, and it works more or less well. The next step is to adapt it to what I want: that is, before loading one pretrained model over another, it should ask a repository where to load the latest model from.
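That lookup could be as small as the sketch below. It assumes a hypothetical registry that answers an HTTP GET with the version and path of the newest model; the endpoint and the response shape are invented, but the hot-load behaviour is standard: TensorFlow Serving polls its model base path and loads new numeric versions without restarting the container, let alone rebuilding the image:

```python
import json
import shutil
import urllib.request
from pathlib import Path

REGISTRY_URL = "http://registry:8000/models/mymodel/latest"  # hypothetical endpoint
MODEL_BASE = Path("/models/mymodel")

def fetch_latest_model() -> None:
    """Ask the registry where the newest model lives and drop it into the
    versioned directory that TensorFlow Serving is already watching."""
    with urllib.request.urlopen(REGISTRY_URL) as resp:
        info = json.loads(resp.read())  # e.g. {"version": 7, "path": "/shared/exports/7"}
    target = MODEL_BASE / str(info["version"])
    if not target.exists():
        shutil.copytree(info["path"], target)
```

If that holds up, swapping models would stop requiring the compile-repackage-redeploy loop described above.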

Then you have to automate the creation of the new Docker image, create the Docker containers and scale them properly, Kubernetes-style of course, so that the proxy picks them up for future requests.

The final idea is to get high availability in an event-oriented system that recognizes images and gives customers the results of that prediction. The current way is to train those models offline, but that has a problem that may be unacceptable: customers potentially never stop needing the prediction service. That’s where this architecture comes in…

Is the idea very nonsensical?
