This is a purely academic example, but you could use such an architecture to build a better machine learning model. Here is why:
1) To some extent this depends on the ML architecture you want to use, but the vast majority of computation and memory access is LOCAL. That means each machine has its full MEM<=>CPU bus available to a smaller subset of the overall ML model, so the computations run faster (see the first sketch right after this list).
2) Of course the coding required to do this - if you are starting from scratch - would require some kind of parallel computing API. I'm not sure to what extent the existing ML libraries (Python-based stuff like TensorFlow, etc.) are capable of scaling up like this out of the box, though the second sketch below shows one built-in route.
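To make point 1 concrete, here is a minimal data-parallel sketch in plain Python/NumPy (my own illustration; the worker count, the linear model, and every name in it are made up for the example). Each worker computes a gradient against only its own shard of the data, so the heavy math stays in local memory; the only thing crossing worker boundaries is the small gradient vector:

```python
import numpy as np
from multiprocessing import Pool

def local_gradient(args):
    """Mean-squared-error gradient for a linear model, on one worker's shard."""
    w, X_shard, y_shard = args                 # everything here is worker-LOCAL
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4096, 8))             # toy dataset
    true_w = rng.normal(size=8)
    y = X @ true_w
    shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

    w = np.zeros(8)
    with Pool(4) as pool:                      # 4 stand-ins for 4 machines
        for _ in range(200):
            grads = pool.map(local_gradient,
                             [(w, Xs, ys) for Xs, ys in shards])
            w -= 0.1 * np.mean(grads, axis=0)  # the only GLOBAL communication
    print("distance from true weights:", np.linalg.norm(w - true_w))
```

In a real cluster each shard would stay resident on its own machine instead of being shipped to the pool every step, but the communication pattern is the same: big local work, tiny global sync.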
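And on point 2, TensorFlow does expose this kind of multi-machine scaling through its tf.distribute module. A hedged sketch (MultiWorkerMirroredStrategy expects a TF_CONFIG environment variable on each machine describing the cluster, which I'm omitting, and the model and data here are throwaway placeholders):

```python
import numpy as np
import tensorflow as tf

# Every machine in the cluster runs this same script; TF_CONFIG (not shown)
# tells each worker who its peers are. Gradients are averaged across
# workers automatically after every batch.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():                     # variables created here are replicated
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# Placeholder data; in practice each worker would read its own shard from disk.
x = np.random.randn(1024, 64).astype("float32")
y = np.random.randn(1024, 1).astype("float32")
model.fit(x, y, epochs=3, batch_size=64)
```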
This would work very nicely... way back I built such an ML model that did edge recognition in an image, and the follow-up was speaker recognition based on a normalized (GSM-compressed) audio feed. It was not a sophisticated architecture, basically several layers of varying-sized networks, all using straightforward backpropagation learning... needless to say, the final (GLOBAL) error minimum took a while to get to. I would routinely train the darn thing for 2-3 days only to discover that my model was getting stuck in a LOCAL minimum.
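For anyone who never hand-rolled one of these, here's roughly what that 90s-style training loop looked like, reconstructed from memory as a toy (definitely not the actual thesis code): a tiny two-layer sigmoid net learning XOR with plain backpropagation. With an unlucky weight initialization, the loss simply plateaus, which is exactly the LOCAL minimum trap I kept hitting:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # 2 inputs -> 4 hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # 4 hidden -> 1 output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # plain gradient descent, no momentum -- hence the local-minimum risk
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # hopefully close to [[0], [1], [1], [0]]
```

Adding a momentum term to those weight updates is the classic cheap fix for exactly that plateau problem.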
That was C++ code, my undergrad thesis, and let's just say we are talking the mid-90s here (LOL... damn, that flew by).
Anyways, if I needed to build that now, a parallel architecture like the one you are describing would be more scalable. Of course, you'd be hard-pressed to beat a fast GPU, which can sit in a single machine where the CPU offloads the majority of the computation to the GPU and uses its local memory anyway (see the sketch below)... like I said, purely academic, but oh so fun to consider!!!
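For completeness, that single-machine CPU-to-GPU offload looks like this nowadays. A sketch using PyTorch (my choice of library, nothing above depends on it); the model and data are throwaway placeholders:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Weights live in GPU memory after .to(device); the CPU just orchestrates.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 1),
).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(256, 64, device=device)   # placeholder batch, created on-device
y = torch.randn(256, 1, device=device)
for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()                        # backprop runs on the GPU
    opt.step()
print("final loss:", loss.item())
```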