Any attempt at visibility into the inner workings of ML models should be welcomed, IMO. It's going to be essential in the coming years if we're going to reason about or regulate them. E.g. how would we hardcode Asimov's laws of robotics into some future deep-learning AGI if it's still just one big black box to us?
Thinking about laws (as Asimov's are), what I picture is essentially an impenetrable barrier in this possibility space: we create order from the chaos of potential behavior by defining limits.
Currently, in this 2000D StableDiffusion landscape, there are no boundaries on allowable "travel", and you quickly end up in the "sea".
So if we want AI to behave, we should research how to define the perimeter (in thousands or billions of dimensions) and "wall it off", so we can ensure the points inside the no-no space are unreachable.
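To make that concrete, here's a toy sketch of what I mean (purely my own illustration, not any real StableDiffusion API; `forbidden_centers`, `wall_off`, and `decode` are all hypothetical): treat the no-no space as balls around known-bad latent points, and push any sample back out of them before decoding.

```python
import numpy as np

DIM = 2000       # the "2000D landscape" from the article
RADIUS = 5.0     # forbidden radius around each known-bad point

rng = np.random.default_rng(0)
# Stand-ins for the "no-no" regions we'd want to wall off.
forbidden_centers = rng.normal(size=(10, DIM))

def wall_off(z: np.ndarray) -> np.ndarray:
    """Push z to the surface of any forbidden ball it falls inside."""
    for c in forbidden_centers:
        delta = z - c
        dist = np.linalg.norm(delta)
        if dist < RADIUS:
            # Project radially outward to the ball's boundary.
            z = c + delta * (RADIUS / max(dist, 1e-8))
    return z

z = rng.normal(size=DIM)
z_safe = wall_off(z)
# decode(z_safe)  # whatever generator consumes the latent vector
```

Obviously a real perimeter wouldn't be a handful of spheres, and projecting out of one ball can land you inside another; figuring out the actual shape of that boundary is exactly the open research problem.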