Skip connections have very good motivation (see one of my other comments in this thread), and attention is decently well motivated, especially as an improvement in the translation space where it was first introduced. I don't think there's any formal proof that attention >> convolutions with a wide receptive field, though.
It would be fantastic to have better measures of problem complexity. My thinking at this point is that a huge parameter count makes it easier to search for a solution, but once we've found one, there should be interesting ways to simplify the function we've found. Recall above: there are many equivalent models with the same loss once learning slows down... Some of these equivalent models have lots of zeros in them. We find that often you can prune 90% of the weights and still have a perfectly good model. Eventually you hit a wall, where it gets hard to prune more without large drops in model quality; this /might/ correspond to the actual problem complexity somehow, but the pruned model you happened to find may not actually be the best, and there may be better methods we haven't discovered yet.
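To make the "prune 90% of the weights" point concrete, here's a minimal sketch of one-shot magnitude pruning with NumPy — zero out the smallest-magnitude weights and keep the rest. The function name and interface are just for illustration, not any particular library's API; real pruning pipelines usually also fine-tune after pruning, which this skips.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero the smallest-magnitude entries of `weights`,
    keeping roughly the top (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    # k-th smallest absolute value becomes the cutoff
    threshold = np.partition(flat, k)[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# toy example: prune 90% of a random weight matrix
rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print(f"fraction of weights kept: {mask.mean():.2f}")
```

The interesting empirical fact is that on real networks the loss often barely moves at this sparsity level; the "wall" mentioned above is where pushing sparsity further (say 95-99%) starts costing real accuracy.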