> Contracts depend first and foremost upon the legal regime in which they are valid. Every jurisdiction has its own rules and precedents, and language means specific things.
One could counter that no code ever stands alone either: it is always executed within some runtime.
But I think I can bolster your point with two further thoughts:
1. Contracts are not meant to be code. The process of drafting a contract helps people arrive at a common understanding - a shared consensus, a "meeting of the minds" - and once signed, it then documents that understanding along with relevant context sufficient for third parties to be able to recreate that common understanding.
The contract is not executable; at best, it is a declarative description of some of the desired outcomes. The actual execution is handled independently by the parties that entered into the contract. The contract exists so that the common understanding can be preserved through time and shared with other people - whether because some party forgot the details, or the agreement needs to be modified, or third parties (such as lawyers, judges, or arbitrators) need to be involved in dispute resolution.
2. To the extent a contract contains anything resembling executable code - say, the aforementioned declarative descriptions of outcomes, imperative descriptions of specific behavior, or attachments with deliverable specifications - it is all expressed in a very high-level programming language: natural language. This "code" is ultimately "executed" by sapient human beings. In other words, the programming language and runtime in question are AGI-complete: implementing them from scratch, e.g. to make "smart" contracts comparable in utility to regular ones, would be equivalent to creating a human-level general-purpose artificial intelligence. We're nowhere close to achieving that, so by definition, "smart" contracts are too dumb to make sense when we can use the normal ones.
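To make point 2 concrete, here is a minimal sketch (purely illustrative; the `Escrow` class and its names are hypothetical) of what a "smart" escrow contract can actually do: release funds when a boolean flag is set. The hard part - judging whether the deliverable really satisfies the agreement - still has to happen in human heads, outside the code:

```python
class Escrow:
    """A toy "smart contract": it can only act on literal inputs."""

    def __init__(self, amount: int):
        self.amount = amount
        self.released = False

    def release_if(self, deliverable_accepted: bool) -> int:
        # The code mechanically pays out when the flag is True. Deciding
        # whether the flag *should* be True (did the work meet the spec?
        # was the delay excusable?) is exactly the AGI-complete part that
        # stays with the human parties.
        if deliverable_accepted and not self.released:
            self.released = True
            return self.amount
        return 0


escrow = Escrow(amount=1000)
print(escrow.release_if(False))  # 0: nothing moves until a human decides
print(escrow.release_if(True))   # 1000: executes mechanically once told
```

All the interpretive work a regular contract delegates to its parties (and, in a dispute, to lawyers and judges) is compressed here into a single boolean that someone outside the code must supply.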