Something like that is the idea. Silicon area is what costs money. Use as little as possible while still being fast enough to beat the clock-rate of your system.
PSOs are way better than brute force, but yeah.
It was a purely academic work for a masters level CS unit.
I had just finished a masters level electrical engineering unit on silicon circuit design, where the final project was to design an adder that minimized silicon used (and thus cost) while also being fast enough.
And the hard bit is that you want big thick doped regions for high conductivity, but the bigger the area, the more parasitic capacitance.
And so there are some tricks to find good sizes, like progressive sizing and stuff.
But afaik there is no actual answer, at least not one we ever learned.
So a lot of trial and error went into it.
It was a hard project.
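The progressive-sizing trick mentioned above can be illustrated with a toy model. A rough sketch (my own simplification, not from the paper): assume each stage's delay is proportional to its fan-out, size the stages geometrically, and see how total delay varies with stage count when driving a big load.

```python
# Toy illustration of progressive (geometric) sizing in a gate chain
# driving a large load. Assumes each stage's delay is proportional to
# its fan-out (load cap / input cap); numbers are illustrative only,
# not from any real process.

def chain_delay(load_ratio, n_stages):
    """Total delay of n geometrically sized stages driving a load
    load_ratio times the first stage's input capacitance."""
    fanout = load_ratio ** (1.0 / n_stages)  # per-stage size-up factor
    return n_stages * fanout                 # each stage's delay ~ fanout

load = 256.0  # load is 256x the first stage's input cap
for n in range(1, 9):
    print(n, round(chain_delay(load, n), 2))
```

One huge stage (n=1) is terrible, and so is too many tiny steps; the sweet spot sits in between, which is the whole point of sizing progressively.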
And so then I did this CS unit where the project was "Do something interesting with a particle swarm optimizer".
And I was like "let's solve this".
And once I saw the results, I was like "this is actually really good", and so the lecturer and I wrote a paper about it.
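For anyone unfamiliar with particle swarm optimizers, here is a minimal sketch of the idea on a toy sizing problem (this is not the paper's actual formulation; the objective, penalty weight, and all PSO constants below are invented for illustration):

```python
import random

# Minimal particle swarm optimizer sketch: minimize a toy "area"
# objective over transistor widths, with a penalty when a crude delay
# model exceeds a target. All constants are made-up placeholders.

def objective(widths, delay_target=5.0):
    area = sum(widths)
    delay = sum(1.0 / w for w in widths)  # toy model: wider = faster
    penalty = max(0.0, delay - delay_target) * 1000.0
    return area + penalty

def pso(dim=4, n_particles=20, iters=200, lo=0.1, hi=10.0):
    rng = random.Random(0)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
print(best, objective(best))
```

The appeal for sizing problems is that the objective can be an arbitrary black box (e.g. a SPICE run), so no gradients or closed-form delay models are needed.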
It is a real problem: minimizing silicon area subject to a speed constraint.
I bet the big integrated-circuit designers have their own tricks for it that I don't know about.
To do it really well you need to minimize the real area, so you also need to solve layout (which is a cool cutting and packing problem).
(And there are also nth-order effects, like running traces over things causing slowdowns, for electromagnetism reasons.)
I bet a bunch of folk on HN know this problem much better than i do though.
Probably something bad in my solution, but I think it illustrates the utility.
Very cool paper. Your observations in V.B.4 are pretty well understood in circuit design. If you've not heard of it, you might be interested in https://en.wikipedia.org/wiki/Logical_effort. Turns out the optimum scaling for propagation delay is e (natural log constant), but I don't know if I ever learned anything about the optimum for area.
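The "optimum scaling is e" result can be checked numerically. A quick sketch of the standard argument (idealized, ignoring parasitic self-loading, which in practice pushes the preferred fan-out higher, often quoted around 4): for a chain driving load ratio L with per-stage fan-out f, the stage count is N = ln(L)/ln(f), so total delay is proportional to N*f = ln(L) * f/ln(f), minimized at f = e.

```python
import math

# Numeric check that delay ~ ln(L) * f/ln(f) is minimized at f = e.
# Idealized model: no parasitic self-loading, continuous stage count.

def delay(f, L=1e6):
    return math.log(L) * f / math.log(f)

fs = [1.5 + 0.001 * k for k in range(5000)]  # scan f over [1.5, 6.5)
best_f = min(fs, key=delay)
print(round(best_f, 2))  # ~2.72, i.e. e
```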
Now that everyone is using FinFET processes, the layout part is pretty easy to solve, because transistor widths have to be a certain number of fins and the layout is extremely regular.
One thing your analysis didn't include, which actually ends up being quite significant, is the extra capacitance caused by the wires between transistors. This changes the sizing requirements substantially.
I've done some custom logic cell design, and I always had to use a lot of trial and error, though generally I was concerned more with speed than area. I'm not sure exactly what the development process is at my current employer, but it seems like it's a lot of manual work. I'm guessing they set area targets based on experience and attempt to maximize speed where possible.
Ultimately, everything gets placed and routed by a computer anyways!
> Your observations in V.B.4 are pretty well understood in circuit design.
Indeed, I am actually surprised the paper doesn't include something like _"This is in line with the well known result for progressive sizing [cites textbook]"_.
It was my first paper; I was worse at writing things back then.
:-D
> One thing your analysis didn't include, which actually ends up being quite significant, is the extra capacitance caused by the wires between transistors. This changes the sizing requirements substantially.
Good point. And not easy to model in a SPICE-style simulator.
I guess one could maybe introduce explicit capacitors and then compute capacitances by making some assumptions about layout.
> I guess one could maybe introduce explicit capacitors and then compute capacitances by making some assumptions about layout.
That is, in fact, exactly what we do! I think it would be pretty straightforward for your large buffer example - you can model it as a fixed capacitance at each output which corresponds to the routing between inverters, which would be the same for all sizes, plus some scaling capacitance that relates to the size of the transistor itself, which you already have.
The adder would be trickier, for sure. Regardless, in my experience, just adding a reasonable estimate is good enough to get you close in terms of sizing in schematics, then you fine tune the layout.
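The fixed-plus-scaling estimate described above could be sketched like this (all values are invented placeholders, not from any real process; the RC stage-delay model is a deliberate simplification):

```python
# Sketch of a "fixed + scaling" wire-capacitance estimate in a toy RC
# stage-delay model. All constants below are invented placeholders.

C_WIRE_FIXED = 2.0   # routing cap between stages, same for all sizes (fF)
C_GATE_UNIT  = 1.0   # input cap per unit of transistor width (fF)
R_UNIT       = 10.0  # drive resistance of a unit-width stage (kOhm)

def stage_delay(width, next_width):
    """RC delay of one stage: its resistance times the cap it drives."""
    r = R_UNIT / width                          # wider stage drives harder
    c = C_WIRE_FIXED + C_GATE_UNIT * next_width  # fixed wire + scaling gate cap
    return r * c

# Without the wire term, delay depends only on the fan-out ratio; with it,
# small stages pay proportionally more for the same fixed routing cap:
print(stage_delay(1.0, 4.0))   # 10 * (2 + 4) = 60.0
print(stage_delay(4.0, 16.0))  # 2.5 * (2 + 16) = 45.0
```

Both stages above have the same 4x fan-out, yet the bigger one is faster, which is exactly why the wire term changes the sizing requirements.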