Not the same kind of noise, but this reminds me of a college project from when I was studying electrical engineering.
We were building an MP3 player using an Atmel AVR dev board with off-board circuitry for the MP3 decoding and a DAC for output. We couldn't get the MP3 decoder working, despite us appearing to be sending all the right signals/data. Everything looked fine in the logic analyzer. We weren't running at super high frequencies, either. Maybe just a few MHz.
We didn't find the problem until someone bumped the frequency scaling, so we were looking at the signal at much higher frequencies. That's when we saw all of the noise in the signal. We had all sorts of jitter from signal bounce. Our external board was connected to the dev board via an 18" ribbon cable, and we had forgotten to properly terminate the bus.
It was really embarrassing, as we had just taken travelling waves the semester before and should have thought about it. But, hey, we were students, and undergrads at that, so it was definitely a learning experience, and it reinforced a class we were required to take and had thought was useless at the time. Lesson learned.
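A rough illustration of why an unterminated bus misbehaves (the impedances below are generic guesses, not measurements from that setup): the reflection coefficient at the far end of a line is (ZL - Z0)/(ZL + Z0), so a high-impedance CMOS input on a ribbon cable bounces almost the entire edge back toward the driver.

    # Sketch of the reflection at an unterminated bus end.
    # Z0 is a ballpark ribbon-cable characteristic impedance; the load
    # impedances are illustrative, not measurements from any real board.
    def reflection_coefficient(z_load, z0):
        return (z_load - z0) / (z_load + z0)

    z0 = 110.0  # ohms, typical for flat ribbon cable
    for z_load in (1e6, 110.0, 50.0):  # CMOS input, matched, overdamped
        gamma = reflection_coefficient(z_load, z0)
        print(f"ZL = {z_load:>9.0f} ohm -> {gamma:+.2f} of the edge reflects")

With a near-open CMOS input almost 100% of each edge reflects, which is where the "signal bounce" jitter comes from; a matched termination makes the reflection essentially disappear.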
For anyone curious why we would build a hardware MP3 player, this was back when the only HW portable players were the early Diamond Rios (I think they only had 32MB and 64MB models available at the time).
Not only did the Diamond Rio 500 top out at 64 MB of internal storage, it had a rather famous hardware bug where it would lose capacity (permanently) if it lost power during a file transfer. Mine lost about 8 MB. Others lost even more. The other cool thing about it is that it has a USB mini plug but does not use a standard USB mini cable. If you use a standard cable it can destroy the player.
Oh, and it supported a flash memory card for more storage...maximum 32 MB. Oh well.
Which is kind of like saying “if only we had H.265, VCDs could have been high-quality.” We certainly could have invented the codec back then (maybe, if a few more genius information-theory mathematicians had been born a few years earlier), but could we have implemented it in silicon with 1990s tech, with an affordable power budget (e.g. two AA batteries)?
Spoiler: no, there wasn't enough CPU power for a more complicated codec.
I had a Toshiba Libretto 30 with a 486 processor and a PCMCIA soundcard. It could play MP3s... but only with the Fraunhofer codec; Winamp required slightly more than 100% CPU.
That’s assuming you ran the codec on the CPU. We had codec-accelerator ASICs back then! MP3 players and DVD players were famously built on such ASICs. For a long time, the MPEG-2 decoder ASICs required for DVD playback were a differentiator on video cards and motherboards, allowing some PCs to offer DVD playback long before consumer CPUs were capable of realtime 480p MPEG-2 decoding. (E.g. the original iMac offered DVD playback through such an ASIC.)
The real questions in this hypothetical are:
1. If we handed the netlist for a modern H.265 codec ASIC to a 1990s fab, would they have been able to print it? (Maybe.)
2. Would the resulting chip have made H.265 a worthwhile encoding for shipping media in the 1990s? (Nah; the chip, as rendered at a ~100nm process node, probably would have been ridiculously power-hungry and hot. It would have worked out in a server with blower fans, or in a gaming PC with a powerful PSU and a water-cooling rig dedicated to the ASIC; but you couldn’t have put one in a piece of consumer electronics like the PlayStation 1. As such, its use, if anything, would only be studio-internal, maybe for archival storage of masters in a “nearly-losslessly-compressed” form.)
If you only saw the problem at high frequency, how do you know it was a problem at low frequency? Did you conclude there were less common but still fatal bit errors at low frequency?
I read that as adjusting the timebase on the scope; at low-frequency timebase the scope smooths away the noise, but that doesn't mean the digital receiver in the circuit can't see it.
Yes, this was precisely the issue. We couldn't see the noise at our expected frequency of, say, 1 MHz, but we could when scaled to, say, 50 MHz. It's been nearly two decades now, so the precise numbers elude me.
I believe he meant they changed the scale of their test equipment display to more clearly reveal high frequency noise, not that they clocked their circuit higher.
I remember when the hard drive mp3 players came out, like the first iPod. As a sysadmin at the time, I found it amusing that people were wearing these while running, etc. Worked out better than I had imagined. Did they use "special" hard drives with some sort of unique head crash protection?
They did not use special drives, usually 1.8" drives identical to what was in the smallest portable hard drives. I remember because I fixed a friend's dropped iPod by buying one of those enclosed hard drives and transplanting it.
They used an SSD buffer of 16-64 MB so that the hard drive spun up only once every few minutes of playback. But navigation during movement would still be risky.
Each iPod also has 32 MB of RAM, although the 60GB and 80GB fifth generation, and the sixth-generation models have 64 MB. A portion of the RAM is used to hold the iPod OS loaded from firmware, but the majority of it serves to cache songs from the storage medium. For example, an iPod could spin its hard disk up once and copy approximately 30 MB of upcoming songs into RAM, thus saving power by not requiring the drive to spin up for each song.
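A back-of-the-envelope calculation shows why that cache translates into long stretches with the disk spun down (the bitrate is an assumed typical MP3 encode of that era, not an Apple spec):

    # Rough estimate of how long a ~30 MB song cache lasts between spin-ups.
    # The bitrate is an assumption (typical MP3 encodes of the time), not a spec.
    cache_mb = 30
    bitrate_kbps = 128  # assumed typical MP3 bitrate
    seconds = cache_mb * 8 * 1024 / bitrate_kbps
    print(f"~{seconds / 60:.0f} minutes of playback per disk spin-up")

That works out to roughly half an hour of playback per spin-up, which is why the drive spent most of its life parked.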
I think they did use some kind of inertia sensor that would detect when the device was falling and spin down the drive to reduce the chance of it breaking.
This is one of those things that EE students really benefit from seeing as close to in-person as possible. Especially setups where they can bring in bits of ground plane, fiddle with the termination etc and watch the noise appear and disappear on the scope.
A little context on why power integrity is getting worse may be helpful for those unfamiliar with semiconductor tech.
The transistors are getting smaller. To make it possible to connect to the smaller transistors, and to make the more numerous connections, the interconnect is getting narrower and thinner (i.e., smaller depth). This makes the interconnect resistance higher.
The voltage noise is ∆V = ∆I x R, where ∆I is the switching current. Now that R is higher, ∆V is higher. To make it worse, absolute supply voltage, V, is also getting smaller.
* typical supply voltage for advanced CMOS is now under 1V
* your typical high performance part dissipates anywhere from 50W to 250W from the die
* this means that a chip has to be supplied 50-250 A of current
* the current demand from the chip can be quite variable (idling along at a few watts, then ramping up rapidly to full load or vice versa), and power supply/regulators have to be able to provide this current with very little noise to the chip
* chip itself is a complex system (package with all the hundreds of connections to the board feeding power/ground/signals to the silicon die)
* high performance parts, with clock speeds of 1 GHz+, have very fast internal signal edge rates, on the order of a few to tens of picoseconds. 1 ps, for reference, is 0.3 mm at light speed.
Power grid, power integrity, power behavior of modern designs is stunningly complex.
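To make the ΔV = ΔI × R point above concrete, here is a toy calculation (all resistances and current steps are illustrative order-of-magnitude guesses, not data from any real part):

    # Toy illustration of why supply noise margins shrink with scaling.
    # Grid resistances and current steps are illustrative guesses only.
    def ripple_fraction(delta_i, r_grid, vdd):
        return delta_i * r_grid / vdd

    old_node = ripple_fraction(delta_i=10.0,  r_grid=0.005, vdd=3.3)  # ~50 mV on 3.3 V
    new_node = ripple_fraction(delta_i=100.0, r_grid=0.001, vdd=0.8)  # ~100 mV on 0.8 V
    print(f"old node: {old_node:.1%} of Vdd   new node: {new_node:.1%} of Vdd")

Even though the absolute resistance in this sketch went down, the larger current steps and the much lower supply voltage mean the noise eats a far bigger fraction of the available margin.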
At some point the semiconductor industry may hit a limit on size because it costs too much. TSMC is planning a massive new fab for the 5nm and 3nm nodes, with an estimated cost of $16B. Everything at the latest node sizes is insanely complicated and expensive - fabs, masks, design. I'm amazed that generating extreme ultraviolet by zapping falling tin droplets with lasers can be made into a cost-effective production process. Mask costs passed $15 million at the previous node. Design costs for 7nm are around $100 million for each different part.
This can't go on much longer. Most of these parts go into phones.
No, but the cost per transistor can keep dropping for years after shrinkage stops. Look at a lot of other things like TVs where competition keeps driving the price down. Unless something new comes up that practically everybody wants, competition will drive down prices and profitability.
I feel like we’re on a path with technology where our expectations of advancement have us using ridiculous “should have only been a prototype” tricks to get things done, where giving us just a few more years would get us far cheaper and more stable approaches to the same processes. (Analogy: it feels as if, upon the invention of radiotelegraphy, we wanted higher throughput, and therefore we just figured out some ridiculous way to trigger off pipelined arrays of spark-gap transmitters, rather than inventing vacuum tubes.)
There was much hope that somebody would find a way to create an extreme ultraviolet light source that wasn't the size of a house and didn't cost over $100 million. There's a startup in Fremont CA trying.[1] "A synchrotron beamline for home laboratory applications", says their site. (That appears to be humor; the thing costs about $30 million.) Right now they have a product for imaging but can't generate a powerful enough beam for lithography. They may be funding-limited. It took them 19 years to get to this point. Worth a look by VCs with $100M+ to invest.
Some foundries have purposely held back on scaling improvements, precisely because the older, coarser processes can still find a lot of use and be optimized in other ways. One can also see Intel's iterative improvements on their 14nm++++ process in the same vein.
> Mask costs passed $15 million at the previous node.
If I'm interpreting/understanding you correctly, that means chip simulation is worth some multiple of $15m.
I am very very very very very curious what that buys in terms of chip-scale simulation. Obviously a perhaps tricky question because the answer would be so specific.
But I still wonder at least what it would look like from a distance. A massively parallel server farm that collectively pretends to be a 100MHz(??) chip? FPGAs? Custom silicon that has microcode-on-steroids?
Simulation is massively used in chip design these days. The hardware they use is just the cheapest general-purpose compute hardware available (normal server/HPC setups), so this guess was pretty close:
> A massively parallel server farm that collectively pretends to be a 100MHz(??) chip?
... except that you are greatly overestimating the speed at which such simulation can run. The full-chip silicon-level simulations run on massive datacenters at speeds measured in kHz, not MHz. For the kind of testing they are used for, this isn't a major detriment: as long as all the I/O is slowed down to match, they still get accurate results, they just take a bit longer.
A lot more simulation happens at the subsystem-level. You can isolate some subsystem, such as a cache controller, and then manufacture traces of the communication it does with the rest of the chip. Then you can just simulate that part at much lower cost, do tweaks, see how the operation changes under the simulation, and repeat.
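A very loose software analogy of that trace-driven approach (not how any real EDA tool is structured, just the idea): record the transactions the subsystem sees, then replay them against an isolated model and measure how its behavior changes.

    # Minimal sketch of trace-driven subsystem simulation: replay recorded
    # requests against an isolated cache model and count hits/misses.
    # The trace format and cache parameters are invented for illustration.
    from collections import OrderedDict

    def simulate_cache(trace, capacity=4):
        cache, hits = OrderedDict(), 0
        for addr in trace:
            if addr in cache:
                hits += 1
                cache.move_to_end(addr)        # LRU update
            else:
                cache[addr] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)  # evict least recently used
        return hits, len(trace) - hits

    trace = [0x100, 0x140, 0x100, 0x180, 0x1c0, 0x100, 0x200, 0x140]
    print("hits=%d misses=%d" % simulate_cache(trace))

You tweak the model, rerun the same trace, and compare, which is far cheaper than dragging the whole chip through every experiment.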
> For the kind of testing they are used for, this isn't a major detriment
Eh... 1-hour tests turning into 4-hour tests turning into 12-hour tests is one thing, but when a 12-hour test turns into a 7-day one, it hurts. And if a 20-day bootloader simulation with accurate pad models fails, you may not have a chance to run it again with a fix before tapeout. And a kernel boot in simulation takes so long.
Not that you're wrong, just emphasizing how slow it can be. Simulation complexity has outpaced server farm speed increases over the past 10 years, in my experience. And RTL simulation has slowed itself out of usefulness for many software use cases where it used to be not so bad.
Ha, I overestimated the capabilities of scaleout :D I figured if you added enough racks you could go that high... but yeah, that's asking for the equivalent of total coherency on a piece of software running simultaneously across thousands(?) of nodes.
(Ha, I wonder if the current systems use 50Gbit networking. Or 100Gbit? Wow...)
> A lot more simulation happens at the subsystem-level. You can isolate some subsystem, such as a cache controller, and then manufacture traces of the communication it does with the rest of the chip. Then you can just simulate that part at much lower cost, do tweaks, see how the operation changes under the simulation, and repeat.
Right, that makes sense. And interestingly, that sounds similar to how retro/hobbyist emulation systems do things too. Emulate the exact behavior necessary for a specific set of things to work the way you want.
> I figured if you added enough racks you could go that high... but yeah, that's asking for the equivalent of total coherency on a piece of software running simultaneously across thousands(?) of nodes.
Also, you've got hundreds of users, who may be submitting tens of tests at a time. Like, even a giant company would run out of compute trying to make simulation as fast as the users want it to be.
Forward-looking is about $6k per wafer. There are 100 fields, and 8 devices per field (at 100 mm²). So, about $10 per device. Assume 80% yield, and we are at a minimum of $12.
$16B is 1M wafers per year for a foundry. About $7B in litho tools, $2B in dep, $1B in CMP, $5B in etch, and $1B in I&M. I am assuming about 70 litho layers, moderate EUV, and significant double patterning.
Anyway, I don't see a way to get to $1 for any useful-sized device at forward-looking nodes. That is why the ecosystem needs Apple, Qualcomm, and Nvidia to push performance on the early end.
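Running those round numbers through a quick script (the inputs are the parent's estimates, the arithmetic is mine); it lands in the same rough ballpark as the $10-$12 figure above:

    # Back-of-envelope die cost from the round numbers above.
    wafer_cost = 6000.0       # $ per wafer, forward-looking node
    dies_per_wafer = 100 * 8  # 100 fields x 8 devices per field (100 mm^2 each)
    yield_fraction = 0.8

    cost_per_good_die = wafer_cost / (dies_per_wafer * yield_fraction)
    print(f"~${cost_per_good_die:.2f} per good die")  # roughly $10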
The complaints about older nodes are missing the mark.
The problem is that a 2019 .25um I/O transistor on a 100KHz I2C line can launch an edge that has multi-GHz components that rattle around the chip, board, and their power grids. In addition, a 2019 .25um I/O transistor can respond to GHz speed glitches that a 1996 .25um I/O transistor would simply ignore. In 1996 or so, it was difficult to get a .25um transistor that could shovel around current and respond at GHz rates.
Creating a TDR (time domain reflectometer) circuit in 1996 on a CMOS technology node was a significant design challenge. In 2019, it's difficult not to create one on your I2C, SPI, UART, etc. communication buses.
> “This is noise on the power grid. Power integrity will cause more degradation of chip performance. That is not only for the advanced nodes, but we have seen that more and more on the legacy nodes — 28nm and 40nm are having the same problem.”
Reminds me of audiophiles getting their own utility poles installed to prevent this [1].
No. You are misreading it (and it's not your fault, the article assumes the reader already has a background in semiconductor engineering). The "power grid" here refers to the internal power distribution network on the chip, not utility electrical power grid.
Signal and power integrity in electronics is a serious area of research supported by applied electromagnetism, measurements, modeling, simulation, and industrial experience. Never confuse signal integrity with "audiophile" voodoo practices, which are supported by none of those.

For example, audiophiles all seem to care about noise from the power lines, but it seems not all of them understand that the power is rectified, filtered, DC-DC converted, filtered again, routed to the chip, and bypassed and filtered yet again before it is finally delivered to the system. Noise on the power rail can affect sound quality, but it's the actual power supply and PCB design that dictates power integrity. Transients from the utility power can have an effect on circuit operation (and ground loops are a major problem), but the majority of the noise likely comes from the switching noise of the Hi-Fi amplifier itself. Even the physical location of the traces on the PCB can have much more influence than the utility grid: a wire that should be routed on the left of the board but is routed on the right can be problematic. If the voltage at your DAC is unstable because a current loop area is excessive, adding your own utility pole or using a gold-plated cable does absolutely nothing for the performance of the amplifier.
It absolutely isn't. The issues caused by mains 60Hz AC are vastly different from the issues caused by switched mode power supplies. Saying they're the same thing is straight up ignorant.
>You cannot design the power grid independent of the rest of the design
Never could, even the equivalent during vacuum tube efforts.
Even without an electronic power supply, you can use a battery. Sounds easy, but you still need to watch it. Consider how they do it with competition car stereos: there are extra storage capacitors to augment the battery, since they can discharge faster than the battery and so provide less distortion on large transients.
Basically, a car stereo amplifier is designed to drain your battery in time with the music. If your battery is not capable of providing the dynamics your amplifier needs, then it will not be able to reproduce them.
Audio amplifiers running on AC line voltage are actually just power supplies: power supplies which convert and filter the low AC line frequency to DC, then draw from that DC a waveform according to your incoming preamp signal, which is AC of varying amplitude and frequency, often in the distinct pattern of pop music. While your speakers' AC pattern tries to reproduce the incoming preamp signal fluctuations, your power supply tries to provide the continuous power fluctuations, or "music noise", called for during that reproduction.
Anywhere along the power supply lines, a small image resembling the output can be found, mostly at the higher power extremes. This is the part of the circuitry most thoroughly isolated from the audio input and output path, but there it is. The isolation is designed to keep line noise away from the audio reproduction components, but it must still let the power through in the pattern requested by the incoming audio signal.
Broadly, signals will also broadcast over greater distances the higher their frequency and amplitude. And wiring, especially unterminated leads, acts as an antenna according to its length and direction, for both transmission and reception. This holds between chassis miles apart, as with amateur transmitters and receivers; within the same chassis, as unwanted electronic "feedback" from one part of the circuit or power supply to another part that would ideally be perfectly isolated; or within the same chip, when the frequency is high enough, the distance small enough, and the lead length unfavorable enough to serve as the transmit and receive antennae needed to overcome either the air gap or the electronic filtration.
With linear power supplies, the 50/100 or 60/120 Hz noise requires an optimized approach different from that needed for the much higher frequencies of switchers. Modern switchers can broadcast their noise further and more easily defeat the kind of wiring that is acceptable for linear alternatives. And they can sound worse than hum once you hear that trash, especially when ultrasonics are throwing audio-range harmonics down at you from other supposedly inaudible modulations.
Thanks for the insight. I actually started reading the comments on that old discussion after digging it up and it does seem like "voodoo" as you call it :)
I believe many claims made by audiophiles do have a point, and some are true. But in order to distinguish them, one needs to read better sources to learn how electronics work in general: for example, an introductory book, or primary sources like industrial standards from the Audio Engineering Society or the CD Red Book. The audiophile community and industry are plagued by many cargo-cult and voodoo practices, and these are supported by many vendors.
I am not an audiophile, but I have had to clean up the sloppy power in my neighborhood. The transformer near me is very old and about to blow, but the power company won't fix it. I've had to use a combination of isolation transformers, a double-conversion Liebert UPS, and step-start power distribution switches just to keep from losing internet every time the neighbor's septic pump kicks in. I also had to filter the RF cable, as it suffers from the same power spikes from cheap cable modem power supplies. Even my old Sony reference-line receiver could not filter out that garbage. I'm in the middle of nowhere though, so this is probably not common for people in the city, just guessing.
Interesting. It appears to me that it's not simply a power quality issue anymore; it's likely that the old wiring and transformer winding are also arcing over and creating an enormous amount of wideband radio frequency interference, like a spark-gap transmitter, both over the air and on the power lines. It seriously degrades wireless communication and digital wired communication like DSL as well.
If you have an SDR capable of receiving shortwave below 30 MHz, try walking around the neighborhood with a laptop and a loop antenna connected to the SDR, and check the HF spectrum. It's likely that you'll see entire bands being wiped out. If so, it's a serious violation of radio regulations, and you can report the situation to the FCC.
No, I just complain to the power company a couple of times per year. One of the neighbors also complains to them, as they smoked 2 of his UPS units. They burnt up a few of my cheaper UPS units prior to my having the Liebert double-conversion UPS. The 1200W APC caught fire. The other ones just smoked. The smaller APC units (power strip form factor) never smoked, they just tripped a lot.
I recommend installing a surge suppression and overvoltage protection device at your home's main electrical panel. It won't improve your power quality or reduce interference, but it'll probably save your appliances from destruction...
I've done that too. I have a bank of 12 metal oxide varistors. I forgot what each one is rated at, but they are big. They won't clamp until much higher voltage though. The double-conversion UPS sits between that sloppy power and the sensitive equipment. Prior to that, I lost 3 UPS units.
The problem is real with chips, though. Audiophiles are notorious for doing silly things “to improve the sound quality” in ways undetectable in a double-blind test (but clearly audible to “real audiophiles” otherwise.) At macro scale a UPS with good filtering is more than sufficient to power audio equipment.
IR drop analysis methodology is a joke in my experience. As a verification engineer you are told to run a simulation with 'high switching activity', then pick the 300 ns with the highest activity and send the waveforms to the backend guys, who will somehow use this as the basis for their IR analysis. This is horribly error-prone.
Why are we not developing formal tools that can give us the worst case switching activity and basing analysis on that?
Who says we're not? For context, I'm one of the people quoted in the original article.
EDA companies constantly work on advanced technology with a few development partners. You can read that as: large companies willing to put substantial resources into a research project to push the state of the art. Eventually the more successful projects trickle down to the rest of the industry; some others just become custom options/scripts for particular companies' advanced methodologies (this, btw, is one of the reasons EDA software tends to be quite complex).
"Stepping suddenly from lower to higher power, such as when a cache is missed or a new core turns on, causes a large current step and parasitic package/board inductance, which means a correspondingly large voltage drop."
Wouldn't one solution be to use linear voltage regulators instead of switched mode? Or some combination of switched mode and linear?
I am aware linear regulators are much less efficient, but my understanding is that they respond to transients much better.
By the time the transient reaches the voltage regulators, it's old news. We're talking about a base clock in the multiple GHz, which means the edges of power steps have harmonics in the tens of GHz, which means wavelengths on the order of millimeters.
When they talk about package and board inductance, think of it as meaning that the board itself is a transmission line and its characteristic impedance is limiting the risetime of the current waveform. No amount of regulator response can overcome that.
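Some rough numbers behind that (the effective dielectric constant is a generic FR-4 figure, not specific to any board):

    # Wavelength of high-frequency supply-current harmonics in FR-4.
    # Effective relative permittivity is a typical guess for FR-4 stripline.
    c = 3.0e8            # m/s, speed of light
    er_eff = 4.0         # assumed effective dielectric constant
    v = c / er_eff**0.5  # propagation velocity in the board

    for f_ghz in (1, 10, 50):
        wavelength_mm = v / (f_ghz * 1e9) * 1000
        print(f"{f_ghz:>3} GHz -> wavelength ~{wavelength_mm:.0f} mm in FR-4")

Once the wavelength of the harmonics is comparable to the package and board dimensions, the power path really does behave as a transmission line, and no regulator sitting centimeters away can react within that window.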
It’s not an issue of the supply, it’s an issue of the current paths inside the chip or on the motherboard. With smaller features and more complicated routing there is more inductance.
I think it's not that the chips are getting noisier; it's the opposite problem: the chips are more susceptible to noise, which makes their design more challenging. Even the article itself agrees.
> the voltage levels keep getting closer to the threshold level, which means that the amount of room you have to buffer your signal from the noise ripple gets smaller and smaller.
I'm not an engineer and I don't design the circuit inside a microchip, but it's useful to explain this concept from a circuit board design perspective, and talk about the circuit outside the microchip. I hope my comment can help software engineers to understand the basic background.
To begin with, take a look at the logic levels of different families of simple logic chips. Take a look at this picture; it illustrates the logic threshold voltages of different digital buses.
1. A logic gate. Nothing can be simpler than a 7404 inverter: it inverts a Boolean value; if the input is a "logic 1", the output is a "logic 0", and vice versa. If you are using a 74LS04 chip, it uses the standard 5 V TTL logic level: anything between 2.0 V and 5.0 V is seen as "logic 1", and anything lower than 0.8 V is seen as "logic 0". It has a rise time of 30 nanoseconds or so, and the chip cannot go faster than around 30 MHz. You can easily make the chip do its job by hooking up a bunch of random wires, without any impedance control or power supply decoupling.
On the other hand, we have a 74AVC04 chip, which does the same thing: invert the signal. But it uses a 1.8 V power supply, with thresholds of 1.35 V and 0.63 V respectively. Also, this chip has a rise time of 0.5 nanoseconds and operates above 200 MHz.
Now:
* All the noise it can tolerate is around 100 millivolts, before the chip starts to go crazy and switch randomly.
* Although it's a digital circuit, you must treat your signals as analog radio signals (the fifth harmonic of a 200 MHz square wave is 1 GHz) and consider all the physical effects that can distort the signal. It's no longer 0s and 1s, but waves bouncing around. For example, if you don't add termination to match the impedance, the output signal from the previous stage will hit the receiver and bounce back to the transmitter. Even the location of the interconnecting traces and the dielectric material of the circuit board start to matter.
* You must design an elaborate power supply network on your circuit board to eliminate the switching noise and supply adequate electric power, because the chip can switch faster than your power supply can respond.
Finally, we have CPUs that operate at a Vcore voltage well below 1.0 V, at a few gigahertz, which makes the problem above even more serious. Take a look at a modern CPU: it has hundreds of capacitors underneath, because the circuits inside switch on and off so fast that the power supply does not even have sufficient time to deliver the current it needs.
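To put rough numbers on the 74LS04 vs 74AVC04 comparison (threshold values are the ones quoted above; the 5% supply ripple is an assumption chosen just for comparison):

    # Threshold windows for the two inverter families discussed above,
    # using the datasheet-level numbers quoted in this comment.
    families = {
        "74LS04  (5 V TTL)":    {"vih": 2.0,  "vil": 0.8,  "vdd": 5.0},
        "74AVC04 (1.8 V CMOS)": {"vih": 1.35, "vil": 0.63, "vdd": 1.8},
    }
    ripple_fraction = 0.05  # assume 5% supply ripple for comparison
    for name, p in families.items():
        window = p["vih"] - p["vil"]
        ripple_mv = p["vdd"] * ripple_fraction * 1000
        print(f"{name}: undefined region {window:.2f} V, "
              f"5% supply ripple = {ripple_mv:.0f} mV of noise")

The absolute room between the thresholds shrinks with the supply voltage, while many noise sources (reflections, crosstalk, external interference) don't shrink along with it.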
2. A DAC/ADC example. There are 24-bit audio DACs everywhere nowadays, and it's easy to be fooled by them into thinking that 24-bit is trivial.
But let's examine the cheapest ADC you can buy today, often free because it comes with your microcontroller. Many people think it's bad and has low precision. So, you have a 0 to 5-volt input signal. When you have an 8-bit ADC, it can recognize 256 (2^8) different voltages, which means the smallest delta-V it can recognize is 0.019 V, 19 millivolts, or about 0.4% of 5 V! Nowadays, a more expensive data acquisition system is often 10-bit. That's 0.0049 V, 4.9 millivolts! It's often what the most expensive oscilloscopes use. An even more expensive setup is 12-bit: about 0.0012 V.
Just think about it: a 12-bit data acquisition system will be swamped by any noise higher than about 1 millivolt. And this noise can come from everywhere: bad power integrity due to inadequate decoupling, crosstalk between traces, electromagnetic interference, and, if your board design is bad enough, even the digital portion of the same chip itself.
To put it straight and simple: even designing an 8-bit circuit that can fully utilize the performance of an 8-bit ADC is challenging; you can't do it without a solid background in electronics. When people talk about the quantization noise of digital circuits, remember that 8-bit precision is not easily achievable even in an analog circuit. (But that depends on what you are working on: DC is easiest, audio is easier, 100 MHz is not!)
It's why we have 24-bit audio ADCs, but only 10-bit oscilloscopes and 12-bit software-defined radios: it doesn't even make sense after that. Does it even make sense to resolve a signal to 24-bit precision (a fraction of a microvolt)?! About the only thing capable of working with signals that small is the sensitive analog front-end circuit of a radio receiver (yes, that may include your AM radio), and its purpose is to process the signal into a more usable form.
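The LSB sizes quoted above come straight from full_scale / 2^bits; a quick check, assuming the same 5 V full scale as in the ADC example:

    # LSB (smallest resolvable step) for a 5 V full-scale ADC at various depths.
    full_scale_v = 5.0
    for bits in (8, 10, 12, 16, 24):
        lsb_mv = full_scale_v / (2 ** bits) * 1000
        print(f"{bits:>2}-bit: 1 LSB = {lsb_mv:.4f} mV")

At 24 bits the step is down in the sub-microvolt range, which is well below the noise floor of almost any real board.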
Conclusion: the development of higher-data-rate digital systems and more sensitive analog chips requires a lower noise floor. A chip in 1979 ran at 1 MHz and 15 volts; today it runs at a Vcore of less than 1 volt and at over 1 GHz. Yet the technology for delivering power into these systems remains largely the same: the power from a DC power plug on a board today has about the same amount of noise as it did in 1979.
Thus, the task of designing a power supply network on a board or a chip is becoming more and more challenging, as the article said,
> You have to perform multi-physics, multi-domain simulation with the ability to co-simulate the behavior of the power grid together with timing.
As are potato chips. I can't help but notice all the crunchy sound reverberating across the room when many people are eating chips in a lunch meeting. I feel like chips didn't use to be as noisy as they are now. I like the crunch, but not to the level that I have to worry about disturbing others because of the noise.
Note: this comment is not meant only to be funny. I really feel that way.
I am speculating a bit here, because I don't know how much the sound of food being chewed can be influenced without ruining the look and taste, while sticking to strictly legal additives. But if there is any leeway, it is certainly used.
Well, one solution is to start using more differential circuitry on-chip, which is normally reserved for RF/analog and off-chip buses. There isn't a well-defined ground in silicon processes.
The thing is, at really small gate sizes, everything will leak so badly that the leakage will be bigger than the power spent on switching the CMOS pair. And there is nothing you can do about that; that's fundamental physics. Yes, you can bury everything in HfO2 or a better insulator, but even that will not change the situation much more than it does now.
Most current-coupled logic families need fewer transistors and have higher performance than CMOS. And obviously, as long as you deal with current, noise is much less of a concern.
If you can do more computations with fewer, higher-performance transistors, that's a definite win for power efficiency.
At really small geometry sizes (beyond 28nm for most foundries), transistor topology has changed from planar to FinFET, or even variants of gate-all-around for the really leading-edge stuff. These topologies limit the leakage paths, so even at the most advanced nodes the critical issue continues to be active switching power; that's not to say you can just ignore leakage, but active power is and remains the dominant contributor to IR drop for CMOS semiconductor designs.
Most power grid noise is fairly well handled by the PSU and motherboard; it's half the reason for the mega capacitors.
As for radio wave noise, that just sounds like BS. It would only really affect higher frequency chips, and those almost always have a metal heat spreader. Further, many cases are metal as well. This makes your computer a Faraday cage. IDK, maybe it's more of an issue for mobile computing. However, it seems like the solution is simple: just add more metal.
Crosstalk and power fluctuation from processing are trickier problems to solve. You could possibly fix power fluctuations with more on-chip capacitors for sensitive circuits. Crosstalk is harder; AFAIK, the only solution is more spacing.
The clock frequency isn’t the issue, rather the rise time of the signal edges. You can have a 1 kHz clock with GHz of bandwidth.
Case in point are switching power supply ICs, specifically ones with integrated switches. A 200 kHz switcher can radiate noise past 2 GHz due to the fast rise time of the input switch. It’s a real problem in my experience as I have sensitive RF circuitry.
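The usual rule of thumb for that: the significant bandwidth of an edge is roughly 0.35 / rise time, so even a slow-clocked switcher with sub-nanosecond edges radiates well into the GHz range (the rise times below are illustrative, not taken from any particular datasheet):

    # Edge bandwidth vs. switching frequency for a hypothetical 200 kHz switcher.
    # Rise times are illustrative, not from any specific part.
    switching_freq_khz = 200
    for rise_time_ns in (10.0, 1.0, 0.2):
        bandwidth_ghz = 0.35 / (rise_time_ns * 1e-9) / 1e9
        print(f"{switching_freq_khz} kHz switcher, {rise_time_ns} ns edges -> "
              f"significant noise content to ~{bandwidth_ghz:.2f} GHz")

In other words, the spectral reach of the noise is set by how fast the switch slews, not by how often it switches.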