I doubt it'd work any better. Most of the EE time I've spent has been swearing at stuff that looked like it'd work on paper but didn't due to various nuances.
I have my own library of nuances, but how would you even fine-tune anything to understand the black-box abstraction of an IC well enough to work out whether a nuance applies between it and a load, or what a transmission line or an edge would look like between the IC and the load?
This is where understanding trumps generative AI instantly.
Make two separate signals arrive at exactly the same time on two 50-ohm transmission lines that start and end next to each other and go around a right-hand bend. At 3.8 GHz.
Edit: no VSWR constraint. Can add that later :)
Edit 2: oh, or design a board for a simple 100 MΩ-input instrumentation amplifier, knowing what a guard ring is and how badly the solder mask will screw it up :)
Right - LLMs would be a bit silly for these cases. Both overkill and underkill. The current approach for length matching is to throw it off to a domain-specific solver. Example test circuit: https://x.com/DuncanHaldane/status/1803210498009342191
How exact is exactly the same time? Current solver matches to under 10 fs, and I think at that level you'd have to fab it to see how close you get with fiber weave skew and all that.
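For a sense of scale (my numbers, not the solver's): a quick back-of-the-envelope in Python, assuming an FR-4-class microstrip with an effective dielectric constant around 3.5, shows what 10 fs means at 3.8 GHz.

```python
# Back-of-the-envelope only: assumes microstrip on FR-4-ish material with an
# effective dielectric constant of ~3.5, so signals travel at roughly half
# the speed of light. None of these numbers come from the solver.
C0 = 299_792_458.0            # speed of light in vacuum, m/s
ER_EFF = 3.5                  # assumed effective dielectric constant
v = C0 / ER_EFF ** 0.5        # propagation velocity on the trace, ~1.6e8 m/s

f = 3.8e9                     # the 3.8 GHz from the challenge above
period_ps = 1e12 / f          # one period, ~263 ps

skew_s = 10e-15               # the "under 10 fs" solver match
mismatch_um = v * skew_s * 1e6       # equivalent trace-length mismatch, µm
phase_deg = 360.0 * f * skew_s       # phase error at 3.8 GHz, degrees

print(f"period at 3.8 GHz: {period_ps:.0f} ps")
print(f"10 fs of skew ≈ {mismatch_um:.1f} µm of trace ≈ {phase_deg:.3f} deg")
```

So 10 fs is on the order of a micron of copper and a hundredth of a degree of phase, well below where glass-weave skew (typically quoted in picoseconds per inch) would swamp it on a real board.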
Do you have a test case for a schematic design task?
It would seem to me that the majority of boards would be a lot more forgiving. Are you saying you wouldn't be impressed if it could do, say, only 70% of board designs completely?
Not the GP, but as an EE I can tell you that the majority of boards are not forgiving. One bad connection or one wrong component often means the circuit just doesn't work. One bad footprint often means the board is worthless.
On top of that, making an AI that can regurgitate simple textbook circuits and connect them together in reasonable ways is only the first step towards a much more difficult goal. More subtle problems in electronics design are all about context-dependent interactions between systems.
I hate that this is true. I think ML itself could be applied to the problem to help you catch mistakes in realtime, like language servers in software eng.
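Not the full generative problem, but the always-on checking half looks tractable even before any ML. A minimal sketch of what such a lint rule could look like, using a made-up netlist format of (refdes, part type, pin, net) rows:

```python
# Toy "language server"-style lint pass over a netlist: flag any IC power pin
# sitting on a net with no capacitor attached. The netlist rows and pin names
# here are made up for illustration.
netlist = [
    ("U1", "MCU", "VDD", "3V3"),
    ("U1", "MCU", "GND", "GND"),
    ("U2", "ADC", "VDD", "3V3_ANALOG"),
    ("C1", "CAP", "1",   "3V3"),
    ("C1", "CAP", "2",   "GND"),
]

def missing_decoupling(rows, power_pins=("VDD", "VCC", "AVDD")):
    """Return warnings for IC power pins on nets with no capacitor on them."""
    nets_with_caps = {net for ref, typ, pin, net in rows if typ == "CAP"}
    return [
        f"{ref}.{pin} on net {net}: no decoupling cap found"
        for ref, typ, pin, net in rows
        if typ != "CAP" and pin in power_pins and net not in nets_with_caps
    ]

for warning in missing_decoupling(netlist):
    print("warning:", warning)
# -> warning: U2.VDD on net 3V3_ANALOG: no decoupling cap found
```

A real version would need footprint and placement awareness (is the cap actually near the pin?), which is where the learned part could plausibly earn its keep.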
I have experience building boards in Altium and found it rather enjoyable; my own knowledge was often a constraint as I started out, but once I got proficient it just seemed to flow out onto the canvas.
There are some design considerations that would be awesome to farm out to genai, but I think we are far from that. As with Stable Diffusion for images, the source data for text-to-PCB would need to be well labeled in addition to being correlated with the physical PCB features themselves.
The part where I think we lose a lot of data in pursuit of something like this is all of the research and integration work that went on behind everything that eventually got put into the schematic and then laid out on a board. I think it would be really difficult to "diffuse" a finished PCB from an RFQ-level description.
I find I spend an enormous amount of time on boring stuff like connecting VCC and ground with appropriate decoupling caps, tying output pins from one IC to the input pins on the other, creating library parts from data sheets, etc.
There's a handful of interesting problems in any good project where the abstraction breaks down and you have to prove your worth. But a ton of time gets spent on the equivalent of boilerplate code.
If I could tell an AI to generate a 100x100 prototype with such-and-such a microcontroller, this sensor and that sensor with those off-board connectors, with USB power, a regulator, a tag-connect header, a couple debug LEDs, and break out unused IO to a header...that would have huge value to my workflow, even if it gave up on anything analog or high-speed. Presumably you'd just take the first pass schematic/board file from the AI and begin work on anything with nuance.
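For illustration, a rough sketch of the kind of spec such a generator might consume and the boilerplate it would expand first; every key, part number, and value here is a placeholder, not any real tool's format:

```python
# Hypothetical board spec of the kind described above -- the keys, part
# numbers, and values are placeholders, not a real tool's API.
board_spec = {
    "outline_mm": (100, 100),
    "mcu": "STM32F042K6",              # placeholder micro
    "sensors": ["BME280", "LIS3DH"],   # placeholder I2C sensors
    "power": {"input": "USB", "rails": {"3V3": "LDO"}},
    "debug": ["Tag-Connect SWD", "2x LED"],
    "break_out_unused_io": True,
}

def decoupling_boilerplate(ic_ref, supply_pins, cap="100nF 0402"):
    """The boring part: one cap per supply pin, tied from that rail to GND."""
    return [(f"C_{ic_ref}_{pin}", cap, pin, "GND") for pin in supply_pins]

# First pass a generator might make over the MCU's supply pins:
for cap_row in decoupling_boilerplate("U1", ["VDD", "VDDA"]):
    print(cap_row)
# ('C_U1_VDD', '100nF 0402', 'VDD', 'GND'), ('C_U1_VDDA', ...)
```

The value would be entirely in the expansion from a terse spec to netlist boilerplate, not in anything clever; the nuanced parts stay with the human.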
If generative AI could do work for PCBs equivalent to what it does for text programming languages, people wouldn't use it for transmission-line design. They'd use it for the equivalent of parsing some JSON or making a new class with some imports, fields, and method templates.