I'd argue that it has failed in some organisations. DevOps for me is embedding operations with the development team. I still have operations specialists; however, they attend the development team stand-ups and help articulate the problems to the developers. They may have separate operations stand-ups and meetings to ensure the operations teams know what each other are doing and to share best practices. Developers learn about the operations side from those who understand it well, and the operations experts learn the limitations and needs of the developers. Occasionally I am fortunate enough to discover someone who understands both areas incredibly well. Either way, this results in increased trust and closer working. You don't care about helping some random person on a ticket from a team you don't know. You do care about the person you work with daily and whose problems you understand.
If you can't account for someone spending x% of their time working with a team but for budgetary purposes belonging to a different team then sack your accountants.
DevOps, like agile, when done correctly should help create teams that understand complete systems or areas of a business and work more efficiently than standalone teams. The other part of the puzzle is to include the QA team too, to ensure that the impact of full-system, performance and integration tests is understood by all, and that everyone understands how their changes affect everything else.
Having the dev team build code that makes the test and ops teams' lives easier benefits everyone. Having the ops team provide solutions that support test and dev helps everyone. Having test teams build systems that work best with the dev and ops teams helps everyone.
Agile development should enable teams to work at a higher level of performance by granting them the agency to make the right decisions at the right time, delivering a better product by building what is needed on the right timeline.
DevOps and agile fail where companies try to follow waterfall models whilst claiming agile processes. The goal with all these business and operating models is to improve efficiency. When that isn't happening then either you aren't applying the model correctly or you need to change the model.
The issue with a software team using an FPGA is that software developers generally aren't very good at doing things in parallel, and they generally do a poor job of implementing hardware. I previously taught undergraduates VHDL, and the software students generally struggled with dealing with things running in parallel.
VHDL and Verilog are used because they are excellent languages to describe hardware. The tools don't really hold anyone back. Lack of training or understanding might.
Consistently, the issue with FPGA development for many years was that by the time you could get your hands on the latest devices, general-purpose CPUs were good enough. The reality is that if you are going to build a custom piece of hardware then you are going to have to write the drivers and code yourself. It's achievable; however, it requires more skill than pure software programming.
Again, thanks to low-power and low-cost ARM processors, a class of problems previously handled by FPGAs has been picked up by cheap but fast processors.
The reality is that for major markets custom hardware tends to win as you can make it smaller, faster and cheaper. The probability is someone will have built and tested it on an FPGA first.
> VHDL and Verilog are used because they are excellent languages to describe hardware.
Maybe they were in the '80s. In 2025, language design has moved ahead quite a lot; you can't be saying that seriously.
Have a look at how clash-lang does it. It uses a functional paradigm, which is much more suitable for circuits than the pseudo-procedural style of Verilog. You can also parameterize modules by modules, not just by bit width. Take a functional programmer, give him Clash, and he'll have no problem doing things in parallel.
Back when I was a systems programmer, I tried learning SystemVerilog. I had zero conceptual difficulty, but I just couldn't justify to myself why I should spend my time on something so outdated and badly designed. The hardware designers at my company at the time were, on the other hand, OK with Verilog, because they hadn't seen any programming languages other than C and Python and had no expectations.
The issue isn't the languages, it's the horrible tooling around them. I'm not going to install a multi-GB proprietary IDE that needs a GUI for everything and doesn't interoperate with any of my existing tools. An IDE that costs money, even though I already bought the hardware. Or requires an NDA. F** that.
I want to be able to do `cargo add risc-v` if I need a small cpu IP, and not sacrifice a goat.
Well really, the language _is_ the difficulty of much of hardware design. Both Verilog and VHDL were designed for simulation of hardware, not synthesis of hardware. Both languages have similar-but-not-quite ways of writing things: blocking/nonblocking assigns causing incorrect behavior that's incredibly difficult to spot on the waveform, non-exhaustive assigns in always blocks causing latches, maybe-synthesizable for loops, etc. Most of this comes from their paradigm of an event loop: handling all events and the events those events trigger, until all are done, then advancing time to the next event. They simulate how the internal state of a chip changes every clock cycle, but they weren't built to actually do the designing of the chip itself.
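The blocking/nonblocking trap mentioned above can be sketched outside any HDL. Here's a rough Python model (my own illustration, not real simulator semantics) of a two-register swap under each assignment flavor:

```python
# Sketch of Verilog's two assignment flavors, modeled in Python,
# using a register swap on a clock edge as the example.

def clock_nonblocking(state):
    """Nonblocking (<=): every right-hand side is read from the *old*
    state, then all registers update at once, like real flip-flops."""
    nxt = dict(state)
    nxt["a"] = state["b"]   # a <= b;
    nxt["b"] = state["a"]   # b <= a;
    return nxt

def clock_blocking(state):
    """Blocking (=): each assignment takes effect immediately,
    so the second statement sees the first one's result."""
    state = dict(state)
    state["a"] = state["b"]  # a = b;
    state["b"] = state["a"]  # b = a;  -- reads the *new* a!
    return state

print(clock_nonblocking({"a": 1, "b": 2}))  # {'a': 2, 'b': 1} -- a real swap
print(clock_blocking({"a": 1, "b": 2}))     # {'a': 2, 'b': 2} -- both end up b
```

The blocking version produces working hardware nowhere and a confusing waveform everywhere, which is exactly the kind of bug that's hard to spot after the fact.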
I'm tooting my own horn with this, as I'm building my own language for doing the actual designing. It's called SUS.
Simple things look pretty much like C:
    module add : int#(FROM: -8, TO: 8) a,
                 int#(FROM: 2, TO: 20) b
              -> int c {
        c = a + b
    }
It automatically compensates for pipelining registers you add, and allows you to use this pipelining information in the type system.
It's a very young language, but me, a few of my colleagues, and some researchers in another university are already using it. Check it out => https://github.com/pc2/sus-compiler
You can pretty much do everything in Vivado from the command line as long as you know Tcl...
Also, modern Verilog (AKA SystemVerilog) fixes a bunch of the issues you might have had. There isn't much advantage to VHDL these days unless perhaps you are in Europe or work at certain US defense companies.
The main advantage to VHDL is the style of thinking it enforces. If you write your Verilog or SystemVerilog like it's VHDL, everything works great. If you write your VHDL like it's Verilog, you'll get piles of synthesis errors... and many of them will be real problems.
So if you learn VHDL first, you'll be on a solid footing.
I think this can just be summarized as "write any HDL like you are modeling real hardware." Both VHDL and SystemVerilog were primarily intended for validation; synthesis is a second-class citizen.
I haven't learned Verilog, only VHDL, and even that with the explicit <register>_ff <= <register>_nxt pattern when I need flip-flops, and I never felt like there was anything difficult about VHDL.
Is the North American insistence on teaching Verilog what's setting up students for failure since Verilog looks a bit more like a sequential programming language at first glance?
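The _ff/_nxt discipline mentioned above separates next-state computation from the clocked update. A minimal Python sketch of that split (the counter, its wrap-at-4 behavior, and the names are made up for illustration):

```python
# Model of the VHDL "_ff / _nxt" two-process pattern:
# a combinational function computes counter_nxt from counter_ff,
# and the clocked process copies _nxt into _ff on each rising edge.

def comb(counter_ff):
    """Combinational process: a pure function of current state."""
    return (counter_ff + 1) % 4   # counter_nxt <= (counter_ff + 1) mod 4

def rising_edge(counter_ff, counter_nxt):
    """Clocked process: counter_ff <= counter_nxt;"""
    return counter_nxt

counter_ff = 0
trace = []
for _ in range(6):                # six clock cycles
    counter_nxt = comb(counter_ff)
    counter_ff = rising_edge(counter_ff, counter_nxt)
    trace.append(counter_ff)

print(trace)  # [1, 2, 3, 0, 1, 2]
```

Keeping all state changes in one trivial clocked process makes it obvious where the flip-flops are, which is much of why the pattern feels easy.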
There is a trend among programmers to assume that everything supported by the syntax can be done. This is not even true in C++, but it's something people think. If you are writing synthesizable SystemVerilog, only a small subset of the language used in a particular set of ways works. You have to resist the urge to get too clever (in some ways, but in other ways you can get extremely clever with it).
Or you could do the right thing, ignore the GUI for 99% of what you’re doing, and treat the FPGA tools as command line tools that are invoked by running “make”…
I should have said "most _professional_ FPGA users" because I assume many people here who don't know this (including the author of the article) are not.
Any good guides you'd recommend to get started? Also, does this workflow work with the cheap Chinese FPGAs available on AliExpress (Tang Nano, etc.)? I always wanted to try out FPGAs again, and I prefer to work from the command line when possible.
Yeah, I agree it's a lack of understanding of how to use the tools. The main issue I ran into in my undergrad FPGA class as a CS student was a lack of understanding of how to use the IDE. We jumped right into trying to get something running on the board instead of taking time to get everything set up. IMO it would have been way easier if my class had used an IDE as simple as Arduino's, instead of everyone trying to run a virtual machine on their MacBooks to run Quartus Prime.
> software developers generally aren't very good at doing things in parallel
If only hardware people would stop stereotyping. Also, do you guys not use formal tools (BMC etc.) now? Who do you think wrote those tools? Heck, all the EDA stuff was designed by software people.
I just can't with the gatekeeping.
(Btw, this frustration isn't just pointed at you. I find this sentiment being parroted all over /r/FPGA on Reddit and elsewhere. It's damn frustrating, to say the least. Also, the worst thing is all the hardware folks only know C, so they think all programming is imperative. VHDL is Ada, for crying out loud.)
I was very specific in using the word generally. I taught a mixture of computer science and electronic engineering students. Over the years I taught, roughly three to four electronics students were competent for every competent computer science student.
It's not a case of just stating computer scientists weren't capable of doing it. They struggled with the parallelism, and struggled with the optimisations and placements when you had to make physical connections on chips.
I'm well aware it's mostly going to be computer scientists writing the tools we use.
For those who want FPGAs to take off like the Arduino platform, I agree. I'd love it. However, it isn't the tooling that's holding it back. The reality is that cheaper, faster and easier solutions already exist. Why would you use an FPGA?
And their cited example was students. I think students would struggle at something new until they 'get it'. Would a software developer who does FPGA development professionally struggle more than, say, a hardware engineer?
Sure, but most software development is about running single-core workflows on top of a parallel environment, so the experience of a SWE is very heavily single-threaded.
The ever-so-popular JS is explicitly singly threaded.
The default way of programming is with code on individual lines, and when you run a debugger you step from one line to the next. That is not how code actually runs inside a pipelined CPU, though.
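That overlap can be made concrete with a toy model. The sketch below (stage names and instruction labels are invented for illustration) schedules instructions through a three-stage pipeline and shows several of them in flight on the same cycle:

```python
# Toy 3-stage pipeline (fetch/decode/execute): unlike single-stepping
# in a debugger, several instructions occupy the pipeline at once.

STAGES = ["F", "D", "X"]

def pipeline_schedule(instrs, n_stages):
    """Return, per clock cycle, the instruction occupying each stage."""
    cycles = []
    total = len(instrs) + n_stages - 1   # drain the pipeline at the end
    for c in range(total):
        row = []
        for s in range(n_stages):
            i = c - s                    # instr index in stage s at cycle c
            row.append(instrs[i] if 0 <= i < len(instrs) else "--")
        cycles.append(row)
    return cycles

for c, row in enumerate(pipeline_schedule(["i0", "i1", "i2", "i3"], len(STAGES))):
    print(f"cycle {c}: " + " ".join(f"{st}:{i}" for st, i in zip(STAGES, row)))
# By cycle 2, i2 is fetching while i1 decodes and i0 executes.
```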
I was asked to sign off on an R&D tax claim for my team's work. I reviewed it and said no. I was then sent to a meeting with the accountants, who explained that the claim was based on what the CEO had told them. We went through the details and they agreed with me on most things. I also discovered that we were entitled to claim for things I wouldn't have known about, and the CEO discovered that just because the credits were for R&D, the legal definition didn't allow for normal development work.
In this instance nothing intentionally illegal was being attempted. However, had the original claim been made, it could have been considered fraud. In these sorts of situations I always ensure that the company puts me in contact with professionals who can indemnify both the company and me from any wrongdoing, provided we tell the truth.
This happened to me too; the claim was outsourced to a contractor who had never interacted with me or anyone on my team - the only devs in the company - resulting in a purely fictitious depiction of what we did.
I created an account because I didn't have one. I just thought I'd correct a comment that was ignoring what actually happens in the UK. We have wild herds; in the Highlands they can be massive. I've known a few people who were involved in serious accidents as a result of hitting deer.
There are herds of wild deer a half hour outside NYC. Deer seem to be able to thrive anywhere as long as there aren't many predators.
The continental US (i.e. excluding Alaska) has some 20+ million wild deer, 200k bison, 1 million bears, 3-4 million coyotes, 13,000 wolves, 5 million alligators, 10k+ mountain lions, etc. Yes, most are in the vast western states, but deer and bears are common even in areas just a couple of hours' drive outside NYC. Thankfully no gators near NYC (except in the sewers, of course :-)