I seriously don't get it. In my time at university, ALL the exams were oral. And most had one or two written parts beforehand (one even had three; the professor called it "written-for-the-oral"). Sure, the orals took two days for the big exams at the beginning, but professors and their assistants still managed to offer six sessions per year.
When I did my BSc and MSc in physics, almost all my exams were oral, just like you described. Later I did a PhD at a different university where oral exams were never practiced. My PhD supervisor told me that part of it is the scaling issue, but another very interesting point he made is that it's about the cultural interpretation of fairness.
In my BSc and MSc we were basically all locals, about the same in every respect except aptitude for study. At the university where I did my PhD there were many more divisions (aka diversity) that every oral examiner would have to navigate so that no group feels it is being treated preferentially over another.
Professors are just humans. If they can grade you with an AI for $5 and spend the 20 hours gained scrolling on their phone – guess what, they'll do that.
How about they spend that time preparing to become better teachers/professors? Also, there's a lot of paperwork that eats into their time and energy – why not use AI as a tool to assist with that?
Maybe someone can explain it to me, but I never understood the appeal of GPIB for modern instruments (legacy instruments are of course "excused"). Electrically it's a terrible interface that introduces ground loops with the control computer. Speeds are laughable, and it requires expensive and exotic adapters with a complex software stack (I wish this project good success, it's needed!). Ethernet, in comparison, ticks all my boxes: it's electrically decoupled by default (just use UTP cables), crazy cheap, very fast, and has a sane software stack thanks to VXI-11. You can even bypass VISA if you wish and open a plain TCP socket, no need for any library. What am I missing?
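To make that last point concrete, here's a minimal sketch in Python, assuming the instrument exposes a raw SCPI socket on port 5025 (a common convention; the IP address is made up, check your instrument's manual):

    import socket

    # Plain TCP connection straight to the instrument -- no VISA, no drivers.
    with socket.create_connection(("192.168.1.50", 5025), timeout=2) as s:
        s.sendall(b"*IDN?\n")                  # standard SCPI identification query
        print(s.recv(4096).decode().strip())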
Not much, but consider latency: you can use the Group Execute Trigger (GET) to simultaneously trigger multiple instruments with both very low latency and very low latency dispersion. Think easy-to-use, sub-microsecond synchronization.
Ethernet and USB 4 may have orders of magnitude more bandwidth, but they can't achieve the same multi-device synchronization capability without side-channel signals.
Now, sure, you can add the same capability with a programmable pulse generator connected via coax to the trigger input of all your instruments, but GPIB lets you do that with just the data connection (and you don't always have a spare trigger channel). The only other protocols I know of with similar capabilities are PXI and PXIe, which are "PCI(express) in an incompatible form factor, plus some extra signals for real-time synchronization".
Sub-microsecond triggering should be doable with a layer-2 cut-through switch and an Ethernet broadcast, no? I admit that Ethernet is not really designed for that, as the PHY then becomes the latency bottleneck.
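Roughly like this, as a sketch (the port is arbitrary; in practice the OS network stack and NICs add jitter that a bare GPIB bus cycle doesn't have):

    import socket

    # Fire one broadcast datagram as a software "trigger" to every listener
    # on the subnet. Determinism depends entirely on the switch, NICs and OS.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(b"TRIG", ("255.255.255.255", 5000))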
Sure, in principle, but that takes effort and special equipment to set up. The point is that GPIB makes it easy (trivial, actually) with nothing more than the cables you normally use to connect instruments to get very low and predictable latency.
GPIB GET works by first configuring a subset of bus devices as listeners and then sending a single-byte message (it's an 8-bit bus, so one bus cycle) with the ATN line asserted. It's intrinsically low latency without any special effort.
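With pyvisa this is close to a one-liner. A minimal sketch, assuming two instruments at made-up primary addresses 10 and 12 on bus GPIB0:

    import pyvisa

    rm = pyvisa.ResourceManager()
    scope = rm.open_resource("GPIB0::10::INSTR")
    dmm = rm.open_resource("GPIB0::12::INSTR")

    # The INTFC resource is the bus controller itself. group_execute_trigger
    # addresses both devices as listeners, then sends the single GET command
    # byte with ATN asserted, so both fire on the same bus cycle.
    bus = rm.open_resource("GPIB0::INTFC")
    bus.group_execute_trigger(scope, dmm)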
Whether that makes it worthwhile to put GPIB on a new instrument in 2025 is a different question. I’m only addressing “what does GPIB give you”?
There are approaches to real-time Ethernet (industry implementations like PROFINET or EtherCAT, and IEEE 802.1AS), but support is spotty and it requires specialized gear to be effective.
If you buy used equipment that doesn't have Ethernet, or your company wants you to use the stuff that has been in the lab for 10+ years, there's simply no other choice. Or companies that see Ethernet as a potential security attack vector.
It's indeed not that GPIB is better than Ethernet. In a few narrow aspects that's arguable, but as a general statement it holds.
with a bonding machine :)
Doing that manually can be tedious; nowadays, for ICs that are wire-bonded and not flip-chipped, it's all automatic. Manual bonding is still widely used in research.
These prices are like airplane prices: no one with volume pays list price, it's something else. Moreover, this FPGA is very peculiar: it's used to emulate ASIC designs during validation, so it's not really the typical FPGA that gets used in a project.
Same issue, but instead I convert the laptop's USB-C signals to HDMI/USB-A plus a charging port with a cheap adapter, then use a KVM with HDMI/USB switching.
Funny how the manufacturer proudly claims that the protocol is encrypted, but completely forgets to mitigate replay attacks, thus making the encryption completely useless.
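For the curious, here's what replay protection can look like: a monotonic counter used as the AEAD nonce, so the receiver rejects anything it has already seen. A minimal sketch with Python's cryptography package, purely illustrative and not the thermostat's actual protocol:

    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()   # shared during pairing (hypothetical)
    aead = ChaCha20Poly1305(key)
    last_seen = -1                          # highest counter accepted so far

    def seal(counter: int, command: bytes) -> bytes:
        # The nonce doubles as the replay counter; it must never repeat.
        nonce = counter.to_bytes(12, "big")
        return nonce + aead.encrypt(nonce, command, None)

    def open_checked(packet: bytes) -> bytes:
        global last_seen
        counter = int.from_bytes(packet[:12], "big")
        if counter <= last_seen:
            raise ValueError("replay detected")   # resent old packet: reject
        command = aead.decrypt(packet[:12], packet[12:], None)  # also authenticates
        last_seen = counter
        return command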
Unlikely. This kind of wireless thermostat has two parts: the thermostat itself, and a separate receiver box that's directly connected to the boiler. There's usually a pairing process that you can go through where the two parts negotiate a shared value used in the protocol; this prevents one thermostat unintentionally controlling other boilers. You can see this described in the Installation Guide for the thermostat linked from the article (it's called 'binding' in the guide).
And so the heat-stroke killer was born, offing his victims with rapid swings between the coldest and hottest settings; natural death has never been this man-made.
Ah yes, the classic problem of people using crypto primitives without fully understanding the problems they're trying to solve. Anyone even remotely interested should look into a full protocol like TLS or PGP to see how many primitives like block ciphers, hashes, etc. are involved and why.
In fact, I'm quite surprised by this announcement. GL.iNet is famous for claiming that their OS is based on OpenWrt, when it can actually be a vendor SDK built on some decade-old version of OpenWrt that has little in common with it today.