Great article! I taught full-time for a while and progressed to program coordinator and eventually department chair. I hated the bureaucracy and ended up leaving. I hang on as an adjunct and still teach one or two sections a semester. My favorite is an intro to programming class using Python - I love to see the lightbulb come on when it all falls into place. That's usually a couple of students out of 25.
I don't get why students don't come to office hours - hardly anyone ever does. I see it as a critical part of my job as service to the students. Some of them are just flailing, yet they don't reach out.
I miss teaching in person. Since Covid, all my classes have been online. I would follow the lecture material, but would also demonstrate important aspects of each topic as we went through them and encourage the students to do the same on their laptops.
My biggest challenge is these online learning platforms. We use ZyBooks. There are two components: the "book" part, where the student reads, and the programming part, where they write some code. The second part sucks. It's not real programming; it's a padded cell where the student writes code and provides any input. The output is automatically evaluated pass/fail. The student has no interaction with the operating system or interpreter, and in my opinion it loses something without that context. They could have an extra CR/LF in the output and they'd fail the assignment. In the real world, who cares? The problems are often absurd, asking for things that nobody would ever encounter.
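To make the CR/LF gripe concrete, here's a toy sketch. This is not ZyBooks' actual grader, just an assumption about how a strict string-comparison autograder behaves, next to a more forgiving one that normalizes whitespace:

```python
def strict_grade(actual: str, expected: str) -> bool:
    # Byte-for-byte comparison: invisible whitespace differences fail
    return actual == expected

def lenient_grade(actual: str, expected: str) -> bool:
    # Normalize line endings and trailing whitespace before comparing
    norm = lambda s: "\n".join(line.rstrip() for line in s.strip().splitlines())
    return norm(actual) == norm(expected)

expected = "Hello, world!\n"
actual = "Hello, world!\r\n\r\n"   # extra CR/LF, identical visible output

print(strict_grade(actual, expected))   # False: fails on invisible whitespace
print(lenient_grade(actual, expected))  # True: same output to a human reader
```

The student's program "works" by any human standard; only the strict comparison disagrees.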
My final rant is student-focused. I get a lot of emails like, "I'm trying this and here's a screenshot of my code and I get this error message and I can't figure it out." Some days I want so badly to tell them that if they pasted the contents of their email into Google instead of sending it as an email, the solution would be one of the first three results!!!
> I don't get why students don't come to office hours - hardly anyone ever does.
Honest answer from my time as a student: because you are learning and solving exercise sheets nearly all the time, so you typically don't have time to come to office hours.
Before you can ask questions about lecture topics, you first have to figure out where your understanding breaks down and which questions you actually need to ask. That takes quite some time that you typically don't have - because you're busy learning and solving exercise sheets.
It's already on BitTorrent. IPFS doesn't do much that BitTorrent doesn't already do; most of it is a new coat of paint, repeating the same mistakes BitTorrent figured out years ago.
It does one thing BitTorrent doesn't — you can compose a new CAR file by combining a few new chunks with a bunch of existing chunks. So you don't get the problem where releasing a new version of an archive means nobody's seeding it; and anyone moving over to seeding the new version stops seeding the old version. Instead, the new file is already pre-seeded by all the old version's seeders on all but the new chunks (because they're seeding the chunks, not the file); and the old file stays seeded as the seeders find the new version and seed its blocks too.
Really, BitTorrent could do this by making all torrent files a small fixed size and then having "torrent files of a directory of torrent files" where the torrent client knows to queue the sub-torrents as they're discovered+downloaded in the parent torrent. But that's not how any part of the ecosystem works. IPFS is a "do over" that allowed them to fix this.
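The chunk-level dedup described above is easy to sketch. This is a toy model, not the actual IPFS/CAR format (real chunks are ~256 KiB and CIDs are full multihashes; everything here is shrunk for readability):

```python
import hashlib

def chunk(data: bytes, size: int = 4):
    # Split content into fixed-size chunks (tiny here for demo purposes)
    return [data[i:i + size] for i in range(0, len(data), size)]

def store(data: bytes, blockstore: dict, size: int = 4):
    # Content-addressed storage: each chunk is keyed by its own hash,
    # so identical chunks from different files collapse to one block.
    cids = []
    for c in chunk(data, size):
        cid = hashlib.sha256(c).hexdigest()[:12]  # shortened for readability
        blockstore[cid] = c
        cids.append(cid)
    return cids  # a "file" is just this ordered list of chunk IDs

blocks = {}
v1 = store(b"AAAABBBBCCCC", blocks)
v2 = store(b"AAAABBBBCCCCDDDD", blocks)  # new version appends one chunk

shared = set(v1) & set(v2)
# v2 reuses every v1 block: anyone seeding v1 already serves 3 of v2's 4 chunks
print(len(shared), len(blocks))
```

Because seeders hold chunks rather than files, publishing the new version only adds the one new block; everything else is already out there.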
> releasing a new version of an archive means nobody's seeding it; and anyone moving over to seeding the new version stops seeding the old version
BitTorrent v2 would in theory be able to seed individual files even if they come from a different torrent. But clients have no reasonable way to look for other versions of a torrent that contain a file they already have.
The main BitTorrent clients already support creating and seeding v2 torrents. But there's just no infrastructure for seeding at the individual-file level.
One major benefit of IPFS is that people seeding individual works and people seeding the large archive collections can share data. It seems these torrents are blocks of data that aren't of direct use on their own.
That being said, while the IPFS protocol is decent, the implementations kind of suck. BitTorrent is well established, with many high-quality implementations.
I spent 15 years managing a VMware-centric data center. I ran the free version at home for at least 5 years. When I ran out of vCPUs on my free license I switched to Proxmox and the migration was almost painless. This new tool should help even more.
For most vanilla hosting, you could get away with Proxmox and be just fine. I've been running it for at least 5 years in my basement and haven't had a single hiccup. I bet a lot of VMware customers will be jumping ship when their licenses expire.
This has been a great resource - I love this book! For the last 5 years, I've taught an intro to programming class at the college level and I always recommend that my students augment their resources with this book.
I remember working on a project to put scantron machines in every public school in DC back in the 80s. We built the interface from the scantron machine to the DECmate II (a micro-PDP-8, if I remember correctly); async io in assembler... I learned a lot on that part of the project. Then we wrote the scanning software to allow lots of teachers to scan their tests in at the end of the day. Next we built a network over dial-up phone lines to allow the DECmates to upload their daily scans to a VAX (using Kermit, I think). Finally we built the tools to load all of the daily scans into a database and do all kinds of analysis and reporting. All pre-internet -- good times!
I remember learning a lot about the scantron forms and realized that if you made a black box at a certain place, that form would be interpreted as the answer key and would screw up a whole pile of scanning!
In the left margin, about 2/3 of the way down... between a couple of alignment marks. But that was 35 years ago... I could be wrong. I always felt like I held an immense power with that knowledge, but never used it!!!
The scantron forms that were in use while I was in high school had a sort of meta-data section, where students would put their name, the date, etc. One of those boxes was labeled 'key'. I always assumed it performed that function, but I never tried it to see.
Maybe the intention was to make it easier for teachers to identify a student maliciously marking it?
My recollection is that the "key" there allowed a teacher to give out multiple tests with questions in different orders, and then you would mark the letter for the test that you were given in that section. So test A would have questions in order 1 2 3 4 5, while B would be 2 5 4 3 1 and C would be 3 4 1 5 2. They could also change the letter of the answer, or really whatever you imagine.
Presumably it reduced people copying answers from their neighbors.
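The scheme described above is straightforward to sketch: one master answer key, plus a per-version mapping of printed position to master question number. (The orders below come from the comment above; the answer key itself is made up.)

```python
# Master key: question number -> correct answer (hypothetical values)
master_key = {1: "B", 2: "D", 3: "A", 4: "C", 5: "A"}

# Test version -> which master question appears at each printed position
versions = {
    "A": [1, 2, 3, 4, 5],
    "B": [2, 5, 4, 3, 1],
    "C": [3, 4, 1, 5, 2],
}

def score(version: str, answers: list) -> int:
    # answers[i] is the student's mark for the i-th printed question
    order = versions[version]
    return sum(1 for pos, q in enumerate(order) if answers[pos] == master_key[q])

# A student with test B who answers everything correctly scores 5/5...
b_answers = [master_key[q] for q in versions["B"]]
print(score("B", b_answers))  # 5
# ...but the same answer sheet copied onto version A scores 0/5
print(score("A", b_answers))  # 0
```

Which is exactly why it discourages copying from a neighbor: their sheet is only right for their version.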
That was my first exposure to Linux in 1995. I remember downloading 30-something floppy disks over a painfully slow T1. I deployed our company's sendmail email server a few months later, running on an old PC. In 2006 I switched to Linux as my daily driver, and if I need Windows these days, it runs in a VM.
Check out Mr. Fancy T1 line over here. I remember downloading Slackware floppy images over 28.8 dialup. Talk about pain. I recently stumbled on a dusty box of them whilst cleaning out the attic.
2400 baud. I wish. I had to manually dial my rotary phone and then place the handset into my 300 baud CAT acoustic coupler modem.
To be fair, by the time I first used Slackware that 300 baud modem had long been replaced, and at the time I had bonded ISDN channels for a 128 kbps connection paid for by my employer. Soon replaced by a T1 to my home, also paid for by my employer.
Did your employer give you a spec'd-out computer/workstation too? I don't even know how you hook up ISDN equipment, but I know an SGI Indy had it built in, ~1993.
I'm thankful I never experienced that. My family switched from AOL dial-up to an AT&T ADSL connection (it started out as either 128, 256, or 512 kbps down, circa 2006 IIRC). It still didn't make downloading FreeBSD and Fedora Core 6 ISOs easy, but it definitely was doable in a reasonable amount of time!
I set a kitchen timer to 12 minutes for each floppy disk's worth of download at 14.4k. It took many evenings of interrupting my TV watching every 12 minutes to kick off a new zmodem download. If it was indeed 30 disks (which seems reasonable; some of the disksets were only 3 or 4, others were 8 or more, depending on which packages you wanted), that would have been 6 hours in one shot with no overhead.
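For what it's worth, the 12-minutes-per-disk figure roughly checks out against the raw line rate (this assumes full 1.44 MB disks; modem compression and partly-filled images would shave the time down):

```python
# Rough sanity check of the kitchen-timer math above
floppy_bytes = 1_474_560        # one 1.44 MB floppy
modem_bps = 14_400              # 14.4 kbps line rate, no compression
seconds = floppy_bytes * 8 / modem_bps
print(round(seconds / 60, 1))   # ~13.7 minutes per disk at raw line rate
```

So 12 minutes per disk with a V.42bis modem squeezing a little compression out of the stream is entirely plausible, and 30 disks really is on the order of 6 hours.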
I have fond memories of installing Slackware and messing around with it in high school, around ‘98. I had access to a cable modem and cd burner. If I recall correctly at that time booting directly from cdrom was still not always available so the default method to install Slackware (and also to boot it after installation) was using a floppy plus cdrom.
I soon figured out that it was easy to skip the cdrom altogether and make a minimal install using just the boot disk and ftp. So easy yet you had to be deliberate and understand what you were doing. Such a great learning exercise.
A T1 was about ten times faster than a 3-1/2" floppy drive wasn't it?
I never thought of a T1 as "slow" until I started downloading CD images for Linux distros. The first cable modem connections I had were so much faster for downloads.
I remember being a kid and hearing about T1 lines and being amazed at their speed and how the server for the game I played was running off one. I pictured it as some special commercial offering that was rare to have.
Funny learning how slow those are by today's standards.
In 1995, many early ISPs were still on fractional T1s or even 56K leased lines! I remember upgrading one from 56K to T1 in mid 1995. T3s were rare: large regional ISPs and backbone providers.
Where I’m living, ADSL is still exciting (and one of only 2 options for wired Internet service). Currently on 2 (semi-stable) 60 Mbps down / 15 Mbps up pairs bonded together, sold as a 100 Mbps down / 20 Mbps up service. I keep it because the cable provider charges a ton for anything over 10 Mbps up when not on a promotional rate.
I use this a couple of times a year when I crank up my Panda distribution of TOPS-20 (http://panda.trailing-edge.com/). Now that VMS is ported to x86 I may use it more!