After getting shoulder bursitis two years ago--although the direct cause was sports, not desk habits--I dove into the world of split ergo keyboards. I did get one (a Kyria v3) and learned to type on it at an acceptable speed--although still significantly slower than my speed on a regular keyboard.
Wanting to optimize my layout, I researched my typing behavior and logged my keystrokes (storing these logs as securely as I would a password). The analysis gave me notable insights (e.g. by far my most used keys are the arrow keys, for selecting text), but my main conclusion was that even during a regular full day of programming, preferring my keyboard over my mouse (tiling window manager, hotkeys, a browser extension to virtually click on elements using keys), I don't actually type that much, and when I do, it is in bursts, never longer than 20 seconds or so.
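For the curious: the analysis was nothing fancy. A sketch like the following (assuming a hypothetical log of `(seconds, key_name)` pairs; the actual log format and threshold are made up for illustration) is enough to find the most-used keys and the longest typing burst:

```python
from collections import Counter

# Hypothetical log format: (seconds, key_name) pairs from a local keylogger.
def analyze(log, gap=2.0):
    """Return the 5 most-used keys and the longest typing burst in seconds.

    A "burst" is a run of keystrokes separated by less than `gap` seconds.
    """
    counts = Counter(key for _, key in log)
    longest = 0.0
    burst_start = prev = None
    for t, _ in log:
        if prev is None or t - prev >= gap:
            burst_start = t  # a new burst begins
        longest = max(longest, t - burst_start)
        prev = t
    return counts.most_common(5), longest

# Tiny toy log: three quick keystrokes, then a long pause, then one more.
top_keys, longest_burst = analyze(
    [(0.0, "Down"), (0.2, "Down"), (0.5, "a"), (10.0, "Up")]
)
```

Run over a full day of real logs, numbers like `longest_burst` were what convinced me the bursts really are that short.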
Although I find typing on a split fun and comfortable, I went back to a regular keyboard because the hit in productivity is not worth it for me. The experiment did teach me how to improve my ergonomics. I optimized my desk height and bought a very flat and less wide keyboard, with the completely unused numpad section chopped off ("TKL") so if I do grab my mouse there is less travel.
This kind of translation problem is the focal point of Star Trek: The Next Generation, season 5 episode 2, "Darmok" (1991).
I watched it for the first time after somebody referenced it, as I did just now, as an example of this kind of problem. Despite knowing the point of the plot beforehand, I still found the episode interesting.
I wish I could mention this episode here for language enthusiasts to enjoy without revealing the main plot point (that idioms in languages are hard to translate). Shaka, when the walls fell. But I think the very act of mentioning it in a thread on this topic does so unavoidably. Temba, at rest.
Not being able to solve basic math problems in your mind (without a calculator) is still a problem. "Because you won't always have a calculator with you" just was the wrong argument.
You'll acquire advanced knowledge and skills much, much faster (and sometimes only) if you have the base knowledge and skills readily available in your mind. If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
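To make that concrete: even a small matrix product is nothing but a pile of those simple multiplications and additions, which is why having them automatic matters. A minimal sketch:

```python
# A matrix product spelled out: every entry is a sum of simple
# number-by-number multiplications, exactly the kind you don't want
# to punch into a calculator one by one while learning linear algebra.
def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
        for i in range(n)
    ]

c = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# c == [[19, 22], [43, 50]]
```

A 2x2 product already takes eight multiplications and four additions; stalling on each one makes the actual linear algebra invisible.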
> if you have the base knowledge and skills readily available in your mind.
I have the base knowledge and skill readily available to perform basic arithmetic, but I still can't do it in my mind in any practical way because I, for lack of a better description, run out of memory.
I expect most everyone eventually "runs out of memory" if the values are sufficiently large, but I hit the wall when the values are exceptionally small. And not for lack of trying – the "you won't always have a calculator" message was heard.
It wasn't skill and knowledge that was the concern, though. It was very much about execution. We were tested on execution.
> If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
I can't imagine anyone is still using a four-function calculator, certainly not in an application like learning linear algebra. Modern calculators are decidedly designed for linear algebra; they need to be, given the rise of fields like machine learning that depend heavily on it.
Critical thinking is not a generic, standalone skill that you can practise in a targeted way. That is, critical thinking doesn't translate across knowledge domains. To think critically you need extensive knowledge of the domain in question; that's one reason why memorizing facts will always remain necessary, despite search engines and LLMs.
At best what you can learn specifically regarding critical thinking are some rules of thumb such as "compare at least three sources" and "ask yourself who benefits".
Great initiative, but the landing page on Firefox Android is quite annoying because the animated text wraps and keeps changing the vertical position of the text I'm trying to read.
At https://www.hedy.org/learn-more many of the listed 'research' items are bachelor's theses supervised by the creator of Hedy, with topics such as 'creating Syntax highlighting for Hedy'. To me this comes across as padding and disingenuous.
As a teacher, my answer to publicly available AI, regarding grading, has been to only let work I have personally seen students do (or that staff, whom I trust implicitly, have seen) count towards passing/failing/diplomas.
This means graded longer tasks such as writing or programming, which in the past were done outside of class and then handed in, are instead done in person (like most exams are, traditionally). To acknowledge the longer process, the time allotted for such a test is much longer (up to 3 hours instead of 1), and I can expect less of work created in 3 hours than of work created over 6 weeks.
Students still have the opportunity to hand in work that was done outside of such an in-person test, which I will grade, but that grade isn't on the record; it serves only as an indication.
I wish I could trust students, but plagiarism has become so easy now (added to a zeitgeist of personal gain above all) that a critical mass fails to resist.
I think diplomas are very useful. They get their usefulness from the trustworthiness of the party handing them out. Only recording grades for work done in an environment controlled by that party has downsides, but in the current and foreseeable situation I don't see a better way of maintaining a base level of trustworthiness.
I love teaching without grades or diplomas involved so much more. Then it's all about doing what's best for the students.
It's also worth pointing out that this approach is valuable in protecting against some very traditional risks too: Content that was ghostwritten or plagiarized from another human. LLMs simply made it easier and cheaper to cheat in the same sets of circumstances.
> but that grade isn't on the record; it serves only as an indication
I think that kind of output is often under-appreciated, especially by students. Even as pure advisory feedback, a grade on a subject or section helps people recognize where to allocate their limited time and effort.
I can see the value in this approach, but I also know for myself this would have been horrible in school.
I was always a horrible test taker (the pressure being a large part of that, but not all of it), and I often needed the time outside of class for any papers I needed to write. No matter how much I tried, it just wasn't happening in class.
I acknowledge the problem but I would be worried how this could negatively impact a lot of people.
You're right that some students fail only because of the way they are graded. Although with the right preparation this number can be quite low, it's never zero. It's heartbreaking to see some fall between the cracks, but I don't think it's feasible to get to a system that works for every single person.
Right. At my school, the accessibility option for tests is to give us more time. A three hour test would be made "accessible" by extending it to six hours or even more.
I was really hoping AI would make our world more accessible, not less.
(eta) Additionally, it would take more instructor or docent time, because no one can be trusted to actually learn the material we're paying tens or even hundreds of thousands of dollars for.
It ain't the future dystopia I'm afraid of, it's the one we're creating this week.
Thanks for bringing this up. I nearly failed out of college before getting accommodations for anxiety. I was a physics major and was solid with math, but sitting down to take a test I’d forget how the most basic properties of multiplication and exponentiation worked. Once I started being able to do untimed take home tests (due to accommodations) I started doing well. With the pressure off I was also able to finish the tests much more quickly than before.
I have no idea what it is like now but a significant part of the testing environment at Georgia Tech in the early '80s in the highly competitive engineering fields was "impossible" exams. These involved a timed in-class exam of an hour (sometimes two) where it was not possible to solve the major points problems analytically within the time limit, but a variety of approximation techniques were available. The highest scores came from understanding how to deploy the fastest approximation techniques.
Unlike many of my classmates I was really good at the mathematical part of the exam questions and could straightforwardly solve them analytically. However, this took too much time! I bombed several important exams learning that "good enough" is an important attitude to have when solving engineering problems. I hated this intensely at the time. (Never give that Stalinist institution a dime till I die.) BUT. There's a reason I went into numerical analysis in graduate school. Which I loved then, turned into a transient career that I enjoyed immensely, and remember fondly.
I doubt this approach is feasible for non-engineering disciplines.
Teach critical thinking before the college level - ideally well before. Much of the issue with overly relying on these tools boils down to a lack of critical thought - analyzing the output of these things critically, once you reach a certain competence in writing, will quickly lead you to realize you shouldn't overly rely on them, or maybe even use them at all.
> my answer to publicly available AI, regarding grading, has been to only let work by students I have seen been done myself (or staff, trusted implicitly) count
Of all of the tactics I've heard teachers taking, this is the one that seems both the most effective and most fair. It's a terrible thing that it's necessary because it's a less effective use of classroom time, but it's hard to think of a better approach.
My prediction has always been that schools will shift to proctored exams, and job placement rates will eventually be the only metric that matters for students. I think it's an advantage for the schools to ignore AI for as long as they can though, since higher pass rates means more students paying the full cost of tuition, and better graduation statistics.
I think the problem is we assume using another KPI will obviously solve the problem. Using job placement as a metric just encourages nepotism and self-dealing, though - universities have been caught inflating, say, the average salary for an outgoing degree by including an NBA player who earned it before making millions shooting hoops.
It's really difficult to plagiarize if you write the paper or take the exam in class using paper and pen-or-pencil. It's also difficult to plagiarize if the teacher/professor/TA actually wanders around the room during the test (without stopping behind a student and raising the student's blood pressure in the process.)
Of course, that involves more on your part as teacher/professor -- you actually have to teach and test and act as editor -- on an ongoing basis. At university, that's part of what TA's are (or should be) expected to do. As a teacher, it's your responsibility.
So, if you happen to be a CS 101 instructor ask for the evolutionary copies of a program. If you happen to teach poetry and cover Herrick, ask for an original poem in the style of "Upon Julia's Dress" and see what the students happen to see from the same poem.
Teaching without ensuring that learning happens is a waste of time and a disservice to your profession.
It seems to me that AI detection is best done in the editor itself. If you track a student's mouse and keyboard there should be a clear distinction between someone plagiarizing (even if they're re-typing word for word) vs writing genuine work - which would involve lots of backtracking and editing.
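As a rough illustration of the kind of signal such an editor could look at (a toy heuristic, not how any particular product works), counting corrections and navigation relative to total keystrokes already separates a linear transcription from an edit-heavy genuine draft:

```python
# Toy heuristic: genuine drafting tends to involve lots of backspacing,
# cursor movement, and rewriting; transcribing finished text (even retyped
# word for word) tends to be a nearly linear stream of characters.
EDIT_KEYS = {"Backspace", "Delete", "Left", "Right", "Up", "Down"}

def edit_ratio(keystrokes):
    """Fraction of keystrokes that are corrections or navigation."""
    if not keystrokes:
        return 0.0
    edits = sum(1 for k in keystrokes if k in EDIT_KEYS)
    return edits / len(keystrokes)

transcription = list("the quick brown fox")  # linear, no corrections
draft = list("the quikc") + ["Backspace"] * 2 + list("ck brown fox")
```

A real detector would need more than one number (timing, paste events, revision history), but the point stands: the writing process carries information the final text does not.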
Are you seeing a shift in class duration to accomplish this? For my kids, I don't see how their classes can squeeze both instruction time + work time in the allocated slots. Is instruction time being sacrificed or "replacing" homework (watching videos/reading text at home)?
A friend of mine seems to have switched to an "exercise/exam in school, learn at home" methodology, where the next lesson is handed out on a sheet at the end of the hour, and "school" is mostly exercises he uses to see how much each student understands. He seems to be quite happy with this, and told me good students are quite happy too (lessons are learned at their own pace), while poor ones now have more time with him helping them during the exercises. The only issue is that he now has trouble not talking to/helping students during exams and really hates those now (he always disliked them, as well as grading, but now it seems worse).
He also heavily encourages students to learn from each other during "perm" hours (basically, in high school, hours where students have to stay at school but have nothing scheduled and can work on their own, go to the library, or play/chill/whatever), but that was before and because he fondly remembers the days when all 26 of us, the full class, teamed up against homework (yeah, I had great classmates).
In principle nothing about the structure of the course has to change. You can keep an existing course with instruction during class and working on a project outside of class. Even handing in the project and grading could stay the same, although the grade for the project wouldn't be recorded and would serve only as an indication. You then add a longer test at the very end, for a grade.
How I set up my course is different from traditional education. Most instruction happens through self-study (including videos and software); face-to-face time is for questions. All elements - material, staff, and students - have to be tuned for this setup to work, though, and it is hard to pivot to in existing setups because education (in the Netherlands at least) is very entrenched.
This self-study centered learning approach, like the in-person tests instead of projects, comes from necessity. Great (and I expect ever increasing) teacher shortages.
I worry that the uncomfortable truth AI is revealing is that essay writing is a less essential skill than we'd like to believe. We can say things like "personal gain above all" and call the pursuit of a diploma our downfall, but I can't help but wonder if it would be better if we just got rid of essay writing altogether, or waited until the student knows what kind of writing they want to do.
Of course you prefer teaching without grades; the only people in the room are people who want to be there.
Really it's not revealing anything at all about the value of essay-writing as a skill. It's just revealing that people will cheat in ways that are hard to directly prove, and grading writing is really hard when people have access to an infinite bullshit generator.
I don't know about this. Take away the moral gatekeeping of calling it cheating and look only at outcomes. If students use AI instead of doing it themselves, are they worse off in a material way that only essay writing could provide? If they aren't, couldn't we call essay writing busy work at worst and elective at best?
Essay writing shows that you actually know the domain material, that you are capable of holding onto a thought for more than 3 seconds and that you can communicate thoughts clearly to other people.
I never wrote a single essay for a math class, so I think there are alternatives.
But I think you're missing my point. If a student can go through their education never writing an essay the way everyone thinks they're meant to do it and end up doing just fine in life, maybe the merits of essay writing aren't all they're cracked up to be? Save the skill for specializations where the students are more motivated to learn it, like metalworking or pottery.
I'd argue that if you were ever asked to show your work, you were writing the equivalent of a mathematical essay. Maybe you never had to learn proofs, but I did.
“I didn’t have to do that and I turned out fine” isn’t a very rigorous pedagogy.
Sure, but now we're stretching the definition of "essay" past what's useful for the topic at hand. This is a thread about AI checkers for essays, not math proofs.
And no, it isn't rigorous, but it's a pretty good hint that you've got correlation and not causation. Perhaps it's worth entertaining the possibility there's a better way.
I kind of agree with this and am willing to take it even further: Why should I even study a subject that I can just ask a computer to explain to me when I need it? AI isn't quite there yet in terms of reliability, but there may be a point where it's as reliable as the calculator app, at which point, does it even make sense for me to study a subject just to get to the mastery level that is already matched by an AI system?
If I need to know when Abraham Lincoln was born or when the Berlin Wall fell, I could either 1. memorize it in high school history class to demonstrate some kind of "knowledge" of history, or 2. just look these things up or ask an AI when I need them. If the bar for "mastery" is at the level of what an AI can output, is it really mastery?
> Why should I even study a subject that I can just ask a computer to explain to me when I need it?
Because studying a thing is a world apart from having it explained. When you study a thing to gain understanding, your understanding is not only deeper but you are also learning and practicing essential skills that aren't directly related to the topic at hand.
If you just have a thing explained to you, you miss out on most of the learning benefit, and the understanding you end up with is shallow.
This is, sadly, an idealized notion of education that just doesn't match the reality of a general ed classroom. Students don't study to gain understanding in a majority of their classes; they study to pass. True, not all students all the time, but in the world you just described no amount of extrinsic motivation can force a student to deeper understanding, so why are we even talking about AI checkers?
Unless you're telling me you never did that in any of your classes growing up, but I'm going to be highly dubious of such a claim.
> Unless you're telling me you never did that in any of your classes growing up
I did extremely poorly in school, actually. It wasn't an environment that I could function in at all. But I got a great education outside of school.
I'm really talking about what's needed in order to get a good education rather than anything school-specific. Technically, school itself isn't needed in order to get a great education. But you do want to get educated, whether school is a tool you employ to that end or not.
"If you want to get laid, go to college. If you want an education, go to the library." -- Frank Zappa
But, outside of reading, writing, and arithmetic, the most valuable thing I learned in school was how to learn. So, that's my bias: the most important thing you learn in school is how to learn, and much of what teachers are doing in the classroom is trying to teach that.
My fundamental point is that what we need in order to learn is not just getting answers to questions. That approach alone doesn't get you very far.
I don't think we're too far at odds. I think the difference is that I'm talking about the classroom...especially general education, where AI essays are the problem. To your point, not every student chooses to spend time at the library, and you can't make them.
When I was younger, I was a bit of an idealist about education reform. As I grew older, I began seeing the failings of education as a reflection of human nature. Now, I just don't think we should be wasting students' time trying to make them do something that, for whatever reason, they cannot or will not do the way we want them to.
> If you just have a thing explained to you, you miss out on most of the learning benefit, and the understanding you end up with is shallow.
Sorry, but I don't get this. Isn't this exactly what the teachers/lecturers and books do - explain things?
Sure, you have to practice similar things to test yourself if you got everything right. And, of course, it's different for manual skills (e.g. knowing how to make food is kind of different from actually making food).
But a language model trained on education materials is no different from a book with a very fancy index (save for LLM-specific issues, such as hallucinations), so I fail to see the issue in the ability to get answers to specific questions. As long as the answers are accurate, of course.
And - yeah - figuring out if the answer is accurate requires knowledge.
> Isn't this exactly what the teachers/lecturers and books do - explain things?
In part, sure, but not solely. I wasn't saying that getting an explanation is a bad thing, I was saying that only getting an explanation doesn't advance your learning much.
> And, of course, it's different for manual skills
I don't think that's different. It's the same for intellectual skills as for manual in this regard.
> I fail to see the issue in ability to get answers for specific questions. As long as the answers are accurate, of course.
There's nothing wrong with getting answers to questions. But that's not the process that leads to learning anything other than the specific answers to those specific questions.
Getting an education is much, much more than that. What you are (or should be) learning goes far beyond whatever the subject of the class is. You're also learning how to learn, how to organize your thoughts, how to research, and how the topic works at a deep enough level that you can infer answers on it even when you've not been told what those answers are.
If what you're learning in class is just a compendium of facts that you can look up, you're missing out on the most valuable aspects of education.
Why lift weights when I could just use a forklift?
At some point someone actually has to do some thinking. It's hard to train your thinking if you just offload every simple task throughout your entire education.
So you're saying you've never used StackOverflow in your life?
I find your analogy works against your point, because manual labor does use a forklift and other heavy machinery whenever possible. It's better for human health (and the backs of blue collar workers) that way. Now the only people lifting weights in gyms are those who choose to be there for their health and not because they're forced to.
If you’re, say, not clear on whether Abraham Lincoln was president when the Berlin Wall fell, you might have trouble asking the AI a good question to begin with.
This line of thinking will leave you like some of the high school kids my wife works with, who can't solve 19 + -1 without a calculator. If you don't integrate anything into your understanding, you will understand nothing.
> As a teacher ... tasks such as writing or programming
I hadn't realized programming was being taught pre-university. From a quick look online it seems high-schoolers may be learning Python. That's pretty cool. Wonder how widespread this is, and how early children are taught.
My kid is in a programming class, so far it has been... kind of a joke. Extremely controlled environments and dragging "code" around in the form of colored blocks.
I wish they taught JS/HTML web dev stuff. Even if someone only gets a year in the knowledge will stay relevant because they are using the internet daily. Just basic understanding of cookies, http, IP addresses, etc. is something kids should know.
Scratch is great. I used it when teaching and it allows kids to focus on the goal rather than deal with concepts like syntax errors and compilers. Sure, they can learn more later, but at the level of "computers follow instructions, you know?" it's a very appropriate tool.
Eh, they might hit LabView in university. If they're really unlucky they'll end up in an industry that uses LabView in practice, and Scratch is really good prep for that pain.
So you've just completely dismissed Scratch/Blockly which has an illustrious and fairly respectable history. I'm not saying it's the be all and end all, but the way you tossed it aside with scare quotes felt a little... flippant?
In the Netherlands, programming in secondary education (ages 12-18) is not uncommon but also not widespread. If it's offered, it's not great. Like programming taught at universities, it's mostly people with a knack for it that actually improve.
For my MSc thesis, which I finished 6 years ago, I researched how to teach programming to people without natural talent. I found a method with great results, and have yet to see anyone do something remotely similar. I'm not teasing a sale; all my stuff will be fully free. I've been working on it fulltime for the last three years, and I expect to have a 1.0 in two years, so let's say four years.
My high school had several years worth of programming courses available, and that was in the 2000s.
Nowadays, American high schools are becoming much more focused on teaching technical skills again (thankfully). There's a renewed focus on teaching things like CNC machine programming and robotics alongside traditional computer science courses.
Hey, there is a company, examind.io, that has a product that lets you assign writing assignments for students to do on their own time in a UI that runs in the browser. It tracks every keystroke and interaction by the student and analyzes them to determine whether the student is writing it themselves or copy/pasting or transcribing it from AI. It also has an option to give students access to an AI research assistant right in the software, so you can inspect how the student is using AI to help with their work.
I think it might give you the assurance you need while giving your students the best experience and opportunity to do honest/high-integrity work with the latest (AI) tools.
The benefit of forcing students to write their graded work in person is you know it's coming from them, but the downside is it's a very artificial test, not representative of what real-world work will be like. I think examind can give you the best of both.
It evaluates the process the student used to write the essay, not only the final result, and makes that process transparent to the professor. So the professor will see if the student typed it out word for word; this is in contrast to an authentic essay-writing process, which involves a lot of editing.
Re-reading my original reply and your response, I think we had a misunderstanding. I never intended to make you feel assured with my post. I was trying to communicate that the features the product provides could help you feel assured that the student actually completed the work themselves, and if they used AI to help, you can see exactly how (and that they appropriately paraphrased, etc).
The whole point of the product is to give professors more flexibility in the kind of assignments they use (and even allowing students to use LLMs in a controlled way and be evaluated in how they use them) while ensuring academic integrity.
For example allowing students to use LLMs as research assistants and even to help them consider and structure ideas, while ensuring the student paraphrases everything sufficiently to prove they actually understand it and can put it in their own words.
To be clear, I understand and respect your desire to protect the integrity of the diplomas and credentials you are giving out (especially in contrast to the many who let cheating run rampant), but at some point you may want to be able to accurately evaluate how students use industry-standard tools (like when calculators were first introduced).
So sure, be skeptical, but maybe be careful about throwing the baby out with the bathwater.
I appreciate your taking the time to elaborate. I see you've responded to all comments on your original comment and remained civil despite people's negative sentiment. There are too few places on the internet left where such civility remains, and I thank you for contributing.
I would not feel assured of students actually completing the work themselves with _only_ something like examind.io as an extra measure. For it to be used in their own time, they would use it on their own hardware. As user viraptor pointed out, whenever there's anti-cheat software, someone is going to create targeted anti-anti-cheat software. That's what I meant by arms race.
For me to feel assured of students not fooling the anti-cheat software, for their input device they would have to use hardware controlled by me. It's not feasible to let them use hardware controlled by me in their own time.
I can see how a tool such as examind.io might help in accurately evaluating how students use other tools on a computer. For that they could use hardware controlled by me, during a test.
Their approach is to bring maximum transparency into the process the student used to write the essay, rather than the final result.
I don't really see how it's about an AI vs anti-ai arms race.
It's not my company and I'm not responsible for selling it so I'm probably doing a poor job...
But if you want to evaluate your students writing and ensure integrity and also provide them with longer windows to work on bigger writing assignments (and even allow them to use LLMs to help them write in accordance with your rules) then wouldn't an application like this help you?
I don't understand why I'm getting such a negative reaction from everyone for sharing this... I genuinely thought I was helping by pointing out a solution to your problem...
Assuming you're the founder, this is the type of BS comment that makes the rest of us hate AI founders.
It's vacuous, makes vague claims that don't leave room for proof/disproof, and doesn't offer any reason that it's any better than a prompt that asks GPT4o "was this generated by AI y/n"
I am not the founder. Also, I pointed out they track user actions and keystrokes and analyze them, which is clearly distinct from just pasting the student's work into an LLM and asking if it was generated by AI. I'll go further and say that they can tell if the student left the browser window and for how long. Also, natural essay-writing patterns are different from transcribing something from another source. I'm pretty sure they have more methods, but I don't remember them all.
Your comment is full of unnecessary animosity and resentment and I think it is inappropriate.
I know the founders and I also know they have very happy customers.
Just because you have experienced some vapor ware startups or whatever doesn't mean every company is one...
I shared it with the OP because it sounds like they care about the problem the product solves.
Lastly, I want to remind you that the very first line in the community guidelines is "Be kind. Don't be snarky." I find your comment both unkind and snarky.