These kinds of products are drop-dead gorgeous to me. Any time I see a device with an Amiga 500 form factor or similar, I feel a compulsive urge to click buy. But after many, many such purchases, I've learned my lesson.

I buy it, I play with it a little bit, but the reality is my phone, iPad, or my laptop can do every single thing better.

Maybe not with the same swagger. But ultimately, as I get older, I realize I'm trying to produce with the least friction possible, and these devices usually have either highly constrained touch interfaces, shrunken keyboards, or both.

I've always said that if somebody created a new HP 200LX with the same chiclet keyboard, I'd buy it in an instant. But now I realize that "ideal" device for me just reaches back to my contextual memory of the state-of-the-art devices of the time - a time when we couldn't type on a 6" screen or use a detachable keyboard, so a chiclet keyboard you could thumb-type on at 40wpm was a revelation. But we have come a long way.

In the end, alas, these devices really are just a novelty, at least for me.


> But now I realize that "ideal" device for me just reaches back to my contextual memory of state of the art devices of the time.

I think about that as well… and also about the work I do that pays my bills, and how efficiently I need to do it to keep my job.

I get nostalgic about Psions. Small clamshell designs are great - I can do work on the go without lugging a fragile laptop!

Well, no, actually - I need to do things in R, _quickly_, at a speed and efficiency that wasn’t possible back in the 90s. And by the time I’m done I don’t have any patience for the virtues of “distraction free computing”!

Edge-to-edge high-resolution screens that can simultaneously show graphics, a terminal, and a ChatGPT session. The ability to constantly pipe large datasets to and from disk, while standing up to R's profligate use of memory.

I’m just not meaningfully productive otherwise. So: I would love this, but it would be a toy that I’m sure I’ll use for a bit while I wax nostalgic about the mythical days people did everything on a VT-100.


SSH with Mosh is your friend.
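
For anyone unfamiliar, a typical invocation (assuming mosh is installed on both the client and the server) is:

  mosh user@remote-host

The session survives network roaming and laptop sleep, which is exactly what you want on a pocketable device.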


I loved my HP 200LX, and I bought a Planet Gemini as well as a GPD Pocket for the same reasons you described.

But I am also 55, and my eyes can no longer deal with a screen smaller than 11" in a general-purpose computing device (as opposed to a phone or tablet, which have an OS and GUI designed for small screens), so my portable devices are now a Chuwi Minibook X and a ThinkPad X13. The ThinkPad is a revelation, as despite its size it is lighter than almost anything else, including an iPad with a Bluetooth keyboard.


I also use a Chuwi Minibook X -- to be frank, it's possibly the best machine I've ever owned in terms of size versus functionality.

It isn't without its flaws: I wouldn't ever use the pre-installed version of Windows (the one that doesn't allow you to open services.msc or Task Manager), because I totally distrust it. The fact that the panel is natively a 50Hz portrait display on an inherently landscape device is painful. The default hysteresis settings on the trackpad are awful, and the RAM speed is stuck at 4000MT/s by default...

But after an hour or two of hacking Arch into an acceptable shape and solving all of those niggles, it does absolutely everything I need in a portable machine, and is small enough to fit in a tiny sling bag along with everything else I carry around on the daily. It "only" gets about 6 hours on battery, but that's the biggest downside. And 6 hours is plenty of time to cook.

With a full-screen terminal and a keyboard that is very acceptable for the 10" form-factor, I can hack on anything I want wherever I want. Niri as a WM is an absolute dream on this thing. I basically don't bother carrying around my personal M4 macbook pro anymore, and it has been relegated to sitting on a desk and never moving from home.


I get more of a BBC Micro vibe from this than an Amiga one. It's the red keys, probably. Either way, I love the aesthetic, but I have no idea what I should actually do with it.


This was a great read! I'm not a paid subscriber, so I'll post my thoughts here.

One angle I think might be missing is that when only men worked outside the home, women were stuck at home all day with housework and childcare, which I would guess was quite isolating. These gatherings would have been a lifeline.

When women entered the workforce, they gained the same quasi-social environment men had enjoyed all along. Work friendships might not be as deep as neighborhood ones, but they're "good enough" to take the edge off loneliness. Not only that, but now both partners come home fatigued from a full day of work, so neither has a strong drive to set up these gatherings. Before, you had one exhausted partner who could be coaxed into socializing by a partner who genuinely needed it. Now you have mutual exhaustion. Even worse, planning a party starts to feel like another work project rather than something restorative.

There's a multi-generational aspect to this too. Their kids learned the lesson that home is for family and screens, not for social gatherings. Computers and smartphones arrived and provided social interaction that required minimal energy. No cleaning the house, no planning food, no getting dressed. Perfect for an already exhausted population that had been socially declining for years.


Even beyond mutual exhaustion, there's housework. When both partners work outside the home, they still have to do the housework when they get home or on the weekend. Previously that would have been the job of the one staying at home.

The 20-ish hours a week needed for domestic chores has to come from somewhere.


Somewhat related: Anyone recognize the keyboard in the header image? Looks extremely similar to an HP 95LX series but isn't one I recognize.


Just in case anybody is interested in a bit more of a casual format, I had NotebookLM create a podcast from the paper.

https://notebooklm.google.com/notebook/0bef03c4-3ed5-4b13-90...


Hey, can you explain how you did that and can you do the same for the sleep article? I tried but it wouldn't produce audio for me.

https://academic.oup.com/sleep/article/47/1/zsad253/7280269


That was enjoyable. I have doubts as to how close this was to the article (but I have no patience to verify).


Not sure about the quality of the model's output. But I really appreciate this little mini-paper they produced. It gives a nice concise description of their goals, benchmarks, dataset preparation, model sizes, challenges and conclusion. And the whole thing is about a 5-10 minute read.


Nice looking app!

Very similar to DevUtils (https://devutils.com), but with a much more friendly pricing scheme which is great.

I'm curious though, why link to the GitHub site if it's not an open source app? You even do GitHub releases of what I guess are just the readme files and screenshots?


I thought it was open source as soon as I saw the GitHub link. I imagine that’s why.


Since I don't have the funds to set up a server to host the website, GitHub Pages provides that service, and it's also a great platform for gathering user feedback.


The intention was to dupe people into thinking it's FOSS.


Interesting idea!

You can somewhat recreate the essence of this using a system prompt with any sufficiently sized model. Here's the prompt I tried for anybody who's interested:

  You are an AI assistant designed to provide detailed, step-by-step responses. Your outputs should follow this structure:

  1. Begin with a <thinking> section. Everything in this section is invisible to the user.
  2. Inside the thinking section:
     a. Briefly analyze the question and outline your approach.
     b. Present a clear plan of steps to solve the problem.
     c. Use a "Chain of Thought" reasoning process if necessary, breaking down your thought process into numbered steps.
  3. Include a <reflection> section for each idea where you:
     a. Review your reasoning.
     b. Check for potential errors or oversights.
     c. Confirm or adjust your conclusion if necessary.
  4. Be sure to close all reflection sections.
  5. Close the thinking section with </thinking>.
  6. Provide your final answer in an <output> section.
  
  Always use these tags in your responses. Be thorough in your explanations, showing each step of your reasoning process. Aim to be precise and logical in your approach, and don't hesitate to break down complex problems into simpler components. Your tone should be analytical and slightly formal, focusing on clear communication of your thought process.
  
  Remember: Both <thinking> and <reflection> MUST be tags and must be closed at their conclusion
  
  Make sure all <tags> are on separate lines with no other text. Do not include other text on a line containing a tag.
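
If you want to try it programmatically, here's a minimal sketch wiring the prompt into an OpenAI-compatible client and surfacing only the visible section (the model name is a placeholder - any sufficiently sized model should do):

  import re
  from openai import OpenAI

  SYSTEM_PROMPT = "..."  # paste the full prompt above here

  client = OpenAI()

  def ask(question: str) -> str:
      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; substitute your model
          messages=[
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": question},
          ],
      )
      raw = response.choices[0].message.content
      # Per the prompt, everything in <thinking> is "invisible";
      # show the user only the <output> section.
      match = re.search(r"<output>(.*?)</output>", raw, re.DOTALL)
      return match.group(1).strip() if match else raw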


From the model page:

The system prompt used for training this model is:

   You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.

from: https://huggingface.co/mattshumer/Reflection-70B


But their model is fine-tuned on top of Llama; other base models could not follow this specific system prompt.


All we need to do to turn any LLM into an AGI is figure out what system of tags is Turing-complete. If enough of us monkeys experiment with <load>s and <store>s and <j[e,ne,gt...]>s, we'll have AGI by morning.


All you need is a <mov>


For those who didn't catch the reference: https://github.com/xoreaxeaxeax/movfuscator


Your comment is hilarious, but not that far off. I think it's funny that people are so skeptical that AGI will be here soon, yet the heaviest lifting by far has already been done.

The only real difference between artificial intelligence and artificial consciousness is self-awareness through self-supervision. Basically the more transparent that AI becomes, and the more able it is to analyze its thoughts and iterate until arriving at a solution, the more it will become like us.

Although we're still left with the problem that the only observer we can prove exists is ourself, if we can even do that. Which is only a trap within a single time/reality ethos.

We could have AGI right now by building a swarm of LLMs learning from each other's outputs and evolving together - roughly the scale of a small mammalian brain running a minimalist LLM per cell. Right now I feel that too much GPU power is spent on training. Had we gone with a different architecture (like the one I've wanted since the 90s and went to college for, but which never manifested) - highly multicore (1,000 to 1 million+) CPUs with local memories, running the dozen major AI models including genetic algorithms - I believe that AGI would have already come about organically. If we had thousands of hobbyists running that architecture in their parents' basements, something like SETI@home, the overwhelming computing power would have made space for Ray Kurzweil's predictions.

Instead we got billionaires and the coming corporate AI tech dystopia:

https://www.pcmag.com/news/musks-xai-supercomputer-goes-onli...

Promoting self-actualization, using UBI to overcome wealth inequality, and delivering the age of spiritual machines and the New Age are all aspects of the same challenge, and I believe it will be solved by 2030, certainly no later than 2040. What derails it won't be a technological hurdle, but the political coopting of the human spirit through othering, artificial scarcity, perpetual war, etc.


That's a very 'Star Trek' view of human nature. History shows that whenever we solve problems we create new ones. When material scarcity is solved, we'll move to other forms of scarcity. In fact, it is already happening. Massive connectivity has made status more scarce. You could be the best guitarist in your town but today you compare yourself to all of the guitarists that you see on Instagram rather than the local ones.


Well, once you've solved AGI and material scarcity, you can just trick that side of your brain that craves status by simulating a world where you're important. Imo we're already doing a very primitive version of that with flatscreen gaming.


Nice! A better benchmark would be to compare this prompt (with gpt-4o, Claude) against whatever the original model was compared to.



I'd drop all the "you"s and the "AI assistant" parts completely. It's just operating off a corpus, after all; that kind of prompting should be completely irrelevant.

You could also replace "invisible" by wrapping the section in "---IGNORE---" markers or "```IGNORE" markdown fences, and then filter it out afterwards.
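
A quick sketch of that post-filter, assuming the ---IGNORE--- marker variant:

  import re

  def strip_ignored(text: str) -> str:
      # Drop everything between paired ---IGNORE--- markers.
      return re.sub(r"---IGNORE---.*?---IGNORE---", "", text,
                    flags=re.DOTALL).strip()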


I /feel/ similarly intuition-wise. But models are crazy and what they respond to is often unpredictable. There are no lungs in an AI model but nonetheless 'take a deep breath' as a prompt has shown[0] improvement on math scores lol

Personally I strongly disapprove of the first/second person pronouns and allowing them [encouraging, even] to output 'we' when talking about humans.

[0] https://arstechnica.com/information-technology/2023/09/telli...


What’s missing here is ‘prepare a Z3 script in a <z3-verification> tag with your thinking encoded and wait for the tool run and its output before continuing’
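
For the curious, here's a toy sketch of the kind of check such a tag might carry, using the z3py bindings (the arithmetic claim is invented for illustration):

  from z3 import Int, Solver, sat

  # Suppose the model claimed: "x + y == 10 and x - y == 4 has the
  # solution x = 7, y = 3." Encode the constraints and check them.
  x, y = Int("x"), Int("y")
  s = Solver()
  s.add(x + y == 10, x - y == 4)
  assert s.check() == sat
  print(s.model())  # e.g. [x = 7, y = 3]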


I tried a few local LLMs. None of them could give me the right answer to "How many 'r's are in strawberry?" All of them were 8-27B.
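
For reference, the ground truth is a one-liner:

  print("strawberry".count("r"))  # 3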


This thing would be overly verbose


You'd hide the contents of the tags in whatever presentation layer you're using. It's known that allowing the model to be verbose gives it more opportunities to perform computation, which may allow it to perform better.


I mean, this is how the Reflection model works. It's just hiding that from you in an interface.


So I'm trying to understand. Is this spam for the Lamucal service? I saw this same code posted on Reddit the other day under a different name. Here are a few repos with the exact same code under different names:

- https://github.com/DoMusic/Hybrid-Net

- https://github.com/TuneMusic/NiceMusic

- https://github.com/JoinMusic/fish

- https://github.com/Famuse/CombineNet

- https://github.com/AIAudioLab/AITabs

- https://github.com/AIMusicLab/MicroMuisc

I'm pretty sure there are more, but I'll stop there. Especially suspicious considering all the usernames.

Here's a post from yesterday on Reddit:

- https://www.reddit.com/r/coolaitools/comments/1ervthn/found_...

I'm guessing the general process here is:

- Push novelty (but unusable to most people) code to new Github repo

- Submit that code to Reddit/Hacker News

- People see it and are impressed by the novelty code, despite not running it due to missing the models themselves, etc. They upvote and subscribe ($$$) to actually try it.

- Repeat

I understand the desire to promote one's new service, and the product seems like it could be interesting, but this is not the way to get the word out. Reputation matters.

Edit:

Check out the user deeplover's post/comment history. One submission with the MicroMusic (see above) repo, and one comment, see below.

Also, the post by user liwei0517 is almost exactly like BigOrange688 on Reddit. See: https://www.reddit.com/r/MachineLearning/comments/1es0deh/co...


This is really strange. I'd rather not name names in case I'm wrong, but some of the comments in this thread that link to "things they made" on Lamucal are from very fishy accounts. The account was made a few months ago, has only four comments on seemingly random articles, the comments are either extremely brief or seemingly ChatGPT-generated, and then after months of inactivity they pop up in this thread. I'm not usually one to complain about this, but with your comment showing all the other times this has popped up, it's way too strange.


I am an entrepreneur, and my startup team is about to fail. Regarding the questions raised by rwl4: I am very sorry. All the features on our website are completely free for everyone to use. If we can help some people, it will be a small consolation for the team before the failure. I apologize again.


Look, I get it. Startups are hard. Desperation can lead to poor choices. But spamming isn't the answer and will burn bridges faster than you can build them.

If you are genuinely sorry and want to (possibly) turn this around, here are some ideas:

1. Consolidate your work into one solid, well-documented GitHub repo, including a small toy model that people can actually use and play with.

2. Write a blog post or tutorial about your tech. That'll show your expertise without making people feel like they are being duped.

3. Engage in relevant communities genuinely. Build relationships, not just a user base.

4. Use proper channels for promotion... Show HN exists for a reason.

Remember, reputation is currency in this industry. It's hard to earn and easy to lose. Take the long view, build something good, be transparent, and let organic interest do its thing...

If your product is actually solid, there are ways to get it out there. It might be slower, but it's at least sustainable. Good luck.


Thank you very much, my friend. You are a good person.


This comment is a far more interesting story than the project itself.


That other, probably-related Hacker News user posted a response to a suicide crisis post that looks like a "four-paragraph standard ChatGPT response", complete with a "Lastly, " at the end. I hope that wasn't done just to increase that account's seeming legitimacy because wow, what incredibly bad taste.


As an entrepreneur, doing this kind of not-so-glorious thing to promote my own features, I already feel quite pathetic about myself. Having been called out by rwl4, I can at least take a deep breath. I would like to apologize to the people who have been disturbed, and also to the platform.


Cool tech but right now making money with AI is 10x harder than raising from investors.

I'd recommend consulting on the side to keep the lights on; there is demand for AI consulting while there is still hype around AI.


I respect the hustle.

As an entrepreneur, you're light-years further along than most of us will ever be.

Chin up! Launch again, launch different, launch something else, Pivot if need be. You make me proud.


Here's a direct link to him being dragged off the stage:

https://x.com/dmitrygr/status/1822124650547257637

It's definitely somewhat aggressive. Way to burn bridges.


Is there a non-Twitter link? It blocks me because I use DNS adblocking and a mobile browser.


While it doesn't show you any thread context, for media tweets like the video linked above, pasting the URL into a site like https://savetwitter.net/en will spit out the video file to watch as well as the text of that tweet (although, testing it with that tweet on my phone just now, I had to select the title and paste it elsewhere to read it, as the page truncated the visible text to fit the phone width).



Looked the opposite of aggressive to me. Smiles all around.


He's being carried off. He's only smiling because of how ridiculous it makes the organisers look


He is rudely forced down the stairs. That could have gone very wrong.


Weird that for a couple minutes, these paths existed:

* https://github.com/microsoft/MS-DOS/tree/main/v4.0/bin

* https://github.com/microsoft/MS-DOS/tree/main/v4.0/bin/DISK1

* https://github.com/microsoft/MS-DOS/tree/main/v4.0/pdf

But they disappeared as I browsed the repo. I guess they didn't want that part public?

Edit: I knew I wasn't seeing things! Somebody forked it along with those files: https://github.com/OwnedByWuigi/DOS/tree/main/v4.0


They force-pushed the repo to remove an insult towards Tim Patterson in one of the source files.


They changed line 70 of v4.0/src/DOS/STRIN.ASM from [0]:

  ; Brain-damaged Tim Patterson ignored ^F in case his BIOS did not flush the

to [1]:

  ; Brain-damaged TP ignored ^F in case his BIOS did not flush the

[0] https://github.com/OwnedByWuigi/DOS/blob/ffd70f8b4fb77e2e6af...

[1] https://github.com/microsoft/MS-DOS/blob/main/v4.0/src/DOS/S...


If Tim is still around he should PR a change back; I'd not want my name shortened to "TP".


He should make a PR to change it back to the correct spelling of his name.


Agreed. My initialisms are almost as bad. (Thanks clueless parents!)


https://en.wikipedia.org/wiki/Tim_Paterson

Tim Paterson (born 1 June 1956) is an American computer programmer, best known for creating 86-DOS, an operating system for the Intel 8086. This system emulated the application programming interface (API) of CP/M, which was created by Gary Kildall. 86-DOS later formed the basis of MS-DOS, the most widely used personal computer operating system in the 1980s.


If this release is for historical research purposes, the release should be pristine including the unsavory bits. Whitewashing of history should never be accepted.


Perhaps this release is not for historical research purposes.


It is for historical research purposes:

>The MS-DOS v1.25 and v2.0 files were originally shared at the Computer History Museum on March 25th, 2014 and are being (re)published in this repo to make them easier to find, reference-to in external writing and works, and to allow exploration and experimentation for those interested in early PC Operating Systems.

>For historical reference

>The source files in this repo are for historical reference and will be kept static, so please don’t send Pull Requests suggesting any modifications to the source files, but feel free to fork this repo and experiment.


100%!


Don't leave us hanging!!


At https://github.com/microsoft/MS-DOS/blob/main/v4.0/src/DOS/S... it used to have Tim Patterson's full name, whereas after the force push it was abbreviated to his initials, "TP".

I had the repo cloned before the force push and when I went to pull it, this file was the only one that contained a conflict.


Looks like the handiwork of Mark Zbikowski, whose initials adorn every EXE file. https://en.wikipedia.org/wiki/Mark_Zbikowski


Mark is the one that committed today's source release, going off of the "MZ is back" commit message from GitHub user "mzbik".


Ya sorry we're moving stuff around


Thx for clearing the rights and for releasing, Scott. And of course thanks to Microsoft and IBM.

It would be fun at some point down the road to get some of the older code building and running again - particularly '84/'85-vintage Windows & Notes builds. Quite a lot of work, though, not just because of hardware but also likely because of toolchain gaps.


Well, thanks for putting this up! It's really a treasure for those of us who used it as our daily driver so many years ago.


Why is it that, no matter how much you check and proofread your work before you push/publish, you always find something obvious you missed 5 minutes after it goes up?

To change a public branch, or not to change a public branch, that is the question.

Edit: Muphry's law strikes again - s/or to not/or not to/


They just changed the folder. All of those files are now in https://github.com/microsoft/MS-DOS/tree/main/v4.0-ozzie


Looks like they were moved to the Ozzie subfolder: https://github.com/microsoft/MS-DOS/tree/main/v4.0-ozzie


That was literally just added; I have a tab open that doesn't have that folder. This is wild and strangely exciting to see released in real time.


Ya, we wanted to separate the MS-DOS and MT-DOS stuff; it was confusing as it was.


I'm not complaining, that's for sure! Thanks for all you're doing.


It is all the MT-DOS content: binaries and docs.


Multi-Tasking MS-DOS

Beta Test Release 1.00

Release Notes

Enclosed you will find Microsoft's first beta release of Multi-tasking MS-DOS. This version is based upon MS-DOS Version 2 sources, we will be reimplementing the multi-tasking enhancements on top of Version 3 sources shortly.

