I was a pretty active member in the comments for a long time and left a few years ago after getting chastised by a moderator and accused of spamming for sharing a link to a blog post I had written, even though the content was purely technical, wasn't promoting any product, and didn't contain ads or monetize the content in any way.
My impression is that the site was actively looking for any possible reason to remove people from the platform. It’s their site to moderate as they wish, but that’s not a community I want to continue participating in.
You did not share a link to a blog post. The title was "Effective Haskell is a hands-on practical book way to learn Haskell. No math or formal CS needed" and it linked to the site advertising your book for sale. I removed it because we don't get good discussions out of ads.
I shared the story as I remember it. Memory is imperfect. It's been years since I deleted my account, and I don't have the luxury of access to server or moderation logs.
What I do remember unambiguously is being an active member of the site, contributing regularly and in good faith, being accused of spamming, and the general feeling of hostility that I got from the site.
You got a DM and email with the title and URL when your story was removed. This would've been 2023-08-03 with the subject "Your story has been edited by a moderator", if you want to look back: https://github.com/lobsters/lobsters/blob/86e1d0b6ac6bac5210...
But you're correct on the second part: there isn't a level of activity that entitles anyone to post a sales page with nothing to discuss on it. Your activity was taken into account, though. Typically if a new user's first activity is to post an ad I'll also ban the site or user. I understand the rules aren't as permissive as you wanted, but ads don't start good discussions.
IMO, the lobste.rs admin's assertion that the post had "nothing to discuss" is a misjudgment that undercuts the rest of their rationalization. My guess is that they're looking for a win on a technicality instead of addressing the myriad of concerns raised elsewhere in this thread.
I don't know why you think I want a "technical win" from you, but I'm not seeking your approval. I corrected your mistake about the URL and the policy, like I corrected the author's mistake about what I removed. If you and other sites prefer different policies, it's no skin off my nose.
It’s much shorter than my first book, Effective Haskell, and leans more advanced, especially toward the end. Although the format is puzzle-focused, I’m trying to avoid simple gotcha questions and instead use each puzzle as a launchpad for discussing how to reason about programs, design tradeoffs, and nuances around maintainability.
I worked on a foveated video streaming system for 3D video back in 2008. We used eye tracking, extrapolated a pretty simple motion vector for the eyes, and ignored saccades entirely. It worked well: you really don't notice the lower detail in the periphery, and with a slightly over-sized high-resolution focal area you can detect a change in gaze direction before the user's focus exits the high-resolution area.
Anyway that was ages ago and we did it with like three people, some duct tape and a GPU, so I expect that it should work really well on modern equipment if they've put the effort into it.
Foveated rendering very clearly works well over a dedicated connection with predictable latency. My question was more about the latency spikes inherent in an ISM general-use band combined with foveated rendering, which would make the effects of the latency spikes even worse.
Although this isn’t directly related to the idea in the article, I’m reminded that one of the most effective hacks I’ve found for working with ChatGPT has been to attach screenshots of files rather than the files themselves. I’ve noticed the model will almost always pay attention to an image and pull relevant data out of it, but it requires a lot of detailed prompting to get it to reliably pay attention to text and PDF attachments instead of just hallucinating their contents.
Hmm. Yesterday I stuck a >100 page PDF into a Claude Project and asked Claude to reference a table in the middle of it (I gave page numbers) to generate machine readable text. I watched with some bafflement as Claude convinced itself that the PDF wasn’t a PDF, but then it managed to recover all on its own and generated 100% correct output. (Well, 100% correct in terms of reading the PDF - it did get a bit confused a few times following my instructions.)
This is probably because your provider is generating embeddings over the document to save money, and then simply running a vector search across it instead of fitting it all in context.
Looking at my own use of AI, and at how I see other engineers use it, it often feels like two steps forward and two steps back, and overall not a lot of real progress yet.
I see people using agents to develop features, but the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves. I see people vibe coding their way to working features, but when the LLM gets stuck it takes long enough for even a good developer to realize it and re-engage their critical thinking that it can wipe out the time savings. Having an LLM do code and documentation review seems to usually be a net positive to quality, but that’s hard to sell as a benefit and most people seem to feel like just using the LLM to review things means they aren’t using it enough.
Even for engineers there are a lot of non-engineering benefits in companies that use LLMs heavily for things like searching email, ticketing systems, documentation sources, corporate policies, etc. A lot of that could have been done with traditional search methods if different systems had provided better standardized methods of indexing and searching data, but they never did and now LLMs are the best way to plug an interoperability gap that had been a huge problem for a long time.
My guess is that, like a lot of other technology driven transformations in how work gets done, AI is going to be a big win in the long term, but the win is going to come on gradually, take ongoing investment, and ultimately be the cumulative result of a lot of small improvements in efficiency across a huge number of processes rather than a single big win.
> the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves
Exactly my experience. I feel like LLMs have potential as Expert Systems/Smart websearch, but not as a generative tool, neither for code nor for text.
You spend more time understanding stuff than writing code, and you need to understand what you commit with or without an LLM. But writing code is easier than reviewing, and understanding by doing is easier than understanding by reviewing (because you get one particular thing at a time and don't have to understand the whole picture at once). So I have a feeling that agents may even have a negative impact.
The reason companies, or at least sales and marketing, are so incredibly keen on AI is that it can raise response rates on spam, and on ads, by "hyper-personalizing" them: actually reading the social media accounts of the people looking at the ads and making ads directly based on that.
Your mileage may vary, but I just got Cursor (using Claude 4 Sonnet) to one-shot a sequence of bash scripts that clean up stale AWS resources. I pasted in the Jira ticket description that I wrote, with a few examples, and the scripts work perfectly. Saved me a few hours of bash writing and debugging, because I can read bash but not write it well.
It seems that the smaller the task and the more tightly defined the input and output, the better the LLMs are at one-shotting.
I’ve had similar experiences where AI saved me a ton of time when I knew what I wanted and understood the language or library well enough to review it, but poorly enough that I’d have been slow writing it because I’d have spent a lot of time looking things up.
I’ve also had experiences where I started out well but the AI got confused, hallucinated, or otherwise got stuck. At least for me those cases have turned pathological because it always _feels_ like just one or two more tweaks to the prompt, a little cleanup, and you’ll be done, but you can end up far down that path before you realize that you need to step back and either write the thing yourself or, at the very least, be methodical enough with the AI that you can get it to help you debug the issue.
The latter case happens maybe 20% of the time for me, but the cost is high enough that it erases most of the time savings I’ve seen in the happy path scenario.
It’s theoretically easy to avoid by just being more thoughtful and active as a reviewer, but that reduces the efficiency gain in the happy path. More importantly, I think it’s hard to do for the same reason partially self-driving cars are dangerous: humans are bad at paying attention in “mostly safe and boring, occasionally disastrous” settings.
My guess is that in the end we’ll see fewer of the problematic cases, in part because AI improves, and in part because we’ll develop better intuition for when we’ve stepped onto the unproductive path. I think a lot of it will also be that we adopt ways of working that minimize the pathological “lost all day to weird LLM issues” problems by trying to keep humans in the loop more deeply engaged. That will necessarily also reduce the maximum size of the wins we get, but we’ll come away with a net positive gain in productivity.
Same. I interface with a team who refuses to conduct business in anything other than Excel, and because of dated corporate mindshare, their management sees them more as wizards instead of the odd ones out.
"They're on top of it! They always email me the new file when they make changes and approve my access requests quickly."
There are limits to my stubbornness, and my first use of LLMs for coding assistance was to ask for help figuring out how to Excel, after a mere three decades of avoidance.
After engaging and learning more about their challenges, it turned out one of their "data feeds" was actually them manually copy/pasting into a web form with a broken batch import that they'd given up on submitting project requests for. I quietly fixed it, so they got to retain their turnaround while they planned some other changes.
Ultimately nothing grand, but I would never have bothered if I'd had to wade through the usual sort of learning resources available or ask another person. Being able to transfer and translate higher level literacy, though, is right up my alley.
I've found it to be a significant productivity boost but only for a small subset of problems. (Things like bash scripts, which are tedious to write and I'm not that great at bash. Or fixing small bugs in a React app, a framework I'm not well versed in. But even then I have to keep my thinking cap on so it doesn't go off the rails.)
It works best when the target is small and easily testable (without the LLM being able to fudge the tests, which it will do.)
For many other tasks it's like training an intern, which is worth it if the intern is going to grow, take on more responsibility, and learn to do things correctly. But since the LLM doesn't learn from its mistakes, it's not clearly a worthwhile investment.
I've found that the limit of LLMs' usefulness for coding is basically whatever can reasonably be done as a single copy-paste, usually only individual functions.
I basically use it as Google on steroids for obscure topics; for simple stuff I still use normal search engines.
It would be much less interesting than the actual chat histories. My experience with ChatGPT’s memory feature is that about half the time it’s storing useful but uninteresting data, like my level of expertise in different languages or fields, and the other half it’s pointless trivia that I’ll have to clear out later (I use it for creating D&D campaigns and it wastes a lot of memory on random one-off NPCs).
Maybe it’s my use of it, but I’ve never had it store any memories that were personally identifiable or private.
I remember back in the gnome2 days there was still a lot of fragmentation. Gnome, KDE, WindowMaker, AfterStep, Enlightenment, ratpoison.
Linux has always appealed to tinkerers and that was always going to lead to some amount of fragmentation. I don’t think it’s a bad thing necessarily. For all of the complaints about it, systemd has unified a lot of things that used to be handled through desktop environments and made things less fragmented as a whole.
No, the fragmentation is worse now. GNOME wasn't even going to support the same DRM-leasing protocol (needed for VR) that all the other Wayland compositors agreed on, until Valve told them it was adamant it wasn't going to support GNOME's custom protocol.
Afterstep and Windowmaker were also just window managers (you can kinda argue Windowmaker with the whole GNUSTEP thing, but that never really took off).
I believe ratpoison is the granddaddy of today's tiling desktops, which have a decent following.
The biggest issue to me is that Beyond and Impossible aren’t just making replacements that are worse than meat; they are making things that are worse than the alternatives we already had.
A Beyond burger might be more like meat than a patty made from beans or lentils, but it tastes worse and has a worse nutritional profile. Beyond chicken isn’t even all that similar to chicken, and it’s a worse substitute than seitan for something like wings.
join has the type `m (m a) -> m a`. That's the thing that really shows off the monoidal structure. People normally implement monads in terms of bind, but you can easily define join in terms of bind for any Monad: `join ma = ma >>= id`. So really, as long as you have a lawful instance of Monad written with bind, the existence of join is your proof.
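For concreteness, here's a minimal sketch of that interdefinability (the names `joinViaBind` and `bindViaJoin` are just illustrative, chosen to avoid clashing with `Control.Monad.join`):

```haskell
import Control.Monad (join)

-- join recovered from bind, exactly as described above
joinViaBind :: Monad m => m (m a) -> m a
joinViaBind mma = mma >>= id

-- and bind recovered from join plus fmap, going the other direction
bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin ma f = join (fmap f ma)

main :: IO ()
main = do
  print (joinViaBind (Just (Just 3)))                -- Just 3
  print (joinViaBind [[1, 2], [3]])                  -- [1,2,3]
  print (bindViaJoin [1, 2, 3] (\x -> [x, x * 10]))  -- [1,10,2,20,3,30]
```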