The e-scooters are also supposedly equipped with cameras and other deterrents. Has anyone ever gotten in trouble for kicking them into a bush when they are in the way?
A few years ago I was visiting a friend of mine in Ft. Lauderdale. We wanted some scooters to ride around on but there were none near him, so we drove downtown, grabbed some off the sidewalk, threw them in the trunk, and went back to his house. Heh, they were beeping and vibrating like you'd imagine some AGI would while being kidnapped. When we got them out at his house we scanned them with the app and they unlocked, no problem. (I think these were Lime scooters.)
This isn't an investing site, but CoreWeave is what I watch. All those freaking datacenters have to get built, come online, and work for all the promises to come true. CoreWeave is already in a bit of a pickle; I feel like they are the first domino.
/not an investing/finance/anything to do with money expert.
I've seen many attempts to turn a widely used spreadsheet into a webapp. Eventually, it becomes an attempt to re-implement spreadsheets. The first time something changes and the user says "well, in Excel I would just do this...", the dev team is off chasing existing Excel features for eternity, the users are pissed because it takes so long and is buggy, and meanwhile Excel is right there, ready and waiting.
I always see this point mentioned in "app vs. spreadsheet" debates, but no one gives a concrete example. The whole point of using a purpose-built app is to give some structure and consistency to the problem. If people are replicating spreadsheet features, then they needed Excel to begin with, since that is a purpose-built tool for generalizing a lot of problems. It's like saying my notebook and pen are already in front of me, so why would I ever bother opening an app? Well, because the app provides some additional value.
I mentioned this upthread, but an LLM with enough access to be fully integrated into every app, service, and file on an enterprise-managed workstation sounds like privilege escalation attacks just waiting to happen.
> This lack of real integration is basically the core design of most Copilot products.
I think they're scared of the very real security issues with LLMs, which may be unsolvable. It's not wise to give an LLM free rein; at best maybe across your local computer, but to be fully integrated into every application and every file it would need root. That would be the front door to many privilege escalation incidents on an enterprise-managed laptop/desktop.
> to guarantee they've learned from trustworthy sources.
I don't see how this will ever work. Even in hard science there's debate over what content is trustworthy and what is not. Imagine trying to declare your source of training material on religion, philosophy, or politics "trustworthy".
"Sir, I want an LLM to design architecture, not to debate philosophy."
But really, you leave the curation to real humans, institutions with ethical procedures already in place. I don't want Google or Elon dictating what truth is, but I wouldn't mind if NASA or other aerospace institutions dictated what is true in that space.
Of course, the dataset should have a list of every document/source used, so others can audit it. I know, unthinkable in this corporate world, but one can dream.
> The problem is, you have to know enough about the subject on which you're asking a question to land in the right place in the embedding
The other day I was on a call with 3 or 4 other people solving a config problem in a specific system. One of them asked ChatGPT for the solution and got back a list of configuration steps to follow. He started the steps, but one of them mentioned configuring an option that did not exist in the system at all. Textbook hallucination. It was obvious on the call that he was very surprised the AI would give him an incorrect result; he was 100% convinced the answer was what the LLM said and never once thought to question what it returned.
I've had a couple of instances with friends being equally shocked when an LLM turned out to be wrong. One of them was fairly disturbing: I was at a horse track describing LLMs, and to demonstrate I took a picture of the racing form and asked the LLM to formulate a medium-risk betting strategy. My friend immediately took it as some kind of supernatural insight and bet $100 on the plan it came up with. It was as if he believed the LLM could tell the future. Thank god it didn't work and he lost about $70. Had he won, I don't know what would have happened; he probably would have asked again and bet everything he had.
> So I don't know what the point of "superintelligent" AI is if we aren't going to even listen to it
I would kind of feel sorry for a super-intelligent AI having to deal with humans who have their fingers on the on/off switch. It would be a very frustrating existence.
On my list for 2026 is a suite of tools for my coworkers who work in the same tech I do (we're consultants). I get thrown a small bone from the silverbacks come bonus season for these kinds of things.
Your project fits perfectly with what I need. I've built the functionality (using Python, even); now I need all the other stuff to get it up and running on the web. Thanks for doing this!
A small VPS + LetsEncrypt + Dokku is a fantastic way to run personal side projects/hustles at minimal cost.
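For anyone curious what that looks like in practice, here's a rough sketch of the Dokku flow I'd expect. The app name, domain, server hostname, and email below are placeholders, and the exact Let's Encrypt plugin commands can differ slightly between Dokku versions:

    # on the VPS (assumes Dokku is already installed)
    dokku apps:create myapp
    dokku domains:set myapp myapp.example.com
    sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
    dokku letsencrypt:set myapp email you@example.com
    dokku letsencrypt:enable myapp

    # on your laptop: deploy by pushing the repo
    git remote add dokku dokku@your-server.example:myapp
    git push dokku main

After that, deploys are just a git push; Dokku builds the app from a Dockerfile or buildpack and keeps the HTTPS cert renewed for you.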