agreed. Additionally, it may reduce the power of fringe groups to pressure private companies that are doing legal business. The recent Steam and itch.io takedowns due to Collective Shout come to mind.
These days, there are some engines with (oil immersed) belts, but that's for fuel efficiency, not longevity. Chains last longer, typically, although I used to have a 97 Honda VFR that had neither. They started with chains in earlier models but had reliability issues so switched to straight cut gears. Man I miss the way those things sounded -- like a supercharger whine. Amazing
This is a great and important topic, one that I’ve bumped up against in past code. I think I understand the basics of time/zones/storage.
However, on one codebase I worked on in the past, dates were stored in MySQL in a DATETIME field, with the time set to 00:00:00, iirc.
I’m hoping someone knowledgeable about this topic could comment on this. I never could discover why it was done that way. Is it wrong? Is it right? Based on the other threads here, it seems wrong, but maybe there’s a non-obvious reason to do it this way?
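For what it's worth, the usual objection can be sketched in a few lines of Python (the zone offsets here are just illustrative): a date stored as a midnight DATETIME is implicitly an instant in *some* timezone, and reading it back under a different offset can shift the calendar date by a day.

```python
from datetime import datetime, date, timezone, timedelta

# A calendar date stored as "midnight" in a DATETIME column:
stored = datetime(2023, 6, 1, 0, 0, 0)  # naive "2023-06-01 00:00:00"

# If the writer meant midnight UTC but a reader interprets it in UTC-5:
as_utc = stored.replace(tzinfo=timezone.utc)
reader_zone = timezone(timedelta(hours=-5))
print(as_utc.astimezone(reader_zone).date())  # 2023-05-31 -- off by one day

# A plain DATE column sidesteps the ambiguity for date-only data:
print(date(2023, 6, 1))  # 2023-06-01
```

So midnight-DATETIME can "work" as long as every writer and reader agrees on the implicit zone and nobody ever converts; MySQL's DATE type carries no time at all, which is usually the safer fit for date-only data.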
I'm kinda sick of buying these cheap lead-acid based UPS units (only tried APC and Cyberpower so far) to have the batteries die in just a couple of years.
I also have YT premium. On desktop, I use Firefox and the popout video feature. It is easily resizable and movable. The downside is that the js-player controls still show on the webpage, not the popout. On iOS, once paused, just tap anywhere on the video where there isn't a widget. That hides all the video controls for me.
Edit: Also there are keyboard controls for going frame-by-frame, if you need that much control. Or there was last time I used it a long time ago.
Do we need some way to grade these services based on vertical or use-case?
I actually tried the same tech questions on multiple services when I first started playing around with these commercial LLMs. I would copy and paste the same question to GPT-4, MS Bing (I soon stopped using that since I already have a sub to GPT-4), Claude, Bard, and recently You (https://you.com), and while Claude.ai was rarely as good as GPT-4, it wasn't too far off for tech questions.
I'm not very creative, so maybe using it to help with writing fiction or roleplay would be useful to me, but I haven't tried that yet.
Did you try Claude with non-fictional tasks, and if so, how does that compare to GPT4?
I did not try Claude for a research task based on non-fictional content.
I think it's good that LLMs become specialized tools that can go deep into their expertise; I just think 'a fact engine' -- if that's what Claude is aiming to be -- needs appropriately rigid controls on what defines a fact. From that POV, I think I agree with the 'over-censored' label for Claude earlier in the thread... The intention may not be censorship, but if the LLM is so gunshy about what is fact vs. not, it's going to have a really narrow (and therefore potentially unreliable) lens.
heh, I know right? I see all those comments shitting on ruby and rails, and it's like, yeah, I get it dude. They hate rails, but luckily nobody is forcing them to use it. The more choices the better. Some people hate dynamic types, some people hate static types. There's room for both, imo. It's not like it's a secret there are faster or more efficient stacks.
I guess they can't help themselves and need to make their position known. Personally, I love ruby, and rails is pretty good too.
As another non-python dev, interested in and trying to get into AI/ML, I think the limitation of venv is that it can't handle multiple versions of system libraries.
CUDA, for example: different projects will require different versions of some library like PyTorch, but these seem to be tied to the CUDA version. This is where anaconda (and miniconda) come in, but omfg, I hate those. So far all anaconda does is screw up my environment, causing weird binaries to come into my path and overriding my newer/vetted ffmpeg and other libraries with some outdated ones. Not to mention, I have no idea if they are safe to use, since I can't figure out where this avalanche (literally gigs) of garbage gets pulled in from. And if I don't let it mess with my startup scripts, nothing works.
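For what it's worth, one thing that has improved here: recent PyTorch wheels bundle their own CUDA runtime, so a plain venv can often pin a CUDA flavour per project through pip's extra index, no conda needed (the versions below are illustrative, not a recommendation):

```
# requirements.txt -- illustrative sketch, pin whatever versions you've vetted
--extra-index-url https://download.pytorch.org/whl/cu118
torch==2.1.0+cu118
```

Each project's venv then carries its own copy of the runtime; the host only needs an NVIDIA driver new enough for the bundled CUDA version. It's disk-hungry, but at least it's isolated and doesn't touch your PATH or startup scripts.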
And note, I'm not smart, but I've been a UNIX user since the '90s and I can't believe we haven't progressed much in all these decades. I remember trying to pull in source packages and compiling them from scratch, and that sucked too (make, cmake, m4, etc.). Package managers and other tech have since helped the general public that just wants to use the damn software. Nobody wants to track down and compile every dependency and become an expert in the build process. But this is where we are. Still.
I am currently trying to get these projects working in Docker, but that is a whole other ordeal that I haven't completed yet, though I am hopeful that I'm almost there :) Some projects have Dockerfiles and some even have docker-compose files. None have worked out-of-the-box for me. And that's both surprising and sad.
I don't know where the blame lies exactly. Docker? The package maintainers that don't know docker or unix (a lot of these new LLM/AI projects are windows only or windows-first and I hear data scientists hate-hate-hate sysadmin tasks)? Nvidia for their eco-system? Dunno, all I know is I'm experiencing pain and time wastage that I'd rather not deal with. I guess that's partly why open-ai and other paid services exist. lol.
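When a project ships no working Dockerfile, a minimal hand-rolled one for a CUDA/PyTorch project tends to look roughly like this (the base-image tag and the `main.py` entry point are illustrative, and the host still needs the NVIDIA driver plus nvidia-container-toolkit):

```dockerfile
# Illustrative sketch, not a vetted build
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "main.py"]
```

You'd then run it with `docker run --gpus all ...` so the container can see the GPU. Most of the out-of-the-box failures I've hit come down to a mismatch between the image's CUDA version and what the pinned PyTorch wheel expects.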
I'm in the same situation. I found this cog project to dockerise ML models (https://github.com/replicate/cog): you write just one Python class and a YAML file, and it takes care of the "CUDA hell" and deps. It even creates a Flask app in front of your model.
That helps keep your system clean, but someone with big $s, please rewrite PyTorch in Go or Rust, or even Node.js / TypeScript.
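From skimming cog's docs, the "one Python class and a YAML file" setup is roughly this (package versions illustrative):

```yaml
# cog.yaml -- illustrative sketch
build:
  gpu: true
  python_version: "3.10"
  python_packages:
    - "torch==2.0.1"
predict: "predict.py:Predictor"
```

`predict.py` then defines a `Predictor` class subclassing `cog.BasePredictor`, with a `setup()` method that loads weights once and a `predict()` method per request; `cog build` turns that into the container image, CUDA base image included.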