Hacker News | SeriousStorm's comments

Before you buy an Apple TV you can try installing the Projectivy Launcher and see if that suits your needs. It's basically a simplified launcher UI for Android TV devices.

It's not perfect, but if it suits your needs you won't have to buy another device.


That's the one I tried on the Fire Stick. It doesn't always launch, sometimes it only launches a few minutes after startup, and pressing Home would go back to the default launcher.

I think it's related to the accessibility settings it wants enabled; I was never able to turn them on as instructed because the setting just refuses to enable.


I like FLauncher! Stupid simple and does the job.


My wife and I play this every day. It's the only word game that has ever caught my interest.

The UI is fantastic too.


Thanks! I’m glad you and your wife are enjoying it!


IMO you shouldn't change the onboarding much. The game is very intuitive. Everyone I showed it to has picked it up in about 30 seconds.

It's very much a learn-by-doing game.

PS - This game is so fun. I don't usually do word games, but I can't stop playing this one.


Thanks, that’s good to hear!


Off topic, but what is your process for blocking the entire domain in email? I want to start doing something similar.


Can't you do this in most email filtering systems using a rule similar to `.*@example.com` -> trash/bin?
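As a sketch of what such a rule matches (Python here just to demonstrate the pattern; `example.com` is a placeholder domain, and most providers' filter UIs accept a similar expression):

```python
import re

# Match any sender address at the blocked domain.
BLOCKED_DOMAIN = re.compile(r".*@example\.com$", re.IGNORECASE)

def should_trash(sender: str) -> bool:
    """Return True if the sender's address belongs to the blocked domain."""
    return BLOCKED_DOMAIN.match(sender) is not None

print(should_trash("noreply@example.com"))  # blocked
print(should_trash("friend@example.org"))   # kept
```

Anchoring with `$` matters, otherwise `user@example.com.evil.net` would also slip through the filter as a match.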


I ran into this issue with the Sonoma update. My display (4K LG) was negotiating RGB just fine before, but not anymore. The BetterDisplay workaround hasn't worked for me. The poor colors and fuzzy edges around all the text are causing eye strain too. I'm beyond furious.


I used to use an EDID patcher written in Ruby, but it stopped working on some version of macOS. That script shows how it patches the EDID data, which is what I got working with BetterDisplay.

FWIW, here's the hacked script[0], which only keeps the EDID data patching part. Be warned: it's very hacky, with the base64 EDID to be patched hard-coded on line 8 of the script. It prints out the patched EDID base64, which should be entered back into BetterDisplay (which is also where you can get the unpatched base64 EDID).

[0] https://gist.github.com/karmakaze/f795171a6a795491e754c3d092...


Love this app. The only thing on my wishlist is a way to "discover" stuff I've bookmarked before when googling. The extension would search Hoarder when I search Google (or whatever search engine) and show those results next to my Google results or in the extension drop-down. I sometimes forget that I bookmarked a solution 3 weeks ago, and then I'm searching for that solution again.

I think Evernote had something like this when I was using it.


I haven't tried Hoarder yet, but Linkding has a linkding-injector extension that does this. It's pretty useful.


Different user here, all of my upgrades have been completely seamless.


I found this recently, but I haven't tried it yet.

https://github.com/budtmo/docker-android


that is a cool find :)


Every time I look into building a workflow with LangChain it seems unnecessarily complex, so I end up stopping.

Are you just running an LLM server (Ollama, llama.cpp, etc) and then making API calls to that server with plain Python or is it more than that?


I suppose Ollama and llama.cpp, or at least their corresponding Python SDKs, would be good for using self-hosted models, especially if they support parallel GPU use. If it's something custom, PyTorch would come into the picture. In production workflows, it can obviously be useful to run certain LLM prompts in parallel to speed up the job.

For now I have used only cloud APIs with their Python SDKs, including the prompt completion, TTS, and embedding endpoints. They allow me to run many jobs in parallel which is useful for complex workflows or if facing heavy user demand. For caching of responses, I have used a local disk caching library, although I guess one can alternatively use a standalone or embedded database. I have used threading via `concurrent.futures` for concurrent jobs, although asyncio too would work.
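To make that concrete, here's a minimal sketch of the pattern: a thread pool fanning prompts out to a completion endpoint, with the actual API call stubbed out (swap in your cloud SDK's client there), and an in-memory `lru_cache` standing in for a disk caching library:

```python
import concurrent.futures
import functools

# Stand-in for a real cloud SDK completion call; replace the body
# with your provider's client (OpenAI, Anthropic, etc.).
def complete(prompt: str) -> str:
    return f"response to: {prompt}"

# In-memory cache; a disk cache (e.g. the diskcache library) would
# persist responses across runs instead.
@functools.lru_cache(maxsize=None)
def cached_complete(prompt: str) -> str:
    return complete(prompt)

def run_parallel(prompts, max_workers=8):
    """Fan prompts out across a thread pool, preserving input order."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(cached_complete, prompts))

results = run_parallel(["summarize A", "summarize B"])
```

Threads work fine here because the workload is I/O-bound (waiting on HTTP responses); `asyncio` with an async client would be the equivalent alternative.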

The one simple external Python library I found so far is `semantic-text-splitter` for splitting long texts using token counts, but this too I could have done by myself with a bit of effort. I think langchain has something for it too.
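For illustration, a naive version of that splitting, with whitespace words standing in for real tokenizer tokens (a proper token counter such as tiktoken would replace `text.split()`):

```python
def split_by_tokens(text: str, max_tokens: int) -> list[str]:
    """Greedily split text into chunks of at most max_tokens "tokens".

    Whitespace words are a crude stand-in for tokenizer tokens here.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]

chunks = split_by_tokens("one two three four five", 2)
# ["one two", "three four", "five"]
```

Libraries like `semantic-text-splitter` improve on this by splitting on sentence and paragraph boundaries rather than mid-thought.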


None of Tailwind's dependencies ship to your production app. Tailwind is (mostly) a CSS build system, so when you build for prod, the only thing that ships is a CSS file.

