One of the more annoying pieces of software that does this is Copilot in Office 365 on the web. Every time (!) I open it, it shows a popup on how to add files to the context. That by itself would be annoying, but it also steals focus! So you'll be typing something and suddenly you're not typing anymore, because M$ decided it's time for a popup.
I finally learned to just wait for the popup and then dismiss it with Esc. Ugh!
If you log in to the Exchange Online admin center, you first have to complete a short "on-rails shooter" video game. They constantly shuffle shit around and want to give you a tour of it via popups.
I have the admin accounts for multiple companies, so I have to play the game repeatedly.
I built this recently. I used NVIDIA Parakeet for STT, openWakeWord for wake word detection, Mistral's Ministral 14B as the LLM, and Pocket TTS for TTS. It fits snugly in my 16 GB of VRAM. Pocket is small and fast and has good-enough voice cloning. I first used the Chatterbox Turbo model, which performed better and even supported some simple paralinguistic words like (chuckle) that made it more fun, but it was just a bit too big for my rig.
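For anyone curious what "glued together" looks like, here's a minimal sketch of the wake word -> STT -> LLM -> TTS turn loop. All the model calls (`transcribe`, `generate_reply`, `synthesize`) are hypothetical stubs standing in for Parakeet, the LLM, and the TTS model; only the orchestration is the point.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    user_text: str
    reply_text: str

@dataclass
class Assistant:
    history: list = field(default_factory=list)

    def transcribe(self, audio: bytes) -> str:
        # stand-in for an STT model such as NVIDIA Parakeet
        return audio.decode("utf-8")

    def generate_reply(self, text: str) -> str:
        # stand-in for a local LLM; a real one would get self.history too
        return f"You said: {text}"

    def synthesize(self, text: str) -> bytes:
        # stand-in for a small TTS model with voice cloning
        return text.encode("utf-8")

    def handle_utterance(self, audio: bytes) -> bytes:
        """One full turn after the wake word fires: STT -> LLM -> TTS."""
        user_text = self.transcribe(audio)
        reply_text = self.generate_reply(user_text)
        self.history.append(Turn(user_text, reply_text))
        return self.synthesize(reply_text)
```

In a real setup, a wake-word detector (e.g. openWakeWord) gates the microphone stream and only passes a captured utterance into `handle_utterance`; streaming each stage instead of running them sequentially is where most of the latency wins come from.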
> Is anyone doing true end-to-end speech models locally (streaming audio out), or is the SOTA still “streaming ASR + LLM + streaming TTS” glued together?
Gave it four of my vibe questions around general knowledge and it didn't do great. Maybe that's expected with a model as small as this one. Once llama.cpp support is out, I'll take it for a spin.
I've tried the voice cloning and it works great. I added a 9s clip and it captured the speaker pretty well.
But don't make the mistake I did and use an HF token that doesn't have read access to repos! The error message said I had to request access to the repo, but I had already done that, so I couldn't figure out what was wrong. Turns out my HF token only had access to inference.
I recently bought the LG with the 4th-generation OLED panel, and for me it works for long coding sessions (I use it for work). They changed the pixel arrangement in this generation specifically to improve text legibility.
Interesting experiment. I would hazard a guess that Google is on top when it comes to these sorts of things (spatial ability), then OpenAI, and Anthropic last. I would like to see the same experiment using Google's Live view (or whatever it's called) in the Gemini app.
330 nits in SDR is good relative to other OLED monitors and good enough for most indoor environments, but not good enough for mine. The windows are too big and not tinted; there's just too much ambient light for anything below 500 nits.
“Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025”