> Self-hosting a website as a local server running in your room? What's the point?
It depends on how important the website is and what the purpose is. Personal blog (Ghost, WordPress)? File sharing (Nextcloud)? Document collaboration (Etherpad)? Media (Jellyfin)? Why _not_ run it from your room with a reverse proxy? You're paying for internet anyway.
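The reverse-proxy part is less work than it sounds. As a minimal sketch with Caddy (the domain is a hypothetical placeholder; the upstream port assumes Ghost's default of 2368):

```
# Caddyfile — replace blog.example.com with your own domain.
# Caddy obtains and renews the TLS certificate automatically.
blog.example.com {
    reverse_proxy localhost:2368
}
```

Point a DNS record at your home IP, forward ports 80/443 on your router, and the blog running in your room is reachable over HTTPS.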
> Same with LLMs: you can use providers who don't log requests and are SOC 2 compliant.
Sure. Until they change their minds and decide they want to, or until they go belly-up because they didn't monetize you.
> Small models that run locally are a waste of time, as they don't offer adequate value compared to larger models.
A small model won't get you GPT-4o value, but for coding, simple image generation, story prompts, "how long should I boil an egg" questions, etc. it'll do just fine, and it's yours. As a bonus, while a lot of energy went into creating the models you'd use, you're saving a lot of energy using them compared to asking the giant models simple questions.