Hacker News | mshenfield's comments

At $25k per unit, and assuming the battery lasts 5 years, that's an uneconomical $5k/year per home.

A 100 kW generator can produce enough power for 100 homes, costs ~$50k all in, and burns ~$1k per day in fuel when run during an outage. Assuming the generator also lasts 5 years, that's only $100/year per home in capital costs, 50x less than the battery solution.
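The arithmetic above can be sketched in a few lines (using the comment's assumed prices, which are illustrative, not verified figures):

```go
package main

import "fmt"

// perHomeAnnualCost spreads a capital cost across a number of homes
// and an assumed service life in years.
func perHomeAnnualCost(capital float64, homes, years int) float64 {
	return capital / float64(homes) / float64(years)
}

func main() {
	// $25k battery serving a single home, 5-year life.
	battery := perHomeAnnualCost(25000, 1, 5)
	// $50k generator shared by 100 homes, 5-year life (fuel excluded).
	generator := perHomeAnnualCost(50000, 100, 5)

	fmt.Printf("battery: $%.0f/home/year\n", battery)     // $5000
	fmt.Printf("generator: $%.0f/home/year\n", generator) // $100
	fmt.Printf("ratio: %.0fx\n", battery/generator)       // 50x
}
```

Fuel is left out of the per-home capital figure, matching the comment's framing; it only accrues during outages.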

Batteries are for the foreseeable future way too expensive.


It's back


No. Companies just have to re-hire fired employees and pay lost wages and benefits. IMO Starbucks should pay a large punitive fine that gets split between the fired workers and the NLRB.


I've never seen anyone directly make this case. Most people seem to want their colleagues (many of whom are friends) to keep their jobs more than they want the stock price to go up.


I'm still not convinced VSCode is faster and better. My Atom setup was never slow; the only thing that wasn't instantaneous was code completion. For me, it felt like VSCode came out of nowhere and surpassed Atom for inexplicable reasons.


Twitter won't die, but it won't make much of a profit either. Elon Musk will find another buyer in 2 years for $25 billion and make most of his money back while leaving the company saddled with most of the $11 billion in debt financing.


Also a good reminder why exceptions and optionals exist. Bonkers that "I didn't get a response" defaults to 0.


It's not that "no response" defaults to 0, it's that in Factorio's circuit network, "no signal" and "signal with a value of 0" are exactly synonymous.
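Go maps happen to behave the same way, which makes for a handy (purely illustrative) analogy: indexing a missing key yields the zero value, so "absent" and "present with value 0" look identical unless you explicitly ask:

```go
package main

import "fmt"

func main() {
	// signals loosely models a Factorio-style circuit network:
	// each signal name maps to an integer value.
	signals := map[string]int{"iron-plate": 50}

	// Reading an absent signal yields 0, just like "no signal" on the
	// circuit network.
	fmt.Println(signals["copper-plate"]) // 0

	// Unlike Factorio, Go at least offers the comma-ok idiom to check
	// whether the key is actually present.
	v, ok := signals["copper-plate"]
	fmt.Println(v, ok) // 0 false
}
```

In Factorio there is no equivalent of the comma-ok check; zero and absent are the same observable state by design.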


One thing I hated about working with Go. POSTed JSON bodies missing a field could not be distinguished from those where the field was present but set to the zero value for its type (an empty array or string, 0, etc.). This may be entirely work-around-able and simply a failure of how that job was set up, but it was a whole thing.


In this case, instead of, say, an int, you could have the field be a pointer to an int.


And now you've traded one problem for another.


These tools are amazing for prototyping. I had an idea for a promotional poster, and seeing my idea just by writing it felt like magic. The generated image had too many artifacts to use, but gave me a guideline to follow when creating the real thing in Pixlr.

AI content generation (text, image, source code, video, music) will be a huge boon for prototyping where applied judiciously.


Google hasn't released squat.

Google's product is vaporware and we shouldn't afford them any airtime until they release something usable. They're just trying to butt in and get press off of the backs of the teams actually working in the open, and that's super lame.

Release your model, Google, or stop bragging and talking over the others here. You're greedily sucking oxygen out of the conversation, and as a trillion dollar monopoly you don't deserve anything for free off of the backs of others. Not when you're not contributing. Stop being the rich kid talking over everyone else about how awesome your toys are.

Anyhow, the real story is Stable Diffusion. They're actively demonstrating the correct way to run this as opposed to the entirely closed OpenAI DALL-E or the (again vaporware) Google non-product.

Even MidJourney uses Stable Diffusion under the hood, using sophisticated prompt engineering to make their product distinct and powerful.


I feel there's a strong argument to be made that these organizations should be required to release these models publicly. These are built on the works of the public at large, and the public should get the full benefit of them.

Whatever effort Google has put into building the model is infinitesimal compared to the work of the creators they're harvesting.

I don't expect this to happen easily, if at all, but I'm strongly in favor of it, and would even support legislation to that effect.


They are afraid of being sued because they are using all the images they have scraped from every website ever created. They are probably even using images that aren't publicly available.


Well… MidJourney used Stable Diffusion (with an additional guidance model, I believe, not just prompt engineering) for their beta model, which they've already shut down again… it's back to their old, far inferior model.


Why did they close it down?


The rumours are that it was too good at generating nudity for their comfort, and in particular that some users may have combined that with younger subjects.


I kind of get the sentiment about openness but I think it's way more nuanced than you are making out.

There are very good reasons for withholding SOTA models, primarily from the info hazard angle and avoiding escalating the capabilities race which is basically the biggest risk we have right now.

Google / Deepmind have actually made some good decisions to try and slow down the race (such as waiting to publish).


They're not slowing down anything. The cat's out of the bag.

What good does a few months lag do when nobody is bracing for impact?


I'm not saying they are doing a good enough job, but that doesn't mean their approach is entirely without merit.

Even ignoring the infohazard angle, publishing everything immediately would escalate the race. By sitting on their capabilities and waiting for others to publish (e.g. PaLM and Imagen vs. GPT-3 and DALL-E), they are at least only playing catch-up.


Capabilities race, seriously? This is not nuclear warfare my guy. It's mathematics.


Nuclear warfare is much less concerning than misaligned AI.

Take a look into scaling laws and alignment concerns; this is a very real challenge and an existential risk, not some crackpot theory.


In the same sense that deep learning is just linear regression with a steroid problem.


Information warfare is pretty dangerous too!


Can you talk more about the prompt augmentation MidJourney is doing behind the scenes? It's certainly true that you can put in a two-word phrase like "Time travelers" and get an amazing result back, which reveals just how much your prompt is getting dropped into a prompt soup that also gives it that MidJourney look by default.


Yep, I feel that exact way about Nvidia Canvas [1]. It does not produce anything even close to usable as a final product, but it can produce an amazing start to a concept.

[1] https://www.nvidia.com/en-us/studio/canvas/


wow, that's kinda insane and almost looks more fun for those of us short on words.


This was the first thing I tried with DALL-E. Took some photos of my house where I'm renovating, wiped out the construction debris and told it to fill it in with what I wanted.

It worked okay - one issue was DALL-E wants to keep "style" consistent so any stray bit of debris greatly affected the interpretation, but I did in fact get 1 design idea out of it which changed how I think we'll do a bit of it.

These things in many ways are just extremely enhanced search tools - "describe what you want to see"


> Creating the real thing in Pixlr

Is it that good now? Note that Pixlr was bought by Google.


Pixlr was not bought by Google. So many people spreading lies on the internet :(


I like Pixlr, I use pixlr.com/e. It's free and web based and has always worked well for me.

Did that happen recently? It says on Pixlr's site that they're owned by INMAGINE.


I had the luck of getting to interact with Simon a little at my first real programmer job at Eventbrite. You would never know meeting him that he was one of the creators of Django (and TIL querySelector!). He was infectiously curious and excited about programming in a way that bubbled over to most people he interacted with. He would also enthusiastically engage with what you were working on, even the beginner project I had at the time. I don't know him well, but my impression was not just of a great programmer, but a great colleague and an authentically positive person.

Congrats on twenty years!


Working with Simon was one of my fave parts of being at EB - the guy is just one of the smartest, most interesting, nicest people around. Strongest lunchtime banter in the industry too.


I miss our lunchtime banter so much!


Love everything about this. The 6+ layers of parallax background really make it feel alive.

Is the background generative in some way, or is each layer in a loop?


Each layer is a loop, but the offsets and the number of layers help hide the repetition.

