blkhawk's comments | Hacker News

I don't want to defend Musk in any way, but I think you're making a mistake using him as an example, because what boosted him quite a lot is that he actually delivered what he claimed. Always late, but still earlier than anybody was guesstimating. And now he is completely spiraling, but it's a lot harder to lose a billion than to gain one, so he persists and even gets richer. Plus his "fanatical" followers are poor. It just doesn't match the situation.


Sounds a lot like "I'm not racist, but". There's a website dedicated to all of his BS: https://elonmusk.today

He is the definition of a cult leader. Collects money from fanatical followers who will praise every word he says, never delivers: "oh, next year guys, for sure, wanna buy a Not-A-Flamethrower while you're at it?". Not to mention that what were once laughable conspiracy theories about him turned out to be true (such that even I laughed when I first heard them). Torvalds is right in his statement about Musk: "incompetent" and "too stupid to work at a tech company".


I am just saying that he is a bad example, because he is a different beast from run-of-the-mill corporate Potemkinism.


How is any of that different from AI evangelists, be it regular hype kids or CEOs? "All code will be written by AI by the end of {current_year+1}." "We know how to build AGI by the end of {current_year+1}." "AI will discover new sciences." A quick search will turn up a billion such claims from everyone involved. Much like on here, where I'm constantly told that LLMs are a silver bullet and the only reason they aren't working for me is that my prompts aren't explicit enough or I'm not paying for a subscription. All while watching people submit garbage LLM code and break their computers by copy-pasting idiotic suggestions from ChatGPT into their terminals. It is depressing how close we are to the Idiocracy world without anyone noticing. And it did not take 500 years, just 3. Everyone involved (Altman, Zuckerberg, Musk, Pichai, Nadella, Huang, etc.) is well aware they are building a bullshit economy on top of bullshit claims and false promises.


Because people almost always hedge their bets. Basically: how likely something is, vs. the costs, vs. what everybody else is doing, vs. how you are personally affected.

So in the case of the current AI wave, there are several scenarios where you have to react to it. For example, as the CEO of a company that would benefit from AI, you need to demonstrate that you are doing something, or you get attacked for not doing enough.

As the CEO of an AI-producing company, you have almost no idea whether the stuff you're working on will be the thing that, say, makes hallucination-free LLMs, allows cheap long-term context integration, or even "solves AGI". You have to pretend that you are just about to do the latter, though.


"Watch the world explode as I try to make a table!!!!!!!!!!!!" is unlikely its more like "Watch the world explode as I try to make this thing!!!!!!!!!!!!".


Uh, maybe you only have the issue that you need redundancies because you have so many pieces of software that can barf?

I mean, it will happen regardless, just from the side effects of complexity. With a simpler system you can at least save on maintenance and overhead.


Yes, but if your web server goes down for whatever reason, you'd rather have some more for your load balancer to round-robin across. Things like a physical host dying are not exactly unheard of. Same with the DB: once you take money, you want that replication, quick failover, and offsite backup.
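The round-robin part is not much machinery either; a minimal sketch in Python, with made-up backend addresses and the health check reduced to a bare TCP connect:

    import itertools, socket

    # hypothetical pool of identical web servers behind the balancer
    BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
    _pool = itertools.cycle(BACKENDS)

    def healthy(addr, timeout=0.5):
        # crude liveness probe: can we open a TCP connection at all?
        host, port = addr.rsplit(":", 1)
        try:
            with socket.create_connection((host, int(port)), timeout=timeout):
                return True
        except OSError:
            return False

    def next_backend():
        # plain round robin, skipping hosts that fail the probe
        for _ in range(len(BACKENDS)):
            addr = next(_pool)
            if healthy(addr):
                return addr
        raise RuntimeError("no healthy backends left")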


People without a serious server environment, or anyone in the myriad places where you need a local terminal temporarily and setting one up is super inconvenient. Say a small 19" rack in a cellar: with this you just plug it in, memorise the IP, and go somewhere you can sit comfortably.


Because the cards already sell at very, very good prices with 16GB, and optimization in generative AI is bringing memory requirements down. Maximizing profit means you sell with the least amount of VRAM possible, not only to save the direct cost of the RAM but also to guard future profit and your other market segments; the cost of the RAM itself is almost nothing compared to that. Any Intel competitor can more easily release products with more than 16GB and smoke them. Intel is going for a market segment that until now was only served by gaming cards twice as expensive, which frees those up to finally be sold at MSRP.


Right, but Intel is in no position to do that. So they could gamble and put in 32GB of VRAM, and also, why not produce one with 64GB just for kicks?


If Intel were serious about staging a comeback, they would release a 64GB card.

But Intel is still lost in its hubris, and still thinks it's a serious player and "one of the boys", so it doesn't seem like they want to break the line.


Oh god, I had that come up in an issue at work just about a month ago. A development system used really simple usernames and passwords, since it was just for testing, but every line containing one of those got gobbled up because it had "secrets" in it.

I have very strong opinions on this issue, which boil down to: _why are you logging everything, you lazy asses_ and _feeding all the secrets into yet another tool just to scan for them in logs adds yet another place for them to leak_...

Especially since lines got censored even when the secrets were merely parts of longer words, which suggests that no hashing was involved.

But it's a security tool, so it stays. I kinda feel like Cassandra, but I think I can already predict a major security issue with it, or with others that have the same functionality, in the future. It's like some goddamn blind spot: software that is meant to prevent X surely cannot itself be vulnerable to X, and yet it often is, because preventing X and not being vulnerable to X are somehow two separate things.
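To make that failure mode concrete: here is, roughly, a substring scanner like the one I'm complaining about, next to a token-and-hash variant (the word list and log line are invented for illustration):

    import hashlib, re

    SECRETS = ["test", "admin"]  # the "really simple" dev credentials

    def redact_naive(line):
        # drops any line that merely *contains* a secret as a substring
        return "[REDACTED]" if any(s in line for s in SECRETS) else line

    print(redact_naive("user ran latest integration suite"))
    # -> [REDACTED]  ("latest" contains "test", whole line gobbled)

    # precomputed hashes: the scanner can be shipped these instead of
    # the plaintext secret list, so it is not itself a leak vector
    SECRET_HASHES = {hashlib.sha256(s.encode()).hexdigest() for s in SECRETS}

    def redact_tokens(line):
        # only redact standalone tokens whose hash matches a known secret
        parts = re.split(r"(\W+)", line)  # keep the separators
        return "".join(
            "[REDACTED]"
            if hashlib.sha256(p.encode()).hexdigest() in SECRET_HASHES
            else p
            for p in parts)

    print(redact_tokens("user ran latest integration suite"))
    # -> unchanged; "latest" is not the token "test"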


Why is logging everything considered lazy?


First "everything":

Logging "everything" could include stack traces and parameter values at every function call. Take the information you can get from a debugger and imagine you log all of it. Would that be necessary to determine why a defect is triggered?

Second, "lazy":

Logging has many useful aspects, but it is also only a step or two above adding print statements to the code, which again leads to the "lazy." If you have the inputs, you should be able to reproduce the execution. The exceptions include "poorly" modularized code, side effects, etc.

Third, alternatives:

I've found it helpful for complex failures to make sure that I include information about the system. For example, the program couldn't allocate memory: was it a lack of contiguous chunks of memory, or a memory leak? How much free memory is there, versus the shape of the free memory (Linux memory slabs)? What can I do to reset this state? (A reboot was the only option.)
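As a rough sketch of capturing that context right at the failure site (Linux-only, assuming /proc/meminfo is readable; the field list and the deliberately absurd allocation are just for illustration):

    def mem_context():
        # snapshot allocator-relevant fields from /proc/meminfo (Linux only)
        wanted = ("MemFree", "MemAvailable", "Slab", "SUnreclaim")
        with open("/proc/meminfo") as f:
            fields = (line.split(":", 1) for line in f)
            return {k: v.strip() for k, v in fields if k in wanted}

    try:
        buf = bytearray(1 << 42)  # ~4 TiB; under overcommit may OOM-kill instead
    except MemoryError:
        # log the *system* state next to the failure, not just a stack trace
        print("allocation failed, meminfo:", mem_context())
        raise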

Finally, a quote a colleague shared with me when I once expressed my love of logging. In the context of testing online games:

"Developers seem drawn to Event Recorders like moths to a flame. Recording all game/ mouse/ network/ whatever events while playing the game and playing them back is a bad idea. The problem is that you have an entire team modifying your game's logic and the meaning or structure of internal events on a day-to-day basis. For The Sims Online and other projects, we found that you could only reliably replay an event recording on the same build on which it was recorded. However, the keystone requirement for a testing system is regression: the ability to run the same test across differing builds. Internal Event Recorders just don't cut it as a general-purpose testing system. UI Event Recorders share a similar problem: when the GUI of the game shifts, the recording instantly becomes invalid."

Page 181, Section 2.1, "Automated Testing for Online Games" by Larry Mellon of Electronic Arts, in Massively Multiplayer Game Development 2, edited by Thor Alexander, 2005.


For one, it's extremely costly in vCPU, storage, and transfer rates. And if you're paying a third-party logger, multiply each by 10x.


Axiom wants $60/mo if you send them a terabyte of logs, which is basically nothing compared to the cost of developers trying to debug issues without detailed logs.


Not to mention the performance impact of synchronous logging. Write a trivial benchmark, add logging, and you will see the cost per operation go up 1000x.
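Something like this, as a minimal sketch (the "operation" here is nearly free, so the per-record file write dominates; actual numbers vary with the handler and disk):

    import logging, time

    logging.basicConfig(filename="bench.log", level=logging.INFO)
    N = 100_000
    total = 0

    t0 = time.perf_counter()
    for i in range(N):
        total += i                  # the bare operation
    t1 = time.perf_counter()

    for i in range(N):
        total += i
        logging.info("step %d total %d", i, total)  # one synchronous write per op
    t2 = time.perf_counter()

    print(f"bare loop:    {(t1 - t0) / N * 1e9:8.0f} ns/op")
    print(f"with logging: {(t2 - t1) / N * 1e9:8.0f} ns/op")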


I think you're being naive about the costs, but that's just me. That's the intro price; on top of it you have transfer fees and vCPU.

I've never used Axiom, but all the logging platforms I have used (Splunk, Datadog, Loggly) are a major opex line item.

And telling your developers their time is priceless means they will produce the lowest quality product.


If you're in a testing environment, though, where your SIT and UAT are looking to break stuff, don't you usually want to be able to look at a log of everything?


I could see a couple of reasons against. For one, it's expensive to serialize/encode your objects into the logger, even if you reduce the logging level on prod.

Secondly, you can't represent the heap and stack well as strings. Concurrent threads and object trees are better debugged with a debugger (e.g. gdb).
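That first point is easy to demonstrate with Python's stdlib logging: an f-string serializes the object even when the level is disabled, while %-style arguments defer it (the expensive repr below is an invented stand-in for a large object tree):

    import logging

    logging.basicConfig(level=logging.WARNING)   # DEBUG is disabled
    log = logging.getLogger(__name__)

    class BigObject:
        def __repr__(self):
            # stand-in for an expensive encode of a large object graph
            return "BigObject(" + "x" * 100_000 + ")"

    obj = BigObject()

    log.debug(f"state: {obj}")    # repr() runs anyway, string is thrown away
    log.debug("state: %r", obj)   # repr() deferred, skipped at WARNING level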


That makes it foolish, but I'm not sure if it's lazy.


The lazy part comes from the fact that it's easier to be foolish in this case than to be selective about what gets logged. So: lazy and foolish.


It's not lazy, it's a good use of time: you don't go back and forth when you realize you forgot to log something important.


Plus, if you aren't making your packs unrepairable on purpose with foamed construction (like Tesla), you can part out the modules in the packs into new configurations somewhat easily for the amount of work needed.


Uh, you clearly misunderstood something. The video is about the port of the Arduino framework that runs on the ESP32. On the ESP32-S* variants, which have native USB, that has implications that make the option of setting a baud rate via the Arduino framework superfluous. The ESP32 variants have pretty good documentation themselves.


TLDR:

“A cut of revenue not derived specifically from broadcast on the cable channel” went from “meaningless” to “hugely significant” to “boner-inducing”: arguably the greatest clause ever in TV contract history… at a minimum, it's one of the most improbable, all things considered.

