
A lot of online culture laments the modern American life and blames the Boomers for all of our "woes".

The 1950s - 2000s post war boom was a tailwind very few countries get to experience. It's funny how we look back at it as the norm, because that's not what the rest of the world experienced.

There's a reason everything in America was super sized for so long.

Things have averaged out a bit now, but if you look at the trendline, we're still doing remarkably well. The fact that our relatively small population supports the GDP it does is wild.


> The fact that our relatively small population supports the GDP it does is wild.

Yes and no. It is very impressive what humans can do and the US is a remarkable country for managing to achieve what they have. On the other hand, if we're talking GDP it is basically just a trendline [0] of whether you let people better their own lives or not.

The main reason for US success on the GDP front is that the median administrator chooses to make people fail and the US does the best job of resisting that tendency. To me the mystery is less why the US succeeds but more why polities are so committed to failing. It isn't even like there is a political ideology that genuinely wants to make it hard to do business [1]. It mostly happens by accident, foolishness and ignorance.

[0] https://www.grumpy-economist.com/p/the-cost-of-regulation - see the figure, note the logarithmic axis

[1] I suppose the environmentalists, maybe.


I think you have one big piece of it: economic progress involves a lot of search problems and is impossible to master-plan; consequently free intelligence beats centralized regulation. It's a bit outdated now [0], but The Fifth Discipline distinguishes between 'detail complexity' (things with a lot of bits you have to figure out) and 'dynamic complexity' (systems with feedback loops and adaptive participants). It might simply be that handling systems with dynamic complexity is out of reach for most humans. Economic regulation strikes me as a particularly clear case of modifying a dynamic system.

In fact, creating good policy in a modern economy might be so dynamically complex that no mind alive today can simultaneously comprehend an adaptive solution and act in such a way as to bring it about.

Perhaps, given this, we are simply spoiled by the effectiveness of certain economic actors (e.g. the Federal Reserve) in maintaining a monetary thermostat. Their success is not the norm so much as it is extraordinary.

0: which is humorous in this context, because the Seinfeld Isn't Funny effect applies to things that become mainstream - insight and humor both disappear once the spark or joke becomes common knowledge


> On the other hand, if we're talking GDP it is basically just a trendline [0] of whether you let people better their own lives or not.

Focusing on GDP handwaves away so much around externalities that it's hard to know where to start with it.

How much worse off would people be if the US GDP was 20% lower but FB/Instagram/Google/everybody-else weren't vacuuming up ad dollars by pushing as-addictive-as-possible mental-junk-food in people's faces to make them feel bad about themselves? How much of that GDP is giving anyone optimism for improving their own individual condition?

How much of the nostalgia for the olden days is about agency and independence and perceived trajectory vs purely material wealth (from a material standpoint, many people today have more and better stuff than boomers did as kids, when a single black and white TV may have been shared by a whole family)?

Would regulation preventing the heads of big-tech advertising firms from keeping as much of that profit for themselves really be a net drain? Some suggestions for that regulation, harkening back to US history:

1) Bring back super-high marginal tax rates to re-encourage deductions and a wider spread of salaries, versus concentration among top CEOs and execs. It worked for the booming 50s! Preventing the already-powerful and already-well-off from having yet another avenue to purely "better their own lives" seemed wise then. There were mega-wealthy super-tycoons both before the "soak the rich" era in US history and after it, but it seems like fewer were minted during it.

2) Instead of pushing more and more people into overtime or second jobs, go the other way and revive the early-20th-century trend toward limited work hours. Get rid of overtime-exempt classifications while at it. Preventing people from working 100 hours a week to "better their own lives", and preventing them from sending their kids to work early to "better their own lives", seems to have worked out OK.

3) Crack down on pollution; don't let people "better their own lives" by forcing others to breathe, eat, and walk through their shit.

4) Crack down on surveillance; don't let people "better their own lives" by monetizing the private lives of others. Focus on letting others enjoy their own lives in peace instead.


> The main reason for US success on the GDP front is that the median administrator chooses to make people fail and the US does the best job of resisting that tendency.

Every component here is ill-defined and doubtful, especially the claim that lower regulation is the "main" reason.


Well, in some sense. The only person on HN who talks seriously about economics is patio11, because he writes those long-form articles that go on for days and could use a bit of an edit. That's imperfect, but it's certainly the best the community has come up with, because it takes a lot of words to tackle economics.

That acknowledged, I did link to a professional economist's blog, and he goes into excruciating detail about what all his terms mean and what he is saying. I'm basically just echoing all that, so if you want the details you can spend a few hours reading what he wrote.


1850-1950 is much closer to a norm over human history -

3+ catastrophic major wars

3+ other minor ones.

2+ great depressions (each of which was as large as every financial panic from 1951 to the present combined)

3+ financial panic events

At least one pandemic - plus local epidemics were pretty common.

When I tell people "it's never been better than it is today" they don't believe me, but it's the honest-to-god truth.


>A lot of online culture laments the modern American life and blames the Boomers for all of our "woes".

>The 1950s - 2000s post war boom was a tailwind very few countries get to experience. It's funny how we look back at it as the norm, because that's not what the rest of the world experienced.

Especially ironic when perpetrated by youth from countries outside of America - like mine. I'm not a boomer, but my parents' generation had it rough and my life was much easier in comparison. Importing "boomer" memes is a bit stupid in this context. Hell, even the name makes no sense here, because our "baby boom" happened later, in the 1980s-1990s.


> The 1950s - 2000s post war boom was a tailwind very few countries get to experience.

All the countries that participated in WWII experienced it, winners and losers alike.

What you said is the complete opposite of the truth.


Probably worthwhile to separate that span into smaller chunks.

We don't blame boomers for what happened in the 50s or 60s; we blame them for voting in and supporting Ronald fckng Reagan and all the bullshit his policies have caused since his presidency.

See: https://thelinknewspaper.ca/article/why-almost-everything-is...


Blaming boomers is stupid ... it conflates many very different people and kinds of people. I'm a boomer who helped develop the ARPANET (so I'm not technically illiterate ... that's my parents' generation) and I'm a democratic socialist who protested vehemently against Nixon and Reagan (whom many in my parents' generation supported). The people to really blame are the right wingers and corporations and the uber rich who create bogeymen and false targets like "boomers" for gullible people to be distracted and deflected by.

Oh I see, all our bogeymen are created by a shadowy conspiracy of very rich bogeymen.

Yeah, like I said, we blame boomers who voted for and supported Reagan.

I’m very aware that a healthy minority opposed him and his policies.

Thank you for your work on ARPANET and for remaining a proud socialist! Computer networking is what drew me into the technology space (not programming like most folks here, I presume), and socialism just might finally be having its due time here in the US (e.g., Mamdani, Katie Wilson).


This is early days, too. We're probably going to get better at this across more domains.

Local AI will eventually be booming. It'll be more configurable, adaptable, hackable. "Free". And private.

Crude APIs can only get you so far.

I'm in favor of intelligent models like Nano Banana over ComfyUI messes (the future is the model, not the node graph).

I still think we need the ability to inject control layers and have full access to the model, because we lose too much utility by not having it.

I think we'll eventually get Nano Banana Pro smarts slimmed down and running on a local machine.


>Local AI will eventually be booming.

With how expensive RAM currently is, I doubt it.


It's temporary. Sam Altman booked all the supply for a year. Give it time to unwind.

I’m old enough to remember many memory price spikes.

I remember saving up for my first 128MB stick and the next week it was like triple in price.

[flagged]


Is this a joke?

Image and video models are some of the most useful tools of the last few decades.


Is this a joke?

So does this finally replace SDXL?

Is Flux 1/2/Kontext left in the dust by the Z Image and Qwen combo?


Yeah, I've definitely switched largely away from Flux. Much as I do like Flux (for prompt adherence), BFL's baffling licensing structure, along with its excessive censorship, makes it a non-starter.

For ref, the Porcupine-cone creature that ZiT couldn't handle by itself in my aforementioned test was easily handled using a Qwen20b + ZiT refiner workflow and even with two separate models STILL runs faster than Flux2 [dev].

https://imgur.com/a/5qYP0Vc


SDXL has long been surpassed; its primary redeeming feature is its fine-tuned variants for different focuses and image styles.

IMO HiDream had the best quality OSS generations, Flux Schnell is decent as well. Will try out Z-Image soon.


SDXL has been outclassed for a while, especially since Flux came out.

Subjective. Most in creative industries regularly still use SDXL.

Once the Z-Image base model comes out and some real tuning can be done, I think it has a chance of replacing SDXL in the role it currently fills.


I don't think that's fair. SDXL is crap at composition. It's really good with LoRAs to stylize/inpaint though.

Source?

Most of the people I know doing local AI prefer SDXL to Flux. Lots of people are still using SDXL, even today.

Flux has largely been met with a collective yawn.

The only thing Flux had going for it was photorealism and prompt adherence. But the skin and jaws of the humans it generated looked weird, it was difficult to fine tune, and the licensing was weird. Furthermore, Flux never had good aesthetics. It always felt plain.

Nobody doing anime or cartoons used Flux. SDXL continues to shine here. People doing photoreal kept using Midjourney.


> it was difficult to fine tune

Yep. It's pretty difficult to fine tune, mostly because it's a distilled model. You can fine tune it a little bit, but it will quickly collapse and start producing garbage, even though fundamentally it should have been an easier architecture to fine-tune compared to SDXL (since it uses the much more modern flow matching paradigm).

I think that's probably the reason why we never really got any good anime Flux models (at least not as good as they were for SDXL). You just don't have enough leeway to be able to train the model for long enough to make the model great for a domain it's currently suboptimal for without completely collapsing it.


> It's pretty difficult to fine tune, mostly because it's a distilled model.

What about being distilled makes it harder to fine-tune?


The examples shown in the links are not filters for aesthetics. These are clearly experiments in data compression.

These people are having a moral crusade against an unannounced Google data compression test thinking Google is using AI to "enhance their videos". (Did they ever stop to ask themselves why or to what end?)

This level of AI paranoia is getting annoying. This is clearly just Google trying to save money. Not undermine reality or whatever vague Orwellian thing they're being accused of.


"My, what big eyes you have, Grandmother." "All the better to compress you with, my dear."

Why would data compression make his eyes bigger?

Because it's a neural technique, not one based on pixels or frames.

https://blog.metaphysic.ai/what-is-neural-compression/

Instead of artifacts in pixels, you'll see artifacts in larger features.

https://arxiv.org/abs/2412.11379

Look at figure 5 and beyond.


Like a visual version of psychoacoustic compression. Neat. Thanks for sharing.

Agreed. It looks like over-aggressive adaptive noise filtering, a smoothing filter and some flavor of unsharp masking. You're correct that this is targeted at making video content compress better which can cut streaming bandwidth costs for YT. Noise reduction targets high-frequency details, which can look similar to skin smoothing filters.

The people fixated on "...but it made eyes bigger" are missing the point. YouTube has zero motivation to automatically apply "photo flattery filters" to all videos. Even if a "flattery filter" looked better on one type of face, it would look worse on another type of face. Plus applying ANY kind of filter to a million videos an hour costs serious money.

I'm not saying YouTube is an angel. They absolutely deploy dark patterns and user manipulation at massive scale - but they always do it to make money. Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs. Improving compression would do all three. Less bandwidth reduces costs, smaller files means faster start times as viewers jump quickly from short to short and that increases revenue because more different shorts per viewer/minute = more ad avails to sell.


I agree; I don't really think there's anything here besides compression algos being tested. At the very least, I'd need to see far, far more evidence of filters being applied than what's been shared in the thread. But having worked in social media in the past, I must correct you on one thing:

>Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs.

You can't know this. Almost everything at YouTube is probably A/B tested heavily and many times you get very surprising results. Applying a filter could very well increase views and time spent on app enough to justify the cost.


Activism fatigue is a thing today.

Whatever the purpose, it's clearly surreptitious.

> This level of AI paranoia is getting annoying.

Let's be straight here: AI paranoia is near the top of the most propagated subjects across all media right now, probably for the worse. If it's not "Will you ever have a job again!?" it's "Will your grandparents be robbed of their net worth!?" or even just "When will the bubble pop!? Should you be afraid!? YES!!!" And in places like Canada, where the economy is predictably crashing because of decades of failures, it's both the cause of and the answer to macroeconomic decline. Ironically/suspiciously it's all the same rehashed, redundant takes from everyone from Hank Green to CNBC to every podcast ever, late night shows, radio, everything.

So to me the target of one's annoyance should be the propaganda machine, not the targets of the machine. What are people supposed to feel, totally chill because they have tons of control?


This is an experiment in data compression.

If any engineers think that's what they're doing they should be fired. More likely it's product managers who barely know what's going on in their departments except that there's a word "AI" pinging around that's good for their KPIs and keeps them from getting fired.

> If any engineers think that's what they're doing they should be fired.

Seriously?

Then why is nobody in this thread suggesting what they're actually doing?

Everyone is accusing YouTube of "AI"ing the content with "AI".

What does that even mean?

Look at these people making these (at face value - hilarious, almost "Kool-Aid" levels of conspiratorial) accusations. All because "AI" is "evil" and "big corp" is "evil".

Use Occam's razor. Videos are expensive to store. Google gets 20 million videos a day.

I'm frankly shocked Google hasn't started deleting old garbage. They probably should start culling YouTube of cruft nobody watches.


Videos are expensive to store, but generative AI is expensive to run. That will cost them more than the storage it allegedly saves.

To solve this problem of adding compute-heavy processing to serving videos, they would need to cache the output of the AI, which uses up the storage you say they are saving.


https://c3-neural-compression.github.io/

Google has already matched H.266. And this was over a year ago.

They've probably developed some really good models for this and are silently testing how people perceive them.


If you want insight into why they haven't deleted "old garbage" you might try, The Age of Surveillance Capitalism by Zuboff. Pretty enlightening.

I'm pretty sure those 12 year olds uploading 24 hour long Sonic YouTube poops aren't creating value.

1000 years from now those will be very important. A bit like we are now wondering what horrible food average/poor people ate 1000 years ago.

I’m afraid to search… what exactly is a “24 hour long sonic Youtube poop?”


What type of compression would change the relative scale of elements within an image? None that I'm aware of, and these platforms can't really make up new video codecs on the spot since hardware accelerated decoding is so essential for performance.

Excessive smoothing can be explained by compression, sure, but that's not the issue being raised there.


> What type of compression would change the relative scale of elements within an image?

Video compression operates on macroblocks and calculates motion vectors of those macroblocks between frames.

When you push it to the limit, the macroblocks can appear like they're swimming around on screen.

Some decoders attempt to smooth out the boundaries between macroblocks and restore sharpness.

The giveaway is that the entire video is extremely low quality. The compression ratio is extreme.
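For anyone who wants the mechanics, here's a minimal sketch of block matching, the core of the macroblock-plus-motion-vector idea described above. The frame dimensions, block size, plain SAD cost, and function names are my illustrative assumptions, not any real codec's parameters:

```rust
// Toy block-matching motion estimation: for one macroblock of the current
// frame, find the displacement into the previous frame that predicts it best.
// Frames are flat grayscale buffers; dimensions and block size are arbitrary.
const W: usize = 64; // frame width in pixels
const H: usize = 64; // frame height in pixels
const B: usize = 16; // macroblock size

/// Sum of absolute differences between the block at (bx, by) in `cur` and the
/// block displaced by (dx, dy) in `prev`. None if the candidate leaves the frame.
fn sad(cur: &[u8], prev: &[u8], bx: usize, by: usize, dx: isize, dy: isize) -> Option<u64> {
    let mut total = 0u64;
    for y in 0..B {
        for x in 0..B {
            let px = bx as isize + x as isize + dx;
            let py = by as isize + y as isize + dy;
            if px < 0 || py < 0 || px as usize >= W || py as usize >= H {
                return None;
            }
            let c = cur[(by + y) * W + (bx + x)] as i64;
            let p = prev[py as usize * W + px as usize] as i64;
            total += (c - p).unsigned_abs();
        }
    }
    Some(total)
}

/// Exhaustive search over a small window for the cheapest motion vector.
/// Real encoders use smarter searches, sub-pixel precision, and rate costs,
/// but the artifact story is the same: at extreme compression the residual is
/// mostly discarded and whole blocks "swim" along these vectors.
fn best_motion_vector(cur: &[u8], prev: &[u8], bx: usize, by: usize) -> (isize, isize) {
    let mut best = (0isize, 0isize);
    let mut best_cost = u64::MAX;
    for dy in -8isize..=8 {
        for dx in -8isize..=8 {
            if let Some(cost) = sad(cur, prev, bx, by, dx, dy) {
                if cost < best_cost {
                    best_cost = cost;
                    best = (dx, dy);
                }
            }
        }
    }
    best
}

fn main() {
    // Previous frame: a bright 16x16 square at (20, 20). Current frame: the
    // same square shifted right by 3 pixels. The best vector for the block at
    // (23, 20) should therefore point 3 pixels left into the previous frame.
    let mut prev = vec![0u8; W * H];
    let mut cur = vec![0u8; W * H];
    for y in 20..36 {
        for x in 20..36 {
            prev[y * W + x] = 200;
            cur[y * W + (x + 3)] = 200;
        }
    }
    assert_eq!(best_motion_vector(&cur, &prev, 23, 20), (-3, 0));
}
```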


They're doing something with neural compression, not classical techniques.

https://blog.metaphysic.ai/what-is-neural-compression/

See this paper:

https://arxiv.org/abs/2412.11379

Look at figure 5 and beyond.

Here's one such Google paper:

https://c3-neural-compression.github.io/


One that represented compressed videos as an embedding that gets reinflated by having gen AI interpret it back into image frames.

AI models are a form of compression.

Neural compression wouldn't be like HEVC, operating on frames and pixels. Rather, these techniques can encode entire features and optical flow, which can explain the larger discrepancies: larger fingers, slightly misplaced items, etc.

Neural compression techniques reshape the image itself.

If you've ever input an image into `gpt-image-1` and asked it to output it again, you'll notice that it's 95% similar, but entire features might move around or average out with the concept of what those items are.


Maybe such a thing could exist in the future, but I don't think the idea that YouTube is already serving a secret neural video codec to clients is very plausible. There would be much clearer signs - dramatically higher CPU usage, and tools like yt-dlp running into bizarre undocumented streams that nothing is able to play.

If they were using this compression for storage on the cache layer, it could let them keep more videos closer to where they serve them, but they would decode them back to webm or whatever before sending them to the client.

I don't think that's actually what's up, but I don't think it's completely ruled out either.


That doesn't sound worth it: storage is cheap, encoding videos is expensive, and caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.

The law of entropy appears true of TikToks and Shorts. It would make sense to take advantage of this. That is to say, the content becomes so generic that it merges into one.

Storage gets less cheap for short-form tiktoks where the average rate of consumption is extremely high and the number of niches is extremely large.

A new client-facing encoding scheme would break hardware decoder support, which in turn slows down everyone's experience, chews through battery life, etc. They won't serve it that way - there's no support in the field for it.

It looks like they're compressing the data before it gets further processed with the traditional suite of video codecs. They're relying on the traditional codecs to serve, but running some internal first pass to further compress the data they have to store.


The resources required for putting AI <something> inline in the input (upload) or output (download) chain would likely dwarf the resources needed for the non-AI approaches.

Totally. Unfortunately it's not lossless and instead of just getting pixelated it's changing the size of body parts lol

Probably compression followed by regeneration during decompression. There's a brilliant technique called "Seam Carving" [1] invented two decades ago that enables content aware resizing of photos and can be sequentially applied to frames in a video stream. It's used everywhere nowadays. It wouldn't surprise me that arbitrary enlargements are artifacts produced by such techniques.

[1] https://github.com/vivianhylee/seam-carving
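For reference, the heart of seam carving is a small dynamic program over a per-pixel energy map. A minimal sketch below, with my own simplifications: the energy computation and the actual pixel removal are omitted, and the function name and row-major `Vec<Vec<u32>>` layout are just illustrative choices:

```rust
/// Find the vertical seam (one column index per row) with minimal total
/// energy. `energy` is row-major: energy[y][x].
fn min_vertical_seam(energy: &[Vec<u32>]) -> Vec<usize> {
    let h = energy.len();
    let w = energy[0].len();
    // cost[y][x] = cheapest seam ending at pixel (x, y)
    let mut cost: Vec<Vec<u32>> = energy.to_vec();
    for y in 1..h {
        for x in 0..w {
            let lo = x.saturating_sub(1);
            let hi = (x + 1).min(w - 1);
            let best_above = (lo..=hi).map(|px| cost[y - 1][px]).min().unwrap();
            cost[y][x] = energy[y][x] + best_above;
        }
    }
    // Backtrack from the cheapest cell in the bottom row.
    let mut seam = vec![0usize; h];
    seam[h - 1] = (0..w).min_by_key(|&x| cost[h - 1][x]).unwrap();
    for y in (0..h - 1).rev() {
        let x = seam[y + 1];
        let lo = x.saturating_sub(1);
        let hi = (x + 1).min(w - 1);
        seam[y] = (lo..=hi).min_by_key(|&px| cost[y][px]).unwrap();
    }
    seam
}

fn main() {
    // 3x3 toy energy map: the cheap seam runs down the middle column.
    let energy: Vec<Vec<u32>> = vec![vec![9, 1, 9], vec![9, 1, 9], vec![9, 1, 9]];
    assert_eq!(min_vertical_seam(&energy), vec![1, 1, 1]);
}
```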


I largely agree, I think that probably is all that it is. And it looks like shit.

Though there is a LOT of room to subtly train many kinds of lossy compression systems, which COULD still imply they're doing this intentionally. And it looks like shit.


It could be, but if the compression were a new codec, new codecs usually get talked about on a blog.

> This is an experiment

A legal experiment for sure. Hope everyone involved can clear their schedules for hearings in multiple jurisdictions for a few years.


As soon as people start paying Google for the 30,000 hours of video uploaded every hour (2022 figure), then they can dictate what forms of compression and lossiness Google uses to save money.

That doesn't include all of the transcoding and alternate formats stored, either.

People signing up to YouTube agree to Google's ToS.

Google doesn't even say they'll keep your videos. They reserve the right to delete them, transcode them, degrade them, use them in AI training, etc.

It's a free service.


> People signing up to YouTube agree to Google's ToS

None of which overrides what the law says or can do.

> It's a free service

I've paid for it. Don't anymore, in large part because of crap like this reducing content quality.


It's not the same when you publish something on my platform as when I publish something and put your name on it.

It's bad enough that we can deepfake anyone. If we can also pretend it was uploaded by you, the sky is the limit.


That's the difference between the US and European countries. When you have SO MUCH POWER like Google, you can't just go around and say ItSaFReeSeRViCe in Europe. With great power comes great responsibility, to say it in American words.

"They're free to do whatever they want with their own service" != "You can't criticize them for doing dumb things"

Yeah, it is such a strange and common take. Like, "if you don't like it, why complain?"

Holy shit, I'd love to see NaN as a proper sum type. That's the way to do it. That would fix everything.

I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.

An approach that I think would have most of the same correctness benefits as a proper sum type while being more ergonomic: Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.

(A slightly more out-there extension of this idea: The finite-float type also can't represent negative zero. Any operation on finite-float-typed operands that would return negative zero returns positive zero instead. This means that finite floats obey the substitution property, and (as a minor added bonus) can be compared for equality by a simple bitwise comparison. It's possible that this idea is too weird, though, and there might be footguns in the case where you convert a finite float to an arbitrary one.)
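To make the idea concrete, here's a minimal Rust sketch, assuming hypothetical names (`Finite`, `new`, `get`) and a panic-on-non-finite policy of my own choosing rather than any existing crate:

```rust
use std::ops::Add;

/// A float guaranteed to be finite: no NaN, no infinities.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Finite(f64);

impl Finite {
    /// The only fallible step: converting an arbitrary f64 into a Finite.
    fn new(x: f64) -> Option<Finite> {
        if x.is_finite() {
            // The "out-there extension" above: normalize -0.0 to +0.0 so
            // equality can be a plain bitwise comparison.
            Some(Finite(if x == 0.0 { 0.0 } else { x }))
        } else {
            None
        }
    }

    fn get(self) -> f64 {
        self.0
    }
}

impl Add for Finite {
    type Output = Finite;

    /// Finite + Finite either stays finite or panics, analogous to how
    /// integer overflow panics in debug builds.
    fn add(self, rhs: Finite) -> Finite {
        let r = self.0 + rhs.0;
        assert!(r.is_finite(), "float overflow: {} + {}", self.0, rhs.0);
        Finite(if r == 0.0 { 0.0 } else { r })
    }
}

fn main() {
    let a = Finite::new(1.5).unwrap();
    let b = Finite::new(2.5).unwrap();
    assert_eq!((a + b).get(), 4.0);

    // These would fail / panic:
    assert!(Finite::new(f64::NAN).is_none());
    // let huge = Finite::new(f64::MAX).unwrap();
    // let _ = huge + huge; // panics: the result rounds to +inf
}
```

Multiplication, division, and so on would follow the same pattern, and the conversion boundary between arbitrary floats and `Finite` is where the `.unwrap()` noise worried about above would concentrate.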


> I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.

I was thinking about this the other day for integer wrapping specifically, given that it's not checked in release mode for Rust (by default at least, I think there's a way to override that?). I suspect that it's also influenced by the fact that people kinda expect to be able to use operators for arithmetic, and it's not really clear how to deal with something like `a + b + c` in a way where each step has to be fallible; you could have errors propagate and then just have `(a + b + c)?`, but I'm not sure that would be immediately intuitive to people, or you could require it to be explicit at each step, e.g. `((a + b)? + c))?`, but that would be fairly verbose. The best I could come up with is to have a macro that does the first thing, which I imagine someone has probably already written before, where you could do something like `checked!(a + b + c)`, and then have it give a single result. I could almost imagine a language with more special syntax for things having a built-in operator for that, like wrapping it in double backticks or something rather than `checked!(...)`.
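A macro roughly like that is writable today. A sketch, with my own simplifications: it only handles chains of `+` over single-token `i64` operands, and it folds right-to-left, so which intermediate step overflows can differ from left-to-right evaluation:

```rust
// Sketch of a `checked!` macro that turns `a + b + c` into a chain of
// checked_add calls yielding Option<i64>. Each operand must be a single
// token tree (a variable, literal, or parenthesized expression).
macro_rules! checked {
    ($a:tt) => { Some($a) };
    ($a:tt + $($rest:tt)+) => {
        checked!($($rest)+).and_then(|acc| $a.checked_add(acc))
    };
}

fn main() {
    let (a, b, c): (i64, i64, i64) = (1, 2, 3);
    assert_eq!(checked!(a + b + c), Some(6));

    let (a, b): (i64, i64) = (i64::MAX, 1);
    assert_eq!(checked!(a + b), None); // overflow is reported, not wrapped
}
```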


> Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.

I suppose there's precedent of sorts in signaling NaNs (and NaNs in general, since FPUs need to account for payloads), but I don't know how much software actually makes use of sNaNs/payloads, nor how those features work in GPUs/super-performance-sensitive code.

I also feel that as far as Rust goes, the NonZero<T> types would seem to point towards not using the described finite/arbitrary float scheme as the NonZero<T> types don't implement "regular" arithmetic operations that can result in 0 (there's unsafe unchecked operations and explicit checked operations, but no +/-/etc.).


Rust's NonZero basically exists only to enable layout optimizations (e.g., Option<NonZero<usize>> takes up only one word of memory, because the all-zero bit pattern represents None). It's not particularly aiming to be used pervasively to improve correctness.

The key disanalogy between NonZero and the "finite float" idea is that zero comes up all the time in basically every kind of math, so you can't just use NonZero everywhere in your code; you have to constantly deal with the seam converting between the two types, which is the most unwieldy part of the scheme. By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.
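The layout point is easy to check directly; a tiny demo using only the standard library:

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // The niche optimization: None can reuse the forbidden all-zero bit
    // pattern, so the Option costs no extra space.
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());

    // A plain usize has no forbidden bit pattern, so Option<usize> needs
    // a separate discriminant (plus padding).
    assert!(size_of::<Option<usize>>() > size_of::<usize>());
}
```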


> By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.

I suppose that's a fair point. I guess a better analogy might be to operations on normal integer types, where overflow is considered an error but that is not reflected in default operator function signatures.

I do want to circle back a bit and say that my mention of signaling NaNs would probably have been better served by a discussion of floating point exceptions more generally. In particular, I feel like existing IEEE floating point technically supports something like what you propose via hardware floating point exceptions and/or sNaNs, but I don't know how well those capabilities are actually supported (e.g., from what I remember the C++ interface for dealing with that kind of thing was clunky at best). I want to say that lifting those semantics into programming languages might interfere with normally desirable optimizations as well (e.g., effectively adding a branch after floating point operations might interfere with vectorization), though I suppose Rust could always pull what it did with integer overflow and turn off checks in release mode, as much as I dislike that decision.


Most of the herpesvirus family have associations with neurodegenerative disorders, also including HSV.

A lack of oral hygiene and gum disease is associated with neurodegeneration.

Lots of metabolic diseases have associations with neurodegenerative disorders: insulin, kidney, and liver dysfunction.

The gut microbiome...

Putting immune or metabolic stress on the brain can cause it to go into this disease state death spiral.


> A lack of oral hygiene and gum disease is associated with neurodegeneration.

It's important to remember that association/correlation is not causality. People who brush their teeth reliably are probably more likely to exercise and do other healthy behaviors, too (avoid smoking, ...).


It's also very possible to practice great oral hygiene and have bad gum disease. Gum disease seems to be carried by a potentially strong genetic risk factor.

Oral flora is also mutable and impacted by environment and life.

Kiss the wrong person (totally by chance) and you'll start getting cavities left and right.


That's been studied and the evidence suggests that there is some causation. Bacteria that cause your gum disease can get into the bloodstream and reach the brain, where they release enzymes that cause inflammation and can damage cells.

In particular this can seriously impair microglial cells which is something you really don't want to have happen if you value maintaining a well functioning brain.


There's a hypothesized mechanism, but again, no actual demonstration of causality. No one is RCTing brushing teeth, for obvious reasons.

Proposed mechanisms are better than statistical handwaving.

This gives researchers (the lab kind) something to investigate.

I respect this kind of science a lot more than statistical paper pushing.


As nice as this statistical thinking alone is, it can also slow things down.

There's a reason why this finding is valuable. It suggests a mechanistic hypothesis that bacteria are entering the bloodstream, heart, and passing the blood-brain barrier.

This is a very valuable line of investigation that can lead to a smoking gun for one class of causal mechanisms and potentially to preventative care or treatment.

If we blindly follow just the statistics, we'd never get any real science done.

Correlation does not imply causation. But when it gives you something to investigate, don't sit on it.


We're too early.

This is AI's "dialup era" (pre-56k, maybe even the 2400 baud era).

We've got a bunch of models, but they don't fit into many products.

Companies and leadership were told to "adopt AI" and given crude tools with no instructions. Of course it failed.

Chat is an interesting UX, but it's primitive. We need better ways to connect domains, especially multi-dimensional ones.

Most products are "bolting on" AI. There are few products that really "get it". Adobe is one of the only companies I've seen with actually compelling AI + interface results, and even their experiments are just early demos [1-4]. (I've built open source versions of most of these.)

We're in for another 5 years of figuring this out. And we don't need monolithic AI models via APIs. We need access to the AI building blocks and sub networks so we can adapt and fine tune models to the actual control surfaces. That's when the real take off will happen.

[1] Relighting scenes: https://youtu.be/YqAAFX1XXY8?si=DG6ODYZXInb0Ckvc&t=211

[2] Image -> 3D editing: https://youtu.be/BLxFn_BFB5c?si=GJg12gU5gFU9ZpVc&t=185 (payoff is at 3:54)

[3] Image -> Gaussian -> Gaussian editing: https://youtu.be/z3lHAahgpRk?si=XwSouqEJUFhC44TP&t=285

[4] 3D -> image with semantic tags: https://youtu.be/z275i_6jDPc?si=2HaatjXOEk3lHeW-&t=443

edit: curious why I'm getting the flood of downvotes for saying we're too early. Care to offer a counter argument I can consider?


This is AI's Segway era. Perfectly functional device, but the early-2000s notion that it was going to become the primary mode of transportation was just an investor-fueled pipe dream.

Just add a stick and sharing: the scooters are quite successful

I never said they weren't successful.

AI is going to be bigger than Segway / personal mobility.

I think dialup is the appropriate analogy because the world was building WebVan-type companies before the technology was sufficiently wide spread to support the economics.

In this case, the technology is too concentrated and there aren't enough ways to adapt models to problems. The models are too big, too slow, not granular enough, etc. They aren't built on a per-problem-domain basis, but rather as "one-size-fits-all" models.


You want to build a world where roll back is 95% the right thing to do. So that it almost always works and you don't even have to think about it.

During an incident, the incident lead should be able to say to your team's on-call: "Can you roll back? If so, roll back," and the on-call engineer should know whether it's okay. By default it should be, if you're writing code mindfully.

Certain well-understood migrations are the only cases where roll back might not be acceptable.

Always keep your services in a "rollback-able", "graceful fail", "fail open" state.

This requires tremendous engineering consciousness across the entire org. Every team must be a diligent custodian of this. And even then, it will sometimes break down.

Never make code changes you can't roll back from without reason and without informing the team. Service calls, data write formats, etc.

I've been in the line of billion dollar transaction value services for most of my career. And unfortunately I've been in billion dollar outages.


"Fail open" state would have been improper here, as the system being impacted was a security-critical system: firewall rules.

It is absolutely the wrong approach to "fail open" when you can't run security-critical operations.


Cloudflare is supposed to protect me from occasional ddos, not take my business offline entirely.

This can be architected in such a way that if one rules engine crashes, other systems are not impacted and other rules, cached rules, heuristics, global policies, etc. continue to function and provide shielding.

You can't ask for Cloudflare to turn on a dime and implement this in this manner. Their infra is probably very sensibly architected by great engineers. But there are always holes, especially when moving fast, migrating systems, etc. And there's probably room for more resiliency.


Appears to be fixed now. Just lost 30 minutes of work.

If this is unwrap() again, we need to have a talk about Rust panic safety.


Time to rewrite Rust’s unwrap() in Rust obviously.

Does it make it worse or better if I say it's RSC?

https://www.cloudflarestatus.com/incidents/lfrm31y6sw9q


Well, technically RSC was messed up, and then the hotfix for the messed up RSC was itself messed up. I guess there’s a lot of blame to go around.

Now multiply that 'just' by the number of people affected.
