What do you mean, "EV interior design"? Why does the interior have to differ between an EV and a gas-powered car? You might have some different gauges, a control or two that is different, but other than that, why does an EV have to look a certain way?
The Lexus CT200h has one of the best interiors ever designed. The design language was tactile: every single button and control had a distinct action and feel.
There’s a roughly 7-inch screen above the vents that flips up whenever the car is off, but using the screen is optional. The screen sits high, near your line of sight to the road, so it’s very safe to use. There’s a small joystick to move the cursor.
The CT also has a stateless, springy gear selector: it works like a manual gear selector, but after you select a gear it springs back to center. It even has tactile blocking for gears you can’t enter yet. It felt extremely satisfying.
The CT got a 10/10 from me: like a small aircraft cockpit, with enough knobs and displays to be interesting but not over the top. It made a hybrid micro-hatchback feel exciting.
BMW interiors from before the iPad-glued-to-the-dash era are of the same quality. The automatic gear shifter is stateless, but it has an extremely satisfying clunk, and there are buttons and dials for everything. Note that a stateless gear shifter isn't ideal if you ever need to move your car on a dead battery: in a BMW you need to go under the car and screw in a bolt that releases the parking pawl so the transmission is in neutral.
It still looks like a big computer screen, I'm afraid. Although making it seamless with the dash is a step up, you're right. That tiny paddle gear shift looks horrendous, though.
I would really like to have analog features back, buttons and all that, in an EV.
Rivians don't even have a physical vent control (to aim the vents). That alone disqualifies it from anything close to "excellent". And that's before mentioning all the missing physical buttons that should've been there.
Touch screen buttons, especially the ones on the far edge of the center screen, are harder to accurately hit for most people. More physical buttons = better = more premium.
It looks like a weird mix of nothing: a pointless clock, that screen on the right that only creates discomfort, and a big screen that is big only to follow the trend.
In a Tesla (the trendsetter for this), the big screen is functional: it can show you multimedia, and while you charge you can watch Netflix.
Polarr's Mission: For over a decade, Polarr has provided photographers with intelligent, intuitive AI-powered tools for photo and video editing, culling, and workflow automation. The company pioneered web and mobile photo editing apps and powered photo enhancements on hundreds of millions of devices through edge AI SDKs. Polarr is a San Jose-based startup that has built a community of creators with AI-powered editing tools and a catalog of over 1 million filters generated monthly. Pixieset is a Vancouver-based all-in-one SaaS platform serving over 600,000 photographers with client galleries, websites, and online stores.
Founding & Early Years
Polarr was founded in 2014 by Stanford graduate Borui Wang and Derek Yan. The company launched its online photo editor in February 2015. The app achieved remarkable early traction, receiving 250,000 downloads in its first 48 hours.
Product Evolution
• June 2015: First mobile version of Polarr Photo Editor released
• Fall 2015: Launched Polarr Photo Editors for Windows 10 and macOS
Polarr was named Apple's Best of the App Store for 2015 and 2016.
• December 2017: Released Album+, an app using on-device AI to organize photos
• March 2019: Announced $11.5M Series A funding round led by Threshold Ventures, with participation from Cota Capital, Pear VC, StartX, and ZhenFund
• April 2022: Launched Polarr 24FPS app for video editing with Polarr filters
• January 2023: Launched Polarr Next, an AI web app that learns user style for automatic photo updates
• 2023: Introduced Polarr AI Copilots (beta) for transforming text into photos, videos, and designs
Public revenue figures for Polarr are not disclosed. However, the company demonstrated strong early traction:
• 4 million Monthly Active Users (MAUs) as of 2019, with only 30% based in the US
• Enterprise partnerships with major OEMs including Samsung, LG, Oppo, and Lenovo, whose native camera apps integrated Polarr's technology
• Enterprise value estimated at $46–69M as of recent valuation data
The company operates a freemium model with premium subscription tiers ($2.39/month for filter storage and premium filters, $4.79/month for all features), but specific ARR or annual revenue figures have not been publicly released.
The government is bound by acquisition processes for these large contracts: they put out RFPs and companies compete for the contract. All Google has to do is not bid for the next contract.
Pretty sure the 13th Amendment guarantees this, in theory. (Corporations aren't natural persons, but forcing a corporation to provide a service boils down to forcing people to provide a service.)
The Supreme Court has upheld the Selective Service Act as allowable under the 13th Amendment as something other than involuntary servitude, repeatedly since 1918. So personhood isn't much of an obstacle to conscription.
Won't all the ad revenue come from commerce use cases ... and they seem to be excluding that from this announcement:
> AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce
Why bother with ads when you can just pay an AI platform to prefer products directly? Then every time an agentic decision occurs, the product preference is baked in, no human in the loop. AdTech will be supplanted by BriberyTech.
The only chance of that happening is if Altman somehow feels sufficiently shamed into abandoning the lazy enshittification track to monetization.
I don't think they have an accurate model for what they're doing - they're treating it like just another app or platform, using tools and methods designed around social media and app store analytics. They're not treating it like what it is, which is a completely novel technology with more potential than the industrial revolution for completely reshaping how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.
The total mismatch between what they're doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He's savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn't seem like he's got the right mental map for AI, for all he talks a good game.
There are a number of different LLMs, and there's no reason they all need to monetize the same way. If you are replacing web search, then ads are probably how you earn money. But if you are replacing the work people do for a company, it makes more sense to charge for the work. I'm not sure their current token charges are the right model, but it seems like a better track.
What other interaction models exist for Claude given that Anthropic seems to be stressing so much that this is for "conversations"?
(Props for them for doing this, don't know how this is long-term sustainable for them though ... especially given they want to IPO and there will be huge revenue/margin pressures)
This affects static libc only. If you pass -dynamic -lc then the libc functions are provided by the target system. Some systems only support dynamic libc, such as macOS. I think OpenBSD actually does support static libc though.
> I think OpenBSD actually does support static libc though.
How does that work, with syscalls being unable to be called except from the system’s libc? I’d be a bit surprised if any binary’s embedded libc would support this model.
For static executables, “the system’s libc” is of course not a thing. To support those, OpenBSD requires them to include an exhaustive list of all addresses of syscall instructions in a predefined place[1].
(With that said, OpenBSD promises no stability if you choose to bypass libc. What it promises instead is that it will change things in incompatible ways that will hurt. It’s up to you whether the pain that thus results from supporting OpenBSD is worth it.)
> How does that work, with syscalls being unable to be called except from the system’s libc?
OpenBSD allows system calls being made from shared libraries whose names start with `libc.so.' and all static binaries, as long as they include an `openbsd.syscalls' section listing call sites.
You can. There is a thread-unsafe implementation here <https://gist.github.com/oguz-ismail/72e34550af13e3841ed58e29...>. But the listing needs to be per system call number, so this one only supports system calls 1 (_exit) and 4 (write). It should be fairly easy to automatically generate the complete list but I didn't try it.
Good point. C's "freestanding" mode, analogous to Rust's no_std, does not provide any functions at all, just some type definitions and constants which evaporate when compiled. Rust's no_std can not only compute how long a string is, it can unstably sort a slice, do atomic operations if they exist on your hardware, and lots of other fancy stuff; as a consequence, even no_std ships an actual library of code. A similar but perhaps less organized situation exists in C++. Most of the time this is simply better: why hand-write your own crap sort when your compiler vendor can provide an optimized sort for your platform? But on very, very tiny systems even that might be unaffordable.
Anyway, C doesn't have Rust's core-versus-std distinction, so libc is a muddle of both "just useful library stuff" like strlen or qsort and features like open which are bound to operating-system specifics.