Viewing corporations as amoral bots that are justified in squeezing every bit of profit out of humans is exactly what is wrong with our society. Someone at a big tech company invented this dark pattern, and they think they're awesome for finding a loophole in well-meaning regulation, at the cost of the customer they supposedly serve. That person is the problem, and so are the people who followed them.
How else should we view them? Walks like a duck, quacks like a duck, probably a duck.
Nobody justified the behavior, only stated that corporations have proven over time to generally seek profits over all else. They provide legal cover for bad-faith actions. That wasn't the original intention, but it is absolutely the current state of the world.
Corporations are the mechanism by which bad actors are shielded from responsibility. Limited liability is used in bad faith in these cases; regulating this bad-faith usage should impact the individuals responsible for the implementation, but it should also impact those not directly involved who allowed it to happen in the first place, including board members, management, and investors (if you really want to see change, start fucking with people's money when they allow bad things to happen through inaction).
Unless you're advocating slaughtering 90% of humanity, what is the purpose of this line of thinking?
Sure, some of you are just so good and nice that you're going to spend all of your time trying to better your fellow man no matter the incentives. The rest of us are spending our time and energy trying to better ourselves. It's better for everyone if the rules of the game are set up so those actions create positive externalities.
We tend to lock up people whose actions are amoral enough. The problem with thinking that it's normal for corporations to act amorally is that it also means we forget it's people making these decisions, and that those people should be held accountable.
For this "modern" view, you have to look back to 1896, when New Jersey made it easy to create for-profit corporations beholden only to shareholders as a way to attract investment to the state.
It's really not even primarily the privately-held corporations that are the problem. Some family business, even if it's big, is more likely to care about its reputation because that's their family's company and it's still going to be their family's company in 50 years or more.
Whereas you get publicly-traded companies where the primary shareholders are investment funds, whose managers get bonuses based on short-term results and who may not be in the same job, or have the fund holding the same companies, as little as a year from now. So their incentive is to have companies squeeze customers for short-term gains and then choose the right time to pawn the shares off on some bag holders who see strong recent numbers and don't realize what that strategy does to the company's long-term prospects.
Basically, the further removed those benefiting financially are from the actual operation, the more likely the business is to care about money over everything else. The public/private distinction matters less: private companies can still have external investors calling the shots.
> Viewing corporations as amoral bots that are justified in squeezing every bit of profit out of humans
Literally what a corporation is.
This is capitalism, mate. People will do basically anything with the "for the company" excuse. If they don't, they will be out of a job and eventually starve.
Laws are the only things that can limit corporations. Without those we'd still have child labor, 14-hour shifts, and no weekends.
Why is that person a problem? That is why the rule of law exists, ideally, so that we don't run society on arbitrary outraged moral judgement. E.g. many people are morally outraged by the presence of any illegal immigrants, and others are outraged by any enforcement against undocumented immigrants. If we base decisions on arbitrary outraged moral judgement, it's not going to go well.
A "loophole" is only a "loophole" to someone who agrees with your reading of the rule's intent. And I say that as someone who agrees in this particular instance.
That person is a problem because low-trust environments are inherently low-privacy and low-efficiency environments. Allowing a small portion of the population to destroy trust and then justifying it with "well there was no explicit rule against it" is parasitic on the whole society. It's better to stand up and say "this is unacceptable and clearly not what was asked for".
That is only as far as you or I are concerned. An environment where you first write the rules and then someone can arbitrarily come and say "nah, that's not what we meant" (with real consequences attached) is far worse than any low-trust environment. Vague rules with selective/interpretative enforcement are in fact what authoritarian countries like Russia/China tend to use: disturbing social harmony is illegal, and all the right-thinking people know it when they see it.
> And most of the steps people do to mitigate privacy violations (TOR, pihole, VPNs, etc.) probably make any signal you do put out more scrutinized.
If you're using them correctly there is no way to scrutinize your traffic more; these comments just spread FUD for no good reason. How are "they" unable to catch darkweb criminals for years and even decades, yet somehow able to tell it's me browsing reddit over Tor?
My take: if you do it correctly you're a very small minority of people and most would probably be concerned at your level of paranoia if you told them every detail of your setup. Turns out opsec is pretty difficult to achieve. Also unless you're a criminal you're probably wasting a lot of time for no real gain.
I use a pihole, ublock, a VPN for some devices, and my own OPNSense router w/ very strict settings. The amount of privacy I think I have from all that is next to nothing if someone were actually interested in what I was doing. I'd probably just get one of my boxes shelled and that's the end of that. Mostly what I'm trying to do is block 1) some for-the-lulz Russian teenager, 2) the shady ad networks hoovering up everything all the time, and 3) my IoT devices like TVs and Hue light bulbs from ever accessing any part of the rest of my network.
You'll also notice that darkweb criminals are getting caught more and more frequently these days because governments have decided to no longer tolerate it. I feel bad for you if you're in a ransomware gang these days.
And if you do follow these arrests, you'll notice that it's old-fashioned investigations that catch them: tracing behavior, log-in times, etc. The comment I was answering implied you lose anonymity by using these tools, which you don't.
Excited for stuff like this as a CRPG player. A possible future for designing a CRPG NPC might be to write a bunch of memories, descriptions, likes and dislikes, etc. in text instead of trying to convey those through branching dialogue.
Yeah, that's the kind of direction I'm hoping for too. It could be hard to follow a narrative without branching dialogue, but maybe it will give rise to a new kind of NPC.
This content isn't as overt as it may seem; maybe you did come across it and just didn't notice the flashing. Those "in the know", generally younger people whose friends told them about flashtok, know what to look for.
Also: kids click on links adults ignore without thinking. Our brains have built-in filters for avoiding content we don't want; for kids everything is novel.
That's a silly distinction to make, because there's nothing stopping you from giving an agent access to a semantic search.
If I make a semantic search over my organization's Policy As Code procedures or whatever and give it to Claude Code as an MCP, does Claude Code suddenly stop being agentic?
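For concreteness, here's a minimal sketch of what that might look like: a semantic-search tool exposed over MCP so an agent like Claude Code can call it. It assumes the MCP Python SDK's FastMCP helper; the embed() function and the in-memory INDEX are toy stand-ins I made up, not anyone's real setup.

    # Hypothetical sketch: expose a semantic search over internal policy docs
    # as an MCP tool. Assumes the MCP Python SDK (FastMCP); embed() and INDEX
    # are toy stand-ins for a real embedding model and vector store.
    import math
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("policy-search")

    # (chunk_text, embedding_vector) pairs, embedded ahead of time
    INDEX: list[tuple[str, list[float]]] = []

    def embed(text: str) -> list[float]:
        # Toy bag-of-words embedding so the sketch runs end to end;
        # swap in a real embedding model, local or hosted.
        vec = [0.0] * 64
        for word in text.lower().split():
            vec[hash(word) % 64] += 1.0
        return vec

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    @mcp.tool()
    def search_policies(query: str, k: int = 5) -> list[str]:
        """Return the k policy chunks most similar to the query."""
        q = embed(query)
        ranked = sorted(INDEX, key=lambda item: cosine(q, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

    if __name__ == "__main__":
        mcp.run()  # serve over stdio; register this script with the agent as an MCP server

The agent still decides when to call the tool and what to do with the results, which is the point: bolting a retrieval tool onto it doesn't make it any less agentic.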
The results are interesting for showing the efficacy of small, fine-tuned models that can be run locally. AI providers as a business need their do-all models to be better than these if they want long-term revenue through the APIs, right?
It depends on the provider and their goals. We have recently seen a schism between OpenAI and Anthropic, whereby Anthropic is going all in on automation and programming and OpenAI is going all in on, I guess, a personal assistant / personal-tasks AI.
Yes, that's what it is. Kagi as a brand is LLM-optimist, so you may be fundamentally at odds with them here... If it lessens the issue for you, the sources of each item are cited properly in every example I tried, so maybe you could treat it as a fancy link aggregator.
Kagi founder here. I am personally not an LLM-optimist. The thing is that I do not think LLMs will bring us to "Star Trek" level of useful computers (which I see humans eventually getting to) due to LLMs' fundamentally broken auto-regressive nature. A different approach will be needed. Slight nuance but an important one.
Kagi as a brand is building tools in service of its users, with no particular affinity towards any technology.
You claimed reading LLM summaries will provide complete understanding. Optimistic would be a charitable description of this claim. And optimism is not limited to the most optimistic.
Another LLM-pragmatist here. I don't see why we should treat LLMs differently than any other tool in the box. Except maybe that it's currently the newest and most shiny, albeit still a bit clunky and overpriced.
Fwiw, I love your approach to AI. It's been very useful to me. Quick Answers especially has been amazingly accurate; I've used it hundreds of times, if not thousands, and routinely check the links it gives.
I'm about as AI-pessimist as it gets, but Kagi's use of LLMs is the most tasteful and practical I've seen. It's always completely opt-in (e.g. "append a ? to your search query if you want an AI summary", as opposed to Google's "append a swear word to your search query if you don't want one"), it's not pushy, and it's focused on summarizing and aggregating content rather than trying to make it up.
Google thinks the same of me and I don't even edit the URL. I can have a session working just fine one night and come back the next day, open a new tab to search for something, and get captcha'd to hell. I'm fairly sure they just mess with Firefox on purpose. I won't install Brave, Chrome, or Edge out of principle either. Safari works fine, but I don't like it.
Google has gotten amazingly hostile toward power users. I don't even try to use it anymore. It almost feels like they actively hate people who learned how to use their tools.
I consider myself a major LLM optimist in many ways, but if I'm receiving a once per day curated news aggregation feed I feel I'd want a human eye. I guess an LLM in theory might have less of the biases found in humans, but you're trading one kind of bias for another.
This isn't really comparable. A newspaper is a single source. New York Times is a newspaper, CNN (or at least a part of it) is a newspaper. Services like Kagi News, whether AI or human-curated, try to do aggregation and meta-analysis of many newspapers.
Yeah, I agree. The entire value/fact dichotomy that the announcement bases itself on is a pretty hot philosophical topic, and I lean against Kagi's side of it. It's just impossible to summarize any text without imparting some sort of value judgement on it, and therefore "biasing" the text.
> It's just impossible to summarize any text without imparting some sort of value judgement on it, therefore "biasing" the text
Unfortunately, the above is nearly a cliché at this point. The phrase "value judgment" is insufficient because it occludes some important differences. To name just two that matter: there is a key difference between (1) a moral value judgment and (2) selection & summarization (often intended to improve information density for the intended audience).
For instance, imagine two non-partisan medical newsletters. Even if they have the same moral values (e.g. rooted in the Hippocratic Oath), they might have different assessments of what is more relevant for their audience. One could say both are "biased", but does doing so impart any functional information? I would rather say something like "Newsletter A is comprised of Editorial Board X with such-and-such a track record and is known for careful, long-form articles" or "Newsletter B is a one-person operation known for a prolific stream of hourly coverage." In this example, saying the newsletters differ in framing and intended audience is useful, but calling each "biased in different ways" is a throwaway comment (having low informational content in the Shannonian sense).
Personally, instead of saying "biased" I tend to ask questions like: (a) Who is their intended audience? (b) What attributes and qualities consistently shine through? (c) How do they make money? (d) Is the publication/source transparent about their approach? (e) What is their track record on accuracy, separating commentary from factual claims, professional integrity, disclosure of conflicts of interest, intellectual honesty, epistemic standards, and corrections?
> The entire value/fact dichotomy that the announcement bases itself on
Hmmm. Here I will quote some representative sections from the announcement [1]:
>> News is broken. We all know it, but we’ve somehow accepted it as inevitable. The endless notifications. The clickbait headlines designed to trigger rather than inform, driven by relentless ad monetization. The exhausting cycle of checking multiple apps throughout the day, only to feel more anxious and less informed than when we started. This isn’t what news was supposed to be. We can do better, and create what news should have been all along: pure, essential information that respects your intelligence and time.
>> .. Kagi News operates on a simple principle: understanding the world requires hearing from the world. Every day, our system reads thousands of community curated RSS feeds from publications across different viewpoints and perspectives. We then distill this massive information into one comprehensive daily briefing, while clearly citing sources.
>> .. We strive for diversity and transparency of resources and welcome your contributions to widen perspectives. This multi-source approach helps reveal the full picture beyond any single viewpoint.
>> .. If you’re tired of news that makes you feel worse about the world while teaching you less about it, we invite you to try a different approach with Kagi News, so download it today ...
I don't see any evidence from these selections (nor the announcement as a whole) that their approach states, assumes, or requires a value/fact dichotomy. Additionally, I read various example articles to look for evidence that their information architecture groups information along such a dichotomy.
Lastly, to be transparent, I'll state a claim that I find to be true: for many/most statements, it isn't that difficult or contentious to separate out factual claims from value claims. We don't need to debate the exact percentages or get into the weeds on this unless you think it will be useful.
I will grant this -- which is a different point than the one the commenter above made -- when reading various articles from a particular source, it can take effort and analysis to suss out the source's level of intellectual honesty, ulterior motives, and other questions I mention in my sibling comment.
Hard pass then. I’m a happy Kagi search subscriber, but I certainly don’t want more AI slop in my life.
I use RSS with newsboat and I get mainstream news by visiting individual sites (nytimes.com, etc.) and using the Newshound aggregator. Also, of course, HN with https://hn-ai.org/
You can also convert regular newspapers into RSS feeds! NYTimes and Seattle Times have official RSS feeds, and with some scripting you can also get their article contents.
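If it helps anyone, here's roughly what that scripting can look like: a small sketch using the feedparser, requests, and beautifulsoup4 packages. The feed URL is a placeholder I made up, and the paragraph-scraping is deliberately crude; real publishers usually need per-site selectors, and check their terms and paywall rules before scraping.

    # Rough sketch: read a publisher's RSS feed and pull article text.
    # FEED_URL is a placeholder; use the publisher's real feed URL.
    # Requires the feedparser, requests, and beautifulsoup4 packages.
    import feedparser
    import requests
    from bs4 import BeautifulSoup

    FEED_URL = "https://example.com/rss/homepage.xml"  # placeholder

    def fetch_articles(feed_url: str, limit: int = 5) -> list[dict]:
        feed = feedparser.parse(feed_url)
        articles = []
        for entry in feed.entries[:limit]:
            resp = requests.get(entry.link, timeout=10)
            soup = BeautifulSoup(resp.text, "html.parser")
            # Crude extraction: join all <p> tags. Real sites usually need
            # per-site selectors or a readability-style library.
            body = "\n".join(p.get_text(strip=True) for p in soup.find_all("p"))
            articles.append({"title": entry.title, "link": entry.link, "text": body})
        return articles

    if __name__ == "__main__":
        for a in fetch_articles(FEED_URL):
            print(a["title"], "-", a["link"])

From there it's easy to dump the results into whatever format your reader (newsboat or otherwise) expects.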