Congrats, you turned Slack into (an even worse version of) Jira.
This got me thinking though. Instead of turning your chat into a ticketing system, has anyone ever turned the ticketing system into a chat?
Like instead of “comments” on a ticket, you get a live chat channel (that can be reviewed later as “comments” for historical data).
Most companies I’ve worked for will set up Slack channels in a similar fashion for projects and initiatives, but these are not really ergonomic for someone who prefers to “live” in the ticketing system for work management. And context gets lost the older things get.
You could pair this with a general channel for unfocused chitchat, but all relevant work would be discussed under the work item itself. Perhaps a “Slack-like” view where you can see all your watched work items at once in “channels”.
* The (brilliant) infrastructure engineer who described his modus operandi as 'I read stuff on Reddit and then try it out.' This engineer is now worth, as a conservative estimate, in the neighborhood of $50 million. So maybe more of us should be doing that.
* Another infrastructure engineer, also very effective, who made a habit of booking an external training session (sometimes a series, weekly) for how to set up and integrate every piece of technology we used.
* An engineer (this one is quite famous, and we have only interacted professionally a few times) who simply wrote the best comments in the world. Every method was adorned with a miniature essay exploring the problem to be solved, the tradeoffs, the performance, the bits left undone. I think about those comments quite often.
As an addendum, though, I will say that the best engineers all shared one trait: they kept trying things until they got something working. That alone will take you pretty far.
Not that Google is any better, but I really want Google to put more effort into Workspace/GSuite and bring it up to par with M365 and all it includes, at least to make Microsoft sweat a little over the possibility that one day a competing product could lure enterprises away. Workspace needs better DLP controls, more of the enterprise-y things that MS wins at, a bundled MDM that can manage all OSes, and better identity.
Even if the behemoths won't switch due to re-training & switching costs, MS desperately needs a competitor in this space. Barring that, they need to be broken up and forced to sell each bundled product separately, priced appropriately. Otherwise, who can compete with getting MDM, Identity, 2TB personal storage, 2TB SharePoint storage, Teams, DLP, and EDR, all for $22/user/month?
>> 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy. That’s really all it takes.
One of the truest things I've read on HN. I've also tried to apply this concept with a small free image app I made (https://gerry7.itch.io/cool-banana). Did it for myself really, but thought others might find it useful too. Fed up with too many options.
> It is interesting though how this same conversation doesn't exist in the same way in other areas of computing like video game consoles
Historically, when the first game consoles with cartridges appeared, the hardware was much more specialized than the personal computers available at the time. Game system developers designed hardware specifically for games, and game developers developed for those specific systems. Physical media also gave games an ownership model and a form of DRM.
In 2003, Apple released the iTunes Music Store, partnering with music labels to counteract the prevalence of music piracy. That was the first major digital marketplace with DRM, and it came well before the App Store in 2008!
In 2005-2006, digital distribution came to video game consoles with the Xbox 360, PlayStation 3, and Wii. Being game consoles with unique hardware, they kept the restricted licensed development model of previous generations.
The iPhone and App Store just followed that pattern. Unique hardware and a licensed digital marketplace to go with it.
Now, the hardware across video game consoles, smartphones, and personal computers is mostly unified, and the only real difference is software, but the restricted marketplace model still remains.
---
> The fact that mobile phones aren't yet just a standard type of portable computer with an open-ish hardware/driver ecosystem that anybody can just make an OS for (and hence allow anybody to just install what they want) is kind of wild IMHO. Why hasn't the kind of fervor that created Linux driven engineers to fix their phones?
DRM. There are already devices where you can unlock the bootloader and install any OS on it. But then you won't be able to install apps that use the Play Integrity API to ensure DRM. Companies/developers want revenue and develop apps that require Play Integrity.
Any device that doesn't have DRM will never support a paid digital marketplace or paid content streaming.
> Is Android and iOS just good enough to keep us complacent and trapped forever?
Probably. Microsoft tried a DRM-supported OS with Windows Phone, and that failed.
---
That being said, digital marketplaces and DRM have their place to prevent piracy and allow developers and creators to make a living.
If someone has a solution that prevents piracy without a root of trust, that would be ideal.
I've been meaning to ask what the motivation behind your project is. Why would you want a safe-c? When I saw the headline I was worried that all my runtime code would break (or get slower), because I do some very unsafe things (I have a runtime that I compile with -nostdlib).
I'm also tempted to write a commercial C++ compiler, but it feels like a big ask; paying for a compiler sounds ridiculous even if it reduces your compile time by 50x.
This is exactly the direction I see agents going. They should be able to write their own tools, and we are launching something around that soon.
That being said...
LLMs are amazing for some coding tasks and fail miserably at others. My hypothesis is that there is some practical limit to how many concepts an LLM can take into account, no matter the context window, given current model architectures.
For a long time I wanted to find some sort of litmus test to measure this, and I think I found one: an easy-to-understand programming problem that can be done in a single file, yet is complex enough. I have not found a single LLM that can build a solution without careful guidance.
I have a pretty good cross-platform dotfiles setup for both macOS and Linux that I provision with Chezmoi. I try to repeat myself as little as possible.
I use Linux at work and for gaming, and macOS for personal stuff. They both build from the same dotfiles repository.
Some things I've learned:
- Manually set Mac's XDG paths to be equal to your Linux ones. It's much less hassle than using the default system ones.
- See my .profile as an example on how I do this: https://github.com/lkdm/dotfiles/blob/main/dot_profile.tmpl
- Use Homebrew on both Linux and macOS for your CLI tools
- Add the macOS-specific $PATH locations: /bin, /usr/sbin, and /sbin
- Do NOT use Docker Desktop. It's terrible. Use the CLI version, or use the OrbStack GUI application if you must.
- If you use iCloud, make a Zsh alias for the iCloud Drive base directory
- macOS ships with outdated bash and git. If you use bash scripts with `#!/usr/bin/env bash`, you should install a newer version of bash with brew and make sure Homebrew's opt path comes before the system one, so the new bash is prioritised (see the sketch below).
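Concretely, the macOS-relevant bits can look something like this (a simplified sketch rather than my literal dot_profile; the /opt/homebrew prefix assumes Apple Silicon):

    # Pin the XDG base directories to the Linux defaults on macOS too
    export XDG_CONFIG_HOME="$HOME/.config"
    export XDG_DATA_HOME="$HOME/.local/share"
    export XDG_STATE_HOME="$HOME/.local/state"
    export XDG_CACHE_HOME="$HOME/.cache"

    if [ "$(uname)" = "Darwin" ]; then
        # Homebrew's bash ahead of the system one, then the macOS-specific dirs
        export PATH="/opt/homebrew/opt/bash/bin:$PATH:/bin:/usr/sbin:/sbin"
        # Zsh alias for the awkwardly named iCloud Drive base directory
        alias icloud='cd "$HOME/Library/Mobile Documents/com~apple~CloudDocs"'
    fi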
I hope this is helpful; feel free to ask me anything about how I set up my dotfiles.
2.4GHz is pretty much only used by IoT, and you generally don't care about channel width there. When your client device (laptop, phone) downgrades to 2.4GHz it might as well disconnect, because it's unusable.
5GHz gets stopped by drywall, so unless your walls are positioned just right to bounce the signal, you need an AP in every room. Ceiling mounting is pretty much required, and you're pretty much free to use channels as wide as your devices support and local laws allow.
6GHz gets stopped by a piece of paper, so it's the same as 5GHz except you won't get 6GHz at all unless you have direct line of sight to the AP.
As Nevermark mentioned, the vast majority of people don't have the resources needed to protect their privacy via individual actions. In other words, without institutional support there will be no privacy.
With regard to the original article, it's well written, as is another one by the same author: "How They Control You" [1]
The two topics are tightly connected when you consider the media angle. I would encourage the author to write about the two issues together from that angle.
Yes, BUT, I want very strong guarantees that any such agents are loyal to me, the enterprise customer, first and foremost.
If anything, companies have MORE to lose than individuals here. If ChatGPT can anticipate your corporate decisions, then it can front run you and steal your whole business.
This absolutely happens:
- Amazon watches for successful third party products, then releases clones under their “Amazon Basics” brand and crushes the original.
- Robinhood makes most of their money selling real-time activity data to market makers, ensuring that big financial firms win over retail investors every time.
If OpenAI manages to do this in the general case… well then the valuations start to make a lot more sense.
I noticed I had an immediate bias against candidates who showed up to interviews using Windows (except for one person who was in WSL and seemed very comfortable in bash), who didn't have their SSH key set up for cloning the GitHub repo we used for our interview, who fumbled back and forth with their mouse between VS Code and the browser without using all their screen real estate, or who didn't know even the most basic keyboard shortcuts. (I nearly cut an interview short once when I saw someone right-click copy, right-click paste in VS Code, but I wanted to give them a fair shake, so I gritted my teeth and went through with the rest of the interview. They did poorly.) I never used it as a for/against factor, but to me a lack of interest in computers, and a lack of familiarity with the tools of our trade, is a red flag.
On the flip side, immediate green flags for me were: using Linux, using keyboard shortcuts to manipulate windows and navigate the IDE, using an editor other than VS Code (vim/nvim or emacs are huge green flags), having custom scripts, having custom themes, or, the biggest one, self-hosting some applications. And lo, these candidates also seemed to perform the best in my experience.
> Their ability to refactor a codebase goes in the toilet pretty quickly.
Very much this. I tried to get Claude to move some code from one file to another. Some of the code went missing. Some of it was modified along the way.
Humans have strategies for refactoring, e.g. "I'm going to start from the top of the file and Cut code that needs to be moved and Paste it in the new location". LLMs don't have a clipboard (yet!), so they can't do this.
Claude can only reliably do this kind of refactoring if it can keep the start and end files in context. This was a large file, so it got lost. Even then, it needs direct supervision.
Unsurprising, given it’s been an open secret for over a decade that Meta employees will (if you have the right contacts or enough money) orchestrate banning or seizing long-standing active accounts with desirable usernames and hand them to their friends or the highest bidder.
I'm banned from a few subreddits for correctly pointing out that "ricing" is not a pejorative, and for explaining the history of the culture that led to extreme customization.
You have malevolent third-party bots taking advantage of poor moderation to conflate pairs of words that are similar or identical but used in different contexts, in order to silence communication.
For example, the Reddit AI bots consider "ricing" to be the same as "rice boy". The latter definitely is pejorative, but the former is not.
Just wild and absolutely crazy-making that this is even allowed, since communication is the primary means to inflict compulsion and torture these days.
Intolerable acts without due process or a rule of law lead to only one possible outcome. Coercion isn't new, but the stupid people are trying their hand for another bite at the apple.
The major platforms will not remain usable because eventually you get this hollowing out of meaning, and this behavior will either drive away all your rational intelligent contributors, or lead to accelerated failures such as evaporative cooling in the social networks. People use things because they provide some amount of value. When that stops being the case, the value disappears not overnight, but within a few months.
Just take a look at the linuxquestions subreddit since the mod exodus. They have an automated trickle of the same questions that don't really get sufficiently answered. It's all slop.
All the experienced people who previously shared their knowledge as charity have moved on, driven out by caustic harassment and the lack of proper moderation to prevent it. The mod list even hides who the mods are now, so people hit with moderator action can't point the Reddit administrators to the specific moderator who acted like a fascist dictator incapable of the basic reading comprehension common to grade schoolers (AI).
I stopped using Facebook because I saw a video of a little Australian girl, maybe 7 years old, holding a spider bigger than her face in her hand. I wrote the most internet-meme comment I could think of, “girl let go of that spider, we gotta set the house on fire”, and hit the button to post. Only it did not post; it gave me an account strike. At the time I was the only developer at my employer who managed our Facebook app integration, so I appealed it, but another AI immediately denied my appeal. Or maybe it was a really fast human, idk, but they sure didn't know meme culture.
We already live in this world for health insurance. The AI can make plausible-sounding denials, which a doctor can rubber-stamp. You have no ability to sue the doctor for malpractice, and you cannot appeal the decision.
Medical insurance is quickly becoming a simple scam where you are forced to pay a private entity that refuses to ever perform its function.
Great start! I like the aesthetic and focus on a single language. Most of all, making it open source and just getting it out there!
I'd love to collaborate, but I think we've got to look at the overall concept first. There's a lot of information on the screen, and it's not really clear how the learner journeys through it. Greatly reducing the amount of info on the screen at once, and focusing the learner's attention on a single path, would be helpful.
There are many theories of language acquisition, but I think Krashen is most on-point: we learn through comprehensible input. New vocabulary really needs to be encountered in the context of meaningful sentences that are understandable to the learner. Further, when it comes to practice, production with spaced repetition is really the most effective strategy.
I'd love to see a really great free learning tool that brings a pedagogically sound approach to Japanese learners!
The churn is real. For this reason, I’ve thought for a while now that a DE that intentionally locks its overall design and feature set once it hits 1.0, with 95% of engineering effort afterwards going towards optimization and bug fixes, would do well. A DE that doesn’t unexpectedly change, even over the course of many years, is massively attractive to many.
Probably the closest to this that exists now are XFCE and Cinnamon, but it’s for the wrong reason (those projects’ lack of resources).
I tried I2P not so long ago and was quite impressed by the design decisions and the quality of the technology. It's truly an amazing piece of software that covers basically everything you need for a distributed network.
The only thing missing is actually the community and usage, because the technology has a network effect: more users with stable routers make for a faster and more reliable network. So it's indeed slow at the moment. I highly recommend giving it a chance and playing a bit with it. Even for non-anonymity and non-security use cases, it's fun to play with hole punching, global addressing by public keys, and the like, which you can also see in things like Iroh and libp2p.
It provides a simple, universal SAM interface, plus libraries to work with it so other apps can plug in.
Yea the first few years of Facebook were magical. It felt like suddenly you could connect with your peers in a new way, discover old friends, etc. Went downhill pretty quickly though.
The rate at which AI is capable of producing code is intractable for humans to deal with. Right now the bottleneck is human reviewers. If AI ever becomes effective at generating provably correct code, it's joever.
The term that comes to mind, and one of my favorite concepts, is "progressive disclosure", something we really ought to be more mindful of.
One of the perks of just-a-text-file-with-a-bunch-of-addons is that it enables progressive disclosure - it takes no learning curve to just get in and use the tool on a basic level, but additional complexity (and power) can be introduced over time.
The problem with a purpose-built app is that there's a minimum level of new concepts to learn before the tool is even minimally useful, and that's a barrier to adoption.
A good example of this in action is something like Markdown. It's just text and will show up fine without you learning anything, but as you pick up more syntax it builds on top - and if you learn some markup syntax but not others, it doesn't prevent you from using the subset you know. There is a clear path to adding new knowledge and ability.
What this shows to me, as someone who has committed some of the unholy crimes above, is that people want their system, however esoteric, to come naturally to them.
I think reading docs, understanding a new system which someone else has designed, and fitting one's brain into _their_ organisational structure is the hard part. Harder than designing one's own system. It's the reason many don't stick with an off-the-shelf app. Including Org mode.
Adding entire files into the context window and letting the AI sift through it is a very wasteful approach.
It was adopted because trying to generate diffs with AI opens a whole new can of worms, but there's a very efficient approach in between: slice the files at the symbol level.
So if the AI only needs the declaration of foo() and the definition of bar(), the entire file can be collapsed to something like this (an illustrative sketch):
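    // file.cpp, collapsed to just the symbols the AI needs
    int foo(int x);            // declaration kept, body elided

    /* ... everything else in the file elided ... */

    int bar(int x) {           // definition kept in full
        return foo(x) + 1;
    }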
Any AI-suggested changes are then easy to merge back (renamings are the only notable exception), so it works really fast.
I am currently working on an editor that combines this approach with the ability to step back and forth between the edits, and it works really well. I absolutely love the Cerebras platform (they have a free tier directly and a pay-as-you-go offering via OpenRouter). It can get very annoying refactorings done in one or two seconds from single-sentence prompts, and it usually costs about half a cent per refactoring in tokens. It is also great for things like applying known algorithms to spread-out data structures, where including all the files would kill the context window, but pulling in individual types works just fine with a fraction of the tokens.
I wish they'd allow making issues and pull requests sponsor-only. That could enable a business model.