Hacker News | CharlieDigital's comments

Microsoft runs many of their demos on MacBooks. You missed the memo that the Windows OS is no longer their bread and butter. Go check their GitHub OSS projects (e.g. .NET): all of them have shell scripts alongside PowerShell.


This is too logical, practical, and pragmatic. Which product owner/project manager would approve such a thing!?

Being able to think of simple, practical solutions like this is one of the hardest skills to develop as a team, IMO. Not everything needs to be perfect and not everything needs a product-level fix. Sometimes a "here's the workaround" is good enough, and if enough people complain or your metrics show user friction in some journey, then prioritize the fix.

GP's example is so niche that it isn't worth fixing without evidence that the impact is significant.


The 2020 State of the Octoverse security report showed that the .NET ecosystem has, on average, the lowest number of transitive dependencies. A big part of that is the breadth and depth of the BCL, standard libraries, and first-party libraries.


The .NET ecosystem has been moving towards a higher number of dependencies since the introduction of .NET Core. Though many of them are still maintained by Microsoft.


The "SDK project model" did a lot to reduce that back down. They did break the BCL up into a lot of smaller packages to make .NET 4.x maintenance/compatibility easier, and if you are still supporting .NET 4.x (and/or .NET Standard), for whatever reason, your dependency list (esp. transitive dependencies) is huge, but if you are targeting .NET 5+ only that list shrinks back down and the BCL doesn't show up in your dependency lists again.

Even some of the Microsoft.* namespaces have properly moved into the BCL SDKs and no longer show up in dependency lists, even though Microsoft.* namespaces originally meant non-BCL first-party.
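
As a rough illustration of the point above (project layout and framework version are arbitrary examples), an SDK-style project file that targets only modern .NET needs no explicit `System.*` BCL package references, whereas a `netstandard2.0` target would pull many of them in transitively:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Targeting net8.0 only: the BCL comes from the SDK itself
         and does not appear in the dependency list. -->
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <!-- Multi-targeting (e.g. adding netstandard2.0 or net48 via
       <TargetFrameworks>) is what brings the System.* shim
       packages back into the transitive dependency graph. -->
</Project>
```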


I think first-party Microsoft packages ought to be a separate category that is more like BCL in terms of risk. The main reason why they split them out is so that they can be versioned separately from .NET proper.


macOS daily driver for last 5 years because the hardware is better; I can stay unplugged for almost my entire workday.

Lots of things irk me about macOS UX. Finder's lack of a tree view sidebar really irks me. Having to disable the silly animations and sounds when I get a new machine irks me. The absolutely terrible window tiling system irks me. When I minimize a window, I can no longer tab into it. The settings dialog's weird behavior with respect to resizing on both axes irks me. Can't use 3 monitors without an expensive DisplayLink dock, and the secondary monitors end up with limited refresh rate options. Meanwhile, I can plug just about any dock into my 5-year-old Windows laptop and multiple monitors just work. Still can't find anything as good as IrfanView (as old and dated as it is, it made working with image libraries a breeze).

Finder and the poor external monitor support somehow irk me the most: I end up typing into the CLI 90% of the time because the navigation experience is so bad, and since this is a work machine for me, the difficulty of using 3+ monitors is silly.

I get being an Apple stan (love the hardware), but the software UX is 100% bottom of the barrel stuff. Basic OS stuff like Explorer is just light years ahead of Finder.


Exactly my thought, the last paragraph.


> Can't use 3 monitors without an expensive DisplayLink dock

Or you could buy a MacBook Pro with an M4 Max and plug in four monitors without displaylink.


Something that most sub-1000€ laptops could do over a decade ago requires a MacBook costing well above 2000€ in 2025.

I'm running multiple types of machines in my home for work and personal use but macs have been by far the worst offender for multiscreen support for a long time.


Most $1000 laptops supported three monitors a decade ago?


Vintage 2015? Yes. Any Dell Latitude (mid tier business line) would have supported 3 monitors and would have been in the ~$1000 range, especially picked up off of Dell Outlet. I have Dell Latitude Windows machines from 2011 that supported 3 monitors.


Using what ports? I had Dells back then and at most they had one VGA or HDMI port.

If you’re referring to using a dongle with its own GPU like some of the modern Displaylink adapters, any Mac can do that.


Not dongle; docking station (old school ones with port on the bottom).


That wasn’t a capability of the laptop. The docking station had its own dedicated graphics hardware that drove the extra monitors. Of course the lowest end Macs can do that. When we talk about how many monitors a certain laptop can drive, we mean plugging them in directly to the laptop in some combination of HDMI ports and USB C/Thunderbolt ports.


This is embarrassing, just stop. You can go look up the specs of any chip and how many discrete displays the iGPU supported instead of bickering about what is or isn't a laptop feature. Citing sources isn't illegal or passe.

Here is the chip in my dogshit 2015 Ultrabook I still lug around, it very clearly states that the chip supports 3 displays regardless of how IO bandwidth is distributed on the mobo: https://www.intel.com/content/www/us/en/products/sku/88192/i...


Exactly how did you attach three monitors to your laptop without a docking station? I bet you the docking station had DisplayLink-based hardware/software like most of the Dell docking stations had.

You didn’t give any citations on your specific model of laptop or docking station. You just admitted that you used an old school docking station - those came with GPUs.

And your citation said it wasn’t available in all configurations.


It wasn't DisplayLink. Docking station only added ports.

https://www.ebay.com/itm/274734034597?itmmeta=01KAWK2DRBFS5H...


The docking station didn't have GPUs; they only had the extra ports.


I haven't kept up, but do they support display port daisy chaining and multi display docks now?


Zod is quite unpleasant to use, IME, and has some edge cases where you lose code comments.

From experience, we end up with a mix of both Zod and types and sometimes types that need to be converted to Zod. It's all quite verbose and janky.

I quite like the approach of Typia (which uses build-time inlining of JavaScript), but it's not compatible with all build chains, and questions abound about its viability post-Go refactor.


> we end up with a mix of both Zod and types and sometimes types that need to be converted to Zod

In my code, everything is a Zod schema and we infer interfaces or types from the schemas. Is there a place where this breaks down?


Not that I know of, aside from code comments (which I like), but I much prefer writing TypeScript to Zod.


I think in most cases, outside of pure AI providers or thin AI wrappers, almost every team will realize more gains from focusing on their user domains and solving business problems versus fine-tuning their prompts to eke out a 5% improvement here and there.


I don’t think you can use this as a blanket statement. For many use cases the last 5-10% is the difference between demoware and production.


If that were true, just switching to TOON would make your startup take off.

That is obviously not true because a 5% gain in LLM performance isn't going to make up for a bad product.


Both of these are kind of silly and vendors trying to sell you tooling you probably don't need.

In a gold rush, each is trying to sell you a different kind of shovel claiming theirs to be the best, when you really should go find a geologist and figure out where the vein is.


Same way I see it.

Objects and inheritance are good when you need big contracts. Functions are good when you want small contracts. Sometimes you want big contracts. Sometimes you want small contracts.

Sometimes the right answer is to mix and match.
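
A toy sketch of that trade-off (all types and names invented for illustration):

```typescript
// Big contract: an interface groups related operations, useful
// when callers need a stable, multi-method surface.
interface Cache {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
  clear(): void;
}

class InMemoryCache implements Cache {
  private store = new Map<string, string>();
  get(key: string) { return this.store.get(key); }
  set(key: string, value: string) { this.store.set(key, value); }
  clear() { this.store.clear(); }
}

// Small contract: a single function type is all a caller needs
// when the interaction is one operation.
type Fetcher = (key: string) => string | undefined;

// Mixing and matching: adapt the big contract to the small one
// so consumers only depend on what they actually use.
function asFetcher(cache: Cache): Fetcher {
  return (key) => cache.get(key);
}
```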


    > It's just the amount of money spent on Windows/Microsoft for small companies is rather large, compared to other alternatives that are just as good.
This is a complete misperception about the modern ecosystem.

We have a full team using C# at a series-C, YC startup with every developer on Macs (some on Beelinks and Linux). The team is using a mix of VS Code, Cursor, and Rider. We deploy to Linux container instances in GKE on Google Cloud running Postgres.

There is no more tie-in to Microsoft licensing than there is for, say, TypeScript. Yes, C# DevKit is licensed like VS, but if you don't need the features, then you can also use DotRush or just use the free C# extension.


Totally agree.

Ironically dotnet runs better on Linux/Mac systems in my experience. All our devs who use Windows for dotnet dev now use WSL2 as it matches production. We don't use any other 'commercial' Microsoft products like SQL Server or Azure. All postgres/redis/etc and deploy onto docker containers.
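
A minimal sketch of that kind of container setup, assuming a hypothetical ASP.NET Core app called MyApp (image tags and names are illustrative):

```dockerfile
# Multi-stage build: compile with the SDK image, run on the
# smaller ASP.NET runtime image (both are Linux-based).
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```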


> Ironically dotnet runs better on Linux/Mac systems in my experience. All our devs who use Windows for dotnet dev now use WSL2 as it matches production. We don't use any other 'commercial' Microsoft products like SQL Server or Azure. All postgres/redis/etc and deploy onto docker containers.

I am pushing for Linux containers in the workplace... away from Windows, IIS, etc. I totally agree with you 100%. I'm also trying to push us away from SQL Server where possible.


My last comment, which you referenced... focused not just on C# or .NET.. but the focus of "you need Microsoft" in general.. this includes Windows, SQL Server, etc.

Again, my comment is focusing on someone on the outside looking in.. and WHY people end up making decisions away from C# in favour of (something like) Go.

I am aware of deploying to Linux containers, etc.


Yes, I am very aware of this and have spent a lot of time/effort trying to dispel some of these myths and misunderstandings through writing.

- TypeScript is like C#: https://typescript-is-like-csharp.chrlschn.dev/

- 6 .NET Myths Dispelled: https://medium.com/dev-genius/6-net-myths-dispelled-celebrat...

- The Case for C# and .NET: https://itnext.io/the-case-for-c-and-net-72ee933da304


I'm at a series-C, YC startup. We made a switch from TypeScript to C# two months back. Now we have a team of over a dozen backend engineers working on C# transitioning from TypeScript. 90% are working with C# for the first time. (We are still hiring backend C# engs!)

I can say that it has gone waaaaaay smoother than anyone would have thought. This is a decision (language switch) that the team has been putting off for a long time and simply suffering through some big time jank and complexity with TypeScript (yes, TS at scale becomes complex in a very different way from C# because it becomes complex at the tooling layer in an "unbounded" way whereas C#'s language complexity is "bounded").

Indeed, I think more teams should give C# a shot. My own experience is that C# and TypeScript are so remarkably alike at a language level[0] that if you know one well, you can probably quickly learn the other. But the C# ecosystem tooling is more cohesive, easier to grok, and less fickle compared to JS/TS (as is the case with Go, Java, etc. as well).

There still remain a lot of misperceptions about C# and .NET in general, and I think many startups should spend the time to give EF Core a shot and realize how every option in JS-land ends up feeling like a toy by comparison. EF Core itself is worth the price of admission, IMO.

[0] https://typescript-is-like-csharp.chrlschn.dev/


It is no coincidence that C# and TS are similar: they were created by the same person, Anders Hejlsberg. The C# language may have some baggage from back in the day, but at least it has a very good, non-fragmented ecosystem. While TypeScript may have learned from some of C#'s mistakes, the JS/TS ecosystem is a dumpster fire IMHO.


So should I learn C# by learning Typescript? Does that make sense?


As much as I think C# at a platform level is a better tool for building backends, you'll get better bang for your buck learning TypeScript if you don't already know TypeScript.

Then if you have the chance, you'll find C# an easy transition from TypeScript, IME. Learning C# first, on the other hand, will make you a better TS developer, in my opinion, because it will shape your approach to be more diligent about using types. This is something most JS/TS devs do very poorly and at scale, it's very hard to reason about code when it requires digging down several layers to find the actual types/shapes.

"Enterprise" frameworks like Nest.js are much more similar to ASP.NET or Spring Boot than they are to Express, Hono, or Elysia, so having experience with .NET Web APIs (or Spring Boot) will make Nest.js (for example) easier to pick up.


Not really; you should learn TypeScript by learning JavaScript first, then consider learning C#. Or, if you want to focus on the backend side, learn C# and skip TS/JS.

They were created by the same person, but they are very different in my opinion.

TypeScript is "a tool" for JS: code can compile without errors but still fail at runtime (e.g., a wrong object type returned from an API). Parsing JSON with C#, on the other hand, will give you the correct object type; it may fail if some properties are missing, but it will fail at the parsing call, not further down when you try to use a missing property. In other words, typing is not glued on top of the language; it's the core of the language.
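
The TypeScript half of that can be shown in a few lines (the shapes below are made up for illustration): a type assertion on `JSON.parse` is purely compile-time, so a mismatched payload only fails later, wherever the missing property is finally touched.

```typescript
interface User {
  name: string;
  email: string;
}

// `as User` performs no runtime validation; TypeScript simply
// trusts that the shape matches.
const response = '{"name": "Ada"}'; // note: email is missing
const user = JSON.parse(response) as User;

// This typechecks fine but blows up at runtime, far from the
// parse site, when the missing property is finally used.
function getEmailDomain(u: User): string {
  return u.email.split("@")[1]; // TypeError: u.email is undefined
}
```

By contrast, a C# deserializer can be made to fail at the boundary itself; for example, marking members `required` causes `System.Text.Json` to throw at the `Deserialize` call when they are absent.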

