> The benefit is you can easily scale the complexity of the file. An .sh file is great for simple commands, but with a .ts file with Deno you can pull in a complex dependency with one line and write logic more succinctly.
The use-case, as per the author's stated requirements, was to do away with pressing the up arrow or searching history.
Exactly what benefit does Make.ts provide over Make.sh in this use-case? I mean, I didn't choose what the use-case is, the author did, and according to the use-case chosen by him, this is horribly over-engineered, horribly inefficient, much more fragile, etc.
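For anyone unfamiliar with the quoted claim, here is a minimal sketch of what a make.ts along those lines might look like (the dependency and task names are made up; the point is only the one-line `npm:` import):

    // make.ts - hypothetical task file, run with: deno run -A make.ts <task>
    // Pulling in an npm dependency is a single import line.
    import { z } from "npm:zod";

    const task = z.enum(["build", "test"]).parse(Deno.args[0] ?? "build");

    // Shell out for the actual work via the built-in Deno.Command API.
    const run = (cmd: string, ...args: string[]) =>
      new Deno.Command(cmd, { args, stdout: "inherit", stderr: "inherit" }).spawn().status;

    if (task === "build") await run("cargo", "build");
    if (task === "test") await run("cargo", "test");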
To be clear, for people reading and wondering what this is about: this is only a hard recommendation for the public API types. The reason for it is that by adding explicit types to the boundary of your package, the package becomes way faster for users to type check, because every user's machine doesn't need to do all the inference work or type check internal types and packages unrelated to the public API. Additionally, it makes the published code more resilient to changes in TypeScript's inference in the future, because it's not relying on inference. It also becomes way easier to generate documentation for the package (and generating .d.ts or bundled .d.ts files without a type checker becomes easier too).
Right now, the publish command errors and asks you to fix the issues or bypass it entirely via `--allow-slow-types`. In the future there will be a `--fix` flag to write the explicit types for you.
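As a rough illustration of what gets flagged (the names here are made up, not from the actual docs), a "slow type" is an export whose type has to be inferred, and the fix is to write the type out at the package boundary:

    // mod.ts (before) - flagged on publish: the return type is inferred, so every
    // consumer's type checker has to analyze the function body to learn the API shape.
    export function createClient(baseUrl: string) {
      return { baseUrl, get: (path: string) => fetch(new URL(path, baseUrl)) };
    }

    // mod.ts (after) - explicit types at the boundary: type checking consumers and
    // generating docs only needs these signatures, not the implementation.
    export interface Client {
      baseUrl: string;
      get(path: string): Promise<Response>;
    }
    export function createClient(baseUrl: string): Client {
      return { baseUrl, get: (path) => fetch(new URL(path, baseUrl)) };
    }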
It's being renamed to a `no-slow-types` lint rule that runs on publish and in `deno lint`. Essentially it enforces that the types of the public API can be determined without inference. This enables a few things:
1. Type checking ts sources is really fast when Deno determines a package does not use slow types. It may entirely drop packages and modules that aren't relevant for type checking. It also skips parsing and building symbols for the internals of a package (ex. function bodies).
2. Documentation is really fast to generate (doesn't need a type checker).
3. A corresponding .d.ts file for relevant TypeScript code is automatically created for Node.
In Deno with JSR, only the public API gets used for type checking because publishing enforces that the public API can be determined without type inference. So it's similar to declaration files (d.ts files) and you wouldn't see errors that occur at the non-declaration level unless someone explicitly opted out of those publishing checks (which is heavily discouraged).
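In other words (hypothetical example), an error buried in an implementation detail behind an explicitly typed export stays invisible to consumers, just like it would behind a hand-written .d.ts file:

    // mod.ts - consumers only type check against the explicit public signature.
    export function parsePort(value: string): number {
      return toNumber(value);
    }

    // Internal helper with a type error: `deno check` inside the package catches it,
    // but a consumer type checking their own use of parsePort() will not see it.
    function toNumber(value: string): number {
      return value.trim(); // error: Type 'string' is not assignable to type 'number'.
    }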
You can already mostly do this with dprint's TypeScript plugin if you tweak the config (and I'm open to adding more config to support this scenario in case something is missing). For example, the TypeScript team uses a line limit of 1000: https://github.com/microsoft/TypeScript/blob/1797837351782e7...
> Tying a language runtime to a specific KV interface which is tied to a specific hosted service is the opposite of forward thinking.
This is not the case. The Deno runtime itself is not tied to the Deno Deploy hosting service. The KV feature in the Deno runtime can be used without the hosting service.
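For example, on a local machine `Deno.openKv()` with no arguments just opens an SQLite-backed database on disk (KV is still behind an unstable flag; the key and value below are arbitrary):

    // kv_local.ts - run with: deno run --unstable-kv kv_local.ts (older versions: --unstable)
    // No Deno Deploy involved; this creates/opens a local SQLite-backed store.
    const kv = await Deno.openKv();

    await kv.set(["users", "ada"], { name: "Ada Lovelace" });
    const entry = await kv.get<{ name: string }>(["users", "ada"]);
    console.log(entry.value); // { name: "Ada Lovelace" }

    kv.close();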
> The KV feature in the Deno runtime can be used without the hosting service.
But one writes to FoundationDB, and the other writes to a SQLite file. You wouldn’t be able to self-host an app written for Deno Deploy and have it work out of the box.
Are there any plans to open source the KV backend so that people could host their own KV databases? Now that you can connect to remote KV databases, I suppose someone could implement their own?
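For reference, "connect to remote KV databases" here means passing a URL to `Deno.openKv()`, which then speaks an HTTP-based protocol authenticated with an access token. A hedged sketch, with a placeholder endpoint and token:

    // kv_remote.ts - run with:
    //   DENO_KV_ACCESS_TOKEN=<token> deno run --unstable-kv --allow-net --allow-env kv_remote.ts
    // Passing a URL makes Deno.openKv talk to a remote KV database instead of a local file.
    const kv = await Deno.openKv("https://example.com/my-kv-database"); // placeholder endpoint

    await kv.set(["counter"], 1n);
    console.log((await kv.get(["counter"])).value); // 1n
    kv.close();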
This is what Java did: the JCP was created, and APIs were invented to be implemented by Tomcat, JBoss, fill in your favorite here. Then you configure your actual instance - in the past this was done with beautiful UIs - a total waste of time considering we're in the era of YAML (FBFW?) configuration.
This instantly reminded me of Next.js, which is open source but has a special build format for serverless environments.
The 1st party implementation is closed source: 3rd parties start on the back foot trying to implement alternatives and have to keep up with a 1st party that can move in lockstep.
And sure enough, like every other time I see this kind of behavior: Deno was invested in by the CEO of Vercel.
"Javascript is taken over by venture capital" wasn't on my 2023 bingo.
Vercel needs to stop with this bullshit. It is straight up predatory “open” source. Like a trapper’s cage, there’s a convenient, tasty bait and then it’s too late.
Is it too cynical to say this might be a lesson devs need to learn the hard way?
Right now the JS community has whipped itself into a frenzy of building on VC-backed technology.
- They refuse to acknowledge that the loudest voices in the room are openly sponsored and invested in by the same VCs who own the companies behind said tech
- They see no issue with a lack of diversity in implementations, instead settling for "it's a standard". Of course, defining a standard without a healthy variety of implementations means you end up with standards that don't benefit from a wide range of voices until well after they land (see RSC)
At the end of the day, those two alone are a pretty harsh combo: A VC-backed network effect machine built across multiple brands, and high technical costs of building something that meets the collection of standards.
I don't think anyone but FAANG can really compete with that without also getting VC dollars, thus reinforcing the loop.
You can build a Next.js app and run it in a Docker container or on a regular Linux host almost anywhere. Vercel has some nice continuous deployment stuff built-in but I'm not sure how a Next.js app is locked into it at all.
Next.js and Vercel heavily push serverless deployment: version 13 reworked the built-in API support to leverage Web Standards, which discarded interop with the much larger existing server ecosystem in order to enable better edge support.
Yeah, on the same note: I developed a moderately complex app on Next, but I hit a roadblock when I needed background job support, which is not natively supported (or at least wasn't at the time) on Vercel or other Next platforms, and so it was never a priority for Next. Pushing serverless so hard also made deployments janky and production bugs weird when you tried to use things not supported by the underlying platform, AWS (I don't remember the details now, but the Node version was one of those).
This sentiment has been repeated in a few comments. But why can't you reimplement the Deno Deploy implementation yourself by running a FoundationDB server with mvSQLite[1]?
That shouldn’t require any changes to the code.
The `deno cache` command (ex. `deno cache main.ts` or `deno cache --node-modules-dir main.ts` if you want a node_modules folder) will ensure all npm packages used in the provided module are cached ahead of time. It acts similarly to an `npm install` if you need it.
Also, at the moment, npm specifiers aren't supported with `deno compile` (https://deno.land/manual@v1.28.0/tools/compiler), but in the future that will be one way to have a self-contained executable with everything ready to go.
No, the `--node-modules-dir` flag doesn't create a symlink to the home directory cache. It creates a copy in the node_modules folder (in the future it will use hardlinks to reduce space; there is an open issue). It's stored at node_modules/.deno/<package-name>@<version>/node_modules/<package-name> (that flag is heavily influenced by pnpm).
You can patch a package by doing something like the following (and then you can move this into a `deno task` https://deno.land/manual@v1.28.0/tools/task_runner when launching your app to ensure it happens):
deno cache --node-modules-dir main.ts
deno run --allow-read=. --allow-write=. scripts/your_patch_script.ts
deno run --node-modules-dir main.ts
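The `scripts/your_patch_script.ts` in the middle step is just a normal Deno script; a hypothetical version that edits a file copied into the local node_modules folder might look like:

    // scripts/your_patch_script.ts (hypothetical) - patch a file that
    // `deno cache --node-modules-dir` copied into ./node_modules.
    const file = "./node_modules/some-package/dist/index.js"; // placeholder path
    const original = await Deno.readTextFile(file);
    const patched = original.replace("oldBehavior()", "newBehavior()"); // placeholder edit
    await Deno.writeTextFile(file, patched);
    console.log("patched", file);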