I've been using curl, like, forever. I don't understand the preoccupation with Postman et al. -- why pay for something when all the free alternative requires is a little light RTFM?
https://news.ycombinator.com/item?id=9224 "you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software." -- on why Dropbox should not exist.
People absolutely will pay for software rather than read or think, if it makes doing the work easier. You may have heard of this thing called ChatGPT.
(Not being a web developer, I've only lightly used Postman, and it is definitely handy for things like authentication, especially once you touch OAuth. But I uninstalled it once they went unnecessarily cloud.)
Because it's convenient. I use curl often, but I admit to using Bruno even more often. And yes, I could keep some organized scripts, but for playing with various APIs daily - sometimes importing whole .json collections, or setting up credentials in one place and reusing them across all the requests in a collection - a GUI is just fast, easy, and convenient. Same for responses: yes, I could analyze them with jq in the console, but often I don't know exactly what I'm looking for, so it's just easier to have the response visually parsed and click through the items.
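For comparison, the console version of that clicking-through would look something like this (the URL is a placeholder):

    curl -s https://api.example.com/items | jq .            # pretty-print the whole response
    curl -s https://api.example.com/items | jq 'keys'       # what top-level fields exist?
    curl -s https://api.example.com/items | jq '.items[0]'  # drill into the first item

It works, but every "click" is another round trip through your shell history.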
For repeated commands, my projects have a Make/Just file with the cURL commands I want to validate. Sometimes I even load JSON from a tests/fixtures/*.json file, which can also be reused for other non-E2E tests.
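A minimal sketch of that setup as a justfile - the recipe names, URL, and fixture path are all made up:

    # justfile -- base_url can be overridden per environment
    base_url := "http://localhost:8080"

    health:
        curl -sf {{base_url}}/health

    create-user:
        curl -sf -X POST {{base_url}}/users \
            -H 'Content-Type: application/json' \
            -d @tests/fixtures/user.json   # same fixture the non-E2E tests reuse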
Not sure how some developers can be so allergic to the terminal. Don't you already spend a lot of time there?
> Who says I'm allergic to the terminal? I already stated that I use curl.
Preferring "a couple of clicks" vs "run one command" seems to indicate so, otherwise I'm not sure why'd someone would prefer the former instead of the latter.
I have dozens of collections with hundreds of requests, most sending complex payloads, all perfectly organised hierarchically. I'd rather use a collapsible UI for that; if you prefer to have hundreds of scripts in folders, that's fine too.
Actually, I don't even create those collections; we have OpenAPI/Swagger docs for all of our APIs, and I just import them with a couple of clicks (which I'm sure there's a way to do with curl).
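For the record, the closest shell equivalent I know of just dumps the spec's operations with jq (a sketch; the spec URL is hypothetical):

    # List method + path for every operation in an OpenAPI 3 spec
    curl -s https://api.example.com/openapi.json \
      | jq -r '.paths | to_entries[] | .key as $p | .value | keys[] + " " + $p'

That lists the endpoints, but it doesn't build you runnable requests, which is exactly the gap the GUI importers fill.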
For the odd request, or for sharing requests with others? I use curl, no problem. I actually think I know it pretty well and very rarely need to look up any docs for it.
> I'd rather use a collapsible UI for that, if you prefer to have hundreds of scripts in folders that's fine too.
No, I don't (what a shitty strawman). In that case I create abstractions, like in any other project. Surely you don't have hundreds of completely original, bespoke requests? Previously I've handled thousands of requests by loading them from a .csv.
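A rough sketch of that kind of abstraction, assuming a hypothetical requests.csv with method,url,body columns (and no commas inside the body):

    # Replay every request in the CSV; -sf = silent, fail on HTTP errors
    while IFS=, read -r method url body; do
      curl -sf -X "$method" "$url" \
        -H 'Content-Type: application/json' \
        -d "$body"
    done < requests.csv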
Maybe on a work computer. But I can’t be bothered with installing, updating, and running these kinds of bloat on my personal computers. Only two pieces of software stay open for longer than a few hours: Emacs and Firefox.
The thing with simple tools is that bootstrapping them is easy and they stay versatile.
I’m also a CLI old-timer, but there’s undeniable utility in having a Postman-like collection to test-drive a mobile app API. You can save state from responses and use it in subsequent requests: e.g. log in, save the access token, create a post, save the id, post a comment under the post using the id. It’s all very useful, to say nothing of the fact that you can hand said collection to non-technical stakeholders and they can solve a lot of their own problems without going to get one of the engineers to Do A Command Line(tm).
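The curl-and-jq version of that flow is doable but wordy - a minimal sketch, assuming a hypothetical api.example.com and response shapes:

    # 1. Log in, save the access token
    TOKEN=$(curl -s -X POST https://api.example.com/login \
      -H 'Content-Type: application/json' \
      -d '{"user":"me","pass":"secret"}' | jq -r '.access_token')

    # 2. Create a post, save its id
    POST_ID=$(curl -s -X POST https://api.example.com/posts \
      -H "Authorization: Bearer $TOKEN" \
      -H 'Content-Type: application/json' \
      -d '{"title":"hello"}' | jq -r '.id')

    # 3. Comment under the post using the saved id
    curl -s -X POST "https://api.example.com/posts/$POST_ID/comments" \
      -H "Authorization: Bearer $TOKEN" \
      -H 'Content-Type: application/json' \
      -d '{"body":"first"}'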
All that said, I wouldn’t touch Postman. Last time I needed something to fit this bill I looked around to find the open source equivalent and found Bruno.
Some people like to think about the problems of the actual work instead of looking up a CLI tool's manpages for something they only do once in a blue moon.
I use curl liberally and also tend to create scripts around it to perform common tasks, but I still get why someone would prefer a GUI.
If you're doing a lot of requests for testing or some other purpose, I could see an argument for a graphical interface. Curl is a masterpiece, but it's not that simple to use. Then again, we're in $current_year and I'd be surprised if "hey Claude, can you cook up a curl request to do this and that" doesn't work.
Even simpler and free: `tldr curl` in your terminal gives you like 80% of what you need for day-to-day requests; `man curl` gives you 100%.
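That 80% is roughly these one-liners (all standard flags; the URLs are placeholders):

    curl https://example.com/api/items                # plain GET
    curl -s https://example.com/api/items | jq .      # quiet GET, pretty-printed JSON
    curl -X POST https://example.com/api/items \
      -H 'Content-Type: application/json' \
      -d '{"name":"widget"}'                          # POST a JSON body
    curl -H 'Authorization: Bearer TOKEN' https://example.com/api/me   # auth header
    curl -sLo out.bin https://example.com/file        # follow redirects, save to a file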
cURL is an amazing tool, but it's more "HTTP client" and less "full-blown API client".
The page even sort of acknowledges this... saying you manage your environments with environment variables. It doesn't mention how to extract data from the response, just jq for syntax highlighting. No explanation of combining these two into any sort of cohesive setup for managing data through flows of multiple requests. No mention anywhere on the page of working with an OpenAPI spec... many of the tools provide easy ways to import requests instead of manually re-entering/rebuilding something that's already in a computer-readable format.
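For context, the environment-variable approach the page gestures at presumably looks something like this (the file names and variables are made up):

    # env.dev.sh / env.prod.sh each export the same variable names:
    #   export BASE_URL=https://dev.example.com
    #   export API_TOKEN=...
    source env.dev.sh   # pick an "environment"
    curl -s "$BASE_URL/items" \
      -H "Authorization: Bearer $API_TOKEN" | jq -r '.items[0].id'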
So the tl;dr here is "use cURL, and then rebuild the rest of the functionality in bash scripts, you idiot".
I went down this path of my own accord when Insomnia was no longer an option. I very quickly found myself spending more time managing bash spaghetti than actually using tools to accomplish my goals.
That's why I use a full-blown dedicated API client instead of an HTTP client. (Not Postman, though. Never Postman.)