Just because it's useful to wrap exceptions in 5% of the cases doesn't mean you should wrap the rest 95% just in case. YAGNI.
1. You don't know which exceptions will be raised in advance. Anything that involves IO can fail in a plethora of ways, and you don't even know which calls involve IO (e.g. a library might choose to cache something on disk).
2. Consumers of your code will not know how to deal with those exceptions.
3. Most exceptions are unrecoverable (that's why they are called exceptions); the best course of action is to crash, which happens by default.
4. You debug those exceptions by looking at the stack trace. Adding extra levels just to give a fancy meaningless name to an exception does not help.
5. The whole point of exceptions is to propagate. Parthenon essentially suggests converting exceptions into return values.
I think there are two kinds of exceptions in languages that have them as their error handling mechanism.
1. Exceptions you expect your consumers to handle.
2. Exceptions you don't expect your consumers to handle.
For the first kind, I would argue you should wrap third-party exceptions. In nearly every case there is important context in your code that the thrower of the third-party exception cannot know, and that whoever is reading or handling the exception will want to know.
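A minimal Java sketch of that, with invented names throughout (GatewayTimeoutException standing in for the third-party type, PaymentDeclinedException for the wrapper):

// The "third-party" exception; in real code this would come from the library.
class GatewayTimeoutException extends RuntimeException {
    GatewayTimeoutException(String msg) { super(msg); }
}

// Our wrapper: carries the context the third party cannot know, and keeps the cause.
class PaymentDeclinedException extends Exception {
    PaymentDeclinedException(String msg, Throwable cause) { super(msg, cause); }
}

class PaymentService {
    void charge(String orderId, long cents) throws PaymentDeclinedException {
        try {
            chargeViaGateway(cents);
        } catch (GatewayTimeoutException e) {
            // The gateway has no idea which order this was; we do.
            throw new PaymentDeclinedException(
                    "charge failed for order " + orderId + " (" + cents + " cents)", e);
        }
    }

    private void chargeViaGateway(long cents) {
        throw new GatewayTimeoutException("gateway timed out"); // stand-in for the real call
    }
}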
The second kind should do whatever the equivalent of crashing is for your use case: either exit the program hard, or bubble up to some top-level handler where a "something weird and unexpected happened and we can't continue whatever action was going on" alert or log gets recorded and the activity is terminated.
If something is throwing an exception and you didn't know it could, it probably belongs in category 2. You may transition it to category 1 over time once you figure out that it is actually handleable.
In my experience though if you aren't disciplined then every exception ends up being lumped into category 2 whether it should be or not. Any language that helps you force the categorization gets bonus points from me.
I'm reminded of an old Eric Lippert post about this. He basically says the same as you: boneheaded and fatal exceptions are not meant to be caught, and vexing and exogenous exceptions are ordinary flow control.
This is one of the problems with checked exceptions. The library author is in no position to expect me to handle an exception. Whether or not I can depends on the design of my system, which they have no window into.
This is the problem with Java's implementation of checked exceptions: they are not generic, which forces middle layers to make impossible decisions like this.
If the type system were more capable, they'd just be equivalent to typed generic error returns, which work fine.
This forces client code to handle arbitrary exceptions and it loses the information about which specific exceptions are actually thrown. This is like having Object as the return type on all functions, and having callers do instanceof to see what type of result they got. Exceptions should be part of the semantic contract of functions/methods just like return types are.
Exceptions are a return type of functions. They return via a different path but there is no possible way you could disagree that they are a form of return.
As they don't return to the same place, don't return to a single place, and may (if not caught) not actually return but instead take down the entire thread, I think it's reasonable that for some purposes it might well make sense to treat exceptions as something other than "a form of return." ¯\_(ツ)_/¯
However we treat them, though, I agree that they are a part of the interface and should be documented (and, where relevant, checked) as such.
The point is that the type system doesn't properly reflect that. You should be able to declare a generic method that takes some T, and throws everything that T.foo() throws, for example.
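To be fair, Java can get partway there: a method can be generic over a single checked exception type, as in this sketch (ThrowingSupplier and Retry are made-up names). What it cannot express is "throws whatever T.foo() throws" for an arbitrary T, or a variable-sized set of exception types.

@FunctionalInterface
interface ThrowingSupplier<T, E extends Exception> {
    T get() throws E;
}

class Retry {
    // The wrapper re-declares exactly the one exception type the body throws...
    static <T, E extends Exception> T once(ThrowingSupplier<T, E> body) throws E {
        return body.get();
    }
    // ...but there is no way to abstract over "all the exceptions T.foo() throws".
}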
And 99.9% of the time the client is just going to catch the generic Exception and doesn't care what the type is. Roll back a transaction, return a 500, or display an error dialog. Done. It doesn't matter what the client library thinks.
If it's an unchecked exception, experience says I have a 60-70 percent chance of not knowing it's there without a careful reading of the code. Which means I can't know myself whether it's possible to handle it or not. I can always rethrow an exception if I know it exists. Languages that give me a way to ensure I know about it mean I avoid unnecessary pain later in production, with someone breathing down my neck to please fix it now. I'll take that any day over a little boilerplate.
That is true, so the library should throw the checked exception, and if the caller has no way to handle it, it should wrap the checked exception in an unchecked exception and throw the unchecked exception. Not too hard, and library clients that can handle some checked exceptions will be able to.
I hate libraries that only throw unchecked exceptions. It seems easier initially, but makes writing correct code more difficult.
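A sketch of that division of labor, with made-up names (readConfig standing in for the library call):

import java.io.IOException;
import java.io.UncheckedIOException;

class ConfigLoading {
    // The "library" declares the checked exception honestly.
    static String readConfig(String path) throws IOException {
        throw new IOException("cannot read " + path);
    }

    // A caller with no sensible recovery wraps it and rethrows unchecked.
    static String readConfigOrFail(String path) {
        try {
            return readConfig(path);
        } catch (IOException e) {
            throw new UncheckedIOException("config unavailable: " + path, e);
        }
    }
}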
> If the caller has no way to handle it, it should wrap the checked exception in an unchecked exception and throw the unchecked exception.
Now you have a new problem: how is the code calling the caller supposed to know about that exception? You can't even catch it (even if you know about it!) using normal try-catch, because what you have to do is catch the wrapper exception and then check inside that for the exception type you expect via instanceof.
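Concretely, the code calling the caller ends up doing something like this (Downstream and callAndMaybeRetry are hypothetical):

import java.io.IOException;

class Downstream {
    static void callAndMaybeRetry(Runnable doWork) {
        try {
            doWork.run();
        } catch (RuntimeException e) {
            // The checked exception we actually care about is buried in the cause chain.
            if (e.getCause() instanceof IOException io) {
                System.err.println("transient I/O failure, will retry: " + io.getMessage());
            } else {
                throw e; // genuinely unexpected
            }
        }
    }
}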
Checked exceptions (at least as implemented in Java) are horrible for the composability of code wrt. error handling.
That is why most libraries eschew checked exceptions these days. Unfortunately, large bits of the standard library in Java forces their hand wrt. re-wrapping stuff like InterruptedException and IOException and the like.
(Not to mention, most of the time you really shouldn't be catching exceptions in very small scopes or at the very highest level in your code. Involving every single layer in between is madness.)
The error here is not that they are wrapping the exception necessarily. It is that they re-threw it as an unchecked exception. I strongly believe the only good use of unchecked exceptions is if the code should do the closest reasonable thing to crash safely. Everything should be clearly communicated to the callers so they can make good decisions about what to handle here and what to pass up the chain.
If the complaint is that you then have too many different unchecked exceptions perhaps the error domain has been improperly modeled and you are getting a clue that the system is poorly designed.
If you have a Function which needs to do some interruptible work, you cannot throw InterruptedException -- you must wrap it. This is a fundamental design flaw in Java's exception system and cannot be handwaved away just by saying that Function is badly designed. This problem is pervasive.
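For example (a sketch; TAKE_ONE is an invented name):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Function;

class InterruptibleInFunction {
    // Function.apply() declares no checked exceptions, so interruptible work
    // inside it has to smuggle InterruptedException out in an unchecked wrapper.
    static final Function<BlockingQueue<String>, String> TAKE_ONE = queue -> {
        try {
            return queue.take();                // declares InterruptedException
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            throw new RuntimeException(e);      // forced re-wrap
        }
    };

    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.add("hello");
        System.out.println(TAKE_ONE.apply(queue));
    }
}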
The ultimate problem here is one of variance -- throws clauses have the opposite variance rules from method implementations: subclasses (whether of interfaces or classes) frequently need to do more than could be foreseen by the implementor of the interface, so they need to be able to throw "more things", but checked exception clauses explicitly disallow widening the set of thrown exceptions in subclasses (for obvious reasons -- since a FooImpl can be used at runtime where a Foo is expected).
This is a fundamental flaw that was overlooked in the checked exceptions design and there's no fixing it now.
"Exceptions you expect the consumer to handle" should be explicit return values on the function signature instead, via an Either/Result type or similar.
Unless you're writing Java where you can force certain exception types to be handled, but people never write that code properly.
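As a sketch of the idea in Java 21+ (all names invented): a sealed result type plus an exhaustive switch gives a compile-time check that the failure case was at least acknowledged.

sealed interface ParseResult permits ParsedPort, ParseFailure {}
record ParsedPort(int port) implements ParseResult {}
record ParseFailure(String reason) implements ParseResult {}

class Ports {
    static ParseResult parsePort(String raw) {
        try {
            int port = Integer.parseInt(raw);
            return (port >= 1 && port <= 65535)
                    ? new ParsedPort(port)
                    : new ParseFailure("port out of range: " + port);
        } catch (NumberFormatException e) {
            return new ParseFailure("not a number: " + raw);
        }
    }

    public static void main(String[] args) {
        // The exhaustive switch forces the caller to handle both outcomes.
        String msg = switch (parsePort("8080")) {
            case ParsedPort ok -> "port = " + ok.port();
            case ParseFailure err -> "bad port: " + err.reason();
        };
        System.out.println(msg);
    }
}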
Very few languages actually support Either or Result types ergonomically. I don't have a problem with people using exceptions for these, but I do think that if the language has compile-time type checking it should provide a distinction between checked and unchecked exceptions. I am very much in the minority here, but I find it far more useful to have robots tell me I forgot something than to discover I forgot it in production. I think this is the useful sort of lazy rather than the not-useful sort; not-useful laziness is letting your customers discover the problem.
Yeah I agree. I think the problem with checked exceptions in Java is a combination of misuse on both the consuming and the throwing end. They're amazing when they provide a compile time checked way to make sure consumers handle stuff they actually definitely want to handle, but if you add too much noise then all the consuming code is just going to catch and rethrow even the important ones.
> 1. You don't know which exceptions will be raised in advance. Anything that involves IO can fail in a plethora of ways, and you don't even know which calls involve IO (e.g. a library might choose to cache something on disk).
I'm not sure how this relates to the code design decision.
> 2. Consumers of your code will not know how to deal with those exceptions.
I'm not sure what the point is here, but if the argument is that the caller won't know how to deal with the third-party exception, I disagree. Typically the code just needs to expose a single exception type that wraps any underlying cause; the caller can then decide to deal with it or rethrow.
> 3. Most exceptions are unrecoverable (that's why they are called exceptions); the best course of action is to crash, which happens by default.
If the database is unresponsive, do you want to crash your program? I wouldn't. I'll usually retry until it's available.
> 4. You debug those exceptions by looking at the stack trace. Adding extra levels just to give a fancy meaningless name to an exception does not help.
I really don't think an extra frame in the stack trace is a reason not to create a well-defined contract.
> 5. The whole point of exceptions is to propagate. Parthenon essentially suggests converting exceptions into return values.
I didn't see anywhere in the doc where they mentioned converting exceptions into return values. Wrapped exceptions are still exceptions.
> I really don't think an extra frame in the stack trace is a reason not to create a well-defined contract.
It absolutely is IMO. Stack traces are the main reason to use exceptions at all, and having multiple layers of useless wrapping around them is one of the biggest frustrations when trying to understand and debug an issue.
If the database is unresponsive, I would prefer to return a 5xx to the caller as soon as possible. Maybe this service is not that important, and it's better to present the user with an incomplete page than to wait for hours or days until the database is available.
If it is important, the caller will retry the service until it gets a response.
Again: fail fast. Even in distributed systems. Unless you're 100% sure that you know better how to handle this particular issue.
6. It is much faster and easier to develop an application's "happy path" while completely ignoring failures. Failure handling is a software maintenance issue. YAGNI.
As a counterpoint to that - the chances that a newly developed feature will work perfectly anywhere other than your own development environment are often pretty low. And when it almost inevitably fails, not having any error-handling logic will use up huge amounts of your time trying to track down the reason for failure.
For a first cut of any new feature, I'd want at a minimum to have some form of detailed error-reporting, even if it's presenting a stack trace to the user. Ideally it's handled by whatever framework you're working in, which typically means all you need to do is not swallow exceptions, but if you do have to write your own error-handling code, don't throw away any details about the error (sometimes the best you can do is just log the full exception details, and personally I'll always ensure that the names/URIs of any resources involved are included in those details).
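For instance, a minimal sketch of the "log everything, name the resource, don't swallow" rule (FeedImport and the feed URI are invented):

import java.util.logging.Level;
import java.util.logging.Logger;

class FeedImport {
    private static final Logger LOG = Logger.getLogger(FeedImport.class.getName());

    static void importFeed(String uri) {
        try {
            loadFeed(uri); // stand-in for the real work
        } catch (RuntimeException e) {
            // Keep the full exception (message, stack trace, causes) and name the resource.
            LOG.log(Level.SEVERE, "Failed to import feed from " + uri, e);
            throw e; // don't swallow it; the framework's top-level handler should see it too
        }
    }

    static void loadFeed(String uri) {
        throw new IllegalStateException("connection reset while fetching " + uri);
    }
}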
> You don't know which exceptions will be raised in advance.
> Most of exceptions are unrecoverable (that's why they are called exceptions), the best course of action is to crash, which happens by default.
I prefer to propagate such unknown exceptions to a top-level catch that cleanly logs that something happened.
I usually have two types of exceptions: the ones I expect at some point (an HTTP call failing for some reason), which I may ("when I have time") group into "known exceptions we should not worry about" and log as "informational", and exceptions I did not anticipate, which I also want to log, but with a critical level because they were, well, unexpected.
So crashing right when they happen may not be the best strategy.
I really wish English had different individual words for the concepts "known [problem] we should not worry about" and "[problems] I did not anticipate".
Everything goes to ERROR by default, some errors get downgraded to WARNING when you confirm they are not important.
Send ERRORs to Sentry (or whatever you use) and deal with them immediately. Send WARNINGs to your favorite centralized logging solution and deal with them when there's too many.
> Most exceptions are unrecoverable (that's why they are called exceptions); the best course of action is to crash, which happens by default.
This is why D separates exceptions into two categories:
1. Error - these are not recoverable. The only reason to catch them is to maybe try to save some state or log a message or shut down the reactor before crashing.
2. Exception - these are recoverable
(I'm being facetious. Any system design where, while unwinding a fatal error, one relies on it to shut down the reactor is a horrible, terrible design.)
> Just because it's useful to wrap exceptions in 5% of the cases doesn't mean you should wrap the rest 95% just in case. YAGNI.
I'm going to curse here, but ... fucking seriously.
If I get a .net ADO exception coming out of a 3rd party library it's absolutely not going to shock me or throw me for a loop. But do you know what IS a pain in the ass? Using 3 different libraries, all that wrap that same ADO exception in their own custom exception.
Haha, that's actually a good point. A single problem can affect dozens of modules, each having a different wrapper.
If my db goes down, I very much prefer to see a single DatabaseConnectionFailed than a multitude of FailedToSaveObject, DatabaseError, CannotLoadData, IOError, SomethingIsWrongThisShouldNeverHappen, DBHostUnavailable all over the place. Good luck navigating through the noise and isolating those.
The article talks about solving that exact issue by wrapping all of those different exceptions so that you don't have to deal with them. Much easier to catch a single ThirdPartyInternalFailure exception than catch all the internal exceptions that could have caused an internal failure.
When libraries 1 through 10 each take this approach to wrapping a common exception, you get the above behavior. Obviously a single library should have a coherent internal story.
"Just because it's useful to wrap exceptions in 5% of the cases doesn't mean you should wrap the rest 95% just in case"
Yes you should wrap ALL third party exceptions for the reasons given in the post. There is no 'just in case' reason. Any third party exception returned may cause your client to be dependent on it.
In short: if you have a meaningful recovery pathway for a particular exception this can be useful, but I found that 9 out of 10 exceptions/errors in code cannot be recovered from.
This is checked exceptions all over again... Frankly, I usually do exactly the opposite. In most cases I've seen, the exceptions that can arise are not widely known in advance, and for most spots where exceptions can be raised I simply do not care about them much either. If my DB is not accessible - do I really, really need to wrap it? If the DNS resolver is failing - do I really really need to wrap it?
When debugging I actually appreciate when a library/controller/module/whatnot does not attempt to "enhance" the exception from something outside of its control with a wrapper, and value seeing the original failure instead.
> If my DB is not accessible - do I really, really need to wrap it?
I would say yes. Most of your code doesn't care that the database failed with a PG-300292-AB error. What you do care about is that there is a system failure. If you wrap your system failure, and document it as THE exception, the API caller will know what to look for.
In general there really are only a few exceptions: system, invalid input, not found. I'm probably missing one or two others. Most exceptions/errors are one of these. If you make these 3-5 exceptions/errors explicit, your APIs will be pretty clear. A function can pretty much always return a system exception (bad IO, sunspots, etc.). It can also throw an invalid input exception. Your client can handle these cases differently. For an HTTP RPC service, you return a 500 for system and a 400 or 422 in the example. Not bad.
One can even wrap the exception with a more detailed message (which I know you said you didn't like) that preserves the flow. So if you get an invalid input exception, you can add details around the null value by properly nesting the messages. They should show up in your logs.
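A sketch of the "few broad categories" idea, with invented exception names, a trivial HTTP mapping, and message nesting that keeps the original cause:

class SystemFailureException extends RuntimeException {
    SystemFailureException(String msg, Throwable cause) { super(msg, cause); }
}
class InvalidInputException extends RuntimeException {
    InvalidInputException(String msg, Throwable cause) { super(msg, cause); }
}
class NotFoundException extends RuntimeException {
    NotFoundException(String msg) { super(msg); }
}

class HttpErrorMapper {
    static int toStatus(RuntimeException e) {
        if (e instanceof InvalidInputException) return 422;
        if (e instanceof NotFoundException)     return 404;
        return 500; // anything else is treated as a system failure
    }
}

class AddressValidator {
    static void validate(java.util.Map<String, String> address) {
        try {
            address.get("postcode").trim(); // NPE if postcode is missing
        } catch (NullPointerException e) {
            // Add detail while preserving the chain; the logs show both messages and both traces.
            throw new InvalidInputException("customer address is missing a postcode", e);
        }
    }
}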
Not that this invalidates your point in any way, but returning a 503 for database (or any other resource) unavailability is useful to disambiguate genuine server errors from transient connectivity or temporarily-unavailable-database errors.
That is an interesting topic. I went through the cycle of adding and removing them; they always felt like the right thing, but they also felt like "work". They make coding less fun, having to think through failure conditions that, like you say, you don't care about.
I guess the answer is that there really is no one-size-fits-all solution. We have some applications where the correct solution is to crash, and we have other applications that can't crash under any condition. You just need to know where you are.
Edit:
Also, out of the box, C++ checked exceptions were implemented terribly.
We primarily use custom exceptions to add detail to our error logs while still remaining concise. Generic exceptions are exceptionally useless for trying to debug issues, I've found.
The message (a string) will give you that information; nothing is lost by catching a base Exception and writing it to a log.
Typed exceptions are about special handling. The type information allows the programming language to offer special handling when that is necessary; it is NOT meant to replace good error messages.
But it also isn't about taxonomy, which is something that a lot of people miss. It's putting the cart before the horse.
---
But you specifically talked about debugging. If you're talking about interacting with an attached debugger, the developer already has everything they need; there is no reason why specific, typed exceptions are needed to assist that.
So you MUST be talking about general debugging by reading logs and the like, but those logs will tell you both the error message and the stack trace with the specific line that originally threw. Unless, of course, you're catching and rewrapping the exception and throwing that information away...
I think this only makes sense if the 3rd party is also throwing custom exceptions.
If you want to reduce coupling you should avoid throwing custom exceptions at all. Semantic information can go in the error message and log. The error type should be used to indicate to your program whether an error is recoverable, retriable, or requires some other action. For example, Google has only 16 canonical error codes for all its APIs.
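A sketch of classifying by what the caller should do rather than where the error came from (the code names echo gRPC's canonical status codes; ErrorKind and AppException are invented):

enum ErrorKind {
    INVALID_ARGUMENT(false),
    NOT_FOUND(false),
    UNAVAILABLE(true),       // transient: retry with backoff
    DEADLINE_EXCEEDED(true),
    INTERNAL(false);

    final boolean retriable;
    ErrorKind(boolean retriable) { this.retriable = retriable; }
}

class AppException extends RuntimeException {
    final ErrorKind kind;
    AppException(ErrorKind kind, String message, Throwable cause) {
        super(message, cause);
        this.kind = kind;
    }
}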
Addendum: write a useful exception message for future travelers that might need to review that stack.
What was null? Why might it be null? Mis-configuration? Missing configuration? Can't load some data or connect to some system? Which setting should be verified? Give simple hints on what might have gone wrong to help future you.
It amazes me that so many people don't understand that if everything depends on your business logic or your domain, then this is natural. It's natural hexagonal architecture.
I actually do not think your code should throw exceptions. Errors are really just an Either/Result, and if something does blow up it's because you haven't anticipated it by wrapping it in an Either or Result... so it should blow up, and the callers of your library should be submitting a defect.
This is a reasonable approach, but having a catch-all case that handles the exception and re-throws it wrapped in a custom exception type can be problematic when a new exception is added: the consumer of the API needs to go over their code and change the types everywhere, or else their catch code will no longer work as before.
This problem can easily go unnoticed, judging by my Haskell experience.
The solution would be to always use "checked exceptions" (or similar concept in your language, e.g. `ExceptT SomeEnumType` in Haskell) and never use catch-all cases (or wildcard patterns, in case of Haskell), so that every exception handling case is tagged with exception type, and type-checked.
This idea can also be explored in the Go programming language. Go has an error type, not exceptions, but error checking famously can be rather verbose.
Two cases to consider come to mind. First, the common pattern
if err != nil {
    return err
}
Here the code is just passing along the error to the caller, unchanged.
Second, signaling errors ab initio
result := compute() // some calculation or behavior
if result != expected {
    return fmt.Errorf("an error happened: expected %v, got %v", expected, result)
}
In the first case, the discussion around whether or not to throw custom exceptions applies analogously: should you wrap the error or not?
The second case, I argue, is always wrong. The error is "stringly typed": it can be examined and read by a person, but that's it. The correct way is to define an error type meaningful for the context. Errors in Go are types implementing the error interface
type error interface {
    Error() string
}
therefore, an error should be a type relevant for the context. A useful starting point looks something like
// some work
if result != expected {
    return DomainError{
        Status:  AnErrorCode,
        Message: fmt.Sprintf("error %v: expected %v, got %v", AnErrorCode, expected, result),
        Details: []DomainType{expected, result},
    }
}
And the caller gets back a type providing useful information.
The whole DomainError thing only makes sense if you expect someone to handle this error (and by handle, I mean doing something other than logging and aborting some operation), or if you are writing a very generic library. Otherwise, it is wasted time and extra complexity that makes the code harder to read.
It depends on how much context is needed. Imagine you have a Go library that parses JavaScript. Parsing can return an error, and it's very helpful to know information like the line and column number. So you might have an error like this:
type ParseError struct {
    Line    int
    Col     int
    Message string
}
The user of your library isn't expected to handle this error explicitly, but when they print the output, they can see exactly where the issue was.
Do you expect any piece of code to touch err.Line and err.Col? I definitely don't - I would expect the line and col to be included in the message of a basic string error.
Yes, that's true, just as it is with exceptions in the original article. However, if you are writing both the code and its callers, and you don't want to handle the error, why even have it? Just log the results there and move on.
> However, if you are writing both the code and its callers, and you don't want to handle the error, why even have it? Just log the results there and move on.
Well at very least you need the caller to know that the operation didn't succeed. And if the failure is deep in the call stack, you may not have enough information to log a meaningful error. Returning errors and wrapping them with a description as you pass them up the stack means you can have a single meaningful log message at the "top level" (wherever the buck stops).
IME, returning a "stringly-typed" error is only wrong if you don't provide enough context in the returned error message. You can describe what you were doing as an error message and pass it back up the stack, each layer adding their own context, ie what they were trying to do to what. At the top you (should) get a pretty complete picture of what went wrong and why. I have found this reasonable and proportionate for most scenarios.
That's all well and good if all you want to do is propagate the error up the call stack until some code prints it, but that's not really handling the error, any more than if err != nil { return err } is. If the calling code needs to behave differently depending on the details of the error, the worst way to do that would be to examine the message string. A robust system is going to respond very differently if, for example, the error is one that indicates retrying is worthwhile vs. an error indicating that no amount of retrying will ever succeed.
I agree with the first sentence. Your code should throw custom exceptions.
But it shouldn't wrap other exceptions if they are obvious. If library.readconfigfile(path) throws an IO exception while reading the file, just let it bubble up to the caller and don't bother handling it. Just make sure your internal state is clean (try..finally).
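A sketch of "let it bubble, but keep your own state clean" (ConfigHolder is an invented name; Files.readAllLines may throw the IOException we deliberately do not catch):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

class ConfigHolder {
    private boolean reloading = false;
    private List<String> lines = List.of();

    void reload(Path path) throws IOException {
        reloading = true;
        try {
            lines = Files.readAllLines(path); // may throw; let it bubble to the caller
        } finally {
            reloading = false;                // runs whether or not the read threw
        }
    }
}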
class Error(Exception):
    pass

class FooError(Error):
    ...
and then, in the generic case:
try:
    failing_external_function()
except Exception as e:
    raise Error(*e.args) from e
(And if the code encounters a foo situation, it explicitly raises FooError, possibly with extra parameters, etc.)
It is really annoying to have to catch all of, say, socket.error, SSL errors, FileNotFound, etc. etc. in every call to some function in a module, if you use that module extensively. It’s much easier to just catch module.Error and be done with it. If you need to handle foo errors specially, you catch FooError.
What happens if function from another module calls failing_external_function()? Will you wrap it in another_module.Error?
This sounds like viral boilerplate: any function that can fail forces every other function in the stack trace to be wrapped. It seems quite unpythonic; the whole point of exceptions is to avoid this boilerplate by letting them propagate.
Are you just writing boilerplate to assign exceptions to modules? But this information is already present in the stack trace, what is the point? It's too generic to actually handle exceptions, and too redundant to provide debugging value.
> What happens if function from another module calls failing_external_function()? Will you wrap it in another_module.Error?
Yes?
If you call a function in another module, generally you’ll always have to catch and handle errors specific to that module. If that module does not wrap its errors (like socket.error, FileNotFound, etc.), you’ll have to handle those, as well. And, like Parthenon points out, if that module ever changes its internal implementation, it will suddenly raise different errors, which your code does not catch. On the other hand, if the module wraps its errors, you’ll have a guarantee that you’ll be able to catch them.
I don’t know why you keep bringing up debugging; Python’s “raise from” keeps the original exception intact and available.
> Are you just writing boilerplate to assign exceptions to modules?
Wait, are you talking about monkey-patching? I’m not doing that. I’m talking about the case where you write your own module for something, and then use that module in another program (possibly itself a module).
Your code example essentially renames the regular wildcard Exception to a module-specific wildcard exception.
But here's the thing:
For direct callers of your module, regular wildcard and module wildcard are exactly the same. There's absolutely no difference for them whether they are catching module.Error or Exception, because your code is structured so that they mean exactly the same thing.
For indirect callers higher in the stack, your custom wildcard exception makes error handling harder, because instead of familiar exceptions they will see your custom one. Your wrapper makes it harder to access what happened, but easier to access where it happened. This is just a bad tradeoff. In some cases, for you, "where" might be more valuable than "what". But you are in no position to assume that this is how it's going to be for your customers.
Examples where this practice might be useful (e.g. explicitly shifting the blame to a 3rd party) are simply too rare to justify those wrappers. You could still achieve the same results by inspecting the stack, without impeding the ability of your customers to deal with known exceptions.
> For direct callers of your module, regular wildcard and module wildcard are exactly the same.
Well, no. I usually only wrap exceptions which I know to expect, and only wrap ‘Exception’ in code sections where I know I want to catch any exception, no matter what (which is rare, but happens). Unexpected exceptions (either of an unexpected type or in an unexpected place) will still propagate upwards. This allows the code calling my module to catch any reasonably expected exceptions (since I will wrap them in my module.Error or, really, a more specific exception class inheriting from module.Error), while still allowing unexpected exceptions to be shown.
I realize that I was unclear in my initial description; I do not catch ‘Exception’ all the time, but only occasionally. I most often list the exceptions which I know that the called function could raise in failure states which I know how to handle.
> Your wrapper makes it harder to access what happened
How? With “raise from”, Python shows you not only the stack trace of the error, but also the stack trace of the original exception, IIRC.
> How? With “raise from”, Python shows you not only the stack trace of the error, but also the stack trace of the original exception, IIRC.
Because you can't catch the original exception, you're stuck with weird module.Error which is too generic to do something about it. You would have to catch the module.Error and then look at e.__cause__ to actually handle the exception. So you end up with the exact same problem Parthenon is talking about, but with extra steps.
And pray no other dependencies are following your practice, because you would then have to go into e.__cause__.__cause__ and so on.
What exactly is the benefit that module.Error provides to your users compared to letting the original exception propagate?
> Because you can't catch the original exception, you're stuck with weird module.Error which is too generic to do something about it.
To be clear, I most often do not raise plain module.Error; I generally raise semantically informative errors like module.FooError or module.EntityNotFoundError, all of which inherit from module.Error.
> What exactly is the benefit that module.Error provides to your users compared to letting the original exception propagate?
I thought that I (and Parthenon) made that clear; it’s to protect the users of my module from having to know about my implementation details. If I switch from a socket-based approach to an HTTP-based system internally, I don’t want my users to have to know that and switch from catching socket.error to requests.Error or other_http_module.Error. Implementation details should not be a part of the API.
I’ve come to this view from, when investigating a runtime error, too often having to dig into the source code of some third-party module to investigate what kind of exception class I’ve gotten in my traceback, only to discover that it’s not an exception from that module, it’s an exception class from yet another module that the third-party module is itself using, and which I had no way of knowing could be raised. I then have to not only catch that exception in my code (which ties my code to implementation details of the third-party module, i.e. making my code brittle), but I also have to import the module containing the exception (in order to name the exception class in order to catch it), which increases my code’s direct dependencies.
There are some nice arguments against overengineering in the comments here, as well as some for there still being merit in explicitly saying what could (typically) go wrong in the execution of a program. We've already seen the extreme of "never create your custom exceptions", or even something to the tune of "exception handling is a maintenance burden", but I can't help wondering about going in the exact opposite direction - even further than the article suggests.
What if we had a language that only had checked exceptions and forced you to deal with anything that can go wrong. Launching a program? You better have some code to deal with an out of memory exception, or some code for dealing with a stack overflow exception. Working with some maths? Well, you better define what should happen in the case of number underflows or overflows, as well as division by zero or whatever else can be inferred by what operators you use. Dealing with a network? Well, be prepared to handle dozens of types of network failures, the brittleness of networks being laid bare to you. Want to deal with reflection? Well, there would probably be none, to avoid too much dynamic behavior.
It's rather obvious why we don't really have languages like that, but at the same time - if writing code in a language like that would be "possible", then surely there's a domain or two out there that might benefit not just from enforced 100% test coverage, but also every single possible error being handled, or at least laid bare. If a program should crash upon particular errors, then the developer might say so explicitly, or otherwise provide logic to recover from those, without ever missing any place where things could go wrong.
Contrast this made up language with your typical Java project: you might use the Spring framework and have a method that exposes a RESTful API that returns some JSON to the client. You'd be amazed at just how many different issues you can run into with even the simplest implementations, it's like a never ending path of discovering more ways for your programs to go wrong. If you can sometimes benefit from your IDE going "hey, this code might throw a NullPointerException", then how much additional assistance you'd benefit from (and how much error handling should be encouraged/enforced) is probably up for debate!
> What if we had a language that only had checked exceptions and forced you to deal with anything that can go wrong. Launching a program? You better have some code to deal with an out of memory exception, or some code for dealing with a stack overflow exception. Working with some maths? Well, you better define what should happen in the case of number underflows or overflows, as well as division by zero or whatever else can be inferred by what operators you use. Dealing with a network? Well, be prepared to handle dozens of types of network failures, the brittleness of networks being laid bare to you. Want to deal with reflection? Well, there would probably be none, to avoid too much dynamic behavior.
> It's rather obvious why we don't really have languages like that, but at the same time - if writing code in a language like that would be "possible", then surely there's a domain or two out there that might benefit not just from enforced 100% test coverage, but also every single possible error being handled, or at least laid bare. If a program should crash upon particular errors, then the developer might say so explicitly, or otherwise provide logic to recover from those, without ever missing any place where things could go wrong.
A lot of functional languages go in that direction. E.g. if you stick to the non-IO fragment of Haskell then it mostly works like that - things that can error return Either that you have to handle explicitly, and while there's sugar to let you work with that in a similar way to exceptions it will never be entirely hidden. (Within IO you can have exceptions, of course; making a proper algebraic model of how e.g. network I/O works is pretty daunting). Idris or especially Noether go even further in that direction.
Like others have said, in theory this is great. In reality, I never see custom exceptions being handled differently from whatever exception was wrapped. And I have worked on some large distributed systems where failure is common. For new engineers these custom exceptions add abstract complexity and an exception class hierarchy to a code base when it really isn't needed.
I disagree. I have seen plenty of code that is forced to match on the text of an exception because it uses a too-generic type.
That said, I still wouldn't use a custom exception type in most languages simply because it's so tedious for a small pay-off. It's one of those things that you should do, but isn't really worth the hassle. Like putting alt text on HTML images.
Do any languages with exceptions let you define new exception types at the throw site?
I don't see how an exception type defined at the throw site would be significantly different than using the text of the exception to convey the type. Callers wouldn't know to expect it either way, so how would they be able to handle it effectively?