Just because it's useful to wrap exceptions in 5% of the cases doesn't mean you should wrap the remaining 95% just in case. YAGNI.
1. You don't know which exceptions will be raised in advance. Anything that involves IO can fail in a plethora of ways, and you don't even know which calls involve IO (e.g. a library might choose to cache something on disk).
2. Consumers of your code will not know how to deal with those exceptions.
3. Most exceptions are unrecoverable (that's why they're called exceptions); the best course of action is to crash, which happens by default.
4. You debug those exceptions by looking at the stack trace. Adding extra levels just to give a fancy meaningless name to an exception does not help.
5. The whole point of exceptions is to propagate. Parthenon essentially suggests converting exceptions into return values.
I think there are two kinds of exceptions in languages that have them as their error handling mechanism.
1. Exceptions you expect your consumers to handle.
2. Exceptions you don't expect your consumers to handle.
For the first kind, I would argue you should wrap third-party exceptions. In nearly every case there is important context in your code that the thrower of the third-party exception will not know, and whoever is reading or handling the exception will want to know it.
For the second kind, do whatever the equivalent of crashing is for your use case: either exit the program hard, or bubble up to some top-level handler where a "something weird and unexpected happened and we can't continue whatever action was going on" alert or log gets recorded and the activity is terminated.
If something throws an exception and you didn't know it could, it probably belongs in category 2. You may transition it to category 1 over time when you figure out that it is actually handleable.
In my experience though if you aren't disciplined then every exception ends up being lumped into category 2 whether it should be or not. Any language that helps you force the categorization gets bonus points from me.
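The two categories above can be sketched in Java. The names here (ConfigStore, ConfigLoadException) are hypothetical, not from any real library:

```java
// Category 1: wrap a third-party checked exception, adding context the
// original thrower could not know. Category 2 (e.g. an Error) is simply
// not caught and bubbles up to a top-level handler or crashes the process.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class ConfigLoadException extends Exception {
    ConfigLoadException(String message, Throwable cause) {
        super(message, cause);
    }
}

class ConfigStore {
    static String load(Path path) throws ConfigLoadException {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // Callers are expected to handle this, so wrap it with the
            // context (which file, why we were reading it) that the IO
            // layer cannot know. The cause is preserved for the trace.
            throw new ConfigLoadException("could not load config from " + path, e);
        }
    }
}
```

The key detail is passing the original exception as the cause, so wrapping never discards the underlying stack trace.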
I'm reminded of an old Eric Lippert post about this. He basically says the same as you: boneheaded and fatal exceptions are not meant to be caught, and vexing and exogenous exceptions are ordinary flow control.
This is one of the problems with checked exceptions. The library author is in no position to expect me to handle an exception. Whether or not I can depends on the design of my system, which they have no window into.
This is the problem with Java's implementation of checked exceptions: they are not generic, which forces middle layers to make impossible decisions like this.
If the type system were more capable, they'd just be equivalent to typed generic error returns, which work fine.
This forces client code to handle arbitrary exceptions and it loses the information about which specific exceptions are actually thrown. This is like having Object as the return type on all functions, and having callers do instanceof to see what type of result they got. Exceptions should be part of the semantic contract of functions/methods just like return types are.
Exceptions are a return type of functions. They return via a different path but there is no possible way you could disagree that they are a form of return.
As they don't return to the same place, don't return to a single place, and may (if not caught) not actually return but instead take down the entire thread, I think it's reasonable that for some purposes it might well make sense to treat exceptions as something other than "a form of return." ¯\_(ツ)_/¯
However we treat them, though, I agree that they are part of the interface and should be documented (and, where relevant, checked) as such.
The point is that the type system doesn't properly reflect that. You should be able to declare a generic method that takes some T, and throws everything that T.foo() throws, for example.
And 99.9% of the time the client is just going to catch generic Exception and doesn't care what the type is. Rollback a transaction, return 500 or display an error dialog. Done. It doesn't matter what the client library thinks.
If it's an unchecked exception experience says I have a 60-70 percent chance of not knowing it's there without a careful reading of the code. Which means I can't know myself if it's possible to handle it or not. I can always rethrow an exception if I know it exists. Languages that give me a way to ensure I know about it means I avoid unnecessary pain later in production with someone breathing down my neck to please fix it now. I'll take that any day over a little boilerplate.
That is true, so the library should throw the checked exception, and if the caller has no way to handle it, it should wrap the checked exception in an unchecked exception and throw the unchecked exception. Not too hard, and library clients that can handle some checked exceptions will be able to.
I hate libraries that only throw unchecked exceptions. It seems easier initially, but makes writing correct code more difficult.
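A minimal sketch of that pattern, with illustrative names: the library declares the checked exception as part of its contract, and a caller with no recovery strategy converts it to unchecked rather than swallowing it.

```java
import java.io.IOException;

class Loader {
    // Library side: the checked exception is part of the contract, so
    // callers that CAN handle an IO failure get a compile-time reminder.
    static byte[] fetch(String key) throws IOException {
        if (key.isEmpty()) {
            throw new IOException("empty key");
        }
        return key.getBytes();
    }
}

class Startup {
    // Caller side: at startup there is nothing sensible to do about an
    // IO failure, so wrap it in an unchecked exception (keeping the
    // cause) and let it propagate to the top-level handler.
    static byte[] mustFetch(String key) {
        try {
            return Loader.fetch(key);
        } catch (IOException e) {
            throw new IllegalStateException("required resource missing: " + key, e);
        }
    }
}
```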
> If the caller has no way to handle it, it should wrap the checked exception in an unchecked exception and throw the unchecked exception.
Now you have a new problem: how is the code calling the caller supposed to know about that exception? You can't even catch it (even if you know about it!) using normal try-catch, because what you have to do is catch the wrapper exception and then check inside it for the exception type you expect via instanceof.
Checked exceptions (at least as implemented in Java) are horrible for the composability of code wrt. error handling.
That is why most libraries eschew checked exceptions these days. Unfortunately, large bits of the standard library in Java forces their hand wrt. re-wrapping stuff like InterruptedException and IOException and the like.
(Not to mention, most of the time you really shouldn't be catching exceptions in very small scopes or at the very highest level in your code. Involving every single layer in between is madness.)
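A minimal sketch of that composability loss, with hypothetical names, assuming the common RuntimeException re-wrap idiom: once the checked exception has been wrapped, ordinary try-catch can no longer select on the original type.

```java
import java.io.IOException;

class WrapDemo {
    static void doWork() {
        try {
            throw new IOException("disk full"); // stands in for real IO
        } catch (IOException e) {
            throw new RuntimeException(e); // the common re-wrap idiom
        }
    }

    static String handle() {
        try {
            doWork();
            return "ok";
        } catch (RuntimeException e) {
            // `catch (IOException e)` is no longer possible here; the
            // caller must catch the wrapper and inspect the cause.
            if (e.getCause() instanceof IOException) {
                return "io-failure: " + e.getCause().getMessage();
            }
            throw e; // not ours; keep propagating
        }
    }
}
```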
The error here is not that they are wrapping the exception necessarily. It is that they re-threw it as an unchecked exception. I strongly believe the only good use of unchecked exceptions is if the code should do the closest reasonable thing to crash safely. Everything should be clearly communicated to the callers so they can make good decisions about what to handle here and what to pass up the chain.
If the complaint is that you then have too many different unchecked exceptions perhaps the error domain has been improperly modeled and you are getting a clue that the system is poorly designed.
If you have a Function that needs to do some interruptible work, you cannot throw InterruptedException -- you must wrap it. This is a fundamental design flaw in Java's exception system and cannot be handwaved away just by saying that Function is badly designed. This problem is pervasive.
The ultimate problem here is one of variance -- throws clauses have the opposite variance rules from method implementations: subclasses (whether of interfaces or classes) frequently need to do more than could be foreseen by the implementor of the interface, so they need to be able to throw "more things", but checked exception clauses explicitly disallow widening the set of thrown exceptions in subclasses (for obvious reasons -- since a FooImpl can be used at runtime where a Foo is expected).
This is a fundamental flaw that was overlooked in the checked exceptions design and there's no fixing it now.
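The Function case above can be shown concretely: java.util.function.Function.apply declares no checked exceptions, so a lambda calling an interruptible API is forced into the wrap-and-rethrow dance (blockingLookup is a made-up stand-in for real interruptible work).

```java
import java.util.function.Function;

class InterruptibleDemo {
    static String blockingLookup(String key) throws InterruptedException {
        Thread.sleep(1); // stands in for real interruptible work
        return key.toUpperCase();
    }

    // `key -> blockingLookup(key)` does not compile: apply() throws nothing
    // checked, so the checked InterruptedException must be wrapped.
    static final Function<String, String> LOOKUP = key -> {
        try {
            return blockingLookup(key);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            throw new RuntimeException(e);      // forced re-wrap
        }
    };
}
```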
"Exceptions you expect the consumer to handle" should be explicit return values on the function signature instead, via an Either/Result type or similar.
Unless you're writing Java where you can force certain exception types to be handled, but people never write that code properly.
Very few languages actually support Either or Result types ergonomically. I don't have a problem with people using exceptions for these, but I do think that if the language has compile-time type checking, it should provide a distinction between checked and unchecked exceptions. I am very much in the minority here, but I find it very useful to let robots tell me I forgot something rather than to discover I forgot it in production. I think this is the useful sort of lazy rather than the not-useful sort. Not-useful laziness is letting your customers discover the problem.
Yeah I agree. I think the problem with checked exceptions in Java is a combination of misuse on both the consuming and the throwing end. They're amazing when they provide a compile time checked way to make sure consumers handle stuff they actually definitely want to handle, but if you add too much noise then all the consuming code is just going to catch and rethrow even the important ones.
> 1. You don't know which exceptions will be raised in advance. Anything that involves IO can fail in a plethora of ways, and you don't even know which calls involve IO (e.g. a library might choose to cache something on disk).
Not sure how this relates to the code design decision.
> 2. Consumers of your code will not know how to deal with those exceptions.
Not sure what the point is here, but if the argument is that the caller won't know how to deal with the third-party exception, I disagree. Typically the code just needs to throw a single exception type that wraps any underlying cause. The caller can then decide to deal with it or rethrow.
> 3. Most exceptions are unrecoverable (that's why they're called exceptions); the best course of action is to crash, which happens by default.
If the database is unresponsive, do you want to crash your program? I wouldn't. I'll usually retry until it's available.
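The "retry instead of crash" approach can be sketched as bounded retries with backoff around a flaky call. The helper and names here are illustrative; a real system would likely use a library with jitter and circuit breaking.

```java
import java.util.function.Supplier;

class Retry {
    // Retry a flaky operation up to maxAttempts times, sleeping a little
    // longer after each failure, and only give up (rethrow) at the end.
    static <T> T withRetries(Supplier<T> op, int maxAttempts, long backoffMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;
                try {
                    Thread.sleep(backoffMillis * attempt); // linear backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException("interrupted during retry", ie);
                }
            }
        }
        throw last; // exhausted all attempts
    }
}
```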
> 4. You debug those exceptions by looking at the stack trace. Adding extra levels just to give a fancy meaningless name to an exception does not help.
I really don't think an extra frame in the stack trace is a reason not to create a well-defined contract.
> 5. The whole point of exceptions is to propagate. Parthenon essentially suggests converting exceptions into return values.
I didn't see anywhere in the doc where they mentioned converting exceptions into return values. Wrapped exceptions are still exceptions.
> I really don't think an extra frame in the stack trace is a reason not to create a well-defined contract.
It absolutely is IMO. Stack traces are the main reason to use exceptions at all, and having multiple layers of useless wrapping around them is one of the biggest frustrations when trying to understand and debug an issue.
If the database is unresponsive, I would prefer to return a 5xx to the caller as soon as possible. Maybe this service is not so important, and it's better to present the user an incomplete page than to wait for hours or days until the database is available.
If it is important, the caller will retry the service until it gets a response.
Again: fail fast. Even in distributed systems. Unless you're 100% sure that you know better how to handle this particular issue.
6. It is much faster and easier to develop an application's "happy path" while completely ignoring failures. Failure handling is a software maintenance issue. YAGNI.
As a counterpoint to that - the chances that a newly developed feature will work perfectly anywhere other than your own development environment are often pretty low. And when it almost inevitably fails, not having any error-handling logic will use up huge amounts of your time trying to track down the reason for failure.
For a first cut of any new feature, I'd want at a minimum to have some form of detailed error-reporting, even if it's presenting a stack trace to the user. Ideally it's handled by whatever framework you're working in, which typically means all you need to do is not swallow exceptions, but if you do have to write your own error-handling code, don't throw away any details about the error (sometimes the best you can do is just log the full exception details, and personally I'll always ensure that the names/URIs of any resources involved are included in those details).
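A small sketch of "don't throw away details," using java.util.logging; the names are illustrative. Passing the exception object itself (not just its message) is what preserves the full stack trace, and the resource name goes in the message.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class ReportDemo {
    private static final Logger LOG = Logger.getLogger("app");

    static void process(String uri) {
        try {
            throw new IllegalArgumentException("bad payload"); // simulated failure
        } catch (IllegalArgumentException e) {
            // Log the exception object so the full stack trace survives,
            // and include the resource URI involved in the message.
            LOG.log(Level.SEVERE, "failed processing " + uri, e);
            throw e; // don't swallow: the framework's handler should see it too
        }
    }
}
```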
> You don't know which exceptions will be raised in advance.
> Most exceptions are unrecoverable (that's why they're called exceptions); the best course of action is to crash, which happens by default.
I prefer to propagate such unknown exceptions to a top-level catch to cleanly log that something happened.
I usually have two types of exceptions: the ones I expect at some point (an HTTP call failing for some reason), which I may ("when I have time") group into "known exceptions we should not worry about" and log as "informational", and exceptions I did not anticipate, which I want to log as well, but with a critical level because they were, well, unexpected.
So crashing right when they happen may not be the best strategy.
I really wish English had different individual words for the concepts "known [problem] we should not worry about" and "[problems] I did not anticipate".
Everything goes to ERROR by default, some errors get downgraded to WARNING when you confirm they are not important.
Send ERRORs to Sentry (or whatever you use) and deal with them immediately. Send WARNINGs to your favorite centralized logging solution and deal with them when there's too many.
> Most exceptions are unrecoverable (that's why they're called exceptions); the best course of action is to crash, which happens by default.
This is why D separates exceptions into two categories:
1. Error - these are not recoverable. The only reason to catch them is to maybe try to save some state or log a message or shut down the reactor before crashing.
2. Exception - these are recoverable
(I'm being facetious. Any system design where, while unwinding a fatal error, one relies on it to shut down the reactor is a horrible, terrible design.)
> Just because it's useful to wrap exceptions in 5% of the cases doesn't mean you should wrap the rest 95% just in case. YAGNI.
I'm going to curse here, but ... fucking seriously.
If I get a .NET ADO exception coming out of a 3rd party library it's absolutely not going to shock me or throw me for a loop. But do you know what IS a pain in the ass? Using 3 different libraries, all of which wrap that same ADO exception in their own custom exception.
Haha, that's actually a good point. A single problem can affect dozens of modules, each having a different wrapper.
If my db goes down, I very much prefer to see a single DatabaseConnectionFailed than a multitude of FailedToSaveObject, DatabaseError, CannotLoadData, IOError, SomethingIsWrongThisShouldNeverHappen, DBHostUnavailable all over the place. Good luck navigating through the noise and isolating those.
The article talks about solving that exact issue by wrapping all of those different exceptions so that you don't have to deal with them. Much easier to catch a single ThirdPartyInternalFailure exception than catch all the internal exceptions that could have caused an internal failure.
When libraries 1 through 10 take this approach to wrapping a common exception, you get the above behavior. Obviously a single library should have a coherent internal story.
"Just because it's useful to wrap exceptions in 5% of the cases doesn't mean you should wrap the rest 95% just in case"
Yes you should wrap ALL third party exceptions for the reasons given in the post. There is no 'just in case' reason. Any third party exception returned may cause your client to be dependent on it.