I'd be interested in hearing about it. From a quick look, it seems like there's a focus on interactively making sense of unstructured data and then cleaning it up? And doing that quickly?
That part sort of overlaps with R, i.e. the "comprehensions" part, although R is pretty weak at parsing and dealing with strings in general. And it's pretty slow for unstructured data, although for structured data it's pretty good with data.table.
Well one way we deal with typical unstructured data is to prevent the unstructuring in the first place. The low-latency logging method (for example) uses something like a printf-style interface but stores all of the source data with exactly the types you intended to print -- you can always erase the structure by printing when you want to, but having the structure when you need it is very useful.
What "structure" means and how it works can have a lot of nuance. With hobbes we basically start with algebraic data types (which map to most C-style data structures and so can be shared without conversion with C/C++ code). It's been a while since I looked at R, but IIRC it's a lot like Scheme (e.g. maybe the data sharing/translation story is more complicated?).
We do have some things that are helpful for dealing with unstructured text data, like a built-in LALR parser generator and regex matching (integrated with general pattern matching), but it's not one of the main use-cases we've been focused on.
Yeah, I read a little more of the site after commenting. At first I thought it was about analytics (hence thinking of R), but it's also about embedding in an application to take action (make trades).
It definitely sounds like an interesting language!