
Many C programmers need a proper generic-programming mechanism in C (perhaps something like Zig's comptime), but macros are the worst possible approach and they don't want to switch to a different language like C++, so they struggle with these issues. This is what I think the standardization committee should focus on; instead, they introduced _Generic.


The biggest issue is the ABI for C - it's the lingua franca of language interoperability and can't really be changed - so whatever approach is taken needs to be fully compatible with the existing ABI. `_Generic` is certainly flawed but doesn't cause any breaking ABI changes.

That's also a major reason why you'd use C rather than C++. The C++ ABI is terrible for language interoperability. It's common for C++ libraries to wrap their API in C so that it can be used from other languages' FFIs.

Aside from that, another reason we prefer C to C++ is that we don't want vtables. I think there's room for a `C+` language, by which I mean C+templates and not C+classes - perhaps with an ABI which is a subset of the C++ ABI but a superset of the C ABI.


> we don't want vtables

Then don't use virtual functions. Then there will be no vtables.

You might have known that already, but in general I'm surprised how many engineers think that all C++ classes have vtables. No, most in fact do not. C++ classes generally have the same memory layout as a C struct as long as you don't use virtual functions.


> I think there's room for a `C+` language, by which I mean C+templates and not C+classes - perhaps with an ABI which is a subset of the C++ ABI but superset of the C ABI.

indeed, i have spoken to a lot of my colleagues about just that. if overloading is not allowed, perhaps there is still some hope for a backwards compatible abi ?


I don't think we can get away with just using the C ABI - or even if we did, we would need a standardized name-mangling scheme, and then any language which consumes the ABI would need to be aware of this name-mangling scheme, so it would effectively be a new ABI.

We might be able to make this ABI compatible with C if no templates are used, which wouldn't cause breaking changes - but for other compilers to be able to use templates they would need to opt-in to the new scheme. For that we'd probably want to augment libffi to include completely new functions for dealing with templates. Eg, we'd have an ffi_template_type, and an ffi_prep_template for which we supply its type arguments - then an ffi_prep_templated_cif for calls which use templates, and so forth. It would basically be a new API - but probably still more practical than trying to support the C++ ABI.

Another issue is that if we compile some library with templates and expose them in the ABI, we need some way to instantiate the template with new types which were not present when the library was compiled. There's no trivial solution to this. We'd really need to JIT-compile the templates.


> ... we would need a standardized name-mangling scheme, ...

may you please elaborate on _why_ you think this is needed ?


If the templates are monomorphized, each instantiation of a templated function will have a different address. To acquire the address of any given instantiation we need a symbol in the object file.


What isn't clear to me is why one would ever want monomorphization in the first place.


Not ever wanting monomorphization seems like a bit of a strong claim to me. Why do you take that position?


I was asking the question why one would ever want it.


Right. I had interpreted you asking that question as you having taken that position and soliciting responses for a discussion. Seems that was an improper reading.


exactly !


How can you have templates without name mangling and overloads?


This is true. I agree with this statement. It's the holy cow of C. However, the problem with generic programming and metaprogramming isn't going away, and many people continue to struggle with it. Introducing something like compile-time reflection might be a solution...


They showed something they think is neat. You start the topic with the assumption that they struggle; I'm not sure how you got that from the original post, or whether you just wanted to state that claim anyway.


The most insulting thing about _Generic is the name. Really? _Generic? For a type-based switch with horrific syntax? What were they thinking...

That said, generic programming in C isn't that bad, just very annoying.

To me the best approach is to write the code for a concrete type (like Vec_int), make sure everything is working, and then do the following:

A macro Vec(T) sets up the struct. It can then be wrapped in a typedef like typedef Vec(int) Vec_i;

For each function, like vec_append(...), copy the body into a macro VEC_APPEND(...).

Then for each relevant type T: copy paste all the function declarations, then do a manual find/replace to give them some suffix and fill in the body with a call to the macro (to avoid any issues with expressions being executed multiple times in a macro body).

Is it annoying? Definitely. Is it unmanageable? Not really. Some people don't even bother with this last bit and just use the macros to inline the code everywhere.

Some macros can delegate to void*-based helpers to minimize the bloating.
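A condensed sketch of the whole pattern (all names invented for illustration, error handling omitted):

```c
#include <stdlib.h>

/* The struct is generated by a macro, then wrapped in a typedef. */
#define Vec(T) struct { size_t len, cap; T *data; }
typedef Vec(int) Vec_i;

/* The function body lives in a macro... */
#define VEC_APPEND(v, x) do {                                    \
        if ((v)->len == (v)->cap) {                              \
            (v)->cap = (v)->cap ? (v)->cap * 2 : 8;              \
            (v)->data = realloc((v)->data,                       \
                                (v)->cap * sizeof *(v)->data);   \
        }                                                        \
        (v)->data[(v)->len++] = (x);                             \
    } while (0)

/* ...and the per-type wrapper evaluates its arguments exactly once
   before the macro sees them, then delegates. */
static void vec_i_append(Vec_i *v, int x) { VEC_APPEND(v, x); }
```

Repeating the typedef and one-line wrappers for each `T` gives `Vec_f`, `Vec_d`, and so on, while call sites read like ordinary C.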

EDIT: I almost dread to suggest this but CMake's configure_file command works great to implement generic files...


There are less annoying ways to implement this in C. There are at least two different common approaches which avoid having macro code for the generic functions:

The first is to put this into an include file

  #define type_argument int
  #include <vector.h>
Then inside vector.h the code looks like regular C code, except where you insert the argument.

  foo_ ## type_argument ( ... )
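Fleshed out a little (the file layout and the CONCAT helper are my own): note that pasting `foo_ ## type_argument` directly would produce the literal token `foo_type_argument`, so an extra expansion layer is needed. Shown as one file here for self-containment; in practice the middle section lives in its own header, included once per type.

```c
#include <stddef.h>

/* Two-level paste so that macro arguments expand before ## applies. */
#define CONCAT_(a, b) a##b
#define CONCAT(a, b)  CONCAT_(a, b)
#define NAME(prefix)  CONCAT(CONCAT(prefix, _), type_argument)

#define type_argument int
/* ---- this part would be vector.h, included once per type ---- */
typedef struct {
    size_t len;
    type_argument *data;
} NAME(vec);                          /* expands to: vec_int */

static type_argument NAME(vec_get)(NAME(vec) *v, size_t i) {
    return v->data[i];                /* this is vec_get_int */
}
/* ---- end of vector.h ---- */
#undef type_argument
```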
The other is to write generic code using void pointers or container_of as regular functions, and only have one-line macros as type-safe wrappers around it. The optimizer will be able to specialize it, and it avoids the compile-time explosion of code during monomorphization.
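A rough sketch of that second approach (the helper and macro names are mine): the generic body is an ordinary function taking void pointers, and a one-line macro gives it a type-checked front end.

```c
#include <string.h>

/* Generic body as a regular function: copies bytes through void*. */
static void swap_impl(void *a, void *b, size_t size) {
    unsigned char tmp[sizeof(long double)]; /* enough for this sketch */
    memcpy(tmp, a, size);
    memcpy(a, b, size);
    memcpy(b, tmp, size);
}

/* One-line type-safe wrapper: (1 ? (a) : (b)) makes the compiler check
   that a and b are compatible pointer types, and sizeof picks the right
   width.  The optimizer can still specialize each call site. */
#define SWAP(a, b) swap_impl((a), (b), sizeof *(1 ? (a) : (b)))
```

Usage is just `SWAP(&x, &y)`; passing an `int *` and a `double *` fails to compile instead of silently corrupting memory.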

I do not think that templates are less annoying in practice. My experience with templates is rather poor.


An idea I had was to implement a FUSE filesystem for includes, so instead of the separate `#define type_argument` (and `#undef type_argument` that would need to follow the #include), we could stick the type argument in the included filename.

   #include <vector.h(int32_t)>
   #include <vector.h(int64_t)>
The written `vector.h(type_argument)` file could just be a regular C header or an m4 file which has `type_argument` in its template. When requesting `vector.h(int32_t)` the FUSE filesystem would effectively give the output of calling `gcc -E` or `m4` on the template file as the content of the file being requested.

Eg, if `vector.h(type_argument)` was an m4 file containing:

    `#ifndef VECTOR_'type_argument`_INCLUDED'
    `#define VECTOR_'type_argument`_INCLUDED'

    typedef struct `vector_'type_argument {
        size_t length;
        type_argument values[];
    } `vector_'type_argument;
 
    ...
    #endif
Then `m4 -D type_argument=int32_t vector.h(type_argument)` gives the output:

    #ifndef VECTOR_int32_t_INCLUDED
    #define VECTOR_int32_t_INCLUDED
    
    typedef struct vector_int32_t {
        size_t length;
        int32_t values[];
    } vector_int32_t;
    
    ...
    #endif
But the idea is to make it transparent so that existing tools just see the pre-processed file and don't need to call `m4` manually.

We would need to mount each include directory that uses this approach using said filesystem. This shouldn't require changing a project's structure as we could use the existing `include/` or `src/` directory as input when mounting, and just pick some new directory name such as `cfuse/include` or `cfuse/src`, and mount a new directory `cfuse` in the project's root directory. The change we'd need to make is in any Makefiles or other parts of the build, where instead of `gcc -Iinclude` we'd have `gcc -Icfuse/include`. Any non-templated headers in `include/` would just appear as live copies in cfuse/include/, so in theory this could work without causing anything to break.


That's the craziest idea on this topic I've seen so far! I'm not sure that's a good or a bad thing, but it sure is a thing!


Those techniques being less annoying is highly debatable ;). Working with void* is annoying, header includes look quite ugly with the ## concatenation everywhere or even a wrapper macro. It also gets much worse when you need to customize the suffix (because type_argument is char* or whatever).

Sometimes the best option is an external script to instantiate a template file.


It may be debatable, but I would say C++'s template syntax is not nicer. I do not think working with void pointers is annoying, but I also prefer the container_of approach. The ## certainly has the limitation that you need to name things first, but I do not think this is much of a downside.

BTW, here is some generic code in C using a variadic type. I think this is quite nice. https://godbolt.org/z/jxz6Y6f9x

Running a program for metaprogramming is always a possibility, and I would agree that it is sometimes the best solution.


I don't think

    T ## _foo (T foo, ...)
is that much different from

    <T>::foo (T foo, ...)
Same for:

    foo (Object * a)
vs:

    foo (void * a)


Hey, I understand you and know this stuff well, having worked with it for many years as a C dev. To be honest, this isn't how things should generally be done. Macros were invented for very simple problems. Yes, we can abuse them as much as possible (for example, in C++ we discovered SFINAE, an ugly, unreadable technique that wasn't part of the language designers' intent but rather a joke that people started abusing), but is it worth it?


The name has to be ugly, new names in C are always taken from the set of reserved identifiers: those starting with an underscore & a capital letter, or with two underscores. Since they didn't reserve any "normal" names, all new keywords will be stuff like `_Keyword` or `__keyword`, unless they break backwards compatibility. And they really hate breaking backwards compatibility, so that's quite unlikely.


The problem is not the _G, the problem is the "generic". It is a completely wrong name for what it does.


This is an old tradition in ISO C. unsigned actually means modulo and const actually means immutable.


username checks out


I don't struggle, I switch from C++ to C and find this much nicer.


I'm currently at a crossroads: C++ or Zig. One is very popular with a large community, amazing projects, but has lots of ugly design decisions and myriad rules you must know (this is a big pain, it seems like even Stroustrup can't handle all of them). The other is very close to what I want from C, but it's not stable and not popular.


Why not C?

Its only real issue is that people will constantly tell you how bad it is and how their language of choice is so much better. But if you look at how things work out in practice, you can usually do things very nicely in C.


My choice in this situation is indeed C, but every once in a while I hit a problem that makes me yearn for better metaprogramming.

Perfect hashing that you’d ideally use two different approaches for depending on whether the platform has a cheap popcount (hi AArch32), but to avoid complicating the build you give up and emulate popcount instead. Hundreds of thousands of lines of asynchronous I/O code written in a manual continuation-passing style, with random, occasionally problematic blocking synchronization sprinkled all over because the programmer simply could not be bothered anymore to untangle this nested loop, and with a dynamic allocation for each async frame because that’s the path of least resistance. The intense awkwardness of the state-machine / regular-expression code generators, well-developed as they are. Hoping the compiler will merge the `int` and `long` code paths when their machine representations are identical, but not seeing it happen because functions must have unique addresses. Resorting to .init_array—and slowing down startup—because the linker is too rigid to compute this one known-constant value. And yes, polymorphic datastructures.

I don’t really see anybody do noticeably better than C; I think only Zig and Odin (perhaps also Hare and Virgil?) are even competing in the same category. But I can’t help feeling that things could be much better. Then I look at the graveyard of attempted extensions both special-purpose (CPC[1]) and general (Xoc[2]) and despair.

[1] https://github.com/kerneis/cpc

[2] https://pdos.csail.mit.edu/archive/xoc/


It would be interesting to understand better where language features are actually needed or helpful, and where the code should be organized differently. I also observe that often the cure is worse than the disease.

Many examples I see where people argue for metaprogramming features are not at all convincing to me. For example, there was recently a discussion about Zig comptime. https://news.ycombinator.com/item?id=44208060 This is the Zig example: https://godbolt.org/z/1dacacfzc Here is the C code: https://godbolt.org/z/Wxo4vaohb

Or there was a recent example where someone wanted to give an example for C++ coroutines and showed pre-order tree traversal (which I can't find at the moment), but the C code using vec(node) IMHO was better: https://godbolt.org/z/sjbT453dM compared to the C++ coroutine version: https://godbolt.org/z/fnGzszf3j (from https://news.ycombinator.com/item?id=43831628 here). Edited to add source.


Macros are the best possible approach, compared to C++ templates or _Generic.



