Handmade Seattle

November 16 - 18. In person and online.
Catch up

We are a community of programmers producing quality software through deeper understanding.

Originally inspired by Casey Muratori's Handmade Hero, we have grown into a thriving community focused on building truly high-quality software. We're not low-level in the typical sense. Instead we realize that to write great software, you need to understand things on a deeper level.

Modern software is a mess. The status quo needs to change. But we're optimistic that we can change it.

Around the Network

This post is mirrored on my blog.

I have been hard at work on my dynamic environment based on Tiny C Compiler. It has been going worse than I hoped. I am attempting to statically link SDL 2 with an application I want to test the dynamic environment on. I am soldiering on, and I hope to have something useful before the end of the month. If I can't get there by then, I may need to look for other options. It has been an uphill battle, mainly because these tools were never intended to be used this way, but they provide enough value that rewriting them from scratch doesn't feel like a viable option for me.

Cakelisp itself has some new features worth mentioning.

File and line macros

I added (this-file) and (this-line) macros to Cakelisp's runtime macro library. These macros replace their invocation with the filename as a string or the Cakelisp line number, respectively.

The most obvious use-case for these is debugging, e.g. outputting:

MyFile.cake:1234: Hello

via something like:

(fprintf stderr "%s:%d: Hello\n" (this-file) (this-line))

The more interesting use-case is for code navigation straight from the program. I am writing a 2D vector animation program that uses the immediate-mode GUI paradigm for its UI. A button can be drawn to the screen and its click responded to in the following code:

(when (do-button renderer
        (addr (path regular-font > atlas))
        (path regular-font > texture)
        (+ (path state > ui-pane-position . X) 5) button-y 250 75
        "Toggle atlas")
  (set (path state > view-atlas) (not (path state > view-atlas))))

do-button will render the button and return true if the button was clicked.

I wanted to go straight from visible buttons on screen to the code that caused that button to exist. I added file and line arguments to do-button, then created a macro to automatically populate those fields whenever I called do-button. The do-button function then calls handle-ui-meta-inquiry, which opens Emacs for me at the file and line:

;; The UI element is trying to describe itself somehow. Do something helpful for the programmer
(defun handle-ui-meta-inquiry (element-name (addr (const char))
                               filename (addr (const char)) line int)
  (fprintf stderr "%s:%d: %s\n" filename line element-name)
  (var goto-line (array 32 char) (array 0))
  (snprintf goto-line (sizeof goto-line) "+%d" line)
  (runtime-run-process-sequential-or
      ("emacsclient" goto-line filename "-n")
    (return)))

This handle-ui-meta-inquiry would of course need to be fleshed out if I plan to ship it to end-users. Even this simple version enables a great quality-of-life improvement for me while I'm iterating on an interface. Now, if I want to make a change, I press a key while hovering over the button and I'm taken straight to the relevant code.

Note that I implemented the "meta inquiry" feature as an immediate-mode function as well. I did not need to set up fancy metadata or describe my whole UI up-front. I could still extend the meta inquiry to render all the possible things I could inspect by building a list each frame, instead of making each UI element decide whether it has been inquired about. These sorts of features are fantastic when making complex UIs.

This was made possible by the humble this-file and this-line macros!

RunProcess is now C-compatible

Cakelisp uses sub-processes to compile and link comptime and runtime code. I wrote several macros to provide an easy interface for creating such processes from user Cakelisp code as well. One such macro is runtime-run-process-sequential-or, which I used in the previous section to open Emacs.

The macro has a simple form:

(runtime-run-process-sequential-or (command) on-failure)

The command uses a strict form where each argument is wrapped in quotes. This leaves no ambiguity for arguments with spaces, and allows you to freely intermix string variables with string literals in commands.

Here's a simple example:

(runtime-run-process-sequential-or
    ("gcc" "-c" filename "-o" output
           :in-directory "my/build/dir")
  (fprintf stderr "Failed to compile %s\n" filename))

The sequential part of the name indicates that the current function will wait there until the process exits, letting you easily run processes that depend on the previous one. This is used frequently in GameLib to build third-party code during Cakelisp's compile-time phase.

There are start variants that start the process and continue execution immediately after for cases where you want to run processes in parallel instead.

This interface has been in place for a while and has proven its value. The recent change was to port the RunProcess file from C++ to C, which makes it compatible with more compilers (like Tiny C Compiler).

Needless to say, if you want to do more complex things like redirect pipes and so on, you'll need to use a different interface to run your sub-processes. However, in my use-cases these macros are more than enough.

Macoy Madson
Christoffer Lernö

Macros and compile time evaluation are popular ways to extend a language. While macros had fallen out of favour by the time Java was created, they've returned to the mainstream in Nim and Rust. Zig has compile time execution, and JAI has both compile time execution and macros.

At one point I assumed that the more power macros and compile time execution provided, the better. I'll try to break down why I don't think so anymore.

Code with meta programming is hard to read

Macros and compile time execution form a set of meta programming tools, and in general meta programming has very strong downsides in terms of maintaining and refactoring code. To understand code that uses meta programming, you first have to resolve the meta program in your head; only then can you think about the runtime code. This is far harder than reading normal code.

Bye bye, refactoring tools

It's not just you as a programmer who needs to resolve the meta programming – any refactoring tool needs to do the same in order to refactor safely, even for simple changes like renaming a variable.

And if a name is created by some meta code, the refactoring tool would essentially need to rewrite your meta program to keep it correct, which is unreasonably complex. This is why everything from preprocessor macros to reflection code simply won't refactor correctly with tools.

Making it worse: arbitrary type creation

Some languages allow arbitrary types to be created at compile time. Now the IDE can't even know what the types look like unless it runs the meta code. If the meta code is arbitrarily complex, the IDE must be too in order to "understand" the code. While meta programming evaluation might be nicely ordered when running the compiler, a responsive IDE will try to compile source files incrementally. This means the IDE will need to compile more code just to get the ordering right.

Code and meta code living together

Many languages try to make code and meta code look very similar. This leads to a lot of potential confusion. Is a variable like "a" a compile time variable (which may change during compilation, so any expression containing it might be compile time resolved), or is it a runtime variable?

Here's some code, how easy is it to identify the meta code?

fn performFn(comptime prefix_char: u8, start_value: i32) i32  {
    var result: i32 = start_value;
    comptime var i = 0;
    inline while (i < cmd_fns.len) : (i += 1) {
        if (cmd_fns[i].name[0] == prefix_char) {
            result = cmd_fns[i].func(result);
        }
    } 
    return result;
}

I've tried to make this easier in C3 by not mixing meta and runtime code syntax. This is similar to how macros in C are conventionally written in all upper case to avoid confusion:

macro int performFn(char $prefix_char, int start_value)
{
    int result = start_value;
    // Prefix $ all compile time vars and statements
    $for (var $i = 0; $i < CMD_FNS.len; $i++):
        $if (CMD_FNS[$i].name[0] == $prefix_char):
            result = CMD_FNS[$i].func(result);
        $endif;   
    $endfor;   
    return result;
}    

The intention with C3's separate syntax is that the approximate runtime code can be recovered by removing all lines starting with $:

macro int performFn(char $prefix_char, int start_value)
{
    int result = start_value;


            result = CMD_FNS[$i].func(result);


    return result;
}    

Not elegant, but the intention is to maximize readability. In particular, look at the if/$if statements. In the Zig example you can only infer that the if is compile time evaluated and folded by looking at the definitions of i and prefix_char. In the C3 example, the $if itself guarantees the constant folding and will produce an error if the boolean expression inside the parentheses isn't compile time folded.

Extending syntax for the win?

A popular use for macros is extending syntax, but this often goes wrong. Even if you have a language with a macro system that does this well, what does it mean? It means you can no longer look at something like foo(x) and make assumptions about it. In C without macros, we can assume that neither x nor other local variables will change (unless they were previously passed by reference to some function), and that execution will resume after the foo call (except where setjmp/longjmp is used). In C++ we can assume less, since foo may throw an exception, and x might implicitly be passed by reference.

The more powerful the macro system, the less we can assume. Maybe it's pulling variables from the calling scope and changing them? Maybe it's returning from the current context? Maybe it's formatting the drive? Who knows. You need to know the exact definition or you can't read the local code, and this undermines the local reasoning most languages are designed around.

Because in a typical language you know what "breaks the rules": the built in statements like if, for and return. Then there is a way to extend the language that follows certain rules: functions and types. This forms the common language understood by a developer to be what "knowing a language" is about: you know the syntax and semantics of the built-in statements.

If the language can extend its own syntax, then every code base becomes a DSL you have to learn from scratch. This is similar to having to buy into some huge framework in the JS/Java space, only worse.

The point is that while we're always extending the language, doing so through certain limited mechanisms like functions works well; the more unbounded the extension mechanism, the harder the code becomes to read and understand.

When meta programming is needed

In some cases meta programming can make code more readable. If the problem is something like having a pre-calculated list for fast calculations or types defined from a protocol, then code generation can often solve the problem. Languages can improve this by better compiler support for triggering codegen.

In other cases the meta programming can be replaced by running code at startup. Having "static init" like Java static blocks can help for cases when libraries need to do initialization.

If none of those options work, there is always copy-paste.

Summary

So to summarize:

  • Code with meta programming is hard to read (so minimize and support readability).
  • Meta programming is hard to refactor (so adopt a subset that can work with IDEs).
  • Arbitrary type creation is hard for tools (so restrict it to generics).
  • Same syntax is bad (so make meta code distinct).
  • Extending syntax with macros is bad (so don't do it).
  • Codegen and init at runtime can replace some use of compile time.

Macros and compile time execution can be made extremely powerful, but that power is tempered by huge drawbacks. A good macro system is not defined by what you can do with it, but by how well it balances readability with necessary features.

Community Showcase

This is a selection of recent work done by community members. Want to participate? Join us on Discord.