In the previous post I got a significant, if obvious and easy, performance improvement by moving from a naive implementation (but one that helped me make initial headway in understanding the WebAssembly spec) to something much more sensible. My very simple measurements put me somewhere around Python performance, but can I do better? At the moment, I’m not quite ready to think about JITing code, but is there something I can do whilst keeping the interpretation model?
Just shy of a month ago I started work on a WebAssembly interpreter written in Zig. With this commit I have all but a few baseline spec testsuite tests passing. Part of the reason for the project was purely to learn how WebAssembly works, and in that respect the performance of the interpreter was a secondary concern. However, I would like to see if I can at least take care of some low-hanging optimisations.
Today I managed to fix three issues: weston-subsurfaces was leaking regions, which was apparent when rapidly resizing the window; cheese was leaking…something, apparent from the object IDs always increasing; and gedit would display a subsurface, and when the subsurface was dismissed the window would stop responding. I had also realised there was an issue with weston-subsurfaces: one of the subsurfaces would, briefly, not be in the position requested by set_position.
This is the first installment of my Wayland compositor dev diary. How did I get here? Back in late 2016 I started work on a Wayland compositor written in Common Lisp. I got to the point where I had SDL and DRM backends, but I ran into a performance issue that I struggled to deal with, and in early 2017, after starting a new job in software development, my motivation to work on the compositor waned.
Here’s an implementation of infix notation for Shen; it’s effectively Dijkstra’s shunting-yard algorithm. Custom precedence can be defined by setting prec:

(define prec
  ** -> 4
  *  -> 3
  /  -> 3
  +  -> 2
  -  -> 2)

\* power is defined in the maths library *\
(define ** X Y -> (power X Y))

(define shunt
  Output -> Output
  [X Op Y | Rest] -> (shunt [Op] [(shunt Y) (shunt X)] Rest) where (element?
In December I released a curses client allowing playback of music stored on or purchased from Google Play. The client is written in Python and uses Simon Weber’s unofficial Google Music API. Details can be found on the GitHub page. It’s still a work in progress, but I have been able to use it as my sole music player. Here’s a picture of what it looks like: Figure 1: Screenshot of thunner running in a terminal
Today I’m going to visit two topics that I’ve not covered yet: lazy evaluation and types. Personally, the type system is the hardest thing to get my head around, and I hope to write a lot more on the subject. Lazy evaluation allows the evaluation of an expression to be delayed until it is required. In Shen, lazy evaluation is controlled with two functions: freeze and thaw. As the names suggest, freeze delays evaluation and thaw evaluates a frozen expression.
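As a minimal sketch of the two functions described above (the variable name delayed is my own choice for illustration):

```shen
\* freeze returns the expression unevaluated *\
(set delayed (freeze (+ 1 2)))

\* thaw evaluates the frozen expression, giving 3 *\
(thaw (value delayed))
```

Note that thawing does not memoise: thawing the same frozen expression twice evaluates it twice.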
More Shen macros today. In the previous post I promised an explanation of why we don’t have, or need, quasiquote, unquote and unquote-splicing in Shen. Let’s look at a Common Lisp example: a single-place let. This would typically be written as:

(defmacro let-one ((loc val) &rest body)
  `(let ((,loc ,val)) ,@body))

…using quasiquote, unquote and unquote-splicing. This is actually shorthand for the following:

(defmacro let-one ((loc val) &rest body)
  (append (list 'let (list (list loc val))) body))

I think everyone would agree that this second form is harder to read.
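For comparison, a sketch of how the same single-place let could look as a Shen macro (the names let-one-macro and let-one are my own; Shen’s let takes a single body expression):

```shen
\* the pattern destructures the code list directly,  *\
\* and the rewrite builds the new list directly too, *\
\* so no quasiquote or unquote is needed             *\
(defmacro let-one-macro
  [let-one Loc Val Body] -> [let Loc Val Body])
```

Because Shen macros pattern match on, and construct, plain lists, the template and the constructed code are written the same way, which is the point of the comparison above.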
In this post I’m going to concentrate on Shen macros; some familiarity with basic Shen and with Common Lisp macros is assumed. In Shen, as in traditional Lisp style, code is data and data is code. Shen code is read in as a list data structure: (+ 1 2) becomes [+ 1 2]. Shen macros are functions that pattern match on the list representation of code at read time in order to rewrite it.
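As a toy illustration of such a read-time rewrite (the names square-macro and square are hypothetical, invented for this example):

```shen
\* any occurrence of (square X) in source code is   *\
\* rewritten at read time to (* X X) before         *\
\* evaluation; e.g. (square 5) becomes (* 5 5)      *\
(defmacro square-macro
  [square X] -> [* X X])
```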