In the previous post I got a great, if obvious and easy, performance improvement by moving from a naive implementation (albeit one that helped me make initial headway in understanding the WebAssembly spec) to something much more sensible. Very simple measurement puts me somewhere around Python performance, but can I do better? At the moment I'm not quite ready to think about JITing code, but is there something I can do whilst keeping the interpretation model?
Just shy of a month ago I started work on a WebAssembly interpreter written in Zig. With this commit I have all but a few baseline spec testsuite tests passing. Part of the reason for the project was purely to learn how WebAssembly works, and in that respect the performance of the interpreter was a secondary concern. However, I would like to see if I can at least take care of some low-hanging optimisations.
Today I managed to fix three issues: weston-subsurfaces was leaking regions, which was apparent when rapidly resizing the window; cheese was leaking…something, apparent from its object IDs always increasing; and gedit would display a subsurface, but when the subsurface was dismissed the window would stop responding. I had also realised there was an issue with weston-subsurfaces: one of the subsurfaces would, briefly, not be in the position requested by set_position.
This is the first installment of my Wayland compositor dev diary. How did I get here? Back in late 2016 I started work on a Wayland compositor written in Common Lisp. I got to the point where I had SDL and DRM backends, but I ran into a performance issue that I struggled to deal with, and in early 2017, after starting a new job in software development, my motivation to work on the compositor waned.