I came to this forum specifically to talk about the UI library, so I think it’s great that this discussion is happening.
Personally, I would strongly push for a UI framework & editor written in amethyst. As I mentioned on discord, there are some great resources for designing a UI engine (e.g. layout in flutter), so most of the hard work is in implementing those solutions.
I think this could have been said a year, or even two years ago. I think amethyst has a big enough community that it can push the situation forward for the whole of rust here. If amethyst implements a really simple, effective backend, it will then be modularized and consumed by the rest of the rust community.
That being said, here is my design proposal for the rust UI engine. I'll go from how you describe the UI down to how you color the pixels.
UI design
Text format
I actually think ron is a great format for describing UIs. It ties in with rust data-structures really nicely, and is easy enough for humans to read. With serde, it’s easy to swap data formats anyway if it is decided that something else is better.
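To make that concrete, here is a minimal sketch of describing a UI node in ron and deserializing it with serde. It assumes the `ron` crate and `serde` with the derive feature; the `Button` type and its fields are made up for illustration.

```rust
// Sketch only: a made-up widget description deserialized from RON.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Button {
    label: String,
    width: f32,
    height: f32,
}

fn main() {
    let src = r#"(label: "Play", width: 200.0, height: 48.0)"#;
    let button: Button = ron::from_str(src).expect("valid RON");
    println!("{:?}", button);
}
```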
UI Structure
Taking inspiration from flutter (and all other UI toolkits), I would describe a UI as a tree of components, where each parent component owns its children. A component can decide whether to accept children or not.
I propose a simple solution for styling components: every component (node in the tree) can have a style that applies only to itself, and a style that applies to both it and its children. Style just for the component always wins, then style for it and its children, and otherwise style is inherited from the nearest ancestor that sets it. Each property is resolved individually - so border-radius may be overridden by the parent while color comes from the grandparent. A stack can be maintained during the layout phase to pass style information down to all components, which they then use during the draw phase. Style doesn't include size information like width and height; that comes from layout.
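As a rough sketch of that per-property cascade (all names here are illustrative, not a proposed API):

```rust
#[derive(Clone, Copy, Default)]
struct Style {
    color: Option<[f32; 4]>,
    border_radius: Option<f32>,
}

impl Style {
    // Properties set on `self` win; anything unset falls back to `fallback`.
    fn or_else_from(self, fallback: Style) -> Style {
        Style {
            color: self.color.or(fallback.color),
            border_radius: self.border_radius.or(fallback.border_radius),
        }
    }
}

struct Node {
    own_style: Style,    // applies only to this component
    shared_style: Style, // applies to this component and its children
    children: Vec<Node>,
}

// Walk the tree once, pushing the inherited style down. `out` stands in for
// wherever the draw phase reads resolved styles from.
fn resolve_styles(node: &Node, inherited: Style, out: &mut Vec<Style>) {
    let for_children = node.shared_style.or_else_from(inherited);
    out.push(node.own_style.or_else_from(for_children));
    for child in &node.children {
        resolve_styles(child, for_children, out);
    }
}
```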
Where we draw things
In flutter, a component can draw both underneath and on top of its children.
Layout is determined by a simple algorithm: a single depth-first recursive descent of the tree, where each component tells its children what their constraints are, and they report back their size. Each component can visit its children in any order it chooses, e.g. the fixed-size components first, then the flexibly sized ones. Sometimes a component can reason that the layout of its children can't change, but we can ignore that to start with for simplicity.
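A minimal sketch of the constraints-down / sizes-up idea, using made-up types - here a padding container that shrinks the constraints for its child and adds the padding back to the size it reports:

```rust
#[derive(Clone, Copy)]
struct Constraints {
    max_width: f32,
    max_height: f32,
}

#[derive(Clone, Copy)]
struct Size {
    width: f32,
    height: f32,
}

trait Layout {
    // Parent passes constraints down; the component sizes its children
    // (in whatever order it likes) and reports its own size back up.
    fn layout(&mut self, constraints: Constraints) -> Size;
}

struct Padding {
    amount: f32,
    child: Box<dyn Layout>,
}

impl Layout for Padding {
    fn layout(&mut self, constraints: Constraints) -> Size {
        let inner = Constraints {
            max_width: (constraints.max_width - 2.0 * self.amount).max(0.0),
            max_height: (constraints.max_height - 2.0 * self.amount).max(0.0),
        };
        let child_size = self.child.layout(inner);
        Size {
            width: child_size.width + 2.0 * self.amount,
            height: child_size.height + 2.0 * self.amount,
        }
    }
}
```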
Actually drawing them
Once we know each element’s size, we do a depth-first recursive descent asking each component to draw itself. To start with I propose that we draw everything into one layer on the GPU. Once we have everyone’s pixels rendered, we composite them over the 3D stuff, in order to get the final image.
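Sketching that draw pass, with hooks for drawing below and above the children as flutter allows; `Canvas` is just a stand-in for whatever records draw commands for the single GPU layer:

```rust
struct Canvas; // placeholder: collects draw commands for the one UI layer

trait Widget {
    fn children(&self) -> &[Box<dyn Widget>];
    fn draw_below_children(&self, _canvas: &mut Canvas) {}
    fn draw_above_children(&self, _canvas: &mut Canvas) {}
}

// Depth-first descent: paint under the children, recurse, then paint over them.
fn draw(widget: &dyn Widget, canvas: &mut Canvas) {
    widget.draw_below_children(canvas);
    for child in widget.children() {
        draw(child.as_ref(), canvas);
    }
    widget.draw_above_children(canvas);
}
```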
Converting primitives to draw calls
Some code needs to be written to translate the widgets' drawing primitives into draw commands for the GPU. This is where a library like lyon can be used to tessellate 2D shapes (lines, rectangles, etc.) into triangles plus colors/textures. Thus we can support 2d svg-style shapes (using lyon) and anything that renders itself (by just compositing directly). You could even render-to-texture 3D scenes that are then drawn onto the UI - I would use this to create a widget for orienting yourself in debug mode (like you get in e.g. blender).
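For example, here is roughly what tessellating a rectangle with lyon looks like against a recent lyon release (the exact API has shifted between versions, so treat this as an assumption rather than the final integration). The resulting vertex/index buffers are what would get handed to the UiPass as a draw command:

```rust
use lyon::math::point;
use lyon::path::Path;
use lyon::tessellation::{
    BuffersBuilder, FillOptions, FillTessellator, FillVertex, VertexBuffers,
};

fn main() {
    // Build a 100x50 rectangle as a path.
    let mut builder = Path::builder();
    builder.begin(point(0.0, 0.0));
    builder.line_to(point(100.0, 0.0));
    builder.line_to(point(100.0, 50.0));
    builder.line_to(point(0.0, 50.0));
    builder.close();
    let path = builder.build();

    // Tessellate the path into triangles (vertex positions + indices).
    let mut geometry: VertexBuffers<[f32; 2], u16> = VertexBuffers::new();
    let mut tessellator = FillTessellator::new();
    tessellator
        .tessellate_path(
            &path,
            &FillOptions::default(),
            &mut BuffersBuilder::new(&mut geometry, |vertex: FillVertex| {
                vertex.position().to_array()
            }),
        )
        .expect("tessellation failed");

    println!(
        "{} vertices, {} indices",
        geometry.vertices.len(),
        geometry.indices.len()
    );
}
```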
Widgets
Then once we have this infrastructure, it would be easy to build widget abstractions like buttons, text areas, scroll areas, etc., and layout widgets like flex.
Text
Text is quite hard in its own right, but there are good libraries out there to help, like harfbuzz. Here is some info on rendering text correctly. This should probably be seen as a separate problem from layout (the problem is: given an area and some text, lay out the text in the area correctly, as best you can).
Implementation
If people like this, I’ve got a while between paid jobs to work on it.
The first step is to work out how to draw and composite things (how to use lyon, modify the UiPass), then write the layout algorithm, and finally write widgets. There also obviously needs to be event processing, but I think this already works pretty well in amethyst. It would be good to make everything as decoupled as possible, maybe having a pluggable layout algorithm using the visitor pattern, but this can come later.
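For the pluggable-layout idea, something like the following visitor could work (purely hypothetical names; the real thing would have to fit around amethyst's existing UI types):

```rust
struct UiNode {
    children: Vec<UiNode>,
    // ... style, draw data, etc.
}

trait LayoutVisitor {
    fn visit(&mut self, node: &mut UiNode);
}

impl UiNode {
    // The tree only knows how to be walked; the layout policy lives in the
    // visitor, so a different algorithm can be swapped in without touching
    // the widgets themselves.
    fn accept(&mut self, visitor: &mut dyn LayoutVisitor) {
        visitor.visit(self);
        for child in &mut self.children {
            child.accept(visitor);
        }
    }
}
```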
I probably will fail if I try to do this on my own though so I would need to get buy-in from other members of the community.
Stretch goals
Stretch goals are to implement the caching that flutter does, that is layout breaks, layer breaks, and render-to-texture for layers. These are probably not such an issue on desktop, but they matter on mobile; in any case they can be added after we have an MVP.
What do people think? If this is successful (a fast and efficient way to draw UI on the GPU) then the code will be useful well beyond amethyst, but this seems like a good place to do the work.
There’s also the issue of how your 2D interface interacts with a 3D world. The best way here is probably to render your 2D solution to a texture, then render that into the 3D scene. This could be done after the initial 2D engine is finished. You could also do more complicated things, like having different parts of the UI at different heights like a hologram, or animating the UI as if it is part of the 3D scene (it moves with the player’s wrist or something), but again these can be tackled as post-processing on the 2D solution.