Here's a challenge: build a GUI for a sign-event system (a code-list dispatcher in the scene and in each avatar) whose selectors can route a code to any and all objects in the X3D scene.
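A minimal sketch of such a dispatcher, assuming a simple publish/subscribe shape: scene objects register handlers against event codes, and the dispatcher routes each code to all subscribers. The class and handler names are illustrative, not part of any X3D API.

```python
class Dispatcher:
    """Routes event codes to any and all registered scene objects."""

    def __init__(self):
        self._routes = {}  # code -> list of handler callables

    def register(self, code, handler):
        """Subscribe a scene object's handler to an event code."""
        self._routes.setdefault(code, []).append(handler)

    def dispatch(self, code, payload=None):
        """Send the code to every subscriber; collect their reactions."""
        return [h(payload) for h in self._routes.get(code, [])]


# Usage: two different scene objects react to the same "WAVE" code.
d = Dispatcher()
d.register("WAVE", lambda p: f"avatar waves at {p}")
d.register("WAVE", lambda p: f"camera pans to {p}")
print(d.dispatch("WAVE", "door"))
```

A GUI selector would then just be a widget that edits the `_routes` table: pick a code, pick a target, wire them.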
The problem with X3D in the current technical offerings is the very large gap between authoring intelligent geometry and scripting scenes with it.
Character building should be simple: variations on code lists of avatar gestures that are reacted to differently depending on the avatar's interpreter (its event behaviors).
The authoring tool should enable a high-level character builder that couples emotional ranges to gesture ranges.
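The idea above can be sketched as a lookup: the same gesture code list yields different performances per avatar, because each avatar carries its own interpreter table. The avatar names and behavior strings here are invented placeholders.

```python
# One shared gesture code list.
GESTURES = ["NOD", "WAVE", "BOW"]

# Each avatar's interpreter: the same code maps to a different behavior.
# "shy" and "bold" are illustrative emotional ranges, not a real schema.
INTERPRETERS = {
    "shy":  {"NOD": "small nod", "WAVE": "half wave", "BOW": "quick bow"},
    "bold": {"NOD": "vigorous nod", "WAVE": "big wave", "BOW": "deep bow"},
}


def perform(avatar, codes):
    """Run a gesture code list through one avatar's interpreter."""
    table = INTERPRETERS[avatar]
    return [table.get(c, "idle") for c in codes]


print(perform("shy", GESTURES))
print(perform("bold", GESTURES))
```

A character builder then becomes a form for filling in one of these tables rather than writing scene scripts by hand.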
The avatar is not a passive reactor. Not all events and links are inbound; the avatar, as an actor, seeks events and navigates toward clusters containing its search terms in Topical Vector Space (the vectors among the terms in the XML metadata create clustered attractors). Where locations in geometric space contain these terms, the avatar navigates toward or away from them.
The DeepGeeks will note that these are both Hamiltonian coordinate systems subject to the same transformations. In energy terms, the search-term clusters act as attractors. Create a search set that returns the desired demographic and it pulls the search engine to the location. The world metadata (search terms) pulls the search engine, and therefore the avatar, toward it. While this is trivial in terms of search-engine ad-placement support, it is non-trivial in creating a real-time situation comedy.
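A toy sketch of that attraction, under loose assumptions: each location pairs a geometric position with a term vector, the avatar's search set is compared to each by cosine similarity, and the avatar steps a fraction of the way toward the best match. The locations, term axes, and step rate are all invented for illustration, not a real X3D metadata schema.

```python
import math


def cosine(a, b):
    """Cosine similarity between two term vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


# Term axes: ("comedy", "music", "news"). Positions are 2-D scene coords.
locations = {
    "stage": {"pos": (10.0, 0.0), "terms": (0.9, 0.4, 0.0)},
    "kiosk": {"pos": (-5.0, 3.0), "terms": (0.0, 0.1, 0.9)},
}

search_set = (1.0, 0.2, 0.0)  # this avatar is seeking comedy


def step_toward_attractor(avatar_pos, rate=0.1):
    """Move a fraction of the way toward the best-matching location."""
    best = max(locations.values(),
               key=lambda loc: cosine(search_set, loc["terms"]))
    return tuple(p + rate * (t - p) for p, t in zip(avatar_pos, best["pos"]))


print(step_toward_attractor((0.0, 0.0)))  # steps toward the "stage" cluster
```

Repulsion (navigating away) would just negate the weight on terms the avatar avoids; the two coordinate systems stay coupled through the shared metadata.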
That is how you beat YouTube.