Transition all relevant testcases to it. In the process, port
docstrings/comments from test.py files to expected_concrete_syntax.lkt
(now test.lkt) sources and fix stylecheck issues there.
This makes the convention consistent with the DSL, and avoids
workarounds for conflicts with Lkt keywords: in Libadalang, the Null
token can stay Null instead of becoming null_tok (no API breakage needed).
With the current low-tech approach to typing/validity checking for Lkt,
it is not possible to infer whether a type reference N designates a
bare node type or an entity type. Introduce a dedicated syntax for
entity types to avoid this ambiguity.
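To make the distinction concrete, here is a minimal sketch under the
new convention; the `Entity[...]` spelling, the Example node and the
self/node bindings (entity vs. bare node) are assumptions made for the
illustration, not part of the change itself:
```
class Example : FooNode {
    # A plain reference to FooNode designates the bare node type
    fun bare_self(): FooNode = node

    # Entity types get a dedicated spelling (assumed here to be
    # Entity[...]), so no inference is needed to tell the two kinds of
    # type references apart
    fun wrapped_self(): Entity[Example] = self
}
```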
Public APIs are supposed to expose entities as black boxes: there is no
concept of bare node there. For this reason, it makes no sense to have
two distinct "repr"/"image" primitives for them. Remove the existing
"entity_repr" primitive and make "node_repr" use the entity info for its
work.
For #639
Since `val` is a keyword in Lkt, dsl_unparse fails on expressions that
retrieve the `val` component from an env_assoc value. In particular,
such expressions are now used in Libadalang as part of the changes made
under this TN.
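For illustration, here is roughly the Lkt code that dsl_unparse would
have to emit in that situation; the assoc_node property and the
EnvAssoc type spelling are assumptions made for the example, the point
being the `.val` access alongside the `val` binding keyword:
```
fun assoc_node(assoc: EnvAssoc): FooNode = {
    # "val" introduces a local binding in Lkt (hence it is a keyword),
    # while ".val" is also the env_assoc component that the unparsed
    # expression needs to access
    val n = assoc.val;
    n
}
```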
As the LexicalEnv.get method always returns an array of root nodes
while the root node is defined in user code, we need to turn the Node
and LexicalEnv classes into generic traits and instantiate them using
the actual root node.
To make this work, the root node declaration has to change from
`class FooNode : Node` to `class FooNode implements Node[FooNode]`,
which required slightly refactoring almost all the Lkt tests.
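As a sketch, node declarations in a test spec now start as follows (the
Example derived node is illustrative):
```
# The Node trait is generic and is instantiated with the actual root
# node type, so that operations such as LexicalEnv.get, whose results
# involve root nodes, can be expressed against it
class FooNode implements Node[FooNode] {
}

# Declarations of derived nodes keep the previous syntax
class Example : FooNode {
}
```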