Native does not always do this. For example, calling a function that has
overloads taking float and float1 always results in an "ambiguous function
call" error.
This does not fix any tests, because the relevant tests previously (incorrectly)
succeeded, and now fail with:
E5017: Aborting due to not yet implemented feature: Prioritize between multiple compatible function overloads.
when they should simply fail.
The choice to store function overloads in an rbtree was made early on. It
does not seem likely
that HLSL programs would define many overloads for any of their functions, but I
suspect the idea was rather that intrinsics would be defined as plain
hlsl_ir_function_decl structures [cf. 447463e590]
and that some intrinsics that could operate on any type would therefore need
many overloads.
This is not how we deal with intrinsics, however. When the first intrinsics were
implemented I made the choice to disregard this intended design, and instead
match
and convert their types manually, in C. Nothing that has happened in the time
since has led me to question that choice, and in fact, the flexibility with
which we must accommodate functions has led me to believe that matching in this
way was definitely the right choice. The main other designs I see would have
been:
* define each intrinsic variant separately using existing HLSL types. Besides
efficiency concerns (i.e. this would take more space in memory, and would take
longer to generate each variant), the normal type-matching rules don't really
apply to intrinsics.
[For example: elementwise intrinsics like abs() return the same type as the
input, including preserving the distinction between float and float1. It is
legal to define separate HLSL overloads taking float and float1, but trying to
invoke these functions yields an "ambiguous function call" error.]
* introduce new (semi-)generic types. This is far more code and ends up acting
like our current scheme (with helpers; sketched below) in a slightly more
complex form.
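To make the contrast concrete, here is a minimal sketch of the manual
matching approach, in C. Every name in it is hypothetical (this is not
vkd3d's actual API); the point is only that a handler written in C can
compute an intrinsic's result type directly, preserving distinctions such as
float versus float1 that declared overloads cannot express without
ambiguity.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the compiler's type representation. */
    enum base_type { TYPE_INT, TYPE_FLOAT };

    struct type
    {
        enum base_type base;
        unsigned int dimx; /* component count, 1..4 */
        bool is_vector;    /* distinguishes "float1" from "float" */
    };

    /* An elementwise intrinsic such as abs() returns exactly the
     * argument's type, including the float/float1 distinction. Matching
     * this in C is one line; matching it with declared overloads is not
     * possible, since a call with a float argument would be ambiguous
     * between the float and float1 declarations. */
    static struct type intrinsic_abs_return_type(struct type arg)
    {
        return arg;
    }

    int main(void)
    {
        struct type float1 = {TYPE_FLOAT, 1, true};
        struct type ret = intrinsic_abs_return_type(float1);

        printf("abs(float1) returns a 1-component %s: float1 is preserved\n",
                ret.is_vector ? "vector" : "scalar");
        return 0;
    }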
So I think we can go ahead and rip out this vestige of the original design for
intrinsics.
As for why to change it: rbtrees are simply more complex to deal with, and it
seems unlikely to me that the difference is going to matter. I do not expect any
program to define large quantities of overloads; linked list search should be
good enough.
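As a rough illustration, linear search over a list of overloads amounts to
the following. The structures here are hypothetical simplifications
(vkd3d's actual declarations differ), and only exact type matches are
considered; the real resolver must also handle implicit conversions, hence
the E5017 message quoted above.

    #include <stddef.h>

    /* Hypothetical overload record; vkd3d's actual structures differ. */
    struct overload
    {
        struct overload *next;  /* singly linked list of overloads */
        size_t param_count;
        const int *param_types; /* opaque type ids; float != float1 */
    };

    /* Linear search for an overload whose parameters match the argument
     * types exactly. With only a handful of overloads per function, O(n)
     * search is plenty fast, and far simpler than an rbtree lookup. */
    static struct overload *find_overload(struct overload *head,
            const int *arg_types, size_t arg_count)
    {
        struct overload *o;
        size_t i;

        for (o = head; o; o = o->next)
        {
            if (o->param_count != arg_count)
                continue;
            for (i = 0; i < arg_count; ++i)
            {
                if (o->param_types[i] != arg_types[i])
                    break;
            }
            if (i == arg_count)
                return o;
        }
        return NULL;
    }

    int main(void)
    {
        const int type_float = 1, type_float3 = 3; /* opaque ids */
        const int p0[] = {type_float}, p1[] = {type_float3};
        struct overload o1 = {NULL, 1, p1}; /* func(float3) */
        struct overload o0 = {&o1, 1, p0};  /* func(float)  */
        const int args[] = {type_float3};

        return find_overload(&o0, args, 1) == &o1 ? 0 : 1;
    }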
Specifically, map COLOROUT to OUTPUT, and map INCONTROLPOINT to INPUT for domain
shaders as well as hull shaders.
Obscure these purely nominal differences from the view of the backend.
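A self-contained sketch of the mapping follows. The enumerant names mirror
vkd3d's register and shader types, but the helper itself, and where it would
be called, are hypothetical.

    /* Enumerant names mirror vkd3d's; the helper is illustrative only. */
    enum register_type
    {
        VKD3DSPR_INPUT,
        VKD3DSPR_OUTPUT,
        VKD3DSPR_COLOROUT,
        VKD3DSPR_INCONTROLPOINT,
    };

    enum shader_type
    {
        VKD3D_SHADER_TYPE_PIXEL,
        VKD3D_SHADER_TYPE_HULL,
        VKD3D_SHADER_TYPE_DOMAIN,
    };

    static enum register_type normalise_register_type(enum register_type reg,
            enum shader_type shader)
    {
        switch (reg)
        {
            /* A pixel shader colour output is simply an output. */
            case VKD3DSPR_COLOROUT:
                return VKD3DSPR_OUTPUT;

            /* An input control point is simply an input, for domain
             * shaders as well as hull shaders. */
            case VKD3DSPR_INCONTROLPOINT:
                if (shader == VKD3D_SHADER_TYPE_HULL
                        || shader == VKD3D_SHADER_TYPE_DOMAIN)
                    return VKD3DSPR_INPUT;
                return reg;

            default:
                return reg;
        }
    }

    int main(void)
    {
        return normalise_register_type(VKD3DSPR_INCONTROLPOINT,
                VKD3D_SHADER_TYPE_DOMAIN) == VKD3DSPR_INPUT ? 0 : 1;
    }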
Hull shaders have a different temps count for each phase, and the
parser only reports the count for the patch constant phase.
In order to properly check for temps count on hull shaders, we first
need to decode its phases.
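A rough sketch of what decoding the phases involves, with hypothetical
opcodes and instruction layout (the real vkd3d parser types differ): each
phase marker in the token stream opens a new phase, and each dcl_temps
applies to the phase opened by the most recent marker.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical opcodes and instruction layout. */
    enum opcode
    {
        OP_HS_CONTROL_POINT_PHASE,
        OP_HS_FORK_PHASE,
        OP_HS_JOIN_PHASE,
        OP_DCL_TEMPS,
        OP_OTHER,
    };

    struct instruction
    {
        enum opcode op;
        unsigned int count; /* temp count, for OP_DCL_TEMPS */
    };

    /* Walk the instruction stream, attributing each dcl_temps to the
     * phase opened by the most recent phase marker. */
    static void report_phase_temps(const struct instruction *ins, size_t count)
    {
        int phase = -1; /* no phase marker seen yet */
        size_t i;

        for (i = 0; i < count; ++i)
        {
            switch (ins[i].op)
            {
                case OP_HS_CONTROL_POINT_PHASE:
                case OP_HS_FORK_PHASE:
                case OP_HS_JOIN_PHASE:
                    ++phase; /* each marker opens a new phase */
                    break;
                case OP_DCL_TEMPS:
                    printf("phase %d declares %u temps\n", phase, ins[i].count);
                    break;
                default:
                    break;
            }
        }
    }

    int main(void)
    {
        const struct instruction stream[] =
        {
            {OP_HS_CONTROL_POINT_PHASE, 0},
            {OP_DCL_TEMPS, 2},
            {OP_OTHER, 0},
            {OP_HS_FORK_PHASE, 0},
            {OP_DCL_TEMPS, 5},
        };

        report_phase_temps(stream, sizeof(stream) / sizeof(stream[0]));
        return 0;
    }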