The Hard Problem seems more about epistemic limits. Many scientific theories are underdetermined, but we still accept that they work. Conscious experience, however, can only be directly known from the "inside" (qualia, subjectivity), so a skeptical stance can always be taken toward any other being's consciousness: you, the King of England, a sophisticated android that claims to be conscious. There is no scientific determination that a being engineered by natural selection is conscious (I'm using "engineered" broadly, in the sense that a design, a functional pattern, doesn't have to have a conscious designer but may arise blindly), so we won't get that determination with an artificial consciousness either. Bernoulli's principle is NOT underdetermined in this way: when we design a wing using it, we can witness that the plane actually flies (to use @mistermack's example). Any principle concerning the causal nature of a conscious mind, its volitional states, its intentionality, is likely to remain underdetermined. But that isn't equivalent to saying it is impossible for such states to develop in an artificial being.
I can't really see that we can reject volition developing in a machine just because the designers possessed volition first. We humans, after all, develop as children the ability to form conscious intentions by learning from parents and the adults around us. Our wetware is loaded with a software package, so to speak. We don't then dismiss all our later decisions in life as just programs installed by them. We don't say, "I am merely executing parental programs and have no agency myself; all volition rests with Mom and Dad."
This presupposes that machines can never be developed with cortical architecture, plasticity, and heuristics modeled on natural systems, and thus never become able to innovate and possibly improve their own design. The designed becomes the designer - wasn't this argued earlier in the thread and then sort of passed over?
Second, you still seem to deny volition by fiat, as if it were a process that simply cannot be transferred. I don't think you've demonstrated this. Why can't an android "baby" be made that interacts with its environment and develops desires and volitions regarding its world? IOW, not every state in an advanced AI must be assumed to be programmed. That assumption just creates a strawman version of AI, one resting on the thin ice of our present level of technology.