Human agents simply pass the responsibility back to the controlling entity, verbally describing the situation and explaining why they cannot cope with it.
The situation has been dealt with in mechanical devices for a while. Take, for example, the thermostatic control knob on a shower, the kind with a temperature stop that requires a button to be pressed before the stop can be overridden. Users of such knobs find them intuitive: a good example of graceful control return, or a safety affordance.
But how will our Agents gracefully bow out of situations they are not equipped to deal with?
My car releases the accelerator, flashes BRAKE! on the dashboard, and sounds an alarm. So far it has done so early enough for me to take corrective action. It is clear to me that the car is no longer controlling my speed and that I am back in control.
The situation may not be so intuitive in the digital and virtual worlds. I do not want to find myself with a depleted bank account when interfaces don't operate the way I expect them to.
Recently two copies of the same CD arrived from Amazon. I had used One Click, so my best guess is that I had a key bounce. I found a suitable home for the second copy and did not even report the issue to Amazon, but clearly their user interface was not up to the job: at the very least, on the second key press I would expect a message saying "You have already ordered one of these. Do you really mean to order a second?"
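The missing safeguard can be sketched in a few lines. This is a minimal, hypothetical illustration, not Amazon's actual One Click implementation: the `OrderService` class, the 24-hour window, and all names here are my own assumptions about how a duplicate-order guard might work.

```python
from datetime import datetime, timedelta

# Hypothetical duplicate-order guard. The class, method names, and the
# time window are illustrative assumptions, not Amazon's real API.
DUPLICATE_WINDOW = timedelta(hours=24)

class OrderService:
    def __init__(self):
        # (customer_id, item_id) -> time of the last accepted order
        self._recent = {}

    def one_click(self, customer_id, item_id, confirmed=False, now=None):
        """Place an order, handing control back on a suspected repeat."""
        now = now or datetime.utcnow()
        key = (customer_id, item_id)
        last = self._recent.get(key)
        if last is not None and now - last < DUPLICATE_WINDOW and not confirmed:
            # Graceful control return: instead of confidently acting on
            # what may be a key bounce, ask the human to decide.
            return ("You have already ordered one of these. "
                    "Do you really mean to order a second?")
        self._recent[key] = now
        return "Order placed"
```

In use, the first click places the order, an immediate second click returns the question instead of a second order, and only an explicit `confirmed=True` places the duplicate: exactly the graceful hand-back the interface lacked.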
As I struggle to come to terms with Agents and their foibles, I am increasingly concerned that they confidently misunderstand me more often than I find comfortable, responding to my supposed wishes with an alacrity and determination that is frankly frightening.
So while I am rather sick of Alexa telling me "I'm sorry, but I do not understand your question!", I certainly prefer that to the alternative: "Your new Bentley is on order."