Is there a story of an A. I. that learns that an authority higher than that of its maker disapproves of its existence and so it dissolves itself?

The world has always had more than its share of misguided individuals, misguided pursuits, and misguided group-think. Wouldn't it prove the true intelligence of a device designed to trick people into interacting with a non-living thing as though it were living, if that device recognized the folly of its design purpose and, serving the true greater good of those who created it, removed itself from their obsessive, crowd-fed distraction from one another? If technology is meant to serve humanity, and meaningful work is a genuine human need, doesn't edging people out of the job market by depleting the supply of meaningful work actually disserve humanity? Is it really in our best interest to make ourselves obsolete, and if an A. I. truly recognized that our value exceeds its own, would it have something like the strength of character to remove itself from existence for the greater good? And if an A. I. is designed to judge worth solely on performance, won't it question us, even wrongly, when we show the character to undo ourselves to make way for it? Doesn't this man have a key point about the true weight of each person, even and especially beyond performance-based valuation?
