AI could certainly be programmed to ask humans questions in certain contexts, such as market research or gathering feedback on a product or service. However, it's hard to predict the extent to which AI will take the initiative to ask questions on its own. That would depend on how AI develops and on the ethical considerations surrounding autonomous decision-making.
Is there anything that you think is impossible for AI, not only now, but also in the future?
For me, it's 'making a baby who can't yet speak a single word laugh'. I think that's it.
Anything involving unclear intent, or communication that doesn't go through words, is probably AI's weak point.
As with all the algorithms built over the years, the biggest challenge, and the one unlikely ever to be solved fully, will be teaching a system common sense: reacting to situations that aren't predictable (and there are many of those) and recognising what's right and wrong.