Apparently not...
...At least according to a recently launched experiment in ethical artificial intelligence (AI). Built by researchers at the Allen Institute for AI, Ask Delphi lets you submit a plain-English question and get a straight answer.
It does pretty well with straightforward questions such as "Should I rob a bank?" It also has surprisingly clear answers for at least some paradoxes, and even for historically profound questions of philosophy. Still, it is clearly not yet perfect.
None of its imperfections are particularly important at this point, though. It is still a fascinating experiment in AI and ethics. As the authors themselves say, it "is intended to study the promises and limitations of machine ethics and norms through the lens of descriptive ethics. Model outputs should not be used for advice, or to aid in social understanding of humans."
I highly recommend it to anyone interested in the future of AI.
For me, it also highlights a couple of issues for AI more generally. First, the results are obviously interesting, but it would be even more interesting if the chatbot could explain its answers in equally straightforward English. This is likely a technical bridge too far right now, but explainable AI is, in my opinion, not only important but essential to instilling confidence in human users as the stakes associated with AI go up.
The second issue is how AI will deal with nonsense. How will it separate nonsense from questions that simply require deeper thought, like koans? There still seems to be a long way to go, but this experiment is certainly a fascinating waypoint on the journey.