John asks an online artificial intelligence (AI) tool for advice and it generates an answer, but can you TRUST the answer? How do you know the answer is accurate and not misinformation, or simply fabricated yet stated in a confident, declarative tone? The author has already seen AI-generated output that, when analyzing a document, produced answers with no connection to the data actually within the document being reviewed. In short, the AI made up an answer and declared it as truth when it was not.
Can We Trust AI?
In the world of Cyber Threat Intelligence, words of estimative probability and confidence levels are used to produce an assessment from an incomplete picture of the threatscape. If AI is required to incorporate elements of probability and confidence mapped back to its sources, evidence, and analysis, and to provide links back to the original raw sources used to generate any output, then a human who receives that output can both validate it and rate its value.
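To make the idea concrete, the structure described above could be sketched as a simple data model: each claim in an AI answer carries a confidence level, a word of estimative probability, and links back to the raw sources it rests on, so a human reviewer can flag any claim that cannot be traced to evidence. This is a minimal illustrative sketch; the class and field names (`SourcedClaim`, `validate_output`, the example URL) are hypothetical, not part of any existing tool.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedClaim:
    """One assertion in an AI-generated answer, mapped back to evidence."""
    statement: str                 # the claim itself
    confidence: str                # confidence level, e.g. "high", "moderate", "low"
    probability: str               # word of estimative probability, e.g. "likely"
    sources: list = field(default_factory=list)  # links to original raw sources

def validate_output(claims):
    """Return the claims that lack any source link — a human cannot verify these."""
    return [c for c in claims if not c.sources]

# Hypothetical AI output: one sourced claim, one unsourced claim.
answer = [
    SourcedClaim("Domain X hosts phishing kit Y", "moderate", "likely",
                 ["https://example.com/report-123"]),
    SourcedClaim("Actor Z operates from region Q", "low", "possible"),
]

unverifiable = validate_output(answer)  # the second claim has no sources
```

A reviewer could then accept the sourced claims, follow the links to validate them, and treat anything returned by `validate_output` as unverified until evidence is supplied.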
If we want a true learning model, AI developers must incorporate the human element of how tuning and learning occur between their models, humans, and AI lifecycle operations, so the system improves over time with trusted and valued output at the center of strategic development.