In prior publications relating to this presentation, we have examined the general problems associated with artificial intelligence. In this article, we focus on one particular issue: the assumption by non-technology management that AI systems work as intended. While that may sometimes be true, assuming it to be true is, at best, ill-advised and, at worst, dangerous. There are multiple examples of artificial intelligence systems failing, ranging from bias (hopefully unintentional) built into the algorithms, known as “implicit bias,” to issues arising from not clearly understanding the code that makes up the artificial intelligence application, which may include many years of open-source code and embedded libraries. This knowledge (sometimes referred to as a “software bill of materials,” or “SBOM”) is now being recognized as vital. Yet despite the evidence that artificial intelligence systems can and do fail, senior executives in some instances operate as if these failure factors did not exist, or simply assume that they have been factored into the project, albeit without evidence of that fact. Ultimately, the authors believe that standards for such systems should include a full risk assessment.
IOS Press, Inc.