In prior publications relating to this presentation, we have examined the general problems associated with artificial intelligence. In this article, we focus on one particular issue: the assumption by non-technology management that AI systems work as intended. While that may sometimes be true, assuming it to be true is at best ill-advised and at worst dangerous. There are multiple examples of artificial intelligence systems failing, ranging from bias (hopefully unintentional) built into the algorithms, known as “implicit bias,” to issues arising from not clearly understanding the code that makes up the artificial intelligence application, including many years of open-source code and embedded libraries. This knowledge (sometimes referred to as a “software bill of materials,” or “SBOM”) is now being recognized as vital. Yet despite the evidence that artificial intelligence systems can and do fail, senior executives in some instances operate as if these failure factors did not exist, or simply assume that they have been factored into the project, albeit without evidence of that fact. Ultimately, the authors believe that standards for such systems should include a full risk assessment.