By Phillimon Zongo
Artificial intelligence (AI) has clearly emerged as one of the most transformational technologies of our time. AI has helped explore the universe, tackle complex and chronic diseases, formulate new medicines, and alleviate poverty. These intelligent systems continue to penetrate every industry sector, delivering enormous benefits in the form of new business opportunities, deeper customer insights, improved efficiency, and enhanced agility.
For example, the US-based Memorial Sloan Kettering Cancer Center is using IBM Watson to compare patient medical information against a vast array of treatment guidelines, published research, journal articles, physicians’ notes, and other insights to provide individualised, confidence-scored recommendations to physicians. In Canada, the Bank of Montreal deployed robo-advisors to provide automated, algorithm-based portfolio management advice to its customers. And, excitingly, the Massachusetts Institute of Technology (MIT) has developed an AI system that can detect 85 per cent of cyberattacks by reviewing data from more than 3.6 billion lines of log files each day and flagging anything suspicious to human analysts.
Unsurprisingly, the majority (85 per cent) of respondents in ISACA’s Next Decade of Tech: Envisioning the 2020s study, which polled more than 5,000 business technology professionals in 139 countries, expect that AI and machine learning will have a moderate or major impact on their business’s profitability in this decade.
As AI becomes more widespread over the next decade, I believe we will see more innovative, creative and integrated uses. Indeed, 93 per cent of survey participants in Australia and New Zealand believe the augmented workforce – or people, robots and AI working closely together – will reshape how some or most jobs are performed in the coming years. Social robots that assist patients with physical disabilities, manage elderly care, and even educate our children are just some of the many uses being explored in healthcare and education.
As AI continues to redefine humanity in various ways, ethical considerations are of paramount importance, and as Australians we should be addressing them in both government and business. ISACA’s research highlights the double-edged nature of this budding technology. Only 39 per cent of respondents in Australia believe that enterprises will give ethical considerations around AI and machine learning sufficient attention in the next decade to prevent potentially serious, unintended consequences in their deployments. Respondents specifically pinpointed malicious AI attacks involving critical infrastructure, social engineering, and autonomous weapons as their primary fears.
A well-designed AI system can significantly improve productivity and quality, but when deployed without due care, the financial and reputational impacts can be of an epic magnitude. In banking and finance, flawed algorithms may encourage excessive risk-taking and drive an organisation toward bankruptcy. In the healthcare sector, flawed algorithms may prescribe the wrong medications, leading to adverse medical reactions for patients. In the legal sector, flawed algorithms may provide incorrect legal advice, resulting in severe regulatory penalties.
For a long time, prominent researchers, academics, and global leaders have sounded warnings about these risks. For instance, in February 2018, a group of leading academics and researchers published a report raising alarm bells about the growing possibility that rogue states, criminals, terrorists, and other malefactors could soon exploit AI capabilities to cause widespread harm.
Back in 2017, the late physicist Stephen Hawking cautioned that the emergence of AI could be the “worst event in the history of our civilisation” unless society finds a way to control its development. Hawking’s warnings echoed those of other distinguished global figures, including former US President Barack Obama, Tesla and SpaceX CEO Elon Musk, and Microsoft founder Bill Gates, who all warned that the absence of binding AI development regulations would spell disaster.
For most people, however, the malicious use of AI seemed a long way off. Those advocating for stricter global laws and governance surrounding AI usage were dismissed as doomsters. But malicious AI programs have surfaced much more quickly than many pundits anticipated.
A case in point is the proliferation of deepfakes, deceptively realistic audio or video files generated by deep learning algorithms or neural networks and used to perpetrate a range of malevolent acts, such as fake celebrity pornography, revenge porn, fabricated news, financial fraud, and a wide range of other disinformation tactics.
Several factors underpinned the rise of deepfakes, but a few stand out:
- The exponential increase in computing power, combined with the ready availability of large image databases.
- The absence of coherent efforts to institute global laws curtailing the development of malicious AI programs.
- Social media platforms, which are being exploited to disseminate deepfakes at scale, struggling to keep up with this rapidly maturing and evasive threat.
As a result, the number of deepfake videos published online has doubled in the past nine months to almost 15,000, according to DeepTrace, a Netherlands-based cybersecurity group.
It’s clear that addressing this growing threat will prove complex and expensive, but the task is pressing. Legislators are starting to tighten the screws, with a growing number of bills introduced in the US Congress and by state governments. Locally, the Australian Competition and Consumer Commission’s Digital Platforms Inquiry report highlighted the “risk of consumers being exposed to serious incidents of disinformation”. Emphasising the gravity of the risk is certainly a step in the right direction, but more needs to be done.
To date, no industry standards exist to guide the secure development and maintenance of AI systems. Further exacerbating this gap is the fact that start-up firms still dominate the AI market. A recent MIT report revealed that, aside from a few large players such as IBM Watson and Palantir Technologies, AI remains a market of some 2,600 start-ups, most of which are focused primarily on rapid time to market, product functionality, and high return on investment. Embedding cyber resilience into their products is not a priority.
Furthermore, there is currently no global consensus on whether the development of AI requires its own dedicated regulator or specific statutory regime. The growing intersection of cybercrime and politics, combined with deep suspicions that adversarial nations are using advanced programs to manipulate elections, spy on military programs, or debilitate critical infrastructure, has further dimmed prospects of meaningful international cooperation.
To support business innovation and maximise its value, comprehensive cyber resilience for intelligent systems is vital. Policymakers and business leaders need to be mindful of the key risks inherent in AI adoption, conduct appropriate oversight, and develop principles and regulations that articulate which roles can be partially or fully automated today, in order to secure the future for tomorrow.
Until such concerted standards come to fruition, business leaders around the world should:
- Use existing, industry-accepted standards where possible. Although these are not specifically designed for artificially intelligent systems, they can help businesses to identify common security risks and establish a solid baseline for securing new technologies. Notable frameworks include:
- Open Web Application Security Project (OWASP) Top 10 – A list of the 10 most critical current web application security risks, along with recommendations to ensure that web applications are secure by design.
- US National Institute of Standards and Technology (NIST) Cybersecurity Framework – Consists of standards, guidelines and practices to promote the protection of critical infrastructure from cyberthreats.
- COBIT 2019 – Provides detailed and practical guidelines for security professionals to manage and govern their organisation’s information and technology, and make more informed decisions while maintaining awareness about emerging technologies and the accompanying threats.
- Engage experienced security consultants to review critical controls for AI products (including detailed penetration testing) and fix any exploitable security vulnerabilities before going live.
- Conduct due diligence to determine vendor security capabilities, product security road maps, and frequency of security updates – with a long-term commitment to product security as a critical success factor.
- Deploy robust encryption to protect sessions between AI systems and critical records from interception and tampering (commonly referred to as man-in-the-middle attacks), as illustrated in the sketch after this list.
- Grant minimum system privileges and deploy strong controls to protect the service accounts that AI systems use to perform critical tasks from abuse – especially those with administrator-equivalent privileges.
- Adopt defence in depth to ensure that a failure in one control layer will not result in a system breach.
- Consider the ethical ramifications should the AI being developed be turned to malicious purposes.
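To make the encryption recommendation above more concrete, here is a minimal Python sketch, not a production implementation, of how an application might call an AI scoring service over a verified TLS session. The host name ai-backend.internal.example, the /v1/score endpoint and the CA bundle path are hypothetical placeholders; the point is that certificate verification is enforced and older TLS versions are refused.

```python
import json
import ssl
from http.client import HTTPSConnection

# Hypothetical endpoint and internal CA bundle; adjust for your own environment.
AI_HOST = "ai-backend.internal.example"
CA_BUNDLE = "/etc/pki/internal-ca.pem"


def build_tls_context() -> ssl.SSLContext:
    """Build a TLS context that verifies the server certificate and refuses
    anything older than TLS 1.2, reducing exposure to man-in-the-middle and
    protocol-downgrade attacks."""
    context = ssl.create_default_context(cafile=CA_BUNDLE)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.check_hostname = True            # reject certificates issued for other hosts
    context.verify_mode = ssl.CERT_REQUIRED  # never fall back to an unverified session
    return context


def query_model(payload: dict) -> dict:
    """Send a scoring request to the AI service over a verified TLS session."""
    connection = HTTPSConnection(AI_HOST, context=build_tls_context(), timeout=10)
    try:
        connection.request(
            "POST",
            "/v1/score",
            body=json.dumps(payload),
            headers={"Content-Type": "application/json"},
        )
        response = connection.getresponse()
        return json.loads(response.read())
    finally:
        connection.close()


if __name__ == "__main__":
    print(query_model({"customer_id": "12345", "features": [0.2, 0.7, 0.1]}))
```

The same principle applies in any language or framework: certificate verification should never be disabled in production, and minimum protocol versions should be set centrally rather than left to individual developers.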
In conclusion, cyber-threat actors are increasingly agile and inventive, spurred by a growing base of financial resources and unencumbered by the regulation that often constrains legitimate enterprises. There is a need for transparent, thoughtful and well-intentioned collaboration between academics, professional associations, the private sector, regulators, and world governing bodies, because this threat transcends the boundaries of any single enterprise or nation. Ultimately, strategic collaboration will be more impactful than unilateral responses in addressing the issue of ethics and regulation in AI.
Phillimon Zongo is an ISACA member, and Co-Founder and Director at Cyber Resilience.