AI Can Financially Destroy Your Business

These days, everyone seems concerned about the possible effects of artificial intelligence (AI). Even influential figures in technology, including Elon Musk and Steve Wozniak, have signed a public petition asking OpenAI, the company behind the interactive chatbot ChatGPT, to halt development for six months so that the technology may be "rigorously vetted and controlled by independent outside experts."
They have reason to be concerned about the potential effects of AI on humans in the future: we're talking serious Terminator stuff, without an Arnold to save us. Yet that is the future. Regrettably, AI in use today is already having a significant impact on organizations and individuals, even financially ruining them. It has gotten to the point that the US Federal Trade Commission (FTC) felt the need to alert the public about an AI scam that, in the words of an NPR piece, "sounds like a plot from a science fiction movie."
Scammers Stole Almost $11 Million
This, however, is not science fiction. Scammers stole almost $11 million from unsuspecting consumers last year using deepfake AI technology, impersonating loved ones, physicians, and attorneys and asking their families and friends for money. The FTC says that all a con artist needs to pull off the fraud is a brief audio clip of your family member's voice, which can be obtained from internet sources, and a voice-cloning application. "The con artist will sound just like your loved one when he phones you." And consumers are not the only ones affected. This new kind of fraud is quickly spreading to companies of all sizes.
That is what happened to a bank manager in Hong Kong, who received convincing deepfake calls from someone posing as a bank director requesting a transfer and ultimately moved $35 million before losing track of it. In a similar case, an innocent employee of a UK-based energy company transferred over $250,000 to criminals after being duped into believing the recipient was the CEO of the firm's parent company. The FBI is currently alerting companies that criminals are using deepfakes to create fake online "workers" for remote job positions in order to steal company data.
Over the past several years, deepfake video technology has become increasingly prevalent, mostly targeting politicians and public figures like Mark Zuckerberg, Tom Cruise, Barack Obama, and Donald Trump. And I'm confident that a rising number of convincingly realistic fake videos will try to sway voters during this election year.
But what concerns me the most is the potential impact on the many unsuspecting small business owners I know. Many of us have appeared in videos that are publicly accessible on Facebook, LinkedIn, or YouTube. But even those who haven't been in videos can have their voices "stolen" by con artists who copy voicemail recordings or simply phone a target pretending to be someone else just to capture their voice. This is more dangerous than viruses or ransomware: used effectively, it can result in large, immediate losses. So what do you do? You put controls in place. And you apply them.
No Financial Transaction Should Be Based on an Incoming Call Alone
This means that no financial manager in your company should be permitted to carry out any financial activity, such as a cash transfer, based solely on an incoming call. Even a call from the company's CEO needs a callback to confirm the source. More crucially, no transaction beyond a specific predefined value should be permitted without the prior written consent of multiple corporate leaders. And of course, every transaction request must be supported by written documentation, such as a signed request or contract.
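To make these controls concrete, here is a minimal sketch of how a finance team might encode them in an approval workflow. The names, the $50,000 threshold, and the two-approver requirement are all hypothetical assumptions chosen for illustration; every company would set its own limits:

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 50_000  # hypothetical limit; set per company policy
REQUIRED_APPROVERS = 2       # hypothetical number of corporate leaders

@dataclass
class TransferRequest:
    amount: float
    callback_verified: bool          # source confirmed via independent callback
    has_written_documentation: bool  # signed request or contract on file
    approvals: list = field(default_factory=list)  # names of approving leaders

def may_execute(req: TransferRequest) -> bool:
    """Return True only if the request passes every control."""
    if not req.callback_verified:
        return False  # never act on an incoming call alone
    if not req.has_written_documentation:
        return False  # written documentation must back the transaction
    if req.amount > APPROVAL_THRESHOLD and len(set(req.approvals)) < REQUIRED_APPROVERS:
        return False  # large transfers need multiple leaders' prior consent
    return True
```

The point of expressing the policy in code is that it cannot be talked around: a transfer request that lacks a callback, documentation, or enough sign-offs simply does not go through, no matter how convincing the voice on the phone sounds.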
These kinds of controls are easier to put into place in a larger, more structured organization. At smaller firms, management override is easiest to describe as "I don't care what the rules are, this is my business, so transfer the cash now, dammit!" Accountants at smaller businesses frequently find themselves victims of this type of override. If you're a business owner reading this, establish guidelines and adhere to them. It's in your best interest. So, yes, AI technologies like ChatGPT pose horrifying potential threats to mankind. Yet that is the future. Deepfake technology that spoofs workers and impersonates CEOs is already here, and it will continue to grow.