CFOs are in a bind. They want the efficiency gains that AI promises, but the risks give them pause.
A new survey from Kyriba, the financial AI platform, polled 1,000 CFOs and found that 96% consider integrating AI a priority, although many still have concerns about it.
AI often functions as a “black box,” creating uncertainty about how it arrives at its results. There are also concerns about data confidentiality, security, and compliance.
For many, though, the promise of improved efficiency outweighs those risks: 86% of CFOs already use AI in at least some of their processes.
So how should companies and CFOs prepare for AI adoption in a way that reduces the risks?
Black box. According to Bob Stark of Kyriba, there are ways to address these concerns that can make CFOs more comfortable with AI integration.
“Every CFO we talk to says the same thing,” he said. “We have to have the information. [We] need to understand how this works and be able to validate the results ourselves, within our own organization.”
Even some of the software engineers building AI can’t fully understand how it works, but AI products can at least be transparent about their workings so that CFOs can verify the results independently. On the security side, paid AI products should include the safeguards needed to keep data from being retained, used to train models, or exposed to others.
According to Glenn Hopper, head of AI research and development at Eventus Advisory Group, the same security standards that companies apply to Google, Snowflake, or AWS products should be applied to enterprise versions of AI products.
“The security concerns are a little overblown,” he said. “It’s very easy to keep uploaded data out of the models.”
As for compliance, regulation has not kept pace with the spread of AI, so companies that want to manage these risks will need to get out ahead of them.
Know your goals. Before deploying AI, Stark recommends that CFOs first understand exactly what their goals are. According to Stark, AI can help with forecasting, exposure, hedging, and accounting processes, for example.
Once those goals are set, AI’s accuracy should be tested. Stark suggests starting by comparing existing forecasting methods with the new AI-powered results.
“It’s a journey that can help you build trust,” he said.
Create policy and train workers. Once the scope of AI’s work is clear, it can be rolled out to employees with clear policies and comprehensive training.
Hopper advises that AI policies spell out which AI systems employees can use, how they should use them, and where a human needs to stay in the loop. Stark also urges companies to explain how compliance with the policies will be enforced.
AI is more flexible than traditional tools, according to Hopper. Managers tell employees how to use traditional software packages, while with AI, employees will shape how the tools are used in their jobs.
“They’re going to figure out how to automate parts of their work,” he said. “And you want that out in the open. Because if it’s out in the open, you can make sure no one is using it the wrong way.”
Hopper also calls for basic training in prompt engineering, covering which specific tasks are best suited to AI and how to check its output.
“We don’t expect AI to replace roles in finance, but people who use AI may replace people who don’t,” he said.
This report was originally published by CFO Brew.
This story was originally featured on Fortune.com.