IT News - AI

In most organizations, artificial intelligence models are 'black boxes': only data scientists understand exactly what the AI is doing. That opacity can create significant risk.

"Large, sensitive datasets are often used to train AI models, creating privacy and data breach risks," notes Avivah Litan in SiliconANGLE .

"The use of AI increases an organization's threat vectors and broadens its attack surface. AI further creates new opportunities for benign mistakes that adversely affect model and business outcomes.

"Risks that are not understood cannot be mitigated. A recent Gartner survey of chief information security officers reveals that most organizations have not considered the new security and business risks posed by AI or the new controls they must institute to mitigate those risks. AI demands new types of risk and security management measures and a framework for mitigation..."


Nvidia believes it has an antidote to the headaches of putting AI to work: tooling that lets organizations build models without hiring data scientists.

Agam Shah writes in Enterprise AI, "The TAO starter kit -- which stands for Train, Adapt and Optimize -- speeds up the process of creating AI models for speech and vision recognition.

"You don't really need the AI expertise. You don't even need to know all the different frameworks. And you can leverage ... pre-trained models which are highly accurate and are high performance," said Chintan Shah, product manager at Nvidia, during a press briefing..."


The Artificial Intelligence Act was introduced in the European Union in April 2021 and is rapidly progressing through comment periods and rewrites.

Alex Woodie writes in Datanami, "When it goes into effect, which experts say could occur at the beginning of 2023, it will have a broad impact on the use of AI and machine learning for citizens and companies around the world.

"The AI law aims to create a common regulatory and legal framework for the use of AI, including how it's developed, what companies can use it for, and the legal consequences of failing to adhere to the requirements. The law will likely require companies to receive approval before adopting AI for some use cases, outlaw certain other AI uses deemed too risky, and create a public list of other high-risk AI uses..."


Next-generation AI products learn proactively, identifying changes in networks, users, and databases and using "data drift" detection to adapt to specific threats as they evolve.
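As a minimal sketch of what drift detection can look like in practice (the monitored metric, window sizes, and p-value threshold below are assumptions, not details of any vendor's product), the following compares a baseline window of traffic against recent observations with a two-sample Kolmogorov-Smirnov test from SciPy.

    # Minimal "data drift" sketch: flag when recent traffic no longer matches
    # a learned baseline distribution. Threshold and windows are assumptions.
    import numpy as np
    from scipy.stats import ks_2samp

    P_VALUE_THRESHOLD = 0.01   # assumption: how strict to be before flagging drift

    def detect_drift(baseline: np.ndarray, recent: np.ndarray) -> bool:
        """Return True if the recent sample's distribution differs from the baseline."""
        statistic, p_value = ks_2samp(baseline, recent)
        return p_value < P_VALUE_THRESHOLD

    # Synthetic example: normal request sizes vs. an unusually large recent window.
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=500, scale=50, size=5_000)
    recent = rng.normal(loc=800, scale=120, size=1_000)

    if detect_drift(baseline, recent):
        print("Data drift detected -- recent traffic no longer matches the baseline.")

In a security product the same idea would run continuously over many signals, with drift alerts feeding into the kind of automated response described in the Norsk Hydro case below.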

In March 2019, Norsk Hydro, a Norwegian renewable energy and aluminum manufacturing company, faced a ransomware attack. Rather than paying the ransom, its cybersecurity team used artificial intelligence to identify the corruption in its computer systems and rebuild operations on an uncorrupted parallel system. LockerGoga ransomware, which spreads via Windows-based systems, was eventually identified as the culprit.

While Norsk Hydro avoided paying the ransom, the attack still forced it to operate without computer systems for weeks to months while the security team isolated and scanned thousands of employee accounts for malicious activity.


Gartner's newest research highlights three 2022 Cool Vendors in AI for Computer Vision that offer innovative alternatives in the marketplace.

Analyst house Gartner, Inc. has released its newest research highlighting four emerging solution providers that data and analytics leaders should consider as complements to their existing architectures. The 2022 Cool Vendors in Analytics and AI for Computer Vision report features information on startups that offer some disruptive capability or opportunity not common in the marketplace.
