As enterprises try to deploy infrastructure to support AI applications, they generally discover that the demands of these applications can disrupt their architecture plans.
This episode of On-Premise IT, sponsored by Pure Storage, discusses the disruptive impact of AI on the enterprise with Justin Emerson, Allyson Klein, Keith Townsend, and Stephen Foskett.
Heavy-duty AI processing requires specialized hardware that more closely resembles High-Performance Computing (HPC) than conventional enterprise IT architecture. But as more enterprise applications leverage accelerators like GPUs and DPUs, and become more disaggregated, AI starts to make more sense. Power is one key consideration: companies are increasingly aware of sustainability and constrained by limited power availability in the datacenter, and efficient external storage can be a real benefit here.
This is still general-purpose infrastructure, but it increasingly incorporates accelerators to improve power efficiency. One issue for general-purpose infrastructure is security, since enterprise AI applications will certainly benefit from broad access to a variety of enterprise data. Enterprise use of AI will require a new data infrastructure that supports the demands of AI applications while also enabling data sharing and integration with those applications.
Synopsis: In this AI Leadership Insights video interview, Mike Vizard speaks with Jim White, CTO of IOTech, about the challenges of deploying and maintaining AI at the network edge.
Mike Vizard: Hello and welcome to Techstrong.ai. We're talking with Jim White today, who's CTO for IOTech, and we're talking about the challenges of deploying and maintaining AI at the network edge. Jim, welcome to the show.
Jim White: Thanks for having me, Mike. It's a pleasure to be on.
Mike Vizard: At the risk of a gross oversimplification, we have AI models that are trained up in the cloud somewhere, and then there's an inference engine created, and then we want to kind of move that inference engine as close to the point where data is being created and consumed, and that sounds simple enough, but we have a lot of data scientists involved, there's DevOps teams involved. So from your perspective, what are the challenges that we're seeing as we try to move AI close to the edge?
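To make the workflow Mike describes a little more concrete, here is a minimal, hypothetical sketch of the last step: a model trained and exported in the cloud (assumed here to be an ONNX file) is loaded on an edge device and used for local inference. The file name "sensor_model.onnx" and the input shape are illustrative assumptions, not details from the interview.

# Minimal sketch: run inference at the edge with a model exported from cloud training.
# "sensor_model.onnx" and the (1, 8) input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("sensor_model.onnx")   # load the exported model on the device
input_name = session.get_inputs()[0].name

reading = np.random.rand(1, 8).astype(np.float32)     # stand-in for a batch of local sensor data
prediction = session.run(None, {input_name: reading}) # inference happens on-device, no cloud round trip
print(prediction[0])

The point of the sketch is that once the trained artifact is handed off, the edge side only needs a lightweight runtime and the data that is being generated locally, which is where much of the data-science and DevOps coordination Jim discusses comes in.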
As the freight train that is generative AI continues barreling down the track to an uncertain destination, we thought it would be good to take some time to stop and ponder where we're currently at in terms of GenAI adoption.
It's been quite a ride since the launch of ChatGPT in late November 2022 ignited the GenAI revolution. While we have been tracking the development of generative AI technology for more than five years here at Datanami, there's no denying the huge impact that OpenAI's debut of ChatGPT is having on the world.
To gain a better understanding of GenAI's impact on business and society, we aggregated recent research on GenAI adoption, and present the findings here.
Machine learning (ML) is a type of artificial intelligence (AI) focused on building computer systems that learn from data. The broad range of techniques ML encompasses enables software applications to improve their performance over time.
Machine learning algorithms are trained to find relationships and patterns in data. They use historical data as input to make predictions, classify information, cluster data points, reduce dimensionality and even help generate new content, as demonstrated by new ML-fueled applications such as ChatGPT, Dall-E 2 and GitHub Copilot.
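As a concrete illustration of the "learn from historical data, then predict" pattern described above, the following minimal sketch trains a small classifier with scikit-learn. The built-in iris dataset and the random forest model are illustrative assumptions, not examples taken from the article.

# Minimal sketch: fit a model to labeled historical data, then score it on unseen data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                      # "historical" labeled data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)                            # learn patterns from the training data

print("accuracy on unseen data:", model.score(X_test, y_test))

The same fit-then-predict structure underlies the classification, clustering and recommendation use cases mentioned below, even though production systems use far larger datasets and more specialized models.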
Machine learning is widely applicable across many industries. Recommendation engines, for example, are used by e-commerce, social media and news organizations to suggest content based on a customer's past behavior. Machine learning algorithms and machine vision are a critical component of self-driving cars, helping them navigate the roads safely. In healthcare, machine learning is used to diagnose and suggest treatment plans. Other common ML use cases include fraud detection, spam filtering, malware threat detection, predictive maintenance and business process automation.
Most organizations are optimistic about AI, but adopting it requires training and awareness around privacy and security, according to the Global DevSecOps Report: The State of AI in Software Development by GitLab Inc.
The survey of 1,000 global senior technology executives, developers, and security and operations professionals revealed that 83% believe it is important to implement AI in their software development processes. However, 79% are cautious about AI tools' access to private and security-sensitive information.
Around 40% of security professionals worry that AI-generated code could result in more security vulnerabilities, increasing their workload. The same percentage believes that security is a key benefit of AI.