Artificial intelligence and machine learning have distinct limitations. Businesses looking to implement AI need to understand where these boundaries are drawn
"Although we are still in the infancy of the AI revolution, there's not much artificial intelligence can't do. From business dilemmas to societal issues, it is being asked to solve thorny problems that lack traditional solutions. Given this seemingly endless promise, are there any limits to what AI can do?
Yes, artificial intelligence and machine learning (ML) do have some distinct limitations. Any organization looking to implement AI needs to understand where these boundaries are drawn so they don't get themselves into trouble thinking artificial intelligence is something it's not. Let's take a look at three key areas where AI gets tripped up..."
As artificial intelligence becomes more prevalent throughout business and society, companies need to be mindful of human bias creeping into their machine models
"The old saying 'you get out what you put in' certainly applies when training an artificial intelligence (AI) algorithm. This is especially true in a business context, where the purpose of the AI may be to interact with customers, manage automated systems or mimic human decision making. It's critical that the outcomes match the objectives. However, it's also vital that companies are able to address any incidence of bias that may skew how an AI responds to instructions or requests..."
AI is being rapidly deployed at companies across industries, with businesses projected to double their spending on AI systems in the next three years
"But AI is not the easiest technology to deploy, and even fully functional AI systems can pose business and customer risks. One key risk highlighted by recent news stories on AI in credit-lending, hiring, and healthcare applications is the potential for bias. As a consequence, some of these companies are being regulated by government agencies to ensure their AI models are fair.
ML models are trained on real-world examples so they can reproduce historical outcomes on unseen data. This training data could be biased for several reasons, including a limited number of data items representing protected groups and the potential for human bias to creep in during curation of the data. Unfortunately, models trained on biased data often perpetuate those biases in the decisions they make..."
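The excerpt describes how a model trained on biased historical decisions reproduces that bias on new applicants. Below is a minimal toy sketch of that loop, using hypothetical synthetic data and a deliberately naive per-group rule; nothing here comes from the article or any real lending system:

```python
import random

random.seed(0)

# Hypothetical historical lending data: group "B" applicants were
# held to a higher qualification cutoff than group "A" applicants.
def make_history(n=1000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        score = random.uniform(0, 1)           # applicant qualification
        cutoff = 0.5 if group == "A" else 0.7  # biased human decision
        data.append((group, score, score >= cutoff))
    return data

# "Train" a deliberately naive model: for each group, learn the lowest
# score that was ever approved in the historical data.
def fit_cutoffs(history):
    cutoffs = {}
    for g in ("A", "B"):
        approved = [s for grp, s, ok in history if grp == g and ok]
        cutoffs[g] = min(approved)
    return cutoffs

history = make_history()
cutoffs = fit_cutoffs(history)

# The learned model inherits the bias: two applicants with the same
# score are treated differently depending on group membership.
applicant_score = 0.6
for g in ("A", "B"):
    print(g, applicant_score >= cutoffs[g])
```

The point of the sketch is that no one told the "model" to discriminate; it simply learned the disparity baked into its training data, which is the feedback loop the excerpt warns about.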
When it comes to pop culture, company executives, or history questions, most of us use Google as a memory crutch to recall information we can't always keep in our heads. But Google can't help you remember the name of your client's spouse or the great idea you came up with in a meeting the other day
"Enter Luther.AI, which purports to be Google for your memory, capturing and transcribing audio recordings and using AI to surface the right information from your virtual memory bank in the moment, whether during another online conversation or via search.
The company is releasing an initial browser-based version of its product this week at TechCrunch Disrupt, where it's competing for the $100,000 Battlefield prize..."
The dataset includes more than 1.5 million newspaper photos
"The US Library of Congress has released an AI tool that lets you search through 16 million historical newspaper pages for images that help explain the stories of the past.
The Newspaper Navigator shows how seminal events and figures, such as wars and presidents, have been depicted in the press. Jim Casey, an assistant professor of African American Studies at Penn State University who's tested the tool, said it would add a visual component to his historical research..."