Artificial intelligence, or at least the modern concept of it, has been with us for several decades, but only in the recent past has AI captured the collective psyche of everyday business and society.
AI is about the ability of computers and systems to perform tasks that typically require human cognition. Our relationship with AI is symbiotic. Its tentacles reach into every aspect of our lives and livelihoods, from early detections and better treatments for cancer patients to new revenue streams and smoother operations for businesses of all shapes and sizes.
AI can be considered big data’s great equalizer in collecting, analyzing, democratizing and monetizing information. The deluge of data we generate daily is essential to training and improving AI systems for tasks such as automating processes more efficiently, producing more reliable predictive outcomes and providing greater network security.
Take a stroll along the AI timeline
The introduction of AI in the 1950s very much paralleled the beginnings of the Atomic Age. Though their evolutionary paths have differed, both technologies are viewed as posing an existential threat to humanity.
Through the years, artificial intelligence and the splitting of the atom have received roughly equal treatment from Armageddon watchers. In their view, humankind is destined to destroy itself, whether in a nuclear holocaust or through a robotic takeover of our planet. The anxiety surrounding generative AI has done little to quell those fears.
Perceptions about the darker side of AI aside, artificial intelligence tools and technologies have made incredible strides since the advent of the Turing test in 1950, despite intermittent roller-coaster rides caused mainly by fits and starts in research funding. Many of these breakthrough advancements flew under the radar, visible mostly to academic, government and scientific research circles until the past decade or so, when