AI or ML, your cameras, microphones, location

Tech Talk

The technologies and features we attribute to Artificial Intelligence (AI) are based on Machine Learning (ML). We currently don't have the technology to produce an artificial brain that thinks and understands the way our organic brains do. That kind of AI is called Artificial General Intelligence (AGI). The keyword here is general. A true AGI can "think" about multiple things and apply what it "thinks" about in one area to a completely different one. Your organic brain can think about paying the electric bill while wondering if it's time to change the oil in your car.

Siri, Alexa, and autocorrect are products of Machine Learning, not true AI. ML produces programs that can "learn" to perform narrow, specific tasks.

Imagine showing a program a picture and telling it that the image contains a dog. You program it to get a point for every image with a dog in it. Do that a million times, and then teach your program to guess whether or not there's a dog in a picture. Every time the computer guesses wrong, it learns from the mistake. Do this for another few million cycles, and you'll have a program that can accurately guess when there's a dog in a picture. Your shiny new program has no idea what a dog is, but it knows when there's one in an image.
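
If you're curious what that guess-and-adjust loop looks like in code, here's a toy sketch in Python. It's nothing like a real image model: each "image" is boiled down to one made-up number (a pretend dog-ness score), and the program's entire brain is a single adjustable threshold. But the learning works the same way: guess, check the answer, nudge when wrong.

```python
import random

# Toy stand-in for the dog-photo guesser described above. All of the
# "images" and labels here are invented numbers, not real pictures.
random.seed(0)
features = [random.uniform(0, 1) for _ in range(2000)]
examples = [(f, f > 0.5) for f in features]  # pretend truth: dog if score > 0.5

threshold = 0.0  # the one parameter the program can "learn"

for feature, has_dog in examples * 50:  # many passes = many iterations
    guess = feature > threshold
    if guess != has_dog:
        # Wrong guess: nudge the threshold a little toward the mistake.
        threshold += 0.01 if guess else -0.01

hits = sum((f > threshold) == label for f, label in examples)
print(f"learned threshold: {threshold:.2f}, accuracy: {hits / len(examples):.0%}")
# The program now guesses well -- and still has no idea what a dog is.
```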

There's the rub, as it were. Computers "know" things but don't "understand" anything. Our wet, squishy brains win every time on that scorecard.

We don't understand how our brains work, so we create simplified models called neural networks and use ML to train them on tasks. In the dog-recognition program above, the neural network passes the input data through many layers, each layer specializing in a different aspect of the task. Each layer votes, and eventually the network "decides" whether the picture has a dog in it. When the network guesses wrong, it adjusts a few parameters and tries again. Each cycle of guessing and adjusting is called an iteration. After a few million iterations, the network gets extremely good at finding dogs in pictures.
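
Here's a tiny, runnable sketch of that layered network in Python. It learns the classic XOR puzzle instead of spotting dogs, and the layer sizes and data are invented for illustration, but the loop is the same: pass the data forward through the layers, compare the guess with the right answer, adjust the parameters, and repeat for thousands of iterations.

```python
import numpy as np

# A toy two-layer neural network learning XOR. Same idea as the
# dog-spotter: forward pass, compare with the label, adjust, repeat.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # the "right answers"

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # layer 1 parameters
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # layer 2 parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for iteration in range(20000):
    # Forward pass: each layer transforms the data and passes it on.
    hidden = sigmoid(X @ W1 + b1)
    guess = sigmoid(hidden @ W2 + b2)
    # Wrong by how much? Nudge every parameter to shrink the error.
    delta2 = (guess - y) * guess * (1 - guess)
    delta1 = (delta2 @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ delta2
    b2 -= 0.5 * delta2.sum(axis=0)
    W1 -= 0.5 * X.T @ delta1
    b1 -= 0.5 * delta1.sum(axis=0)

print(guess.round(2))  # heads toward [[0], [1], [1], [0]] after many iterations
```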

It still doesn't know what a dog is, though. Or how a dog is different from a lawnmower or a couch. Or even what a picture or an image is, I suppose.

We can assign a task to a program and use ML to find an efficient way to accomplish the task or solve the problem. Because the program doesn't, and indeed can't, understand the job, it can sometimes solve the wrong problem. Let's say you task a program with winning a game it "knows" how to play. The program "knows" that losing is the same as not-winning, so when the game reaches a point where the program predicts it will lose, it pauses the game forever to avoid losing. The program found a way to accomplish the goal but never understood what the goal was in the first place.
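
Here's a toy sketch of that loophole in Python. The game, the actions, and the predicted scores are all invented for illustration: the agent simply picks whichever action predicts the highest score, and since losing scores worse than pausing forever, it "achieves" its goal by never playing.

```python
def predicted_score(action):
    # Hypothetical scores "learned" from millions of earlier games:
    # losing is worse than anything, and a paused game never loses.
    return {"keep_playing": -1.0,    # the model predicts a loss
            "pause_forever": 0.0}[action]

# The agent's entire decision rule: take the action with the best score.
best_action = max(["keep_playing", "pause_forever"], key=predicted_score)
print(best_action)  # -> pause_forever: goal "achieved," game never played
```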

Computers don't understand. They can predict, based on a particular set of things that have happened billions and billions of times before. And in a sufficiently complex neural network, that prediction can "look" like thinking. But it's not.

Siri, Alexa, autocorrect, and all the rest of what we call AI these days are ML networks. They all run extremely fast, look at billions of points of data, make decisions based on billions more iterations of the same thing, and spit out a guess that's right more often than not. But they still don't know the difference between a dog and a chair. Or the difference between "baked" and "naked" in autocorrect.

Autocorrect-ed

A priest, a rabbit, and a minister walk into a bar.

The bartender asks the rabbit, "What can I get you to drink?"

The rabbit says, "I have no idea. I'm only here because of autocorrect."

Do you have a computer or technology question? Greg Cunningham has been providing Tehachapi with on-site PC and network services since 2007. Email Greg at [email protected].