Artificial Intelligence

AI so far is a misnomer or oxymoron. And we're further away than people might think.

AI (Artificial Intelligence) isn't what people think it is, and we're not nearly as advanced as laypeople think, and others prey upon that ignorance for scares, clicks and attention. This is a little primer on the terms, what AI can and can't do, and where it still needs to go.


In computer science, artificial intelligence (AI) is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. So far, computers have no intelligence: they can't learn (we'll get into machine learning in a minute), intuit, or process information that they haven't been programmed to process. They can mimic some cognitive functions that humans associate with the human mind, but it's still mostly looking up data that's already been fed to them, which isn't the same as being able to understand something they've never seen or learned before. In other words, I can teach a child about basic genetics, and a little about dog breeds, and if they see a new breed that looks halfway between two others, they can guess how that happened. No computer can. Computers can do tricks that make people think they can, but it isn't the same thing.



History of AI - A brief history of AI might go as follows. Since programming computers is hugely time-consuming, we started wondering if we could make them mimic our ability to learn.
  • In 1950, Alan Turing proposed the idea that if a computer could fool us into thinking it was another human, then wasn't that intelligence (known as the Turing Test)? That spawned a lot of philosophical debate about what intelligence really is, and whether machines could ever have it.
  • The term "Artificial Intelligence" was coined in 1956 at the Dartmouth Workshop, a conference on how to make machines "think". Basically: let's spend research money on a brainstorming conference, travel, and food, pondering futurism on the university's dime. And what did they accomplish? Nothing but a plan for more research, which yielded little tangible beyond a few pipe-dreams about what the future might bring, and a framework of terms (and fields of study) that was later obsoleted. The technology and understanding just weren't mature enough to create anything more useful than a caveman's drawings pondering mechanical flight.
  • There were a few more breakthroughs in ideas about how it might happen over the next couple of decades (and a lot about what wouldn't work), with fad cycles of money being poured into some potential "breakthrough" (AI "Springs", when research money rained down), after which the research didn't materialize anything of value and the funding dried up (AI "Winters", like 1974-1980 and 1987-1993). As far as tangible accomplishments? 1956-1993 was the AI stone age, where a few nascent ideas and terms leaked out, but no real problems were getting solved, so there was no real market for it.
  • In 1997, IBM's "Deep Blue" supercomputer became the first computer to beat a reigning world chess champion when it defeated Russian grandmaster Garry Kasparov. In 2011, IBM's Watson won the TV quiz show Jeopardy! by beating reigning champions Brad Rutter and Ken Jennings. And in 2014, a chatbot was able to fool a few judges into thinking it was human ("beating" the Turing test). But all three were hugely constrained games, where with enough resources you could program a computer to solve one specific problem, guess at a question, or convince people you were a non-English-speaking teenager. Knowing, based on probability and large data sets, which term/concept has the most matches (or how everyone who won a game of chess reacted when in the same piece configuration), or how to evade questions, isn't really intelligence.

Kinds of AI - During the process of creating these ideas on AI, there's the human perception problem. As they say, if your only tool is a hammer, every problem will look like a nail. Driving screws with a hammer might kinda work, but it's not really the optimal solution. And each discipline (psychology, philosophy, robotics, neurobiology, speech/linguistics, data processing/analysis, and mathematics) was a blind man with his own perception of the elephant, trying to recreate that elephant in a different medium. Some of the approaches to AI were things like:
  • Rule-Based Inference Engines, Knowledge Bases and Expert Systems -- encode what human experts know as explicit if-then rules over a base of facts, then chain those rules together to reach conclusions (a tiny sketch of the idea follows this list).
  • "Genetic Algorithms" (aka evolutionary computation/algorithms) -- in the 1990's this became popular, and is an idea that loosely, you give the machine a way to do things (algorithm). It knows how to changing a variable (adding something in, throwing something out, or increasing/decreasing it) and seeing if it works better or not. If it does, it keeps going until it passes optimum (starts getting worse), then it stops and starts doing it with another variable/dimension. The warmer/cooler method of making things better, on each variable/attribute (mutating) until it thinks it found "the best" way to do something. The problem is that this can improve on something a little, but it can't make huge leaps (it needs to know all the variables it can tweak, and it's harder to find things that might require a complete restructuring of the problem, or tweaking many things at once), and if the originating algorithm (and assumptions are bad, too complex, or not complex enough) it will never figure out the "best" way to do something, only the best way to do it, with those flawed assumptions. (E.g. GIGO: Garbage-in = Garbage-Out).
  • "Machine Learning".


