An Overview of AI for Humans: A Brief History of Modern AI (Part 1)

Up until this point, I've covered artificial intelligence (AI) from a human perspective (including how AI does or doesn't correspond with human intelligence, simulated or otherwise). Now it's time to look at how modern AI itself came to be.

A Brief History of Modern AI

The desire to create intelligent machines that work for us and free us from drudgery (robots today, idols and golems in ancient times) is likely as old as humanity itself. Of course, it isn't possible to cover the entirety of this aspect of human history (full as it is with technologies from the Stone Age to the Space Age and beyond) in a single piece, so this will necessarily be condensed and incomplete. Besides, people generally don't care for the history of any particular technology, being impatient to get to grips with it as it stands now. (Only developers who inherit a project read its change logs.) To keep this history brief, it covers only modern/recent developments in AI, beginning with relatively primitive machines, moving through expert systems, LISP machines and AI labs circa the 1950s-70s, and continuing up to the present (or at least the late 2010s, the publication date of the primary source on which this is based).

Symbolic Logic at Dartmouth College

Moving on from mechanical devices like Babbage's difference engine, the earliest computers were little more than sophisticated calculators. (In fact, the average scientific calculator used in a high-school setting has more power and sophistication than they did.) The ability to reason logically was a later enhancement, and it provided such seemingly basic (but fundamental) operations as comparisons (less than, greater than or equal to). While this was definite progress, it didn't spare human operators any of the surrounding steps: defining algorithms for computations, providing the necessary data in the right format (probably on punched cards or paper tape), interpreting the result(s), refining the algorithms and repeating the slow-going, error-prone process.

In 1956, various scientists (and perhaps some other academics) aiming for something more attended a workshop held at Dartmouth College (the Dartmouth Summer Research Project on Artificial Intelligence, which gave the field its name). They supposed that they would be able to get computers to reason at the same (or a similar) level as humans do. Unfortunately for them, they were mistaken and their efforts largely came to naught, for various reasons. Only recently have their ideas started to be implemented and come to fruition, thanks to various advances in computing (and likely a lot of hindsight). Even now, computers would have to master (or at least be highly competent in) an additional six types of intelligence (discussed previously) in order to match (never mind rival) human intelligence. That's simply not possible with current technology and is unlikely to be for the foreseeable future (if ever).

The Dartmouth College era of attempts at creating AI had one main stumbling block: the hardware and processing capability of the time simply couldn't provide the vast amount of power that AI requires. Yet that was only part of the problem, and the other part still plagues us today: for all our intelligence, observations and recorded data, we humans don't really *understand* ourselves well enough to get computers to reliably *replicate* (not merely simulate) human-like (or at least primate-like) intelligence; we're very complex and sophisticated, not to mention variable. Until we reach that point, AI will only ever be partially effective, regardless of how much power hardware provides and how many cycles can be performed per second. (The limitation is no longer physical; it's now mental/psychological. One of the biggest problems in developing an algorithm isn't a lack of computing resources to run it or choosing the best language in which to code it. It's that even the best developers don't always fully understand the problem an algorithm is meant to solve, or how to convey that in the correct sequence of steps a computer can follow. This is one of the reasons why applications contain logic/semantic errors. Adding more developers tends to complicate matters, since it results in more communication channels and thus more potential for miscommunication and misunderstanding.)

The biggest problem faced by AI developers today is that we don't understand human reasoning well enough to even hope to create a serviceable and accurate/precise model for computers to follow (assuming that's even possible), even with the input of psychologists and neuroscientists. (To go back to the flight analogy, the Wright brothers didn't achieve manned flight by simulating birds, but by understanding the process and dynamics of how they fly.) If someone says that the next big innovation in AI is right around the corner or only a few years away, but fails to produce compelling evidence of how it's going to be achieved, don't believe that person. (Remember back in Maths and Science classes when you had to show the working behind how you arrived at an answer to get full points for it, even if it was correct? It's like that.)

Expert Systems

The field of early AI known as expert systems first appeared in the 1970s and flourished into the 1980s. (They were largely rule-based or set-based and are mainly responsible for the joke that AI consists of little more than a long chain of `if ... then ... else ...` code.) While expert systems were successful in many areas, they failed in others due to their limitations (one of which was an overreliance on rules and probability theory). However, despite being the first successful attempt at AI, they were expensive and didn't really catch on outside of academic institutions with AI labs. Creating and maintaining them is no easy task, particularly since many of them were coded in some form of LISP (LISt Processing) or Prolog. (I've tried learning both. LISP is somewhat easier, but it's still a mindfuck if you've not learned a functional programming paradigm or language beforehand.)

To give you some idea, here's the Fizzbuzz example in Common LISP, by Adam Petersen (turned into a gist on GitHub by CodeForkJeff):

;; "Write a program that prints the numbers from 1 to 100. But for;; multiples of three, print “Fizz” instead of the number and for the;; multiples of five print “Buzz”. For numbers which are multiples of;; both three and five, print “FizzBuzz”."(defun is-mult-p (n multiple)(= (rem n multiple) 0))(defun fizzbuzz (&optional n)(let ((n (or n 1)))(if (> n 100)nil(progn(let ((mult-3 (is-mult-p n 3))(mult-5 (is-mult-p n 5)))(if mult-3(princ "Fizz"))(if mult-5(princ "Buzz"))(if (not (or mult-3 mult-5))(princ n))(princ #\linefeed)(fizzbuzz (+ n 1)))))))

I understand how to solve the fizzbuzz problem in a conventional procedural or OOP language, but the above goes into the category of "WTF, bruh?". It's all Greek to me, but at least it isn't APL (not that it makes much practical difference, really). Still, those who learn to code in LISP apparently love it for reasons non-apparent.

Although no longer called expert systems (from about 1990 onward), the paradigm (and the algorithms built on it) still exists. For example, most spelling and grammar checkers are heavily (almost exclusively) rule-based. Thanks to advances in computing power and storage capacity, such expert systems as succeeded and endured were mostly merged into the applications they supported. (The spelling and grammar checkers used by your office suite aren't separate/standalone applications any more, but libraries that your slide presentation, word processor and spreadsheet editor use.) This has given rise to the inaccurate perception that expert systems generally failed and were discontinued. They didn't. They're just no longer distinct/separate entities.
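To make the "it's just rules" idea concrete, here's a deliberately tiny sketch in Common Lisp (to stay consistent with the FizzBuzz listing above). The rules, names and messages are all invented for illustration; no real checker is anywhere near this simple, but the basic shape (a table of condition/message rules applied to some input) is the essence of the paradigm.

;; A toy rule base: each rule pairs a predicate with the message to emit
;; when that predicate matches the input text. (Illustrative only.)
(defparameter *rules*
  (list (cons (lambda (text) (search "teh " text))
              "Possible typo: \"teh\" should probably be \"the\".")
        (cons (lambda (text) (search "  " text))
              "Double space detected.")
        (cons (lambda (text) (search "could of" text))
              "Did you mean \"could have\"?")))

(defun check-text (text)
  "Run every rule over TEXT and collect the messages of those that fire."
  (loop for (test . message) in *rules*
        when (funcall test text)
          collect message))

;; Example:
;; (check-text "I could of sworn teh cat was here")
;; => ("Possible typo: \"teh\" should probably be \"the\"."
;;     "Did you mean \"could have\"?")

Real expert systems layered an inference engine, a knowledge base curated by domain experts and (often) certainty factors on top of this basic idea, but rule matching is still the core, and it's still quietly running inside the applications you use every day.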

You see, this is why it's important to learn and revise history: it gives you new perspectives and corrects mistaken ones. (It's how I learned that Multics didn't actually fail, despite what many a Unix history teaches; Bell Labs just lost interest and pulled out during a difficult/slow phase of its development.)

The AI Winters of Our Discontent

Similar to a crypto winter, an AI winter refers to a period of minimal/reduced interest and investment/funding in the field of AI development. As with most areas of technological development, proponents of AI have typically overstated its capabilities and possibilities to wealthy investors with little to no technical knowledge or sense (often via managers similarly afflicted). Failure to meet expectations is followed by criticism and jaded skepticism, then a reduction in funding/investment. This has occurred a number of times, in cycles. Does any of this seem familiar to you? (If you work in software or blockchain development, or are no stranger to the cryptosphere, the answer is likely "yes". It's one of the reasons I don't trust managers, particularly at the executive level, and hope never to be demoted to such a position.) As always, the results can be debilitating, if not devastating, to progress.

The things currently driving the AI hype machine are primarily deep learning and machine learning, technologies that help computers learn from data sets (often massive ones in the age of the Internet, known as Big Data). In doing this, computers derive/determine their tasks directly from examples of how they should behave, rather than relying on developers to set them. While this has often been a boon for developers, it has proven to be a double-edged sword for consumers/users (particularly of Big Tech companies like Amazon, Farceborg, Micro$loth, Scroogle and Twatter, which have taken advantage of AI to turn users into emotionally and psychologically manipulated puppets/products by mining their data and using it against them). The pitfall of machine learning is that it can learn how to do things the wrong way, just as any self-taught developer can produce some truly horrendous code. (Does anybody remember the post I wrote about the algorithm that determined whether a photo was of a wolf or a Husky based on the presence/absence of snow in the background? It turns out it was purposely trained wrong to illustrate the problem of how blindly trusting black-box machines is a bad idea. It's the AI equivalent of Wimp-Lo.)
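To contrast that with the rule-based sketch earlier, here's an equally tiny (and equally contrived) illustration of the "learn it from the data" approach, again in Common Lisp. The function names and toy data are invented for this example; real machine learning involves vastly more parameters and data, but the principle is the same: the program derives its behaviour from examples rather than from a hand-written rule.

;; Fit the single parameter w of the model y = w * x from example pairs,
;; using the closed-form least-squares solution w = sum(x*y) / sum(x*x).
;; (Toy data and names; purely illustrative.)
(defun fit-slope (examples)
  "EXAMPLES is a list of (x . y) conses; return the least-squares slope."
  (loop for (x . y) in examples
        sum (* x y) into sum-xy
        sum (* x x) into sum-xx
        finally (return (/ sum-xy sum-xx))))

(defun predict (w x)
  "Apply the learned parameter W to a new input X."
  (* w x))

;; Nobody told the program that y is roughly twice x; it works that out
;; from the examples:
;; (fit-slope '((1 . 2.1) (2 . 3.9) (3 . 6.2)))              => ~2.04
;; (predict (fit-slope '((1 . 2.1) (2 . 3.9) (3 . 6.2))) 10) => ~20.4

Swap the toy pairs for millions of labelled photos and the one-parameter model for a deep neural network with millions of parameters and you have, in caricature, what the current hype is about, including the wolf/Husky failure mode: if the examples are misleading, the learned behaviour will be too.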

On that bombshell, I have to cut this short and get to bed. Unlike a computer, I need to sleep in order to do my best work. To be continued ...
