The first series premiered in 1966 on NBC and ran for three seasons.
Actor Leonard Nimoy recalled that when he obeyed a director's advice to "be detached" when speaking the line 'Fascinating', "a big chunk of the character was born right there". He liked Spock's logical nature, observing that the character is "struggling to maintain a Vulcan attitude [...] opposing what was fighting him internally, which was human emotion"
The album, with its Russian Constructivist / robotic cover, inspired our "The Robots" piece with its singing robot.
This is the album for those who love the allure of electronic dance music on a dance floor crammed with fun-loving warm bodies bathed in shimmering strobes and pulsating colored spotlights. Overall, the album has captured enough attention and consciousness to stand the test of time.
The Industrial Revolution was when humans overcame the limitations of our muscle power. We’re now in the early stages of doing the same thing to our mental capacity.
Erik Brynjolfsson: We call it the great paradox of the Second Machine Age. Even though we have record productivity, US median incomes are lower now than they were in the 1990s. There’s no law that everybody’s going to benefit from technology.
A new SpareTag.com tutorial explaining how to make an Artificial Intelligence, from data warehousing to deep learning to attention and consciousness.
All visuals were selected, edited, animated and colored by SpareTag.com to form the following sequences of our original 90-second-short video:
The word "computer" was first used in 1613 by Richard Braithwait, an English writer, to describe a person: a highly skilled mathematician able to perform impressive calculations. It is not without irony that, in the brief history of the computer, fast computing machines may become intelligent virtual persons able to operate independently, with their own attention and consciousness.
The first concept of a computing machine emerged in the 19th century. It was the Analytical Engine, conceived – but never built – by English engineer Charles Babbage (1791–1871). The design had (i) an arithmetical logic unit (the “mill”, now called the ALU) to perform all four arithmetic operations and comparisons, (ii) a control unit to interpret instructions (from “punched cards”, the ancestors of programs) allowing conditional branching and loops, and (iii) the ability to memorize 1,000 numbers of 40 digits each using physical wheels (the “store”, now called RAM).
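Babbage's three components map directly onto a modern stored-program machine. The toy interpreter below is our own sketch (not Babbage's actual instruction set): a "mill" for arithmetic, a control unit whose conditional branch enables loops, and a numeric "store" for memory.

```python
# Toy stored-program machine illustrating Babbage's three components.
# The instruction set here is invented for illustration.

def run(program, store):
    """Execute a list of instructions against a numeric 'store' (memory)."""
    pc = 0  # the control unit's position in the program
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":            # the "mill" (ALU): arithmetic
            a, b, dest = args
            store[dest] = store[a] + store[b]
        elif op == "JUMP_IF_POS":  # conditional branching enables loops
            src, target = args
            if store[src] > 0:
                pc = target
                continue
        pc += 1
    return store

# Sum the integers 5..1 by looping: cell 0 = counter, cell 1 = running
# total, cell 2 = constant -1 used to decrement the counter.
store = run(
    [("ADD", 0, 1, 1),        # total += counter
     ("ADD", 0, 2, 0),        # counter -= 1
     ("JUMP_IF_POS", 0, 0)],  # loop while counter > 0
    {0: 5, 1: 0, 2: -1},
)
print(store[1])  # 15
```

Even this three-instruction machine shows why conditional branching mattered so much in Babbage's design: without it, a program could only run straight through, never loop.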
Still, it took another century before Alan Turing, in his 1936 paper On Computable Numbers, laid out the same key principles for machines to perform detailed computations from memorized sets of instructions. The need for machines to help decipher encoded messages during WWII was instrumental in the development of the technology, which resulted in 1946 in the first general-purpose digital computer: the Electronic Numerical Integrator and Computer (ENIAC). It is said that the lights dimmed in Philadelphia when it was turned on for the first time.
From there, the technology improved exponentially, each generation building on the previous one’s achievements: the first commercial use in 1951 (the Universal Automatic Computer, or UNIVAC 1); the replacement of vacuum tubes by transistors in 1955; the invention of integrated circuits in the late ’50s; Intel’s first single-chip microprocessor in 1971. Then came IBM’s personal computer (PC) for home and office use in the ’80s, running Microsoft’s new MS-DOS operating system, subsequently replaced by Windows in the ’90s.
There doesn’t seem to be an end to the technological progress in our brief history of the computer, leading seemingly unavoidably to true Artificial Intelligence. The question is one of time: is AI centuries or decades away? Like hardware, applications of artificial intelligence will evolve through successive waves of innovation. But what is artificial intelligence? Here is our introduction to machine learning.
Today, computers have become ubiquitous in all areas of life, at work and at home. This is because computers are useful tools that outperform most humans at information-processing tasks, the core of jobs held by some 65 percent of the American workforce.
In addition to surpassing the human mental capacity for calculation and processing tasks, computers have made possible the global interconnection of people and the sharing of information through the internet. Combined with data-warehousing capability, this has brought universal knowledge to our connected doorsteps.
Since the development of powerful computers and cheap, fast-access memory in the late 1970s and early 1980s, machines have been instrumental in collecting, cleaning, classifying, and warehousing data to gain knowledge. The logical next step is machine learning: computers able to draw lessons from the stored data.
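In miniature, such a collect-clean-classify-store pipeline might look like the sketch below (the records, field names, and threshold are all invented for illustration):

```python
# Toy data pipeline: collect raw records, clean them, classify them,
# and store the result for later learning. All data is made up.

raw_records = [
    {"customer": "a", "spend": "120.5"},
    {"customer": "b", "spend": ""},      # incomplete row, dropped in cleaning
    {"customer": "c", "spend": "43.0"},
]

def clean(records):
    """Drop incomplete rows and convert text fields to proper types."""
    return [
        {"customer": r["customer"], "spend": float(r["spend"])}
        for r in records
        if r["spend"]
    ]

def classify(record, threshold=100.0):
    """Label each record; a model can later draw lessons from these labels."""
    segment = "high" if record["spend"] >= threshold else "low"
    return {**record, "segment": segment}

# The "warehouse": cleaned, classified records ready for learning.
warehouse = [classify(r) for r in clean(raw_records)]
print(warehouse)
```

Real warehouses add schemas, deduplication, and provenance tracking, but the sequence of steps is the same one described above.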
What is artificial intelligence today? Computers are currently able to recognize behaviors and patterns in databases to predict outcomes. For example, many tech companies are developing solutions to assess which customers are most likely to buy specific products. This information is then used to choose the best marketing and distribution channels. Without the computers’ analytics, decisions rest on expert judgment, which is biased toward the status quo, i.e. toward the outcome most favorable to the expert’s interests, an outcome usually worse than relying purely on data.
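As a toy illustration of this kind of propensity scoring (the data, features, and learning rate below are invented), a logistic regression can be trained with plain gradient descent to rank customers by likelihood to buy:

```python
# Toy purchase-propensity model: logistic regression via gradient descent.
# Features and labels are fabricated for illustration only.
import math

# Each row: (visits_last_month, past_purchases); label: bought (1) or not (0)
X = [(1, 0), (2, 0), (8, 3), (9, 4), (3, 1), (7, 3)]
y = [0, 0, 1, 1, 0, 1]

w, b = [0.0, 0.0], 0.0
for _ in range(2000):  # gradient descent on the log-loss
    for (x1, x2), label in zip(X, y):
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - label
        w[0] -= 0.1 * err * x1
        w[1] -= 0.1 * err * x2
        b -= 0.1 * err

def propensity(visits, purchases):
    """Predicted probability that this customer buys."""
    return 1 / (1 + math.exp(-(w[0] * visits + w[1] * purchases + b)))

print(propensity(8, 3) > propensity(1, 0))  # frequent visitor scores higher
```

The point is not the particular model but the workflow: past behavior in, a ranked probability out, and marketing decisions made from the ranking rather than from an expert's hunch.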
Following the example of the human brain’s neural networks, computers can already apprehend facts by themselves. In practice, data are filtered through successive layers of processing and grouping that, in the end, extract the essential characteristics of the original cloud of points. All concepts are learned from scratch, by analyzing and classifying the relations in unstructured data, without supervision or specific programs. This is done with a new generation of graphics processing units (GPUs), derived from video games and particularly suited to recognizing patterns in large volumes of heterogeneous data. Interestingly, computers achieve a 98% success rate in image recognition, whereas humans are wrong in 5% of cases.
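The layered extraction described above can be caricatured in a few lines. In the sketch below the weights are wired by hand rather than learned, but the principle is the same: each layer recombines the previous layer's outputs into a higher-level feature (here, exclusive-or) that no single raw input carries on its own.

```python
# Hand-wired two-layer network (weights chosen by hand, not learned),
# showing how stacked layers extract a feature absent from any one input.

def relu(v):
    """Standard rectified-linear activation."""
    return max(0.0, v)

def forward(x1, x2):
    # Layer 1: two simple combinations of the raw inputs
    h1 = relu(x1 + x2)        # "at least one input active"
    h2 = relu(x1 + x2 - 1)    # "both inputs active"
    # Layer 2: combine the intermediate features into the final answer
    return h1 - 2 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, forward(a, b))  # outputs 0, 1, 1, 0: exclusive-or
```

A trained deep network does exactly this, except with millions of weights discovered from data instead of three chosen by a person.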
Soon, deep learning will allow a machine to understand a language and get not only the general sense but also the context, irony, metaphors, jokes, even intonation and silence. At first, the sentences will not be comprehensible, but after many comparisons and reclassifications of words and groups of words, a true meaning will emerge.
For the moment, machines can only identify patterns but lack the tools to put things in perspective, prioritize and recommend a course of action. To be able to make decisions, a computer would need to be aware of the context, to have attention and consciousness.
Somehow, the brain also processes information, but in a more subjective manner, through memories, senses, and emotions, which provide context and awareness. To reach this level of attention and consciousness, a machine would need to have at its disposal:
- surrounding “object” blueprints, i.e. generic vector representations of objects or fact patterns relevant to the situation. Through deep learning (see above), today’s computers are already able to build some of these representations by themselves.
- its own computer “body” blueprint, i.e. its virtual self-representation, including characteristics of its physical shape, personality traits (behavior), and past performance (memories). This must be done carefully to prevent the devious future behaviors, like killing all humans, that many fear.
- an “attention scheme,” i.e. a description of the complex relationship between the “body” and the “objects”, some kind of practical map providing information about where the machine is, where it wants to go and the various paths to get there. The attention scheme allows the machine to concentrate its resources and focus on objects and tasks.
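A deliberately simplistic rendering of how these three ingredients could fit together (every name and structure below is invented for illustration, not a real consciousness model):

```python
# Toy agent with the three ingredients described above: object blueprints,
# a self "body" blueprint, and an attention scheme relating the two.

object_blueprints = {
    "charging_station": {"position": (5, 0), "kind": "resource"},
    "obstacle": {"position": (2, 0), "kind": "hazard"},
}

body_blueprint = {
    "position": (0, 0),
    "traits": {"cautious": True},   # personality traits (behavior)
    "memories": [],                 # past performance (memories)
}

def attention_scheme(body, objects):
    """Map the relation between the body and each object: where the
    machine is, what surrounds it, and what deserves its focus."""
    def distance(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])
    relations = {
        name: distance(body["position"], obj["position"])
        for name, obj in objects.items()
    }
    # Concentrate resources on the nearest object.
    focus = min(relations, key=relations.get)
    return {"relations": relations, "focus": focus}

scheme = attention_scheme(body_blueprint, object_blueprints)
body_blueprint["memories"].append(scheme["focus"])  # record the experience
print(scheme["focus"])  # the nearer object: "obstacle"
```

However crude, the sketch makes the structural point: the machine can report what it is attending to only because it carries an explicit map of itself in relation to its surroundings.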
Without an attention scheme, the machine is not able to explain how it relates to anything in the world. With one, the machine is able to affirm its awareness of the object by reference to the defined scheme … the same way a human has attention and consciousness, without being able to explain where it comes from.
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” – Eliezer Yudkowsky
“But if the technological Singularity can happen, it will.” – Vernor Vinge