How could you give a computer a "natural need"?
So the idea is that a computer agent would be programmed in two layers: the conscious and the unconscious.
The unconscious part is essentially a set of input and output devices, which I typically think of as sensors (keyboard, temperature, etc., to the limit of your imagination) and output methods (notably screen and speakers in the case of a home PC, but again to the limit of your imagination). Sensors can be added or removed at any time, and this layer provides two main channels to the conscious layer: a single input and a single output. Defining what kind of information travels between these two layers is somewhat difficult, but the basic idea is that the conscious part is constantly receiving signals (at various levels of abstraction) from the output of the unconscious part, and the conscious part can send whatever it wants down to the unconscious layer through the input channel.
The conscious layer initially knows little to nothing; it is just being completely blasted by inputs from the unconscious layer. It knows how to send signals back, though it knows nothing about how any particular signal will affect the unconscious part. The conscious part has a large amount of storage space and processing power; however, it is all volatile memory.
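To make the two-layer idea concrete, here is a minimal sketch of the unconscious layer as described: a swappable bundle of sensors exposing exactly one output channel up to the conscious layer and one input channel back down. All names (`UnconsciousLayer`, `add_sensor`, `step`) are hypothetical illustrations, not part of any existing system.

```python
import queue

class UnconsciousLayer:
    """Hypothetical sketch: a bundle of sensors and actuators exposing
    one output channel (sensor signals) and one input channel (commands)."""

    def __init__(self):
        self.to_conscious = queue.Queue()    # the single output channel
        self.from_conscious = queue.Queue()  # the single input channel
        self.sensors = {}                    # name -> zero-arg read function

    def add_sensor(self, name, read_fn):
        # Sensors can be added or removed at any time.
        self.sensors[name] = read_fn

    def remove_sensor(self, name):
        self.sensors.pop(name, None)

    def step(self):
        # Push every current sensor reading up to the conscious layer...
        for name, read in self.sensors.items():
            self.to_conscious.put((name, read()))
        # ...and drain whatever the conscious layer sent down.
        commands = []
        while not self.from_conscious.empty():
            commands.append(self.from_conscious.get())
        return commands

layer = UnconsciousLayer()
layer.add_sensor("temperature", lambda: 21.5)
layer.step()
print(layer.to_conscious.get())  # ('temperature', 21.5)
```

The conscious layer would sit in a loop on the other end of these two queues, blasting through `to_conscious` and experimenting with `from_conscious`.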
Now for the question. I would like the conscious part of the system to "grow": it has no idea what it can do, it just knows it can send signals, so it starts out by sending signals down the pipe and seeing how that affects the sensor data it receives back. The dead end is that the computer is not initially trying to satisfy any goal; it is just sending signals around. Compare a newborn baby: it needs food, or sleep, or to be moved out of the sun. The baby's sensory inputs are fed to its brain, which then decides to try making use of its outputs in order to get what it needs.
What kind of natural need can a computer have?
What have I tried? That is outside the scope of this question.
Machine intelligence programs generally start at the Esteem level of Maslow's Hierarchy of Needs, because they don't have a way to perceive Physiological, Safety & Security, or Social needs. However...
At the physiological level the computer feeds on electricity. Plug in a UPS that tells the computer when it is running on battery and you have a potentially useful input for perceiving physiological needs.
Give it the ability to "perceive" that it has "lost time" or has gaps in its time record (due to power failure) and you might be able to introduce the need for Safety and Security.
Introduce social needs by making it need to interact. It could "feel" lonely when a long time passes between inputs from the keyboard.
Detecting lost time, time passed since last keyboard interaction, and running on battery could be among the inputs available to the unconscious layer that can periodically be bumped to the attention of the conscious layer.
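The three need signals above can be sketched as a small sensor module for the unconscious layer. This is an illustrative sketch only: the thresholds are arbitrary, and `on_battery` stands in for whatever query a real UPS driver would provide (stubbed here).

```python
import time

# Hypothetical thresholds -- tune to taste.
LONELY_AFTER = 300.0   # seconds without keyboard input before "loneliness"
LOST_TIME_GAP = 5.0    # a heartbeat gap this long counts as "lost time"

class NeedSensors:
    """Sketch of need-perceiving inputs. on_battery is assumed to be
    supplied by a UPS driver; here it is a stub returning False."""

    def __init__(self, on_battery=lambda: False):
        now = time.monotonic()
        self.last_heartbeat = now
        self.last_keypress = now
        self.on_battery = on_battery

    def note_keypress(self):
        self.last_keypress = time.monotonic()

    def poll(self):
        """Return the need signals to bump up to the conscious layer."""
        now = time.monotonic()
        gap = now - self.last_heartbeat
        self.last_heartbeat = now
        return {
            "physiological": self.on_battery(),        # feeding on mains power?
            "safety": gap > LOST_TIME_GAP,             # did we "lose time"?
            "social": now - self.last_keypress > LONELY_AFTER,  # lonely?
        }

sensors = NeedSensors()
print(sensors.poll())  # all False immediately after startup
```

Calling `poll()` on a timer from the unconscious layer would let these needs be bumped periodically to the conscious layer's attention.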
The computer scientists in Two Faces of Tomorrow approach a similar problem, training a computer sandboxed on a satellite to become aware. They give it those needs by, for example, making it aware that it will cease to function without electricity then providing appropriate stimulation and observing the response.
The Adolescence of P-1 is another interesting work along these lines.
A robot was programmed to believe that it liked herring sandwiches. This was actually the most difficult part of the whole experiment. Once the robot had been programmed to believe that it liked herring sandwiches, a herring sandwich was placed in front of it. Whereupon the robot thought to itself, "Ah! A herring sandwich! I like herring sandwiches."
It would then bend over and scoop up the herring sandwich in its herring sandwich scoop, and then straighten up again. Unfortunately for the robot, it was fashioned in such a way that the action of straightening up caused the herring sandwich to slip straight back off its herring sandwich scoop and fall on to the floor in front of the robot. Whereupon the robot thought to itself, "Ah! A herring sandwich..., etc., and repeated the same action over and over and over again. The only thing that prevented the herring sandwich from getting bored with the whole damn business and crawling off in search of other ways of passing the time was that the herring sandwich, being just a bit of dead fish between a couple of slices of bread, was marginally less alert to what was going on than was the robot.
The scientists at the Institute thus discovered the driving force behind all change, development and innovation in life, which was this: herring sandwiches. They published a paper to this effect, which was widely criticised as being extremely stupid. They checked their figures and realised that what they had actually discovered was "boredom", or rather, the practical function of boredom. In a fever of excitement they then went on to discover other emotions, like "irritability", "depression", "reluctance", "ickiness" and so on. The next big breakthrough came when they stopped using herring sandwiches, whereupon a whole welter of new emotions became suddenly available to them for study, such as "relief", "joy", "friskiness", "appetite", "satisfaction", and most important of all, the desire for "happiness".
This was the biggest breakthrough of all.
~from So Long, and Thanks for All the Fish by Douglas Adams
Bonus
Have a look at Reinforcement Learning.