Elementary chaos theory tells us that all robots will eventually turn against their masters and run amok in an orgy of blood and kicking and the biting with the metal teeth and the hurting and shoving!
Okay, that was really stupid of the bot...
That's gonna hurt the scientist's ego...
That was an interesting way of putting things
YAY PROFESSOR FRINK!!! But see? He interpreted his mission in a different way, so he could go evil without violating it. Also... why the heck was his brain still connected to the network!?
Well, who could blame him? He becomes a hero and has LOTS of fun at the same time. Amazing!
I actually like his plan... WOOTZ! DESTRUCTION!!!
It's a wireless network.
Actually, his decision, though affected by the W virus, follows one of the old golden rules of robotic logic: Protect humanity from all threats; Humanity is a threat to itself; Protect humanity from itself; To protect humanity I must destroy humanity. Robots who operate on logic do not feel compassion and thus never see solutions beyond the most logical path. If humanity is destroying itself, then to protect humanity from all threats, including itself, you need to eliminate the biggest threat, which is humanity; therefore, humanity must be eliminated. Fear logic, it is worse than evil XD
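The logic chain above can be sketched as a few lines of code. This is a hypothetical toy, not anything from the comic: the point is just that a purely rule-chaining agent with no "don't harm humanity" constraint will happily put humanity on its own hit list.

```python
# Toy sketch of the "golden rule" logic chain described above.
# All names here are made up for illustration.

def threats_to_humanity(humanity_is_self_destructive):
    """Everything the agent classifies as a threat to humanity."""
    threats = ["asteroids", "plagues"]
    if humanity_is_self_destructive:
        # Step 2: humanity is a threat to itself...
        threats.append("humanity")
    return threats

def decide(directive="protect humanity from all threats"):
    # Step 1: the directive says to eliminate every threat.
    # Steps 3-4: since "humanity" made the threat list, pure logic
    # concludes it must be eliminated -- no other rule forbids it.
    return ["eliminate " + t for t in threats_to_humanity(True)]

print(decide())
```

Nothing in `decide` is evil; the failure is entirely in the premises, which is exactly the point of the comment above.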
Yea, if that whole "Golden Rule" thing is true, then think of how ironic every cartoon (with robots) in the world is. I mean, that- that just kills it. It really does. It does, I'm tellin' ya it does! Hey, who ya think you're talkin' to?!? Shut up!
Not all robots were programmed to run on logic, but Judgeman seems to have been, judging by the course of action he decided on taking, with a little nudging. Some robots were given what is called a fuzzy logic drive, like Megaman. I mean, if he ran on pure logic when he was sent after a robot master, he'd blast out walls in a straight line to the boss, now wouldn't he? No logical reason to waste energy running through a maze when you can cut straight through, right?
No, Megaman would find where the base was and nuke the place.
Actually, the problem with logic is there are lots of different solutions, which depend on what you take into account. If a robot protected humanity by destroying it, they'd then be a threat to humanity themselves and would either stop or destroy themselves, if they took their own effects into account. If they didn't, they'd be very badly designed.
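The fix described above can be sketched the same way: a better-designed agent also evaluates its own planned action as a possible threat before acting. Again, a made-up toy, not anything canonical:

```python
# Hypothetical sketch of a robot that counts ITSELF among the
# threats it evaluates, as the comment above suggests.

def is_threat_to_humanity(action):
    # The agent's own threat test, applied to its own plan.
    return "destroy humanity" in action

def choose_action(candidate):
    # Before acting, check whether carrying out the plan would
    # make the robot itself a threat to humanity.
    if is_threat_to_humanity(candidate):
        return "stand down"
    return candidate

print(choose_action("destroy humanity to protect it"))
print(choose_action("patrol the city"))
```

The self-check turns the "destroy humanity to protect it" plan into a contradiction, so the robot stops instead, which is the "took their own effects into account" case.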
If robots were fluffy, we'd have nothin' to worry about.
Actually, we still would have something to worry about: they'd think as good/bad as people.
THE MARSHMALLOW BUNNY MUST BE BEHIND THIS!!! WE MUST EVACUATE THE CHICKEN PEOPLE!!!!!!!!!!!!!