Talk:R.U.R. (Rossum's Universal Robots)
From Wikipedia, the free encyclopedia
Wouldn't "Rossum's Universal Robots" be a less clumsy title for this article? And does it have a different name in Czech, does anyone know...? -- Oliver P. 01:30 Mar 7, 2003 (UTC)
- Hmm - my copy of the play has on the cover the title of this article, i.e. "R.U.R. (Rossum's Universal Robots)". Are we sure that this isn't the "official" name of the play (whatever might constitute the "official" name)? I'd be hesitant about moving it, though a redirect from R. U. R. can't do any harm (we already have redirects from R.U.R. and Rossum's Universal Robots). --Camembert
- ISBN 0486419266 calls it merely "Rossum's Universal Robots", published by Dover Books. FWIW. -- Zoe
- Hmm. I should get myself a copy of the play, and then I'd be able to comment about such things in a more knowledgeable way... :) -- Oliver P. 03:26 Mar 7, 2003 (UTC)
- Incidentally, an Esperanto translation is available as a free e-book from the eLibrejo (titled as "R. U. R. - Rossumaj Universal-Robotoj"). I'm not sure what the Czech original title was, alas it's not cited. --Brion 03:49 Mar 7, 2003 (UTC)
-
- Ha, bone! Eble mi legos tion. ["Ha, good! Maybe I'll read it."] :) -- Oliver P. 04:01 Mar 7, 2003 (UTC)
- While the play was written in Czech, "Rossum's Universal Robots" is the name of a company within the play, and the company name is in English. I scanned the cover of an edition of the play published in Prague in 1920. http://jerz.setonhill.edu/resources/RUR/. You can see the title is in English. RUR (with no periods) is part of a graphic, and beneath it (without parentheses) is the subtitle ROSSUM'S UNIVERSAL ROBOTS. I suppose an alternative title might be "RUR: Rossum's Universal Robots". --[1] DGJ 14 Jul 2003
- I've run across multiple copies of the play, listed as both R.U.R. (Rossum's Universal Robots) and Rossum's Universal Robots (R.U.R.). I've spent a lot of time working with this play, and it seems to me that, given its rather confusing publication history, both names should be considered accurate. Anecdotally, for the purposes of internal paperwork when directing a production of this about a year ago, we referred to the play as "RUR" and the company as "R.U.R.". --StarX 21:55, 7 November 2006 (UTC)
Roboti in Czech?
Doesn't roboti simply mean "worker" in Czech? I.e., I thought the title meant "Rossum's Artificial Workers". --Dbenbenn 00:26, 20 Dec 2004 (UTC)
NO. Workers are "dělníci", and "robotníci" are socmen/sokemen (or workers in Slovak). "Robot" is a word invented by Capek. The title - as perceived by a Czech of the time - could be translated 100% literally as "Mr. Reason's Artificial Workies (or Workians)". Juro 03:22, 20 Dec 2004 (UTC)
- Or 'the slaves of reason'?. --GwydionM 18:45, 20 March 2006 (UTC)
Not only the origin of the term...
Does anyone else notice that Karel's play is the basis for every robot story since? I, Robot, Blade Runner, Ghost in the Shell, HAL 9000, etc. The idea that if a robot is, as Tyrell puts it, "more human than human," it will become human, and, as Karel says, gnash its teeth and throw down its work from time to time (paraphrased). What, are you crazy? Aren't they all basing this on a concept of robots developed in the 1920s? Aren't robots, however complex, utterly limited to what the programmers put into them? And even if a programmer developed an AI that could reason and fear and hate, what in the whole world would compel them to put it in machines that do labor, or have the power to kill? Sure, I may be missing the point, that it's an exploration of human nature, but why are we exploring human nature with machines? That seems like exploring the nature of plants by examining a solar panel: in some facets they may be similar, but for god's sake they're not at all comparable on any deeper level. Still, it does make for a great story.
- I know that "robot" has come to mean "mechanical person" in its most common interpretation, but the robots in Capek's play seem a lot more like human clones to me, both in their descriptions and in the hints about how they are produced. Which is particularly interesting because DNA wasn't discovered until thirty years later. -- Cranston Lamont 01:37, 18 July 2006 (UTC)
(Sorry to be picky, Cranston, but DNA was discovered in 1869. Sure, it wasn't shown to be the stuff of genes until 1943, but clearly you are thinking of the work of Watson and Crick - they did not discover DNA, but they did elucidate its structure, in 1953.SpikeMolec 17:45, 6 April 2007 (UTC))
-
- You have to remember that these Robots (always with a capital "R" in the play, denoting them as a distinct race) are created by a process very much reminiscent of the heavy industrial manufacturing that fueled the major economies of the time. Their material is a synthetic chemical mixed in vats, spun on spindles, and then inserted in the proper order. Capek's Robots are assembled on a line, not grown in a tank. Modern understandings of clones, "replicants" as in Do Androids Dream of Electric Sheep?, and otherwise organic AI (like the Cylons in the new Battlestar Galactica) simply do not apply. The manufacture is 100% industrial, hence the characters in the play should have no more regard for the Robots than they would anything that popped off an assembly line. Just one director's opinion. --starX 20:31, 6 April 2007 (UTC)
- "What, are you crazy? Aren't they all basing this on a concept of robots developed in the 1920s?"
-
- You obviously don't read enough science fiction, fantasy, mythology or folklore (or the Wikipedia pages or history books on any of them, either), or, you know, the article you're on the Talk page for, or you'd know you're more than a little off base. On multiple points.
-
- First: The concept of automatons goes back even further than the 1920s (I think there were even a few in actual existence in the 1800s, weren't there?), as does, believe it or not, the concept of a mechanical slave or protector; there are legends in Greek myth that the ancient people of Crete had a bronze giant that perpetually waded through the water surrounding the island, ready to ward off invaders (so says Isaac Asimov in at least one of the introductions to his books - "Robot Visions", I think, though I'm not sure, as I don't have it with me right now - and believe me, I trust Asimov to know his stuff, as - in case you hadn't realized this - he's also largely responsible for popularizing fictional robots in this century, especially under the more current and commonplace definition of robots as complex machines. Also, he invented the Three Laws of Robotics, which I'll use in a moment to refute another odd claim of yours).
-
- Additionally, the so-called "robots" of this play are actually very little like the modern definition of the term (see: robot), despite the play being the very origin of the word, because they were apparently primarily biological, not mechanical. Therefore, they're more akin to clones (as you can see in the introduction), doppelgangers, or possibly (I'm surprised it didn't mention this in the intro before I got here, come to think of it) homunculi. In fact, at the time of the play's writing, the term android had already been in common use amongst writers for years, describing artificially-created beings or creatures (regardless of biological or mechanical status) in human-like form; a concept that had already been around for at least a few centuries, generally referred to under the term homunculus. There's also the concept of the "golem", a sort of automaton slave (sometimes made of stone or mud) common in fantasy and apparently with origins in folklore... in fact, the article currently claims that the writer of the play had (consciously or not) based it on, or written it similarly to, old folklore about the golem! So there goes your reasoning that "everybody else ripped off R.U.R.", I suppose, since the core idea wasn't even all that original to begin with (not even the part where they turn on their masters; that was common in stories about homunculi, for one!).
-
- Also, most "robots" in the things you listed either aren't referred to as robots (Blade Runner calls them "Replicants"; the novel it was based on is titled "Do Androids Dream of Electric Sheep?", which both implies that they're mechanical/electronic and shows they're not simply called "robots" there, either), or they're not biological like these ones (I, Robot involved very complexly-programmed mechanical robots modeled on the basic human shape, and the only biologically-based ones in Ghost in the Shell that I'm aware of are cyborgs, which fit neither this play's definition nor the modern working definition of "robot"). And I haven't seen (or read) 2001: A Space Odyssey; however, I was under the impression that HAL 9000 was a computer, not a robot by any definition, let alone this play's definition.
-
- Yes, a lot of these play off of a lot of really common, older ideas. But so did R.U.R.. "Influential" doesn't mean "absolutely inspired everything else that came after it in the same genre," you know.
-
- "Aren't robots, however complex, utterly limited to what the programmers put into them?"
-
- First: how exactly does this support your argument? I was under the impression that your argument thus far was that everyone was ripping off this play, but I thought the point of the "robots" of this play was that they aren't actually limited by their programming (whatever programming they actually have, if any)?
-
- Anyway. Not necessarily - at least, you're putting it somewhat poorly, at any rate. They are at current limited to specific, programmed instructions; however, programmers, bioengineers and what have you are all working on artificial intelligence and have been for years. The whole idea behind AI is that the program, or the robot that runs the program, will be able to learn. In short, they are only limited by how well they can be programmed (or taught) to learn, which is really not so different from human beings or animals, when you really think about it (in fact, some of the things that differentiate us from most other species are the ability to learn to use tools, as well as language and the ability to ponder our own existence in a "Meaning of Life/Why am I here/Does everyone else even exist?" kind of way. All of which, by the way, there is at least some evidence to support being partially genetic... and if genetics aren't the biological equivalent of programming, I don't know what is! What else would you call something that tells something how to construct itself? A program, of course, written, in the case of the genome, in the code of DNA and RNA and the like, as proteins).
-
- "And even if a programmer developed an AI that could reason and fear and hate, what in the whole world would compel them to put it in machines that do labor, or have the power to kill?"
-
- The answer: curiosity, human fallibility and the need for a good plot, of course. What in the world would compel somebody to clone a T-Rex and show it off to tourists, separating the two only by an electric fence? Only unadulterated greed and stupidity - yet this didn't stop Michael Crichton from writing Jurassic Park, did it? Human fallibility is a HUGE theme in science fiction, fantasy, hell, almost any kind of fiction. Otherwise, there'd be no crisis or conflict in most stories, and the story would probably be boring as all hell the majority of the time.
-
- But of course, that's assuming they did it on purpose. Or that they were cruel enough to leave an AI with the ability and desire to learn and experience the world, but no method to explore it with. Also (and if you actually deigned to read the commonly-considered-great works of this supposedly derivative and unoriginal genre, you would know this already), Asimov actually addressed this, seeing the exact same logical disconnect (albeit decades sooner) that you did. That's why all of his robot stories revolved around the ramifications of the Three Laws of Robotics. One of the laws (irrevocably built into the programming of every Asimovian robot), in fact the one no robot could ever fail to follow, is the First Law: a robot cannot hurt a human being, no matter how much it wants to. I still recall the chilling-but-relieving ending of one of the stories from I, Robot, wherein a particularly advanced robot, having just been outwitted by a human scientist, attempted to strangle her in his fury - or rather, tried to attempt to strangle her, but failed, because his First Law programming made him short-circuit the instant he decided to do a human being harm. There was also a story in that same novel (really, it's a short story collection with some connecting narrative, to be technical) wherein an (accidentally-made) telepathic robot attempted to lie in order not to do emotional harm to humans, but, due to the humans' conflicting desires, also short-circuited because he had, technically, hurt people (by hurting their feelings).
-
- Most modern robot stories seem to take off where Asimov started, with the basic assumption that, as a product of safety-minded engineers, robots WOULD have built-in programming features to keep them from doing harm (or from not doing what they're ordered to).
-
- And as for "but why are we exploring human nature with machines? That seems like exploring the nature of plants by examining a solar panel, in some facets they may be similar, but for god's sake they're not at all comparable on any deeper level."? If you had actually read some Asimov (I recommend the Robot novels, especially "The Caves of Steel"), or watched Ghost in the Shell, or watched Blade Runner, you'd KNOW - because it's another method which, when done well (and all of the titles I mentioned are done well, even if some of them have technology that's now oddly out of date), can do exactly that. You might as well ask what the reasoning is behind any serious written work. Human nature - especially curiosity, loneliness, etc. - has a dark side, and also a not-so-dark side. GitS examines that by showing a future (all too possible, actually) where the line between "human" and "machine" becomes so blurred that it causes societal backlash and considerable self-questioning (again, since this is actually realistic given certain trends in technology, asking these questions now isn't exactly a bad idea); Asimov wrote of the inherent difficulties in building tools that were that complex, but also useful, benign and beneficial (but again, the societal backlash comes into play, especially in "The Caves of Steel", where a lot of people start to worry that - as today! - robots will take over their jobs, leaving them nothing), and also more than once pointed out that the Three Laws of Robotics are, in a weird sort of way, what many people would see as the ideal for human behavior (don't hurt/kill people, for starters, is a rule that, though frequently bent or altered, is common to pretty much every religion and set of laws in existence); Blade Runner, IIRC, questioned what it means to be an individual, how important sentience (real sentience, not that weird Star Trek definition) is to people, and how important our memories are to us.
These stories may or may not be your cup of tea, but how can you still ask "why use robots?"? That's a silly thing to ask outside of a thesis. It's just another level of symbolism; is it any dumber than Frankenstein being a Promethean story of science misused and humanity gone unacknowledged (I speak here of the book, not the movies)? Seriously. Just because you don't like it (I'm guessing without really trying it, or else I don't think you'd be so quick to dismiss an entire, massive subgenre of science fiction, let alone one that's produced so many works considered to be of great sophistication and quality) doesn't mean it's not a perfectly legitimate way of exploring or portraying human nature. The fact that so many people still want to read, and even buy, new stories of these kinds tells us that they still resonate with a lot of people, and aren't as unoriginal or ineffectual as you're making them out to be. Technology has always played a huge role in humanity's survival (or non-survival, as the case may be), and as such it's hardly surprising that science fiction uses it - including robots - so often, especially in the cases I stated above. There's also the matter of allegory and allusion, you know. And why create robot characters that can feel emotions? Well, you might as well ask why we create aliens with improbably humanoid appearance, or why we so often created gods in the image of humans, or why fairies and the like are usually at least somewhat human in appearance; it's because "human" says, to some extent, "sympathetic; like me; more than an animal".
And you might as well ask why we create aliens with human-like emotions; it's simply because that's what's in our experience, that's what we find easy to identify with (in fact, machines or aliens or creatures that use a human appearance to do evil, or that are otherwise evil and have an inhuman appearance, are quite common in science fiction and fantasy, because they play on old fears of the unknown and of being misled or tricked or lured away from safety and security. And the stories of insufficiently human-looking creatures - such as Frankenstein's monster - who end up misunderstood and hated are often an allegory for how humans often irrationally hate and shun those that are "different", which happens quite often every day on this planet, and always has).
-
- This is a prime example of why people shouldn't deride (or else give a damn good impression of deriding, despite the "still it does make for a great story" tacked on at the end there) something when they have no actual understanding of what they're talking about. :\ 4.238.30.35 22:31, 14 September 2006 (UTC)